This repository provides a framework for analyzing visual data. The current implementation performs sentiment analysis on facial images, assigning per-emotion scores (Angry, Disgust, Fear, Happy, Sad, Surprise, and Neutral, each on a 0-1 scale) using the FER library by Justin Shenk. The tool outputs an Excel file with the image name, the emotion scores, and the dominant emotion detected in each image.
This tool is designed to be used with data collected from the RPi Data Collection repository, also available on GitHub.
Why we use FER:
- FER is simple to use and requires minimal setup
- Works well for detecting emotions in clear facial images
- Actively maintained and well-documented
- Good balance between accuracy and performance
Alternatives considered:
- DeepFace: More accurate face detection, better for multiple faces, but heavier dependency
- MediaPipe: Google's lightweight solution, excellent for real-time processing, less emotion-focused
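Under the hood, driving FER amounts to loading an image, asking the detector for per-face emotion dictionaries, and picking the top score. The sketch below illustrates that flow; `score_image` and `dominant_emotion` are illustrative helper names (not this repo's actual functions), and the heavy imports are deferred so the pure scoring step stands on its own.

```python
from typing import Dict, Optional

def dominant_emotion(scores: Dict[str, float]) -> str:
    """Return the emotion key with the highest score, e.g. 'happy'."""
    return max(scores, key=scores.get)

def score_image(image_path: str) -> Optional[Dict[str, float]]:
    """Run FER on one image; return its emotion dict, or None if no face."""
    import cv2                   # imported lazily: heavy dependencies
    from fer import FER          # (opencv, tensorflow) load only when called

    detector = FER()             # FER(mtcnn=True) trades speed for accuracy
    faces = detector.detect_emotions(cv2.imread(image_path))
    if not faces:
        return None              # maps to "No face detected" in the output
    return faces[0]["emotions"]  # keys: angry, disgust, fear, happy, ...

# The pure scoring step:
# dominant_emotion({"happy": 0.91, "neutral": 0.06, "sad": 0.03}) == "happy"
```

`FER(mtcnn=True)` swaps the default OpenCV Haar-cascade face detector for MTCNN, which is slower but usually more accurate on off-angle faces.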
```
Visual_Analysis_Tool
├── main.py              # Entry point of the application
├── requirements.txt     # Project dependencies
├── .gitignore           # Files and directories to ignore in Git
├── example_images       # Folder with example images
├── example_results.xlsx # Example results from running the example images
└── README.md            # Project documentation
```
- Clone the repository:

  ```
  git clone https://github.com/henrylevesque/Visual_Analysis_Tool
  cd Visual_Analysis_Tool
  ```

- Create a virtual environment:

  ```
  python -m venv .venv
  ```

- Activate the virtual environment:

  On Windows (PowerShell):

  ```
  .\.venv\Scripts\Activate.ps1
  ```

  On Windows (Command Prompt):

  ```
  .venv\Scripts\activate
  ```

  On macOS and Linux:

  ```
  source .venv/bin/activate
  ```

- Install the required dependencies:

  ```
  pip install -r requirements.txt
  ```

  Note: The first run may take a few minutes as TensorFlow downloads required models.
Option 1: Using config.yaml (Recommended - Easiest)

Edit config.yaml in the project root:

```yaml
image_folder: /path/to/your/images
output_file: ./sentiment_results.xlsx
```

Then run:

```
python main.py
```

Option 2: Using command-line arguments

```
python main.py --image_folder "C:\path\to\your\images" --output_file "results.xlsx"
```

Option 3: Run with defaults (example images)

```
python main.py
```

The output file will be created in the current working directory (where you ran the command).
The easiest way to configure the tool is to edit config.yaml:

```yaml
# config.yaml - Set your paths here
image_folder: ./my_images
output_file: ./my_results.xlsx
```

Then simply run:

```
python main.py
```

Advantages:

- No need to type long paths every time
- Easily switch between different image folders
- Config is saved in the project

Command-line arguments override config.yaml values:

```
python main.py --image_folder "C:\my\images" --output_file "results.xlsx"
```

If you have multiple config files (e.g., config_session1.yaml, config_session2.yaml):

```
python main.py --config config_session1.yaml
```
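The precedence just described (built-in defaults, then config.yaml, then command-line flags) can be sketched in plain Python. This is an illustration of the override logic, not the repository's actual main.py; the mini-parser handles only flat `key: value` lines, which is all this config needs.

```python
import argparse

def parse_flat_yaml(text: str) -> dict:
    """Parse simple 'key: value' lines (enough for this config's two keys)."""
    out = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if ":" in line:
            key, value = line.split(":", 1)
            out[key.strip()] = value.strip()
    return out

def resolve_settings(config_text: str, argv: list) -> dict:
    """Config file supplies defaults; CLI flags override them."""
    settings = {"image_folder": "./example_images",
                "output_file": "./sentiment_results.xlsx"}  # built-in defaults
    settings.update(parse_flat_yaml(config_text))           # config.yaml layer
    parser = argparse.ArgumentParser()
    parser.add_argument("--image_folder")
    parser.add_argument("--output_file")
    args = parser.parse_args(argv)
    # only flags the user actually passed override the config
    settings.update({k: v for k, v in vars(args).items() if v is not None})
    return settings

config = "image_folder: ./my_images\noutput_file: ./my_results.xlsx"
print(resolve_settings(config, ["--output_file", "results.xlsx"]))
# image_folder comes from config.yaml; output_file from the CLI flag
```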
- Make sure the virtual environment is activated:

  ```
  .\.venv\Scripts\Activate.ps1
  ```

- Run the script with your image folder:

  ```
  python main.py --image_folder "C:\path\to\your\images" --output_file "sentiment_results.xlsx"
  ```

- Check the output: the Excel file will be created in the current directory. The file contains:

  - Image: Filename of the analyzed image
  - Angry, Disgust, Fear, Happy, Sad, Surprise, Neutral: Per-emotion scores on a 0-1 scale
  - Highest Emotion: The dominant emotion detected, or "No face detected" if no face was found
Note: If the script fails to run, ensure you're in the correct directory and the virtual environment is activated (you should see (.venv) in your terminal prompt). If the file already exists, a new file with an incremented name will be created (e.g., session1_results_1.xlsx).
Solution: Make sure the virtual environment is activated. You should see (.venv) in your terminal prompt.
```
# Activate the virtual environment
.\.venv\Scripts\Activate.ps1
```

Then run the script again.
Solution: Double-check the image folder path. Paths with spaces must be quoted:
```
# Correct (with spaces)
python main.py --image_folder "C:\path\to\your\images\image folder" --output_file "results.xlsx"

# Correct (no spaces)
python main.py --image_folder C:\path\to\your\images\image_folder --output_file results.xlsx
```

Solution: This usually means:

- No .jpg, .png, or .jpeg files in the folder (check file extensions)
- Images are corrupted or unreadable
- Images have different extensions (e.g., .JPG vs .jpg)
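A quick cross-platform sanity check for the first and third causes: the sketch below filters filenames by extension case-insensitively (whether main.py itself treats `.JPG` and `.jpg` the same is worth verifying against your version).

```python
from pathlib import Path

ACCEPTED = {".jpg", ".jpeg", ".png"}

def usable_images(names):
    """Keep filenames whose extension, compared case-insensitively, is accepted."""
    return [n for n in names if Path(n).suffix.lower() in ACCEPTED]

print(usable_images(["a.JPG", "b.png", "c.gif", "d.jpeg", "notes.txt"]))
# -> ['a.JPG', 'b.png', 'd.jpeg']
```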
Check your folder:
```
Get-ChildItem "C:\your\image\folder\*" -Include *.jpg, *.png, *.jpeg
```

(Note the trailing `\*`: PowerShell's -Include only matches when the path contains a wildcard or -Recurse is used.)

Solution: On the first run, TensorFlow downloads required models (~500 MB). This is normal and happens only once; subsequent runs will be faster.
Solution: Use absolute paths (full paths) instead of relative paths, and always quote paths with spaces:
```
# Good - absolute path with quotes
python main.py --image_folder "C:\path\to\your\images" --output_file "results.xlsx"

# Avoid - relative paths can be unreliable
python main.py --image_folder ./images --output_file results.xlsx
```

Each row in the output Excel file represents one image analyzed:
| Column | Meaning | Range |
|---|---|---|
| Image | Filename | Text |
| Angry | Anger emotion score | 0.0 - 1.0 |
| Disgust | Disgust emotion score | 0.0 - 1.0 |
| Fear | Fear emotion score | 0.0 - 1.0 |
| Happy | Happiness emotion score | 0.0 - 1.0 |
| Sad | Sadness emotion score | 0.0 - 1.0 |
| Surprise | Surprise emotion score | 0.0 - 1.0 |
| Neutral | Neutral emotion score | 0.0 - 1.0 |
| Highest Emotion | Dominant emotion detected | angry, disgust, fear, happy, sad, surprise, neutral, or "No face detected" |
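If you want to sanity-check a results file programmatically, the relationship between the columns can be expressed directly: Highest Emotion should be the arg-max of the seven score columns, and the scores of a detected face should sum to roughly 1. The helper below is illustrative, not shipped with the tool.

```python
def check_row(row: dict) -> bool:
    """True if Highest Emotion matches the max score (or no face was found)."""
    emotions = {k: row[k] for k in
                ("Angry", "Disgust", "Fear", "Happy", "Sad", "Surprise", "Neutral")}
    if row["Highest Emotion"] == "No face detected":
        return True
    consistent = max(emotions, key=emotions.get).lower() == row["Highest Emotion"]
    normalized = abs(sum(emotions.values()) - 1.0) < 0.05  # scores are fractions
    return consistent and normalized

row = {"Image": "img001.jpg", "Angry": 0.02, "Disgust": 0.0, "Fear": 0.01,
       "Happy": 0.85, "Sad": 0.03, "Surprise": 0.02, "Neutral": 0.07,
       "Highest Emotion": "happy"}
print(check_row(row))  # True
```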
Feel free to submit issues or pull requests for improvements or bug fixes.
This project is licensed under the MIT License. Details can be found in the LICENSE file.