
Visual Analysis Tool

This repository provides a framework for analyzing visual data. The current implementation performs sentiment analysis on facial images, assigning emotion scores on a 0-1 scale for Angry, Disgust, Fear, Happy, Sad, Surprise, and Neutral using the FER library by Justin Shenk. The tool outputs an Excel file with the image name, the emotion scores, and the dominant emotion detected in each image.

This tool is designed to be used with data collected from the RPi Data Collection repository, also available on GitHub.

About FER vs Alternatives

Why we use FER:

  • FER is simple to use and requires minimal setup
  • Works well for detecting emotions in clear facial images
  • Actively maintained and well-documented
  • Good balance between accuracy and performance

Alternatives considered:

  • DeepFace: More accurate face detection, better for multiple faces, but heavier dependency
  • MediaPipe: Google's lightweight solution, excellent for real-time processing, less emotion-focused
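
For each detected face, FER's `detect_emotions` returns a bounding box and a dictionary of the seven emotion scores; the dominant emotion reported in the output file is simply the highest-scoring key. A minimal sketch of that post-processing step, using a made-up score dictionary in place of a real FER result:

```python
# Shape of one entry returned by fer.FER().detect_emotions(image):
#   {"box": [x, y, w, h], "emotions": {"angry": ..., ..., "neutral": ...}}
# The scores below are illustrative, not real model output.
face = {
    "box": [24, 30, 96, 96],
    "emotions": {
        "angry": 0.02, "disgust": 0.01, "fear": 0.03, "happy": 0.81,
        "sad": 0.04, "surprise": 0.05, "neutral": 0.04,
    },
}

def dominant_emotion(emotions: dict) -> str:
    """Return the label with the highest score."""
    return max(emotions, key=emotions.get)

print(dominant_emotion(face["emotions"]))  # happy
```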

Project Structure

Visual_Analysis_Tool
├── main.py                # Entry point of the application
├── config.yaml            # Default configuration (image folder and output file)
├── requirements.txt       # Project dependencies
├── .gitignore             # Files and directories to ignore in Git
├── example_images         # Folder with example images
├── example_results.xlsx   # Example results from running the example images
└── README.md              # Project documentation

Installation

  1. Clone the repository:

    git clone https://github.com/henrylevesque/Visual_Analysis_Tool
    cd Visual_Analysis_Tool
    
  2. Create a virtual environment:

    python -m venv .venv
    
  3. Activate the virtual environment:

    On Windows (PowerShell):

    .\.venv\Scripts\Activate.ps1

    On Windows (Command Prompt):

    .venv\Scripts\activate

    On macOS and Linux:

    source .venv/bin/activate
  4. Install the required dependencies:

    pip install -r requirements.txt
    

    Note: The first run may take a few minutes as TensorFlow downloads required models.

Quick Start

Option 1: Using config.yaml (Recommended)

Edit config.yaml in the project root:

image_folder: /path/to/your/images
output_file: ./sentiment_results.xlsx

Then run:

python main.py

Option 2: Using command-line arguments

python main.py --image_folder "C:\path\to\your\images" --output_file "results.xlsx"

Option 3: Run with defaults (example images)

python main.py

The output file will be created in the current working directory (where you ran the command).

Configuration

Using config.yaml (Recommended)

The easiest way to configure the tool is to edit config.yaml:

# config.yaml - Set your paths here
image_folder: ./my_images
output_file: ./my_results.xlsx

Then simply run:

python main.py

Advantages:

  • No need to type long paths every time
  • Easily switch between different image folders
  • Config is saved in the project

Using Command-Line Arguments

Command-line arguments override config.yaml values:

python main.py --image_folder "C:\my\images" --output_file "results.xlsx"

Using a Custom Config File

If you have multiple config files (e.g., config_session1.yaml, config_session2.yaml):

python main.py --config config_session1.yaml
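
The precedence rule (command-line arguments override config.yaml) can be sketched as follows. The flag names match the examples above, but the function and variable names are hypothetical, and a plain dict stands in for the parsed YAML:

```python
import argparse

def resolve_settings(config: dict, argv: list[str]) -> dict:
    """Merge config-file values with CLI flags; CLI flags win when given."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--image_folder")
    parser.add_argument("--output_file")
    args = parser.parse_args(argv)
    settings = dict(config)  # start from the config.yaml values
    if args.image_folder is not None:
        settings["image_folder"] = args.image_folder
    if args.output_file is not None:
        settings["output_file"] = args.output_file
    return settings

config = {"image_folder": "./my_images", "output_file": "./my_results.xlsx"}
print(resolve_settings(config, ["--image_folder", r"C:\my\images"]))
```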

Usage

Basic Usage

  1. Make sure the virtual environment is activated:

    .\.venv\Scripts\Activate.ps1
  2. Run the script with your image folder:

    python main.py --image_folder "C:\path\to\your\images" --output_file "sentiment_results.xlsx"
  3. Check the output: The Excel file will be created in the current directory. The file contains:

    • Image: Filename of the analyzed image
    • Angry, Disgust, Fear, Happy, Sad, Surprise, Neutral: Emotion scores on a 0-1 scale
    • Highest Emotion: The dominant emotion detected, or "No face detected" if no face was found.

Note: If the script fails to run, ensure you're in the correct directory and that the virtual environment is activated (you should see (.venv) in your terminal prompt). If the output file already exists, a new file with an incremented name is created instead of overwriting it (e.g., session1_results_1.xlsx).

Troubleshooting

Problem: "ModuleNotFoundError: No module named 'fer'"

Solution: Make sure the virtual environment is activated. You should see (.venv) in your terminal prompt.

# Activate the virtual environment
.\.venv\Scripts\Activate.ps1

Then run the script again.

Problem: "Error: Image folder does not exist"

Solution: Double-check the image folder path. Paths with spaces must be quoted:

# Correct (with spaces)
python main.py --image_folder "C:\path\to\your\images\image folder" --output_file "results.xlsx"

# Correct (no spaces)
python main.py --image_folder C:\path\to\your\images\image_folder --output_file results.xlsx

Problem: "No valid images processed. No results to save."

Solution: This usually means:

  • No .jpg, .png, or .jpeg files in the folder (check file extensions)
  • Images are corrupted or unreadable
  • Images have different extensions (e.g., .JPG vs .jpg)

Check your folder (PowerShell):

Get-ChildItem "C:\your\image\folder" -Include *.jpg, *.png, *.jpeg

Problem: Script takes a long time to start

Solution: On the first run, TensorFlow downloads required models (~500MB). This is normal and happens only once. Subsequent runs will be faster.

Problem: "Cannot find path..." or path-related errors

Solution: Use absolute paths (full paths) instead of relative paths, and always quote paths with spaces:

# Good - absolute path with quotes
python main.py --image_folder "C:\path\to\your\images" --output_file "results.xlsx"

# Avoid - relative paths can be unreliable
python main.py --image_folder ./images --output_file results.xlsx

Output Format

Each row in the output Excel file represents one image analyzed:

| Column          | Meaning                   | Range                                                                      |
|-----------------|---------------------------|----------------------------------------------------------------------------|
| Image           | Filename                  | Text                                                                        |
| Angry           | Anger emotion score       | 0.0 - 1.0                                                                   |
| Disgust         | Disgust emotion score     | 0.0 - 1.0                                                                   |
| Fear            | Fear emotion score        | 0.0 - 1.0                                                                   |
| Happy           | Happiness emotion score   | 0.0 - 1.0                                                                   |
| Sad             | Sadness emotion score     | 0.0 - 1.0                                                                   |
| Surprise        | Surprise emotion score    | 0.0 - 1.0                                                                   |
| Neutral         | Neutral emotion score     | 0.0 - 1.0                                                                   |
| Highest Emotion | Dominant emotion detected | angry, disgust, fear, happy, sad, surprise, neutral, or "No face detected"  |

Contributing

Feel free to submit issues or pull requests for improvements or bug fixes.

License

This project is licensed under the MIT License. Details can be found in the LICENSE file.
