**Developed by the Edge AI Team**
This repository contains a machine learning pipeline for classifying hand gestures using 8-channel forearm EMG signals. The project focuses on taking a model from raw data exploration to optimized hardware deployment.
We trained a Multilayer Perceptron (MLP) on time-domain engineered features and applied Static 8-bit Quantization (INT8) to the final model. The quantized model is exported to ONNX format for efficient, low-latency inference on edge devices like the Raspberry Pi Zero 2 W.
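As an illustration of the feature-engineering step, common time-domain EMG features such as mean absolute value (MAV), root mean square (RMS), waveform length (WL), and zero crossings (ZC) can be computed per channel. This is a hedged sketch: the exact feature set used in the project's notebooks may differ.

```python
import numpy as np

def time_domain_features(window: np.ndarray) -> np.ndarray:
    """Compute per-channel time-domain features for one EMG window.

    window: array of shape (n_samples, n_channels), e.g. (200, 8).
    Returns a flat feature vector (4 features x n_channels).
    Illustrative feature set only -- the notebooks may use a
    different selection.
    """
    mav = np.mean(np.abs(window), axis=0)                 # Mean Absolute Value
    rms = np.sqrt(np.mean(window ** 2, axis=0))           # Root Mean Square
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)  # Waveform Length
    zc = np.sum(np.diff(np.sign(window), axis=0) != 0, axis=0)  # Zero Crossings
    return np.concatenate([mav, rms, wl, zc])

# Example: one 200-sample window of 8-channel EMG
window = np.random.randn(200, 8)
features = time_domain_features(window)
print(features.shape)  # (32,)
```

Each window of raw signal collapses to a fixed-length vector, which is what the MLP consumes.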
The dataset used to train the model can be downloaded directly here: Download EMG Dataset (.csv files)
- Notebooks: Full pipeline including EDA, preprocessing, feature extraction, model training, Optuna hyperparameter tuning, and ONNX export.
- Deployment: `run_inference.py` script to benchmark and run the quantized ONNX model locally on a Raspberry Pi.
- Models: Exported FP32 and INT8 `.onnx` models for edge evaluation.
The following steps were used to deploy and test the model locally on the Raspberry Pi.
First, clone this GitHub repository to your Raspberry Pi to get the most up-to-date deployment scripts (`run_inference.py` and `MLResourceUse.py`):
```bash
git clone https://github.com/LonghornNeurotech/Edge_EMG_Classification.git
cd Edge_EMG_Classification
```

Next, download your heavy model binaries (`.onnx` files) and the evaluation dataset (`.npy` files) from Google Drive directly into the folder you just cloned:
```bash
pip install gdown --break-system-packages
echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.bashrc && source ~/.bashrc
gdown --folder "https://drive.google.com/drive/folders/YOUR_DRIVE_LINK_HERE" -O ./
```

To avoid externally-managed-environment (PEP 668) restrictions on the latest Raspberry Pi OS, create and activate a virtual environment:
```bash
python3 -m venv emg_env
source emg_env/bin/activate
```

Install the required ONNX Runtime and NumPy packages inside the virtual environment:
```bash
pip install onnxruntime numpy
```

Navigate to the downloaded folder, ensure `run_inference.py` is present, and execute the benchmark:
```bash
cd Edge_EMG_Classification
python run_inference.py
```

Tip: If you push new code to GitHub and want to instantly overwrite your Pi's `run_inference.py` without dealing with git merges or re-cloning, simply pull the raw file directly:

```bash
wget -O run_inference.py https://raw.githubusercontent.com/LonghornNeurotech/Edge_EMG_Classification/main/run_inference.py
```
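The core of a per-sample latency benchmark like the one in `run_inference.py` can be sketched as below. This is an assumption about the script's structure, not its actual contents; the `onnxruntime` session shown in the docstring (and the input name `"input"`) are likewise assumptions, so a stand-in callable is used to keep the sketch self-contained.

```python
import time

def benchmark(run_fn, samples):
    """Time run_fn over each sample and report total and average latency.

    run_fn: callable taking one sample. In the real script this would
    wrap an onnxruntime InferenceSession, roughly:
        session = onnxruntime.InferenceSession("emg_mlp_model_quantized.onnx")
        run_fn = lambda x: session.run(None, {"input": x})
    ("input" is an assumed tensor name -- check session.get_inputs()).
    """
    start = time.perf_counter()
    outputs = [run_fn(s) for s in samples]       # run every test sample
    total = time.perf_counter() - start          # total wall-clock seconds
    avg_ms = 1000.0 * total / len(samples)       # average ms per sample
    return outputs, total, avg_ms

# Stand-in model for demonstration (the real script loads the .onnx file)
dummy_model = lambda x: sum(x)
outputs, total, avg_ms = benchmark(dummy_model, [[1.0, 2.0]] * 100)
print(f"total {total:.4f} s, avg {avg_ms:.4f} ms/sample")
```

Averaging over the whole test set, as the table below does, smooths out per-call timing jitter on the Pi.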
Running inference on the test set (4,497 samples) yielded the following performance metrics:
| Model Version | Accuracy | Total Time | Avg Latency/Sample |
|---|---|---|---|
| FP32 Baseline (`emg_mlp_model.onnx`) | 90.50% | 4.68 seconds | 1.04 ms |
| INT8 Quantized (`emg_mlp_model_quantized.onnx`) | 90.37% | 1.94 seconds | 0.43 ms |
Static 8-bit quantization achieved a roughly 2.4x speedup (cutting average latency from 1.04 ms to 0.43 ms) with a negligible accuracy drop of only 0.13 percentage points.
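The quoted speedup and accuracy drop follow directly from the measured numbers:

```python
# Measured per-sample latencies and accuracies from the benchmark above
fp32_ms, int8_ms = 1.04, 0.43
fp32_acc, int8_acc = 90.50, 90.37

speedup = fp32_ms / int8_ms       # ~2.42x
acc_drop = fp32_acc - int8_acc    # 0.13 percentage points
print(f"{speedup:.1f}x speedup, {acc_drop:.2f} pp accuracy drop")
```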