DannyM125/ASL_Detection

ASL Detection

The objective of this project is to detect and recognize American Sign Language (ASL) signs in real time using a webcam. This is accomplished with computer vision and machine learning techniques, using libraries such as NumPy, TensorFlow, MediaPipe, and OpenCV.

During runtime, the webcam captures live video, which is processed with OpenCV. OpenCV provides the tools needed to preprocess and analyze each frame, enabling ASL signs to be detected and recognized in real time. Once a sign is detected, the corresponding letter is displayed on the screen.
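
As a rough sketch of that capture-and-display loop, assuming the standard OpenCV API (the classifier call is stubbed out, and the placeholder letter is my own illustration, not the project's actual inference code):

```python
LABELS = ["A", "B", "C"]  # class order must match the trained model's labels


def label_for(index, labels=LABELS):
    """Map the model's predicted class index to its letter."""
    return labels[index]


if __name__ == "__main__":
    import cv2  # imported here so the helper above stays importable without OpenCV

    cap = cv2.VideoCapture(0)  # open the default webcam
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # The real project would run the classifier on the frame here;
        # this sketch just overlays a placeholder letter.
        cv2.putText(frame, label_for(0), (10, 40),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.2, (0, 255, 0), 2)
        cv2.imshow("ASL Detection", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):  # quit on 'q'
            break
    cap.release()
    cv2.destroyAllWindows()
```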

The model training process is carried out using Teachable Machine's image model maker, a web-based tool for creating custom machine learning models. My model was trained to identify ASL signs using a dataset of images representing the signs A, B, and C.
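
Teachable Machine's Keras export produces a model file plus a labels.txt whose lines look like `0 A`. A minimal loading sketch, assuming the default export filenames (`keras_model.h5`, `labels.txt`):

```python
def parse_teachable_labels(text):
    """Parse a Teachable Machine labels.txt, e.g. '0 A\n1 B\n2 C'."""
    labels = []
    for line in text.strip().splitlines():
        # Each line is "<index> <label>"; keep only the label part.
        labels.append(line.split(maxsplit=1)[1])
    return labels


if __name__ == "__main__":
    # TensorFlow is only needed at runtime, so it is imported here.
    from tensorflow.keras.models import load_model

    model = load_model("Model/keras_model.h5")
    with open("Model/labels.txt") as f:
        labels = parse_teachable_labels(f.read())
    print(labels)  # e.g. ['A', 'B', 'C']
```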

NOTES:

I trained my OWN model! I put a TensorFlow Keras model and its labels in a folder named "Model" in my actual project. Also, keep in mind that the algorithm for saving files to the folder for each letter during the data collection stage was written for Windows, so it needs some tweaking to work on other operating systems.
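
One way to make those per-letter save paths portable is to build them with `os.path.join` instead of hard-coded Windows separators. A small sketch (the `Data` base folder and timestamp filename are my assumptions, not the project's exact scheme):

```python
import os
import time


def save_path(base_dir, letter, ext=".jpg"):
    """Build an OS-independent path like Data/A/1699999999.0.jpg,
    creating the per-letter folder if it does not exist yet."""
    folder = os.path.join(base_dir, letter)
    os.makedirs(folder, exist_ok=True)
    return os.path.join(folder, f"{time.time()}{ext}")
```

During data collection, each captured frame would then be written with something like `cv2.imwrite(save_path("Data", "A"), img)` regardless of the operating system.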

Train your own model to improve its accuracy and versatility.

Demo video: demo1.mp4

Using cvzone

The cvzone library lets us draw bounding boxes around the detected hands. It also includes many other conveniences that make data collection more efficient and reliable.
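
A sketch of how cvzone's `HandDetector` could supply those bounding boxes; the 20-pixel padding and single-hand limit are my assumptions for illustration:

```python
def padded_bbox(x, y, w, h, pad, img_w, img_h):
    """Expand a bounding box by `pad` pixels on each side,
    clamped to the image borders."""
    x1 = max(0, x - pad)
    y1 = max(0, y - pad)
    x2 = min(img_w, x + w + pad)
    y2 = min(img_h, y + h + pad)
    return x1, y1, x2, y2


if __name__ == "__main__":
    import cv2
    from cvzone.HandTrackingModule import HandDetector

    cap = cv2.VideoCapture(0)
    detector = HandDetector(maxHands=1)
    while True:
        ok, img = cap.read()
        if not ok:
            break
        # findHands draws the landmarks and bounding box onto img for us.
        hands, img = detector.findHands(img)
        if hands:
            x, y, w, h = hands[0]["bbox"]
            x1, y1, x2, y2 = padded_bbox(x, y, w, h, 20,
                                         img.shape[1], img.shape[0])
            crop = img[y1:y2, x1:x2]  # region that would feed the classifier
        cv2.imshow("ASL Detection", img)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
```

Clamping the padded box matters because a hand near the frame edge would otherwise produce negative slice indices and a garbled crop.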

