3D-Hand-Tracking

This project is being developed under QMIND as a design team within the DAIR division. DAIR @ QMIND

We aim to develop a TensorFlow model that predicts the 3D shape and pose of two interacting hands, even under heavy hand-to-hand and hand-to-object contact, from a single monocular RGB image. This project is still under development; for anything regarding this project, see the project roadmap.

Preliminary Results

Below we present our preliminary results: a U-Net implementation that predicts segmentation masks for images from the RHD dataset.

Pictured from left to right: input image, ground-truth segmentation mask, model prediction.

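The U-Net behind these results can be sketched in TensorFlow roughly as follows. This is a minimal illustration of the architecture family, not the project's exact network; layer counts, filter sizes, and the two-class (hand vs. background) output here are illustrative assumptions:

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_unet(input_shape=(128, 128, 3), num_classes=2):
    """Minimal U-Net sketch: a small convolutional encoder, a bottleneck,
    and a decoder with skip connections; outputs per-pixel class logits."""
    inputs = tf.keras.Input(shape=input_shape)

    # Encoder: two downsampling stages, keeping feature maps for skips.
    c1 = layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
    p1 = layers.MaxPooling2D()(c1)
    c2 = layers.Conv2D(32, 3, padding="same", activation="relu")(p1)
    p2 = layers.MaxPooling2D()(c2)

    # Bottleneck at the lowest spatial resolution.
    b = layers.Conv2D(64, 3, padding="same", activation="relu")(p2)

    # Decoder: upsample and concatenate the matching encoder features.
    u2 = layers.UpSampling2D()(b)
    u2 = layers.Concatenate()([u2, c2])
    d2 = layers.Conv2D(32, 3, padding="same", activation="relu")(u2)
    u1 = layers.UpSampling2D()(d2)
    u1 = layers.Concatenate()([u1, c1])
    d1 = layers.Conv2D(16, 3, padding="same", activation="relu")(u1)

    # Per-pixel logits; argmax over the channel axis yields the mask.
    outputs = layers.Conv2D(num_classes, 1)(d1)
    return tf.keras.Model(inputs, outputs)

model = build_unet()
mask_logits = model(tf.zeros([1, 128, 128, 3]))
print(mask_logits.shape)  # (1, 128, 128, 2)
```

The skip connections are what distinguish a U-Net from a plain encoder-decoder: they pass fine spatial detail from the encoder directly to the decoder, which is what makes the predicted mask boundaries sharp.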

Steps for Using

Clone the repo and run all code blocks in src/HandTracking.ipynb. Pay careful attention to the comments at the top of each code block, as some blocks are only meant to be run when using the project from within Google Colab.
