Want to take your sign language model a little further?
In this video, you'll learn how to leverage action detection to do so!
You'll use a keypoint detection model to build sequences of keypoints, which are then passed to an action detection model to decode sign language! As part of the model-building process, you'll use TensorFlow and Keras to build a deep neural network with LSTM layers that handles those keypoint sequences.
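If you're curious what that keypoint step looks like in code, here's a rough sketch using MediaPipe Holistic. It isn't the exact code from the video; the helper name and the image file are just illustrative stand-ins for frames you'd grab from your webcam:

```python
import cv2
import numpy as np
import mediapipe as mp

mp_holistic = mp.solutions.holistic

def extract_keypoints(results):
    # Flatten pose (33 x 4), face (468 x 3) and both hands (21 x 3 each)
    # into a single 1662-value vector, zero-filling anything not detected.
    pose = (np.array([[lm.x, lm.y, lm.z, lm.visibility]
                      for lm in results.pose_landmarks.landmark]).flatten()
            if results.pose_landmarks else np.zeros(33 * 4))
    face = (np.array([[lm.x, lm.y, lm.z]
                      for lm in results.face_landmarks.landmark]).flatten()
            if results.face_landmarks else np.zeros(468 * 3))
    lh = (np.array([[lm.x, lm.y, lm.z]
                    for lm in results.left_hand_landmarks.landmark]).flatten()
          if results.left_hand_landmarks else np.zeros(21 * 3))
    rh = (np.array([[lm.x, lm.y, lm.z]
                    for lm in results.right_hand_landmarks.landmark]).flatten()
          if results.right_hand_landmarks else np.zeros(21 * 3))
    return np.concatenate([pose, face, lh, rh])

with mp_holistic.Holistic(min_detection_confidence=0.5,
                          min_tracking_confidence=0.5) as holistic:
    frame = cv2.imread("frame.jpg")  # stand-in for one BGR webcam frame
    results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    keypoints = extract_keypoints(results)  # stack these per frame to form a sequence
```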
In this video you'll learn how to:
1. Extract MediaPipe Holistic Keypoints
2. Build a sign language model using action detection powered by LSTM layers (sketched below)
3. Predict sign language in real time using video sequences
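To give you a feel for the model side before you dive in, here's a minimal Keras sketch of an LSTM network over keypoint sequences. The sequence length, keypoint count and sign classes below are assumptions for illustration; the real values get set up step by step in the tutorial:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

SEQUENCE_LENGTH = 30   # frames per sign (assumed)
NUM_KEYPOINTS = 1662   # flattened holistic keypoints per frame (assumed)
NUM_SIGNS = 3          # e.g. a handful of signs to start with (assumed)

model = Sequential([
    # LSTM layers consume the sequence of keypoint vectors frame by frame.
    LSTM(64, return_sequences=True, activation='relu',
         input_shape=(SEQUENCE_LENGTH, NUM_KEYPOINTS)),
    LSTM(128, return_sequences=True, activation='relu'),
    LSTM(64, return_sequences=False, activation='relu'),
    Dense(64, activation='relu'),
    Dense(32, activation='relu'),
    Dense(NUM_SIGNS, activation='softmax'),  # one probability per sign
])
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['categorical_accuracy'])

# X: (num_sequences, SEQUENCE_LENGTH, NUM_KEYPOINTS), y: one-hot sign labels
# model.fit(X, y, epochs=200)
```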
Get the code:
https://github.com/nicknochnack/ActionDetectionforSignLanguage
Chapters
0:00 - Start
0:38 - Gameplan
1:38 - How it Works
2:13 - Tutorial Start
3:53 - 1. Install and Import Dependencies
8:17 - 2. Detect Face, Hand and Pose Landmarks
40:29 - 3. Extract Keypoints
57:35 - 4. Setup Folders for Data Collection
1:06:00 - 5. Collect Keypoint Sequences
1:25:17 - 6. Preprocess Data and Create Labels
1:34:38 - 7. Build and Train an LSTM Deep Learning Model
1:50:11 - 8. Make Sign Language Predictions
1:52:40 - 9. Save Model Weights
1:53:45 - 10. Evaluation using a Confusion Matrix
1:57:40 - 11. Test in Real Time
2:20:46 - BONUS: Improving Performance
2:26:52 - Wrap Up
Oh, and don't forget to connect with me!
LinkedIn: https://bit.ly/324Epgo
Facebook: https://bit.ly/3mB1sZD
GitHub: https://bit.ly/3mDJllD
Patreon: https://bit.ly/2OCn3UW
Join the Discussion on Discord: https://bit.ly/3dQiZsV
Happy coding!
Nick
P.s. Let me know how you go and drop a comment if you need a hand!