mahekupadhye31/Deep-learning-enabled-smart-glove
This project is a smart glove for real-time sign language translation. Equipped with flex sensors and an MPU-6050 motion module, it captures hand gestures and translates them into text and speech using a Bi-LSTM network.
# Gesture Vocalizer: Bridging Communication Through Technology
## Overview
The Gesture Vocalizer project empowers individuals with speech and hearing impairments by translating hand gestures into speech or text output.
By combining flex sensors, accelerometers, and deep learning models, this wearable system bridges the communication gap, enabling real-time gesture-to-voice translation.
## Proposed Design
The design phase focuses on translating the identified problem into a functional, wearable prototype that captures, processes, and vocalizes gestures.
### Hardware Design
1. Arduino Mega Microcontroller
- Acts as the computational hub with multiple I/O pins.
- Handles real-time sensor integration and data transmission.
2. Flex Sensors
- Five sensors attached to a glove detect finger bending through resistance changes.
- Provide continuous analog data reflecting hand posture.
3. MPU-6050 Accelerometer
- Measures acceleration and angular velocity to capture spatial orientation.
- Enhances motion precision and dynamic gesture recognition.
### Hardware Design Workflow
- Sensor Integration: Connect flex sensors to Arduino analog pins.
- MPU-6050 Integration: Interface via I2C to capture acceleration and rotation.
- Power Supply: Powered through USB to ensure stable operation.
- Encapsulation: Mounted on a glove to maintain sensor stability and comfort.
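On the host side, each serial line from the glove can be parsed into named fields. The sketch below is a minimal illustration, not the project's actual parsing code; the field order (five flex values followed by ax, ay, az, gx, gy, gz) is an assumption based on the firmware shown later in this README.

```python
# Hypothetical field order: 5 flex readings, then accel and gyro axes.
FIELDS = ["flex1", "flex2", "flex3", "flex4", "flex5",
          "ax", "ay", "az", "gx", "gy", "gz"]

def parse_reading(line: str) -> dict:
    """Convert one tab-separated serial line into a dict of sensor readings."""
    values = [int(v) for v in line.strip().split("\t")]
    if len(values) != len(FIELDS):
        raise ValueError(f"expected {len(FIELDS)} values, got {len(values)}")
    return dict(zip(FIELDS, values))

# Example: one line as the firmware might print it.
sample = "612\t589\t578\t560\t533\t112\t85\t75\t10\t-3\t4"
reading = parse_reading(sample)
print(reading["flex1"], reading["az"])  # 612 75
```

In practice these lines would arrive over a serial connection (e.g. via pyserial) rather than from a string literal.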
### Software Design
1. Neural Network
- Implements a Bi-directional LSTM for gesture recognition.
- Processes sequences of sensor readings to identify dynamic and static gestures.
- Trained using a custom dataset for enhanced accuracy.
2. Custom Dataset
- Captures 3-second intervals of sensor data per gesture.
- Combines flex sensor and MPU-6050 outputs into a single input vector.
- Stored in CSV format for efficient model training and testing.
3. User Interface
- A simple Python-based UI displays recognized gestures in real time.
- Enables recording of new gesture data and live model inference.
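The dataset section above describes 3-second recordings that combine flex and IMU readings into one input vector. A minimal NumPy sketch of how such a stream could be cut into the `(batch, timesteps, features)` windows a Bi-LSTM consumes; the 10 Hz rate (from the firmware's 100 ms loop delay), 30 samples per window, and 11 features are assumptions, not figures from the project:

```python
import numpy as np

# Assumptions: ~10 Hz sampling, so a 3-second gesture window holds
# 30 samples; each sample has 11 features (5 flex + 6 IMU values).
SAMPLES_PER_WINDOW = 30   # 3 s * 10 Hz
NUM_FEATURES = 11

def make_windows(stream: np.ndarray) -> np.ndarray:
    """Cut a (T, 11) stream of readings into (N, 30, 11) windows,
    the input shape a Bi-LSTM layer expects."""
    n = stream.shape[0] // SAMPLES_PER_WINDOW
    return stream[: n * SAMPLES_PER_WINDOW].reshape(n, SAMPLES_PER_WINDOW, NUM_FEATURES)

# Example: 5 recorded gestures -> 150 raw samples.
raw = np.zeros((150, NUM_FEATURES))
windows = make_windows(raw)
print(windows.shape)  # (5, 30, 11)
```

A Keras `Bidirectional(LSTM(...))` layer, for instance, would take tensors of exactly this shape as training input.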
## Implementation
### Algorithm Overview

**Initialization**
- Define analog pins for the flex sensors (A0–A4).
- Initialize the MPU-6050 and set up I2C communication.

**Setup Function**
- Begin serial communication and initialize all sensors.
- Configure pin modes for the flex sensors.

**Loop Function**
- Read analog flex sensor data.
- Capture accelerometer and gyroscope readings.
- Print all data to the serial monitor for real-time observation.
```cpp
#include <Wire.h>
#include <MPU6050.h>

MPU6050 mpu;
int flexPins[5] = {A0, A1, A2, A3, A4};

void setup() {
  Serial.begin(9600);
  Wire.begin();
  mpu.initialize();
  for (int i = 0; i < 5; i++) pinMode(flexPins[i], INPUT);
}

void loop() {
  // Read and print the five flex sensor values.
  for (int i = 0; i < 5; i++) {
    int flexVal = analogRead(flexPins[i]);
    Serial.print(flexVal);
    Serial.print("\t");
  }
  // Read and print accelerometer and gyroscope values.
  int16_t ax, ay, az, gx, gy, gz;
  mpu.getMotion6(&ax, &ay, &az, &gx, &gy, &gz);
  Serial.print(ax); Serial.print("\t");
  Serial.print(ay); Serial.print("\t");
  Serial.print(az); Serial.print("\t");
  Serial.print(gx); Serial.print("\t");
  Serial.print(gy); Serial.print("\t");
  Serial.println(gz);
  delay(100); // ~10 Hz sampling
}
```

## Results and Readings
| Gesture | Flex1 | Flex2 | Flex3 | Flex4 | Flex5 | Ax | Ay | Az | Predicted Output |
|---|---|---|---|---|---|---|---|---|---|
| Hello | 612 | 589 | 578 | 560 | 533 | 112 | 85 | 75 | Hello |
| Thank You | 604 | 591 | 576 | 569 | 540 | 120 | 90 | 80 | Thank You |
| Help | 620 | 580 | 570 | 562 | 530 | 118 | 86 | 78 | Help |
*Figure: Model performance comparison across algorithms.*
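At inference time, the model's class-probability vector must be mapped back to a gesture label before it can be displayed or vocalized. A minimal illustration using the labels from the results table above; the probability values and the `predict_label` helper are hypothetical:

```python
# Labels mirror the results table above; probabilities are made up.
LABELS = ["Hello", "Thank You", "Help"]

def predict_label(probs: list[float]) -> str:
    """Return the gesture label with the highest predicted probability."""
    best = max(range(len(probs)), key=lambda i: probs[i])
    return LABELS[best]

print(predict_label([0.05, 0.85, 0.10]))  # Thank You
```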
## Tech Stack
| Category | Technologies |
|---|---|
| Hardware | Arduino Mega, Flex Sensors, MPU-6050 |
| Programming | Python, C++ |
| Machine Learning | TensorFlow, Bi-LSTM |
| Data Handling | Pandas, NumPy |
| UI / Visualization | HTML/CSS |
## Installation & Setup
1. **Clone the repository**

```bash
git clone https://github.com/username/gesture-vocalizer.git
cd gesture-vocalizer
```

2. **Install Python dependencies**

```bash
pip install -r requirements.txt
```

3. **Upload the Arduino sketch**

Use the Arduino IDE to upload the `.ino` file to your Arduino Mega board.

4. **Run the application**

```bash
python app.py
```

5. **Interact**

Perform gestures while wearing the glove; recognized gestures appear in the UI and are vocalized.
## Performance Summary

| Metric | Value |
|---|---|
| Model accuracy | 84.59% |
| Detection latency | < 0.5 seconds |
| Dataset scale | 3-second windows × 5 sensors × N gestures |
## Achievements
- Our research was published in the Journal of Electrical Systems, Vol. 20, No. 10s (2024).
- We obtained a copyright for our novel dataset.
## Contributors
- Mahek Upadhye
- Aasmi Thadhani
- Urav Dalal
- Shreya Shah





