
# Final Project Proposal: Vision Voice

## The Concept

Vision Voice is a wearable device that helps Deaf and mute people communicate with others. Most people don't understand sign language, so this device translates American Sign Language (ASL) gestures into speech you can hear and text you can see, in real time.

*AI-generated image of Vision Voice (image generated with Gemini).*

The "Why" Communication is a basic human right. Most current solutions use cameras, so the user has to stay in front of a laptop or phone. Vision Voice is a small wearable that works anywhere, letting the user communicate naturally with their hands.


Prior Art & Research What has been done before?

Some similar ideas already exist, which give inspiration and help improve the design:

How my project is different: My glove does all processing on the wrist using the XIAO ESP32-S3. It doesn’t need a phone or laptop, so it works anywhere on its own.

Digital Prototyping & Simulation Before making the actual glove, I will test the logic and connections using an online circuit simulators.

Tools: I will use Wokwi and Tinkercad.

Goal: To check the pin connections and confirm that each component works.

Benefit: This creates a "Digital Twin" of the project, so I can test and debug the translation code in software. It lowers the risk of damaging parts during assembly. A simple first test is the I2C scanner sketch below.
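Since the OLED and the MPU6050 both sit on the I2C bus, a minimal I2C scanner is an easy way to confirm the pins are right in the simulation before writing any translation logic. This is just a sketch of the idea (typical addresses are 0x3C for the OLED and 0x68 for the MPU6050); my actual Wokwi wiring may differ.

```cpp
// Minimal I2C scanner to verify that the OLED (typically 0x3C) and the
// MPU6050 (typically 0x68) respond on the bus in the Wokwi simulation.
#include <Wire.h>

void setup() {
  Serial.begin(115200);
  Wire.begin();  // default SDA/SCL pins of the XIAO ESP32-S3

  Serial.println("Scanning I2C bus...");
  for (uint8_t addr = 1; addr < 127; addr++) {
    Wire.beginTransmission(addr);
    if (Wire.endTransmission() == 0) {
      Serial.print("Device found at 0x");
      Serial.println(addr, HEX);
    }
  }
  Serial.println("Scan done.");
}

void loop() {
  // Nothing to do; the scan runs once in setup().
}
```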

## Components

| Component | Function |
| --- | --- |
| XIAO ESP32-S3 | The brain; controls everything. |
| Flex Sensor (4.5") | Finger sensing; detects bending. |
| MPU6050 | Hand tilt; detects rotation and gravity. |
| OLED Display (0.96") | Visual output; shows text. |
| DFPlayer Mini | Audio player; plays MP3 files. |
| Speaker (3 W, 4 Ω) | Sound output; voices the translation. |
| Push Button | Controls (calibrate, speak). |
| LiPo Battery | Power source. |
| Slide Switch | On/off. |
| Resistors | Circuit logic. |
| MicroSD Card | Memory; stores MP3 files. |
| Velcro Strap | Holds the case. |

Here is the link to the details of the components I will be using for my final project.
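To show how the audio side of the component list fits together, here is a rough sketch of driving the DFPlayer Mini (which plays MP3 files from the microSD card through the speaker) using the DFRobotDFPlayerMini library. The serial pins and the track numbering are placeholder assumptions for illustration, not my final wiring.

```cpp
// Rough sketch of the audio output: the DFPlayer Mini reads MP3 files from
// the microSD card and drives the 3 W speaker. Pin numbers are placeholders.
#include <HardwareSerial.h>
#include <DFRobotDFPlayerMini.h>

HardwareSerial dfSerial(1);           // use UART1 of the ESP32-S3
DFRobotDFPlayerMini player;

const int DF_RX_PIN = 44;             // assumed pin: ESP32 RX <- DFPlayer TX
const int DF_TX_PIN = 43;             // assumed pin: ESP32 TX -> DFPlayer RX

void setup() {
  Serial.begin(115200);
  dfSerial.begin(9600, SERIAL_8N1, DF_RX_PIN, DF_TX_PIN);

  if (!player.begin(dfSerial)) {
    Serial.println("DFPlayer not found - check wiring and SD card.");
    while (true) { delay(100); }
  }
  player.volume(20);                  // volume range is 0-30
  player.play(1);                     // play 0001.mp3, e.g. a recorded "HELLO"
}

void loop() {
  // In the real glove, play() would be called when a gesture is recognized.
}
```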

## Visualize

To clearly visualize my final project, I used Canva to jot down the ideas and features of my project.


Here is the link to my Canva board for reference.

## Circuit


Here is the link to the detailed connections.
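To keep the connections in one place while I prototype, I can collect the pin assignments in a single sketch. The pin numbers below are placeholders for illustration only; the real assignments come from the detailed connection diagram linked above.

```cpp
// Placeholder pin map for the Vision Voice circuit (illustration only; the
// real assignments follow the detailed connection diagram).
#include <Wire.h>

const int FLEX_PINS[4]         = {1, 2, 3, 4};  // flex sensors via voltage dividers
const int BUTTON_CALIBRATE_PIN = 7;             // calibrate button (assumed pin)
const int BUTTON_SPEAK_PIN     = 8;             // speak button (assumed pin)
// The OLED (0x3C) and MPU6050 (0x68) share the I2C bus on the default SDA/SCL pins.
// The DFPlayer Mini uses UART1; its pins are set in the audio sketch.

void setup() {
  for (int i = 0; i < 4; i++) pinMode(FLEX_PINS[i], INPUT);
  pinMode(BUTTON_CALIBRATE_PIN, INPUT_PULLUP);  // assumes buttons wired to ground
  pinMode(BUTTON_SPEAK_PIN, INPUT_PULLUP);
  Wire.begin();                                 // start the shared I2C bus
}

void loop() {}
```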

*So, that’s it! I want to learn and build my project, Vision Voice, this year. Good luck to me!*

## Progress so far ☻☻☻

I have been working on the online simulation of Vision Voice, and I have made some progress. The platform I used to build the simulation is Wokwi; I chose it because it has almost all of the components I need.

Because Wokwi doesn't have a flex sensor part, I used an LDR sensor module to mimic it. I researched the kind of data a flex sensor produces and found that, like an LDR in a voltage divider, it gives an analog value that changes with its resistance, so an LDR can provide the same kind of analog reading.
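Since both parts are just variable resistors in a voltage divider, the reading comes from analogRead(). A minimal test along these lines shows the raw value changing as the sensor is "bent" (the pin number is only a placeholder for whatever the LDR module is wired to in the simulation):

```cpp
// Read the LDR module (standing in for one flex sensor) and print the raw
// analog value, so I can see how the reading changes when I "bend" it.
const int FLEX_PIN = 1;   // placeholder: analog pin the LDR / flex divider uses

void setup() {
  Serial.begin(115200);
}

void loop() {
  int raw = analogRead(FLEX_PIN);          // 0-4095 on the ESP32-S3
  int bend = map(raw, 0, 4095, 0, 100);    // rough "percent bent" scale
  Serial.print("raw=");
  Serial.print(raw);
  Serial.print("  bend=");
  Serial.println(bend);
  delay(200);
}
```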


This is the circuit diagram I made for the simulation, using my original circuit as a reference.

I used Claude AI to write and debug the program for my simulation; here is the link to view the chat history.

Here is the link to the simulation in Wokwi.

Right now, I can change the values of the LDR sensors and the MPU6050 so that the display shows a specific word, such as "HELLO". A simplified version of that logic is shown below.
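The idea is simple thresholding: if the sensor readings fall inside the ranges chosen for a gesture, the matching word appears on the OLED. Here is a simplified sketch of what the simulation does; the thresholds, pin number, and libraries (Adafruit SSD1306 and Adafruit MPU6050) are the ones I am experimenting with in Wokwi, not final values.

```cpp
// Simplified gesture check from the Wokwi simulation: if the LDR (flex stand-in)
// reads "bent" and the MPU6050 tilt is roughly level, show "HELLO" on the OLED.
#include <Wire.h>
#include <Adafruit_GFX.h>
#include <Adafruit_SSD1306.h>
#include <Adafruit_MPU6050.h>
#include <Adafruit_Sensor.h>

Adafruit_SSD1306 display(128, 64, &Wire, -1);
Adafruit_MPU6050 mpu;
const int FLEX_PIN = 1;               // LDR module standing in for a flex sensor

void setup() {
  Wire.begin();
  display.begin(SSD1306_SWITCHCAPVCC, 0x3C);
  mpu.begin();
}

void loop() {
  int flex = analogRead(FLEX_PIN);    // 0-4095

  sensors_event_t a, g, temp;
  mpu.getEvent(&a, &g, &temp);        // read acceleration (used for tilt)

  display.clearDisplay();
  display.setTextSize(2);
  display.setTextColor(SSD1306_WHITE);
  display.setCursor(0, 20);

  // Example thresholds: "finger bent" and hand held roughly flat.
  if (flex > 3000 && fabs(a.acceleration.x) < 3.0) {
    display.println("HELLO");
  } else {
    display.println("...");
  }
  display.display();
  delay(200);
}
```

In Wokwi, I drag the LDR and MPU6050 sliders until both conditions are true, and the word appears on the OLED.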
