NeuroPreter is a sign-language interpreter powered by neural-network gesture classification. The product is designed to facilitate conversation between people who are hearing-impaired and people who can hear.

Developed as part of a machine learning class, NeuroPreter uses Wekinator and openFrameworks to classify sign gestures. A video camera captures a live feed of the person signing in American Sign Language (ASL). A Processing script then converts the classified gestures into audio output to mimic conversational speech. In the other direction, a JavaScript-enabled browser converts the hearing user's speech to text for the hearing-impaired user to read.
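The signing-to-speech half of the pipeline can be sketched in code. Wekinator streams its classifier output as OSC messages over UDP (by default to port 12000, with the address "/wek/outputs"); the snippet below is a minimal, hedged sketch of a receiver that parses such a message and maps the per-class outputs to a gesture label. The gesture label list and the argmax mapping are illustrative assumptions, not the project's actual configuration, and a real sketch would hand the label to a TTS engine rather than print it.

```python
import socket
import struct

# Hypothetical gesture vocabulary, in the order the classifier outputs them.
GESTURES = ["hello", "thank you", "yes", "no"]

def _read_padded_string(data, offset):
    """Read a null-terminated OSC string, padded to a multiple of 4 bytes."""
    end = data.index(b"\x00", offset)
    s = data[offset:end].decode("ascii")
    offset = end + 1
    offset += (-offset) % 4  # skip padding nulls
    return s, offset

def parse_osc_floats(data):
    """Parse a simple OSC message whose arguments are all float32."""
    address, offset = _read_padded_string(data, 0)
    typetags, offset = _read_padded_string(data, offset)
    values = []
    for tag in typetags.lstrip(","):
        if tag == "f":
            (v,) = struct.unpack_from(">f", data, offset)
            values.append(v)
            offset += 4
    return address, values

def pick_gesture(outputs, labels=GESTURES):
    """Map per-class outputs to the most likely gesture label (argmax)."""
    return labels[max(range(len(outputs)), key=outputs.__getitem__)]

def listen(port=12000):
    """Receive Wekinator output over UDP and report the recognized gesture.

    A real interpreter would pass the label to a text-to-speech engine
    here instead of printing it.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("127.0.0.1", port))
    while True:
        data, _ = sock.recvfrom(4096)
        address, outputs = parse_osc_floats(data)
        if address == "/wek/outputs" and outputs:
            print(pick_gesture(outputs))
```

The parser handles only the float-argument messages this pipeline needs; a production receiver would use a full OSC library instead of hand-rolling the wire format.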

NeuroPreter is set up as a communication point where hearing-impaired and hearing people can come and talk with each other one on one, much like in a coffee shop. The product lets hearing people communicate and develop closer relationships with people who are hearing-impaired without having to learn ASL.

The project was a great way to apply our machine learning tools to universal design. One of the main lessons was the importance of higher fidelity when prototyping this project, since the strength of the concept depended on how well two people could communicate using the prototype. As speech-to-text (STT) and text-to-speech (TTS) systems improve, this product could readily find real-world use.