An innovative project bridging accessibility gaps by converting speech and text into Braille.
Our system uses machine learning for real-time speech-to-text conversion and Optical Character Recognition (OCR) for printed text. An Arduino UNO R4 WiFi microcontroller drives six solenoids, one per dot of a Braille cell, to produce tactile feedback. This gives blind users an effective tool for accessing written materials independently.
The speech-to-text model combines Convolutional Neural Networks (CNNs) for acoustic feature extraction with Recurrent Neural Networks (RNNs) built from Gated Recurrent Units (GRUs) for sequence modeling.
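To make the GRU half of the architecture concrete, a single GRU step can be sketched in NumPy. The weight names, dimensions, and random initialization here are purely illustrative, not the project's actual model:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU update: the gates decide how much of the previous
    hidden state h to keep versus overwrite with new content."""
    z = sigmoid(Wz @ x + Uz @ h)               # update gate
    r = sigmoid(Wr @ x + Ur @ h)               # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))   # candidate state
    return (1 - z) * h + z * h_tilde

# Toy dimensions: 4-dim acoustic feature vector, 3-dim hidden state.
rng = np.random.default_rng(0)
dims = {"i": 4, "h": 3}
W = {k: rng.standard_normal((dims["h"], dims["i"])) for k in "zrh"}
U = {k: rng.standard_normal((dims["h"], dims["h"])) for k in "zrh"}

h = np.zeros(dims["h"])
for x in rng.standard_normal((5, dims["i"])):  # 5 time steps
    h = gru_step(x, h, W["z"], U["z"], W["r"], U["r"], W["h"], U["h"])
print(h.shape)  # (3,)
```

In the real model these steps would be stacked over the CNN's feature maps for each audio frame; this sketch only shows the gating mechanism itself.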
The hardware system is built around the Arduino UNO R4 WiFi and six solenoids that raise and lower the dots of a single Braille cell.
Text is converted into Braille patterns using a predefined Braille dictionary.
Braille patterns are mapped to their binary equivalents and sent to the device over serial communication.
Speech is first converted to text, then mapped to Braille. The Braille representation is converted into binary bits and sent to the Arduino for tactile feedback using solenoids.
Images are processed through OCR to extract text. The extracted text is converted to Braille, mapped to binary, and transmitted to the Arduino for tactile feedback using solenoids.
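Both pipelines end the same way: recognized text is encoded into Braille cells and pushed to the Arduino. A host-side sketch of that final stage is below. The port name, baud rate, and one-byte-per-cell framing are assumptions rather than the project's documented protocol, and the abbreviated dot dictionary is illustrative; the transmission step uses the third-party pyserial package:

```python
BRAILLE_DOTS = {"a": {1}, "b": {1, 2}, "c": {1, 4}}  # abbreviated demo map

def encode(text):
    """Convert text to one 6-bit Braille cell per character,
    skipping characters the demo dictionary does not cover."""
    frame = bytearray()
    for ch in text.lower():
        dots = BRAILLE_DOTS.get(ch)
        if dots is None:
            continue
        frame.append(sum(1 << (d - 1) for d in dots))
    return bytes(frame)

def send(frame, port="/dev/ttyACM0", baud=9600):
    """Push the encoded cells to the Arduino over USB serial
    (assumed port name and baud rate; requires pyserial)."""
    import serial
    with serial.Serial(port, baud, timeout=1) as link:
        link.write(frame)

# OCR or speech recognition would produce `text`; here it is fixed.
frame = encode("cab")
print(list(frame))  # [9, 1, 3]
```

With a board attached, `send(frame)` would transmit the cells; the firmware then energizes one solenoid per set bit.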
The project achieved strong results in testing.
Future work includes integrating a mobile application, improving the hardware for better portability, and implementing real-time camera OCR.
Zaid Osama Saif
Email: z.saif1@gju.edu.jo
Email: zaidsaif1me@gmail.com
LinkedIn: linkedin.com/in/zaidsaif/