Created during the MakeUofT Hackathon - Check out the project submission here!
The web app enables fluent, uninterrupted conversation between a person who communicates through sign language and people who do not understand it. In one direction, sli.ai converts sign language captured from the camera into text displayed to the hearing user. In the other direction, the user's speech is converted to text displayed to the person who communicates primarily through sign language.
The entire front end was built with VueJS, and speech recognition is handled by Chrome's Web Speech API. Two machine learning models perform the sign recognition: the first is a frozen, pre-trained convolutional neural network; the second was built with Microsoft's Custom Vision service, trained on images we captured and labeled manually.
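The speech-to-text direction can be sketched with the browser's Web Speech API roughly as follows. This is a minimal illustration, not the project's actual code: the function names (`finalTranscript`, `startListening`) and the `en-US` language setting are assumptions for the example.

```javascript
// Chrome exposes the recognizer as webkitSpeechRecognition; the window guard
// lets this file load outside a browser without throwing.
const SpeechRecognition =
  typeof window !== 'undefined'
    ? window.SpeechRecognition || window.webkitSpeechRecognition
    : undefined;

// Pure helper: join the finalized transcript pieces from a recognition event.
function finalTranscript(results) {
  let text = '';
  for (const result of results) {
    if (result.isFinal) text += result[0].transcript;
  }
  return text.trim();
}

// Start continuous recognition and report each finalized transcript
// to the caller (e.g. a Vue component that renders it on screen).
function startListening(onTranscript) {
  const recognition = new SpeechRecognition();
  recognition.continuous = true;     // keep listening across utterances
  recognition.interimResults = true; // stream partial results as the user speaks
  recognition.lang = 'en-US';        // assumed locale for this sketch
  recognition.onresult = (event) => onTranscript(finalTranscript(event.results));
  recognition.start();
  return recognition;
}
```

In a Vue component, `startListening` would typically be called on mount, with the callback writing the transcript into reactive state for display.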
- Andrew Drury - Developer - AndrewDrury
- Andy Wang - Developer - AndyWang99
- Harsh Gupta - Developer - harsh2204