This project aims to detect and recognize American Sign Language (ASL) alphabet signs in images and real-time video using deep learning and computer vision techniques. It leverages the Ultralytics YOLOv8 object detection framework and Streamlit for building an interactive web application.
- Object detection and recognition of ASL alphabet signs
- Support for both image-based and real-time webcam-based detection
- User-friendly web interface for easy interaction
- Display of bounding boxes and class labels on detected signs
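Under the hood, detection amounts to loading trained weights and reading boxes, class labels, and confidences off each result. A minimal sketch using the Ultralytics API — the weights and image file names here are placeholders, not files shipped with this repo:

```python
# Sketch of single-image ASL sign detection with Ultralytics YOLOv8.
# "best.pt" and "hand_sign.jpg" are placeholder names, not repo files.

def format_detection(label: str, conf: float, xyxy) -> str:
    """Format one detection as a human-readable string."""
    x1, y1, x2, y2 = xyxy
    return f"{label} ({conf:.2f}) at ({x1:.0f}, {y1:.0f})-({x2:.0f}, {y2:.0f})"

def detect(weights: str = "best.pt", image: str = "hand_sign.jpg") -> None:
    """Run YOLOv8 inference on one image and print each detected sign."""
    from ultralytics import YOLO  # imported here so the sketch stays importable

    model = YOLO(weights)
    results = model(image)
    for result in results:
        for box in result.boxes:
            label = result.names[int(box.cls[0])]  # class name, e.g. "A"
            conf = float(box.conf[0])              # confidence score
            xyxy = box.xyxy[0].tolist()            # bounding-box corners
            print(format_detection(label, conf, xyxy))
```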
- Python 3.11
- Streamlit
- Ultralytics YOLO
- OpenCV
- PIL
- Docker (optional)
- Clone the repository: `git clone [email protected]:neevaiti/US_sign_language_vision.git`
- Install the required Python packages: `pip install -r requirements.txt`
- Run the Streamlit app: `streamlit run main.py`
- Access the application via the URL shown in the terminal.
- Select the desired input mode ("Image" or "Camera") from the sidebar.
- Follow the on-screen instructions for image-based or webcam-based detection.
Alternatively, you can run the application using Docker for easier setup and deployment. Here's how:
- Pull the Docker image from Docker Hub: `docker pull neevaiti/asl-detect-app:v1.0`
- Run the Docker container: `docker run -p 80:80 neevaiti/asl-detect-app:v1.0`
- Access the application by opening your web browser and navigating to http://localhost (the `-p 80:80` flag maps the container's port 80 to port 80 on your host).
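For reference, a Dockerfile along these lines could produce such an image. This is only an assumption about the build, since the published image's Dockerfile isn't shown here:

```dockerfile
# Hypothetical Dockerfile sketch serving the Streamlit app on port 80;
# the published neevaiti/asl-detect-app image may be built differently.
FROM python:3.11-slim

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .

EXPOSE 80
CMD ["streamlit", "run", "main.py", "--server.port", "80", "--server.address", "0.0.0.0"]
```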
- You can adjust the confidence threshold and IoU values in the code to trade detection precision against recall and speed.
- To modify the appearance of bounding boxes and class labels, change the colors and text properties in the `transform` and `draw_preds` methods of the `VideoTransformer` class.
This project is licensed under the MIT License.