Imagine a world where doors are no longer obstacles for the visually impaired. Our YOLOv8 project advances accessibility by using cutting-edge computer vision to detect doors.
The dataset was collected in New York and annotated through a combination of auto-annotation and manual enhancement.
Dataset Download: Download Dataset
- Install the required library:

```shell
pip install ultralytics
```
- Load the model:

```python
from ultralytics import YOLO

model = YOLO("yolov8s.pt")  # Load the pretrained model
```
- Training the model:

```python
model.train(data='/content/data.yaml', epochs=47, imgsz=1280, batch=8)
```
- Load the best model:

```python
model = YOLO("/content/runs/detect/train2/weights/best.pt")  # Load the best model
```
- Making predictions:

```python
res = model.predict("/content/test.png", save=True, conf=0.3)
```
- Visualizing results:

```python
import matplotlib.pyplot as plt
import matplotlib.image as mpimg

# Load the source and predicted images
image1 = mpimg.imread('/content/test.png')
image2 = mpimg.imread('/content/runs/detect/predict2/test.png')

# Plot the images side by side
plt.figure(figsize=(20, 20))
plt.subplot(1, 2, 1)
plt.imshow(image1)
plt.title('Source Image')
plt.subplot(1, 2, 2)
plt.imshow(image2)
plt.title('Predicted Image')
plt.tight_layout()
plt.show()
```
Here's a checklist of key points for the YOLOv8 door detection project:
- Data Annotation:
  - Auto-annotate the dataset using a cutting-edge solution.
  - Enhance annotations manually for improved accuracy.
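Each annotation ends up as one line per box in a YOLO-format `.txt` file: class id followed by center coordinates and size, normalized to the image dimensions. A minimal sketch of that conversion (the function name and the example pixel coordinates are illustrative, not part of the project):

```python
def to_yolo_line(class_id, x_min, y_min, x_max, y_max, img_w, img_h):
    """Convert a pixel-space box to a YOLO-format annotation line:
    'class x_center y_center width height', all normalized to [0, 1]."""
    x_c = (x_min + x_max) / 2 / img_w
    y_c = (y_min + y_max) / 2 / img_h
    w = (x_max - x_min) / img_w
    h = (y_max - y_min) / img_h
    return f"{class_id} {x_c:.6f} {y_c:.6f} {w:.6f} {h:.6f}"

# Example: a door occupying the left half of a 1280x1280 image (class 0)
print(to_yolo_line(0, 0, 0, 640, 1280, 1280, 1280))
# → 0 0.250000 0.500000 0.500000 1.000000
```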
- Model Training:
  - Load the pre-trained YOLOv8 model.
  - Train the model using the annotated dataset.
  - Evaluate model performance.
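Detection metrics such as precision and mAP rest on intersection-over-union (IoU) between predicted and ground-truth boxes. A minimal sketch of the core computation (the `(x_min, y_min, x_max, y_max)` box format is an assumption):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x_min, y_min, x_max, y_max)."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))  # overlap width
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))  # overlap height
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Two 10x10 boxes overlapping on half their width
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # 50 / (100 + 100 - 50) ≈ 0.333
```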
- Share Weights:
  - Save model weights after training.
  - Share trained model weights with the community.
- Share Data:
  - Provide access to the annotated dataset for transparency.
- Hugging Face Deployment:
  - Convert the model to ONNX.
  - Deploy the trained YOLOv8 model on Hugging Face.
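An ONNX export expects a fixed-size NCHW float tensor rather than a raw image. A dependency-light sketch of that input shaping with NumPy (the 1280 size mirrors the training `imgsz`; function names are hypothetical, and aspect-ratio-preserving resize is omitted to keep the sketch self-contained):

```python
import numpy as np

def letterbox(img, size=1280, pad_value=114):
    """Pad an HxWxC uint8 image into a size x size square (content top-left),
    so the exported model sees its fixed input shape."""
    h, w = img.shape[:2]
    canvas = np.full((size, size, img.shape[2]), pad_value, dtype=img.dtype)
    canvas[:h, :w] = img
    return canvas

def to_model_input(img):
    """HWC uint8 -> 1x3xHxW float32 in [0, 1], the layout ONNX Runtime expects."""
    x = letterbox(img).astype(np.float32) / 255.0
    return np.transpose(x, (2, 0, 1))[None]
```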
- API Deployment:
  - Convert the model to ONNX.
  - Write FastAPI code for serving the model.
  - Build a Docker image.
  - Deploy to AWS.
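Whatever framework serves the model, the API ultimately returns detections as JSON; a FastAPI route handler would wrap logic like the sketch below. The field names are an assumption, not a fixed schema:

```python
import json

def detections_to_json(dets, class_names):
    """Serialize detector output, a list of (box, confidence, class_id) with
    box = (x_min, y_min, x_max, y_max), into an API-friendly JSON body."""
    return json.dumps({
        "detections": [
            {
                "label": class_names[cls],
                "confidence": round(conf, 3),
                "box": {"x_min": box[0], "y_min": box[1],
                        "x_max": box[2], "y_max": box[3]},
            }
            for box, conf, cls in dets
        ]
    })

# One detected door
print(detections_to_json([((12, 30, 200, 400), 0.87, 0)], ["door"]))
```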
- Mobile Deployment:
  - Convert the model to TFLite.
  - Integrate it into a Flutter app.
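TFLite exports often leave non-maximum suppression to the app side, so the Flutter code has to de-duplicate overlapping boxes itself. A greedy NMS sketch in Python to mirror that post-processing (the 0.5 IoU threshold is an assumed default):

```python
def nms(dets, iou_thresh=0.5):
    """Greedy non-maximum suppression over (box, score) pairs, where
    box = (x_min, y_min, x_max, y_max). Returns the kept pairs."""
    def iou(a, b):
        ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
        iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
        inter = ix * iy
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union else 0.0

    kept = []
    # Visit candidates from highest to lowest score; keep a box only if it
    # does not heavily overlap one already kept.
    for box, score in sorted(dets, key=lambda d: d[1], reverse=True):
        if all(iou(box, k[0]) < iou_thresh for k in kept):
            kept.append((box, score))
    return kept

# Two overlapping candidates for the same door plus one distinct detection
dets = [((0, 0, 100, 200), 0.9), ((5, 5, 105, 205), 0.6), ((300, 0, 400, 200), 0.8)]
print(nms(dets))  # the 0.6 duplicate is suppressed
```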