Advancing Environmental Monitoring: YOLO Algorithm For Real-Time Detection Of Greater One-Horned Rhinos
DOI: https://doi.org/10.64252/hg0c1n40

Keywords: Computer Vision, Machine Learning, Deep Learning, Convolutional Neural Network, YOLO Algorithm, Object Detection, Wildlife Monitoring, Human-Animal Conflicts

Abstract
Object detection is a key challenge in computer vision, with applications that span security, surveillance, autonomous driving, and wildlife conservation. You Only Look Once (YOLO) has emerged as a state-of-the-art framework for real-time object detection; however, its models are typically trained on standard datasets such as Common Objects in Context (COCO) and PASCAL Visual Object Classes (VOC), which often lack the diversity needed for specialized real-world applications. Conservation of endangered species such as the Greater One-horned Rhino requires tailored solutions because of the species' ecological importance and vulnerability.
Assam and Northeast India, home to the largest population of the endangered Greater One-horned Rhino, face severe environmental threats including annual monsoon floods, habitat encroachment, and human-wildlife conflicts. These challenges highlight the urgent need for AI-powered monitoring solutions that can aid conservation efforts. This work evaluates the performance of various YOLO models, from YOLOv5 to YOLOv9, for detecting one-horned rhinos. It also enhances the existing rhino dataset by improving data quality, diversity, and relevance, and tunes hyper-parameters for optimal performance. The best-performing model achieved a mean average precision (mAP) of 98.9% and an F1 score of 98%.
Our findings underline the potential of tailored deep-learning models for wildlife monitoring, offering a scalable and effective approach to mitigating human-animal conflicts. By integrating AI into conservation practices, we can enhance real-time tracking, improve habitat protection strategies, and contribute to the long-term survival of endangered species.
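For context, the sketch below illustrates how a YOLO model could be fine-tuned and evaluated on a single-class rhino dataset using the Ultralytics Python API. The dataset file `rhino.yaml`, the checkpoint `yolov8m.pt`, and the hyper-parameter values are illustrative assumptions, not the exact configuration used in this work.

```python
from ultralytics import YOLO

# Start from a COCO-pretrained checkpoint (illustrative choice of model size).
model = YOLO("yolov8m.pt")

# Fine-tune on a custom one-horned rhino dataset described by rhino.yaml,
# which lists the train/val image paths and the single "rhino" class.
model.train(
    data="rhino.yaml",  # assumed dataset configuration file
    epochs=100,
    imgsz=640,
    batch=16,
)

# Validate on the held-out split and report detection metrics comparable
# to those cited in the abstract (mAP at IoU 0.5 and 0.5:0.95).
metrics = model.val()
print(f"mAP@0.5:      {metrics.box.map50:.3f}")
print(f"mAP@0.5:0.95: {metrics.box.map:.3f}")
```

The same workflow applies to other YOLO variants by swapping the checkpoint name, which is how a comparison from YOLOv5 through YOLOv9 can be organized under a single evaluation script.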