Volume - 7 | Issue - 1 | March 2025
Published
24 April, 2025
This study presents a smart vehicle infotainment system designed to enhance driving safety through real-time object detection, depth estimation, and motion tracking. The system uses a pre-trained YOLOv9 deep learning model, customized to identify objects such as pedestrians, motorcyclists, and vehicles in the Indian road context. The model is deployed on a Raspberry Pi, enabling low-cost, on-device inference without high-end computing hardware. To determine object proximity, a stereo vision-based depth estimation technique is implemented, allowing the system to calculate the distance of detected objects from the vehicle. Motion detection algorithms are integrated to track the movement of objects across frames, improving detection accuracy and reliability in dynamic environments. The system also features a real-time voice alert module that notifies the driver when an object is detected within a one-meter range; these alerts can be enabled or disabled according to user preference. A key capability of the system is blind spot detection, which helps the vehicle identify areas not directly visible to the driver, thereby reducing the risk of side collisions. Overall, the proposed solution demonstrates the feasibility and effectiveness of combining deep learning, embedded vision, and smart alert systems on affordable edge devices for safer driving and for potential applications in AI-based autonomous vehicles.
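To make the described pipeline concrete, the following is a minimal sketch (not the authors' implementation) of how YOLOv9 detection, stereo-vision depth estimation, and the one-meter voice alert could be combined on a single frame pair. It assumes the Ultralytics YOLO API, OpenCV block-matching stereo, and pyttsx3 for text-to-speech; FOCAL_PX and BASELINE_M are placeholder calibration values, not figures from the paper.

```python
# Sketch: YOLOv9 detection + stereo depth + 1 m voice alert (illustrative only).
import cv2
import numpy as np
import pyttsx3
from ultralytics import YOLO

FOCAL_PX = 700.0      # camera focal length in pixels (assumed calibration value)
BASELINE_M = 0.06     # stereo camera baseline in metres (assumed)
ALERT_RANGE_M = 1.0   # alert threshold described in the abstract

model = YOLO("yolov9c.pt")                                 # pre-trained YOLOv9 weights
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
tts = pyttsx3.init()

def depth_map(left_bgr, right_bgr):
    """Per-pixel depth (metres) from a rectified stereo pair via Z = f * B / disparity."""
    left = cv2.cvtColor(left_bgr, cv2.COLOR_BGR2GRAY)
    right = cv2.cvtColor(right_bgr, cv2.COLOR_BGR2GRAY)
    disparity = stereo.compute(left, right).astype(np.float32) / 16.0
    disparity[disparity <= 0] = np.nan                     # mask invalid matches
    return FOCAL_PX * BASELINE_M / disparity

def check_frame(left_bgr, right_bgr, alerts_enabled=True):
    """Detect objects in the left frame and voice-alert on anything closer than 1 m."""
    depth = depth_map(left_bgr, right_bgr)
    results = model(left_bgr, verbose=False)[0]
    for box, cls_id in zip(results.boxes.xyxy.tolist(), results.boxes.cls.tolist()):
        x1, y1, x2, y2 = map(int, box)
        patch = depth[y1:y2, x1:x2]
        # median depth inside the detection box as a robust distance estimate
        distance = float(np.nanmedian(patch)) if patch.size else float("nan")
        label = results.names[int(cls_id)]
        if alerts_enabled and distance < ALERT_RANGE_M:
            tts.say(f"{label} within one metre")
            tts.runAndWait()
```

In a deployed system this routine would run inside the camera capture loop, with motion tracking across frames used to suppress spurious single-frame detections.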
Keywords: YOLOv9, Real-Time Object Detection, Depth Estimation, Motion Detection, Voice Alerts, Raspberry Pi, Blind Spot Detection, Autonomous Systems