Object-Recognition-for-Autonomous-Driving-System-MATLAB-project
The goal of this project is to provide object detection and an environment model of traffic activity for autonomous vehicles or surveillance systems. Computer vision is an essential component of autonomous cars: accurate detection of vehicles, buildings, pedestrians, and road signs helps self-driving cars drive as safely as humans. However, object detection remains a challenging task because real-world images of objects are affected by illumination, rotation, scale, and occlusion.

We use a unified object detection model, You Only Look Once (YOLO), which regresses directly from the input image to object class scores and bounding-box positions. We applied YOLO to two different datasets to test its general applicability, and analyzed its performance in depth on the KITTI dataset, which is specialized for autonomous driving. We also proposed a technique called memory map, which incorporates inter-frame information to strengthen YOLO's detection ability in driving scenes, and broadened the model's scope by applying it to a new orientation estimation task.

Beyond detection, the project aims to signal potentially anomalous traffic situations and to apply other machine learning models for object detection, such as SVM and CNN, comparing and contrasting their results with YOLO. A minimal detection sketch is shown below.
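As an illustration of the detection step, the sketch below runs a pretrained YOLO detector from MATLAB's Computer Vision Toolbox on a single frame. The detector variant, score threshold, and KITTI image filename are illustrative assumptions, not the project's actual configuration or trained network.

```matlab
% Minimal sketch: detect objects in one frame with a pretrained YOLO v4 model.
% Assumes the Computer Vision Toolbox; the project's own network, classes,
% and thresholds may differ.

detector = yolov4ObjectDetector("tiny-yolov4-coco");    % pretrained COCO model (assumption)

I = imread("kitti_frame_000042.png");                   % hypothetical KITTI frame

% Run detection; keep only boxes scoring above 0.5.
[bboxes, scores, labels] = detect(detector, I, Threshold=0.5);

% Draw the class label and score next to each bounding box and display.
annotated = insertObjectAnnotation(I, "rectangle", bboxes, ...
    string(labels) + ": " + string(round(scores, 2)));
imshow(annotated);
title("YOLO detections on a KITTI frame");
```

The same per-frame detections could then be aggregated across consecutive frames, which is the role the memory map technique described above plays in the driving scene.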