Deep Learning-based Semantic Annotation for Improved Perception and Better Path Planning

Various sensors, such as radars and cameras, installed on the vehicle are used to build a perception of its surroundings. This perception is used for path planning in Autonomous Vehicles. Building perception is complex and time-consuming because of the wide variety of road scenarios and the numerous objects (and their densities) present on the road, such as cars, pedestrians, light poles, and trees.
Semantic Annotation is used to build the perception of the road: each frame of the captured video is annotated at the pixel level, and the image is segmented according to the various objects it contains. This approach is especially necessary in urban scenarios, where the precision required to maneuver a self-driving vehicle is higher than on highways.
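For illustration, the sketch below shows what a pixel-level semantic mask looks like in practice: every pixel of a frame carries a class ID, and the segmented regions correspond to objects in the scene. The class names, frame size, and regions here are purely hypothetical and are not drawn from any actual annotation output.

import numpy as np

# Hypothetical class map -- real annotation schemes define their own IDs.
CLASS_NAMES = {0: "road", 1: "car", 2: "pedestrian", 3: "light_pole", 4: "tree"}

# One annotated frame: an integer class ID for every pixel (H x W).
height, width = 720, 1280
mask = np.zeros((height, width), dtype=np.uint8)   # background/road everywhere
mask[100:300, 400:700] = 1                         # region labelled "car"
mask[350:500, 900:950] = 2                         # region labelled "pedestrian"

# Per-class pixel coverage gives a quick summary of the segmented frame.
for class_id, name in CLASS_NAMES.items():
    coverage = (mask == class_id).mean() * 100
    print(f"{name}: {coverage:.2f}% of pixels")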

Our approach to Semantic Annotation:

Images from the video are first annotated manually. This manually annotated output is then used to train a Deep Learning model that batch-processes images without any manual intervention and generates the semantically annotated frames.
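A minimal sketch of how such a train-then-automate pipeline can be wired up is shown below, assuming a PyTorch/torchvision stack. The model choice (DeepLabV3), the class count, and the data loading are assumptions made for illustration; this is not a description of the actual tool chain.

import torch
from torchvision.models.segmentation import deeplabv3_resnet50

NUM_CLASSES = 25  # hypothetical number of annotated object classes

model = deeplabv3_resnet50(num_classes=NUM_CLASSES)
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_one_epoch(model, loader):
    """Train on manually annotated (image, pixel-label) pairs."""
    model.train()
    for images, masks in loader:            # masks: (N, H, W) integer class IDs
        optimizer.zero_grad()
        outputs = model(images)["out"]      # (N, NUM_CLASSES, H, W) logits
        loss = criterion(outputs, masks.long())
        loss.backward()
        optimizer.step()

@torch.no_grad()
def annotate_batch(model, frames):
    """Batch inference: generate a semantic mask for each unlabeled frame."""
    model.eval()
    logits = model(frames)["out"]
    return logits.argmax(dim=1)             # per-pixel predicted class IDs

Once the model reaches acceptable quality, annotate_batch can be run over entire recorded sequences, with manual effort limited to reviewing and correcting the generated frames.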
KPIT’s Automated Semantic Annotation tool chain can annotate more than 25 objects per frame with a tolerance level of 3 pixels in complex scenarios. The tool reduces the overall time and effort required for annotation, and its accuracy improves over time.
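One way a pixel-level tolerance of this kind can be checked is sketched below, assuming the figure refers to how far predicted object boundaries may deviate from the manually annotated ones. The metric definition, function name, and use of NumPy/SciPy are assumptions for illustration only.

import numpy as np
from scipy.ndimage import distance_transform_edt, binary_erosion

def boundary_within_tolerance(pred_mask, gt_mask, class_id, tol_px=3):
    """Fraction of predicted boundary pixels within tol_px of the ground-truth boundary."""
    pred = pred_mask == class_id
    gt = gt_mask == class_id
    # Boundary pixels = object pixels minus their erosion.
    pred_edge = pred & ~binary_erosion(pred)
    gt_edge = gt & ~binary_erosion(gt)
    if not gt_edge.any():
        return 1.0 if not pred_edge.any() else 0.0
    if not pred_edge.any():
        return 0.0
    # Distance from every pixel to the nearest ground-truth boundary pixel.
    dist_to_gt = distance_transform_edt(~gt_edge)
    return float((dist_to_gt[pred_edge] <= tol_px).mean())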