
Real-time understanding of outdoor environments

16 January 2021
One of the main research challenges in autonomous navigation (e.g. self-driving cars) is the real-time processing and understanding of 3D point clouds captured by LiDAR (Light Detection And Ranging) sensors in outdoor environments. As a consequence, many deep neural network architectures have been proposed for processing such data (e.g. PointNet++, PointCNN, Frustum PointNets), which achieve good accuracy on benchmarks but rarely run in real time. The goal of this project is to develop architectures for real-time understanding of raw 3D point clouds of outdoor scenes, building upon existing top-performing neural network architectures.
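To illustrate the kind of network family mentioned above, the following is a minimal, simplified sketch (not part of the project itself) of the shared per-point MLP plus symmetric max-pooling idea underlying PointNet-style architectures, assuming PyTorch is available; all class and variable names here are illustrative.

```python
import torch
import torch.nn as nn

class TinyPointNet(nn.Module):
    """Toy per-point MLP followed by an order-invariant max-pool over the point set."""
    def __init__(self, in_dim=3, feat_dim=64, num_classes=10):
        super().__init__()
        # 1x1 convolutions act as a shared MLP applied independently to each point
        self.mlp = nn.Sequential(
            nn.Conv1d(in_dim, 32, 1), nn.ReLU(),
            nn.Conv1d(32, feat_dim, 1), nn.ReLU(),
        )
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, pts):
        # pts: (batch, num_points, 3) raw xyz coordinates
        x = self.mlp(pts.transpose(1, 2))   # (batch, feat_dim, num_points)
        x = torch.max(x, dim=2).values      # global feature, invariant to point order
        return self.head(x)                 # per-cloud class logits

# Example: classify a batch of 2 clouds with 1024 points each
logits = TinyPointNet()(torch.randn(2, 1024, 3))
```

Real architectures such as PointNet++ add hierarchical grouping and local feature aggregation on top of this basic building block.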
Required Skills
Basic knowledge of computer vision and deep learning. Programming skills: Python, TensorFlow (optional), PyTorch (optional).
Skill Level
Intermediate
Objectives
The project involves evaluating existing methods on real-world autonomous driving benchmarks (e.g. KITTI), collecting synthetic or real data for challenging scenarios, and developing novel architectures for real-time outdoor scene understanding.
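As a starting point for the benchmark evaluation, the sketch below shows how a single LiDAR scan can be read in the standard KITTI Velodyne format, i.e. a binary file of float32 values in groups of four (x, y, z, reflectance); the file path used here is hypothetical.

```python
import numpy as np

def load_kitti_scan(path):
    """Read one KITTI Velodyne scan into an (N, 4) array of x, y, z, reflectance."""
    return np.fromfile(path, dtype=np.float32).reshape(-1, 4)

scan = load_kitti_scan("velodyne/000000.bin")
print(scan.shape)  # (N, 4): roughly 100k points per scan
```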

Expected deliverables: Final report, trained networks, code base