Supervised City Mesh texturing via deep neural networks

16 January 2021
Supervised deep learning approaches are sometimes limited by the lack of data for specific tasks, and researchers turn to unsupervised approaches because building datasets that contain a target for each input can be too time-consuming or even impossible. For real-world scenarios, however, datasets such as Cityscapes and KITTI were developed to enable the research community to work on various computer vision tasks such as semantic scene segmentation, autonomous car navigation, and pedestrian detection. In this project, we would like to investigate whether we can use transfer learning from real to virtual data. More precisely, we intend to use deep networks such as Pix2Pix or Pix2PixHD to translate panoramic street-view data from a semantic segmentation map to a realistic street-view image.
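To make the image-to-image translation idea concrete, the sketch below shows a toy encoder-decoder generator in PyTorch that maps a (fake) semantic map tensor to an image tensor of the same spatial size. This is a minimal illustrative sketch, not the actual Pix2Pix or Pix2PixHD architecture (which uses a U-Net or coarse-to-fine generator with skip connections and a PatchGAN discriminator); the class name and layer sizes are hypothetical choices for illustration.

```python
import torch
import torch.nn as nn

class TinyTranslationGenerator(nn.Module):
    """Hypothetical minimal encoder-decoder for semantic-map -> image translation.

    Not the official Pix2Pix generator; just a sketch of the
    image-to-image mapping the project would train.
    """

    def __init__(self, in_ch: int = 3, out_ch: int = 3):
        super().__init__()
        # Encoder: downsample the input semantic map twice (H, W -> H/4, W/4).
        self.enc = nn.Sequential(
            nn.Conv2d(in_ch, 64, kernel_size=4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
        )
        # Decoder: upsample back to the original resolution.
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(64, out_ch, kernel_size=4, stride=2, padding=1),
            nn.Tanh(),  # outputs in [-1, 1], as in Pix2Pix-style training
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.dec(self.enc(x))


# Usage: a random stand-in for a 64x64 semantic-segmentation input.
generator = TinyTranslationGenerator()
fake_semantic_map = torch.randn(1, 3, 64, 64)
translated = generator(fake_semantic_map)
print(translated.shape)  # same spatial size as the input
```

In the full project, such a generator would be trained adversarially against a discriminator on paired (semantic map, photo) data, as in the Pix2Pix paper.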
Required Skills
Basic knowledge of computer vision and deep learning. Programming skills: Python; TensorFlow (optional), PyTorch (optional).
Project Phases
This project is divided into two phases: the first is virtual data collection, and the second is to experiment on whether a deep image-to-image translation approach can generalize from real to virtual data.

Expected deliverables: Final report, trained networks, code base, dataset