Image Interpretation From Wearable Cameras

16 January 2021
Wearable cameras are small, lightweight devices that can be fastened to the human body so that they capture the wearer's point of view. They make it possible to seamlessly record visual data in a passive way, from a first-person perspective, while the wearer goes about his or her activities. Visual lifelogging is the seamless collection of images and/or videos using wearable cameras, and involves continuously recording the wearer's daily life over a long period of time. The emerging field of computer vision that deals with the content analysis of data collected by wearable cameras is called Egocentric Vision or First-Person Vision. The analysis of such visual data can be used to study everyday life and draw useful conclusions about human behavior, with the aim of improving quality of life.
Required Skills
1. Basic knowledge of image processing, computer vision, or deep learning
2. Computer programming skills
Project Description
The aim of the project is to develop advanced technologies for visual lifelog image analysis, i.e., the analysis of first-person images captured by wearable cameras or smartphones. The work will involve collecting and labelling data for supervised learning frameworks, developing deep learning methods for interpreting visual lifelog images, and/or experimental evaluation. The image interpretation algorithms will involve image enhancement, object detection, and action classification.
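As an illustration of the image-enhancement step, first-person frames are often dim or low-contrast, and a common preprocessing remedy is percentile-based contrast stretching. The sketch below is a minimal, hypothetical example (the function name and percentile choices are assumptions, not part of the project specification):

```python
import numpy as np

def stretch_contrast(image, low_pct=2, high_pct=98):
    """Linearly stretch pixel intensities between two percentiles to [0, 255].

    Illustrative sketch of a simple enhancement step for dim or
    low-contrast lifelog frames; percentiles are assumed defaults.
    """
    lo, hi = np.percentile(image, [low_pct, high_pct])
    if hi <= lo:  # nearly flat image: nothing to stretch
        return image.astype(np.uint8)
    stretched = (image.astype(np.float64) - lo) / (hi - lo)
    # Values below the low percentile clip to 0, above the high percentile to 255
    return (np.clip(stretched, 0.0, 1.0) * 255).astype(np.uint8)

# Example: a synthetic dim frame whose intensities sit in a narrow band
frame = np.random.default_rng(0).integers(60, 100, size=(64, 64))
enhanced = stretch_contrast(frame)
```

After stretching, the narrow 60-99 intensity band is mapped across the full 0-255 range, which typically makes downstream detection and classification more robust to exposure variation.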
The work will focus on one of the following two ongoing projects:
1. Automatic detection of barriers on urban sidewalks, for sustainable and safe public spaces.
2. Tracking the visual view of museum visitors, to enhance the overall visiting experience and assist curators and other museum professionals.