Analyzing the Social Behaviours of Computer Vision Algorithms

17 January 2021
There are increasing expectations that algorithms should behave in a socially just manner. We consider the case of computer vision APIs and their interpretations of images of people. Such services have become indispensable in our information ecosystem, facilitating new modes of visual communication and sharing. But while they offer developers a convenient way to add functionality to their applications, most are opaque and proprietary. Many have criticized how these services perform on and judge people's photos, and their overall algorithmic fairness with respect to the ground truth. These services often make judgments about a person's physical appearance. How do these judgments change when we control or modify parts of the facial characteristics in a person's photo? Which facial features most strongly influence the algorithms' decisions? And, consequently, how is their objective and/or subjective behavior affected?
Required Skills
Basic knowledge of image processing, computer vision, or deep learning; Python
Project Description
The aim of the project is to develop tools for manipulating the physical facial characteristics in a person's photo and to use the group's prior research tools to study the behavior of computer vision services such as Google Vision, Amazon Rekognition, IBM Watson, Microsoft FACE, Clarifai, and Imagga. The project will involve work on data collection and analysis.
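One building block of such a study is a controlled perturbation: mask or alter a facial region, re-submit the photo to each API, and compare the returned labels and confidence scores. The sketch below, assuming NumPy arrays as the image representation, shows only the perturbation step; the function name `occlude_region` and the synthetic stand-in image are hypothetical, and the actual API calls (e.g. to Amazon Rekognition) are out of scope here.

```python
import numpy as np

def occlude_region(image, top, left, height, width, fill=0):
    """Return a copy of `image` with a rectangular region replaced by `fill`.

    A minimal sketch of one perturbation: occluding a facial feature
    (e.g. the eye region) before re-submitting the photo to a vision API
    and comparing the labels it returns for the original and masked images.
    """
    perturbed = image.copy()
    perturbed[top:top + height, left:left + width] = fill
    return perturbed

# Hypothetical usage: a uniform 64x64 grayscale array standing in for a
# face photo; in the project this would be a real image loaded from disk.
face = np.full((64, 64), 128, dtype=np.uint8)
masked = occlude_region(face, top=20, left=12, height=10, width=40)
```

Differences in the labels or confidence scores a service returns for `face` versus `masked` would indicate how strongly that region drives the algorithm's judgment; repeating this over many regions and many photos yields the kind of behavioral profile the project aims to build.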