17 January 2021
The capability of our visual system to convey color information about the environment we interact with is one of the most amazing and complex mechanisms a human experiences in daily life. However, fully reproducing this experience is not an easy task. First, only some aspects of the complete behavior of the human visual system are understood today. Second, we now have available a large variety of digital color devices, e.g., mobile phones, High Dynamic Range (HDR) displays, e-watches, SDR displays, projector systems, and VR/AR headsets, with completely different characteristics. Third, the illumination conditions under which images and video are watched are often not optimal. Fourth, these devices are equipped with different computational resources; e-watches and VR/AR headsets, for instance, have limited computational resources compared to mobile phones and displays. This makes it very hard to convey a similar visual experience to different users on different digital devices. This work will investigate this issue in its full complexity, providing solutions to some aspects of how color and high dynamic range content needs to be managed to convey the most realistic visual experience, even on devices with limited computational resources.
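To make the HDR-to-SDR management problem concrete, the sketch below shows one of the simplest possible global tone-mapping operators, a Reinhard-style curve L/(1+L) that compresses unbounded HDR luminance into the [0, 1] range of an SDR display. This is only an illustrative toy under stated assumptions (linear radiance input, a single hypothetical `exposure` parameter); the project itself concerns far more sophisticated, device- and resource-aware approaches.

```python
import numpy as np

def reinhard_tone_map(hdr, exposure=1.0):
    """Map linear HDR radiance values into [0, 1] for an SDR display.

    Illustrative sketch only: a global Reinhard-style operator L/(1+L).
    `exposure` is a hypothetical scaling knob, not part of the original text.
    """
    scaled = hdr * exposure
    return scaled / (1.0 + scaled)

# HDR pixel values spanning several orders of magnitude
hdr_pixels = np.array([0.01, 0.5, 1.0, 10.0, 100.0])
sdr_pixels = reinhard_tone_map(hdr_pixels)
# The mapping is monotonic and its output stays in [0, 1]
# regardless of the input dynamic range.
```

Even this trivial operator hints at the trade-offs the project targets: it is cheap enough for an e-watch, but it ignores local contrast, color appearance, and viewing conditions, which is exactly where more capable (and more expensive) methods are needed.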
Required skills: programming in C/C++ and Python; knowledge of deep learning is a plus.