
13 December 2019
RISE

Explainable Artificial Intelligence: Interrogating AI systems

Key insights from Professor Nick Bassiliades’ work.

 

Explainable AI (XAI) refers to methods and techniques in the application of AI technology such that the results of the solution can be understood by human experts. Following Professor Bassiliades' presentation at RISE in November, we were delighted to learn more about his work and his planned next steps through his contribution to our blog.
 
1. Based on the methodologies and implementation processes you have developed, how close are we to adopting Explainable AI (XAI)?
 
We are still far from establishing a general methodology for explaining any AI system. However, if we attempt to interpret one AI system at a time, we may be able to achieve the desired outcome and move from obscure, unintelligible systems to explainable ones.


2. Please elaborate on the work of the two projects: AI4EU (H2020) and SoCoLa (ΕΛΛΙΔΕΚ).
 

AI4EU is a visionary project building a platform, accessible to anyone, that will provide solutions to AI-oriented problems. Through this platform, researchers across Europe can tackle open AI challenges such as Explainable AI by submitting tools and methodologies. Our contribution to the platform will focus on Explainable AI, and more specifically on interpretable machine learning and argumentation.
 

SoCoLa, in turn, is a logic-based system that relies on socio-cognitive skills to describe objects and identify unknown objects in a household environment, together with the set of actions that can be performed on them (affordances), the purposes those objects serve, and whether they are suitable for a person or an agent (e.g. a robot) to collaborate with. A hypothetical sketch of such a representation follows.
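The description above is prose only, so as a purely illustrative sketch (every object, affordance, and field name below is invented for this example and does not come from SoCoLa), one might represent household objects, their affordances, and agent recommendations like this:

```python
# Hypothetical sketch, not SoCoLa's actual implementation: a tiny
# logic-style knowledge base of household objects and their affordances.
from dataclasses import dataclass, field

@dataclass
class HouseholdObject:
    name: str
    affordances: set = field(default_factory=set)  # actions the object supports
    human_safe: bool = True                        # suitable for a person
    robot_safe: bool = True                        # suitable for a robot agent

# Illustrative facts about two objects (invented for this example).
kb = {
    "kitchen_knife": HouseholdObject("kitchen_knife", {"cut", "spread"},
                                     human_safe=True, robot_safe=False),
    "mug": HouseholdObject("mug", {"pour_into", "drink_from", "carry"}),
}

def recommend(obj_name: str, action: str, agent: str) -> bool:
    """Rule: recommend an object for an agent iff it affords the action
    and is marked safe for that kind of agent."""
    obj = kb[obj_name]
    safe = obj.robot_safe if agent == "robot" else obj.human_safe
    return action in obj.affordances and safe

print(recommend("mug", "carry", "robot"))          # True
print(recommend("kitchen_knife", "cut", "robot"))  # False: not robot-safe here
```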
 
 
3. What are the next steps you are planning for your research?

In AI4EU, our future research will focus on extending and evolving our methodologies, as well as developing new methods for explaining black-box models such as random forests. Another direction is using black-box models to predict failures from time-series data and converting the extracted explanations into argumentative narratives.
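As a hedged illustration of one generic technique in this space (not necessarily the method being developed in AI4EU), a black-box random forest can be approximated by a shallow, interpretable surrogate decision tree trained on the forest's own predictions:

```python
# Sketch of a global surrogate explanation for a random forest.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True)

black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Train the surrogate on the black box's predictions, not the true labels,
# so the shallow tree approximates the forest's decision behaviour.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate fidelity: {fidelity:.2%}")

# The tree's rules serve as a human-readable explanation of the forest.
print(export_text(surrogate, feature_names=list(load_breast_cancer().feature_names)))
```

The fidelity score indicates how faithfully the surrogate's human-readable rules reproduce the forest's behaviour.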

In SoCoLa, our research will contribute, firstly, to the development of formal theories that enable robots to learn both from observation and from speech; secondly, to the creation of argumentative means of interaction between robots and humans, within a framework that brings together actuation, commonsense reasoning, dialectical interaction, and knowledge extraction from the web; and thirdly, to the demonstration of the overall approach in three modes: explanatory, exploratory, and argumentative.


4. If we could only take one very important piece of information from your presentation, what should it be?

Explanations are necessary! Healthcare applications and safety-critical applications such as robotics and autonomous vehicles must be explainable to gain users' trust. Moreover, almost every AI system has to comply with legal regulations (such as the GDPR) and be able to answer questions and explain its decisions. To achieve AI explainability, we must focus on model-specific solutions. I believe this is the key to XAI success.
 
Furthermore, we strongly believe that explanations should be grounded in an argumentation framework, allowing interactions between the user and the system in which decisions made by the AI system can be challenged and justified. A minimal sketch of such a framework follows.
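To make this concrete, here is a minimal, hypothetical sketch of a Dung-style abstract argumentation framework; the arguments and attacks below are invented stand-ins for a challenged AI decision, and the code computes which arguments survive the exchange (the grounded extension):

```python
# Sketch of a Dung-style abstract argumentation framework, illustrating
# how a system and a user could exchange attacking arguments.
arguments = {"loan_denied", "income_sufficient", "income_outdated"}
attacks = {                                    # (attacker, attacked) pairs
    ("income_sufficient", "loan_denied"),      # user challenges the decision
    ("income_outdated", "income_sufficient"),  # system justifies it
}

def defended(arg, accepted):
    """An argument is defended if every one of its attackers is itself
    attacked by some already-accepted argument."""
    attackers = {a for (a, b) in attacks if b == arg}
    return all(any((c, a) in attacks for c in accepted) for a in attackers)

# Grounded extension: least fixed point of the defence function,
# reached by iterating from the empty set.
grounded = set()
while True:
    new = {a for a in arguments if defended(a, grounded)}
    if new == grounded:
        break
    grounded = new

print(grounded)  # e.g. {'income_outdated', 'loan_denied'}
```

Here the user's challenge ("income_sufficient") is defeated by the system's justification ("income_outdated"), so the original decision stands; in an interactive setting the user could respond with a further counter-argument.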
 