- Md Kamruzzaman Sarker, Wright State University, USA
- Derek Doran, Wright State University, USA
- Pascal Hitzler, Wright State University, USA
- Freddy Lecue, Accenture Labs Dublin and Inria, France
Artificial intelligence is shaping our lives through automated decisions, which now reach into nearly every domain, from pattern recognition and autonomous driving to drug discovery.
Recent AI algorithms, and in particular deep learning methods, are improving the accuracy of such automated decisions, but offer little in the way of explanation when they fail. Even worse, many methods are unable to identify if or when they will fail, making it difficult for practitioners to know when to "trust" the decisions an AI reaches. This notion of trust is especially important in contexts where a decision affects an individual's health or career, or has economic and societal ramifications.
This tutorial aims at exposing existing solutions towards explainable AI (XAI), their limitations, and the potential benefits of incorporating data semantics research and technologies to address them. The tutorial will offer an overview of present methods for XAI, their limitations, the few contributions the data semantics community has made thus far, why data semantics is likely to play a crucial role in XAI, and directions our community can follow in this emerging and exciting research area.