Session Organizers:

Session Title: Towards Fusion of Semantic Knowledge into Deep Learning Models

Abstract:

How can we represent data such that the resulting semantic structures become useful in a deep model? Usefulness has a twofold connotation: intrinsically, it concerns a quantifiable improvement in the performance of the deep model; extrinsically, it relates to how semantic representations of data can augment the explainability of the deep model that uses them. The Semantic Technology community is increasingly interested in the problem of integrating ontologies with deep models. In this regard, ontologies can be viewed as a tool, or a source of insight, for overcoming the key challenges of heterogeneous multi-modal representation, fusion, translation, alignment, and co-learning.

This breakout session aims to present existing solutions for injecting structured knowledge into deep models, their limitations, and the potential of semantic technologies to address them; a minimal illustration of one such fusion scheme is sketched below. The presenters will introduce the notion of multi-modal learning, focusing on examples from industrial use cases at Bosch. The second part will cover the most relevant techniques in the state of the art, highlighting best practices and limitations. Finally, we will illustrate how semantic web technologies can complement machine learning systems, opening the discussion to the attendees.
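To make the idea of "injecting structured knowledge into deep models" concrete, here is a minimal sketch (NumPy only) of one common pattern, early fusion: concatenating a pretrained knowledge-graph embedding of a linked ontology entity with a modality-specific feature vector before a downstream layer. All names here (kg_embeddings, image_features, the "bosch:" entity IRIs, W, b) are illustrative placeholders, not taken from any of the talks.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pretrained embeddings for ontology entities
# (e.g. produced by a TransE-style knowledge-graph embedding model).
kg_embeddings = {
    "bosch:Pedestrian": rng.normal(size=16),
    "bosch:Vehicle": rng.normal(size=16),
}

# Hypothetical feature vector from a perception model for one detected object.
image_features = rng.normal(size=32)

# Early fusion: concatenate the modality-specific features with the
# semantic embedding of the entity the object was linked to.
fused = np.concatenate([image_features, kg_embeddings["bosch:Pedestrian"]])

# A single linear layer standing in for the downstream classifier.
W = rng.normal(size=(2, fused.size))
b = np.zeros(2)
logits = W @ fused + b
print(logits.shape)  # (2,) -- e.g. scores for two object classes

In practice the linear layer would be a trained network and the entity linking would come from an annotation or grounding step, but the fusion point, concatenation of symbolic and sub-symbolic representations, is the same.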

Session Webpage:

https://deepsemantic2019.github.io/


Schedule:

13:05 - 13:25 Talk 1 by Monireh Ebrahimi: Neural-Symbolic Systems: Representation and Reasoning Approaches

13:25 - 13:50 Talk 2 by Jonathan Francis: What is Multimodal Machine Learning?

13:50 - 14:10 Talk 3 by Alessandro Oltramari: Multimodal Sense-making: A Natural Ground for Neuro-Symbolic Systems

14:10 - 14:30 Discussion will be led by Alessandro Oltramari

  • Questions? Comments?
  • “I’m working on it”: Share your experience in 60 seconds!
  • Time is up! Still interested? Contact: Alessandro Oltramari