Session - Hybrid AI for Context Understanding
- Alessandro Oltramari, Bosch Research and Technology Center (Pittsburgh, PA)
- Cory Henson, Bosch Research and Technology Center (Pittsburgh, PA)
- Ruwan Wickramarachchi, AI Institute, University of South Carolina (Columbia, SC)
- Don Brutzman, Naval Postgraduate School (Monterey, CA)
- Richard Markeloff, Raytheon BBN Technologies (Arlington, VA)
Context understanding can be conceived as the capability of making sense of a broad range of situations: a particular arrangement of cars and pedestrians at a city intersection, a conversation during a customer service call, natural features and activities observed during a reconnaissance mission, and so on. By making sense of the environment, humans learned to survive in the wilderness, escaping predators, enduring natural catastrophes and epidemics, and overcoming the intrinsic limitations of our species. Through sensory stimuli, humans accumulate experiences, generalize and reason over them, and store the resulting knowledge in memory; the dynamic combination of live experience and distilled knowledge during task execution enables humans to make time-effective decisions and to evaluate how good or bad a decision was by factoring in external feedback.
But while assessing situations and acting accordingly evolved into a robust feature of human intelligence, constructing a computational model of context understanding remains a long-standing challenge for artificial intelligence. Endowing machines with sense-making capability is not only a key requirement for improving their autonomy but, in the first place, a precondition for enabling seamless interaction with humans. Indeed, humans communicate efficiently thanks to shared mental models of the physical and social context: not only do these models foster reciprocal trust by making contextual knowledge transparent, but they are also crucial for explaining how decision making unfolds in a specific context and for assessing whether the corresponding actions satisfy the modeled goals.
This session aims to illustrate three concrete examples of context understanding at the machine level. The first presentation will introduce a neuro-symbolic architecture that endows conversational agents with commonsense knowledge. The second presentation will focus on applying knowledge graph embedding techniques to automatically understand the semantic characteristics of traffic scenes. The third presentation will focus on semantic validation of mission constraints, where an ontology-based approach is applied to assess the autonomous behavior of UAVs.
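To give a flavor of the knowledge graph embedding techniques mentioned above, the following is a minimal, self-contained sketch (not taken from the session materials) of TransE-style triple scoring, one common embedding approach in which a relation is modeled as a translation between entity vectors (h + r ≈ t). The toy "traffic scene" entities, relation, and vectors are all hypothetical, chosen only to illustrate how plausible triples receive higher scores than implausible ones.

```python
import numpy as np

# Hypothetical 2-D embeddings for a toy traffic-scene knowledge graph.
# In practice these vectors would be learned from data, not hand-set.
emb = {
    "pedestrian": np.array([1.0, 0.0]),
    "crosswalk":  np.array([1.0, 1.0]),
    "car":        np.array([0.0, 0.0]),
    "locatedAt":  np.array([0.0, 1.0]),  # relation vector
}

def transe_score(h, r, t):
    """TransE plausibility score: higher (less negative) = more plausible."""
    return -np.linalg.norm(emb[h] + emb[r] - emb[t])

# (pedestrian, locatedAt, crosswalk) satisfies h + r == t exactly,
# so it scores higher than the corrupted triple with "car" as head.
plausible = transe_score("pedestrian", "locatedAt", "crosswalk")
implausible = transe_score("car", "locatedAt", "crosswalk")
```

In a traffic-scene setting, scores like these could rank candidate semantic labels for entities detected in a scene; the actual method presented in the session may differ.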
Topics: Semantic Technologies, Deep Learning, Cognitive Science, Natural Language Understanding, Internet of Things. Verticals: Autonomous Systems, Conversational AI.
Type of session: Presentations
- Presentations (70 minutes)
- Questions, comments, discussion (20 minutes)
Preliminary Agenda (titles are subject to change):
- Introduction: What does it mean for a machine to understand context? (Oltramari)
- Building Chatbots that understand Commonsense (Oltramari)
- Understanding Traffic Scenes using Knowledge Graph Embeddings (Henson, Wickramarachchi)
- Validating UAVs’ missions using ontology-based reasoning (Don Brutzman, Naval Postgraduate School; Richard Markeloff, Raytheon)
- Q/A and open discussion
Expected participation: We expect participants from both academia and industry. Interest in approaches that integrate data-driven AI and knowledge-based AI is growing, as events like the AAAI-MAKE symposium and the Semantic Deep Learning workshop show, to name just two recent event series. As the description above suggests, our presentations do not focus only on the application side of the problem, but stem from a theoretical framework in which the definition of context understanding is inspired by cognitive science, philosophy, and linguistics. In this regard, we anticipate attendance from scholars with an interest in the general aspects of neuro-symbolic approaches, as well as from practitioners with an interest in the industrial nature of our research projects (e.g., scalability, deployment, lessons learned).