Keynote Speakers
The following speakers have graciously agreed to give keynote talks at AACL-IJCNLP 2020.
Percy Liang
Topic
Semantic Parsing for Natural Language Interfaces
Abstract
Natural language promises to be the ultimate interface for interacting with computers, allowing users to effortlessly tap into the wealth of digital information and extract insights from it. Today, virtual assistants such as Alexa, Siri, and Google Assistant have given a glimpse into how this long-standing dream can become a reality, but there is still much work to be done. In this talk, I will discuss building natural language interfaces based on semantic parsing, which converts natural language into programs that can be executed by a computer. There are multiple challenges for building semantic parsers: how to acquire data without requiring laborious annotation, how to represent the meaning of sentences, and perhaps most importantly, how to widen the domains and capabilities of a semantic parser. Finally, I will talk about a promising new paradigm for tackling these challenges based on learning interactively from users.
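As a simple illustration of the idea described in the abstract (not part of the talk itself), the sketch below maps a natural-language question to an executable program over a toy table and then runs it. The grammar rules, table contents, and function names are all hypothetical and chosen only to show the natural language → program → answer pipeline.

```python
# Minimal, hypothetical sketch of semantic parsing: an utterance is mapped to a
# "logical form" (here, an executable Python function) and then executed over a
# small toy table to produce an answer.

CITIES = [
    {"name": "Tokyo", "population": 13_960_000},
    {"name": "Suzhou", "population": 10_720_000},
    {"name": "Reykjavik", "population": 131_000},
]

def parse(utterance: str):
    """Map an utterance to a program, represented as a function over table rows."""
    u = utterance.lower()
    if "largest" in u or "biggest" in u:
        return lambda rows: max(rows, key=lambda r: r["population"])["name"]
    if "smallest" in u:
        return lambda rows: min(rows, key=lambda r: r["population"])["name"]
    if "how many" in u:
        return lambda rows: len(rows)
    raise ValueError("utterance not covered by this toy grammar")

program = parse("Which is the largest city?")
print(program(CITIES))  # -> Tokyo
```

A real semantic parser replaces these hand-written rules with a learned model, which is where the challenges listed in the abstract (data acquisition, meaning representation, domain coverage) arise.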
Biography
Percy Liang is an Associate Professor of Computer Science at Stanford University (B.S. from MIT, 2004; Ph.D. from UC Berkeley, 2011). His two research goals are (i) to make machine learning more robust, fair, and interpretable; and (ii) to make computers easier to communicate with through natural language. His awards include the Presidential Early Career Award for Scientists and Engineers (2019), the IJCAI Computers and Thought Award (2016), an NSF CAREER Award (2016), a Sloan Research Fellowship (2015), and a Microsoft Research Faculty Fellowship (2014).
Song-Chun Zhu
Topic
Explainable AI: How Machines Gain Justified Human Trust
Abstract
Recent progress in computer vision, machine learning, natural language understanding, and AI in general has produced machines for a broad range of applications; however, some key underlying representations, especially neural networks, remain opaque black boxes. This has generated renewed interest in studying representations and algorithms that are interpretable and in developing systems that can explain their behaviors and decisions to human users. In this talk, I will introduce our work on explainable AI. The objective is to let human users understand how an AI system works, and when and why it will succeed or fail, so that humans and machines can collaborate more effectively on various tasks. We propose a framework called X-ToM (Explanation with Theory of Mind), which poses explanation as an iterative dialogue process between the human and the AI system. In this process, the human and the machine learn each other's mental representations to establish better mutual understanding. Our human-subject experiments show that X-ToM gains justified trust and reliance from users over time in several domains: vision, robotics, and gaming. At the core of the X-ToM framework is a cognitive architecture for human-machine communication (verbal or non-verbal), which also serves as a unified framework in which all types of machine learning can be viewed as various communication protocols.
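To make the idea of explanation as an iterative dialogue concrete, here is a deliberately simplified, hypothetical sketch of such a loop. It is not the X-ToM system itself; all feature names and numbers are invented for illustration. On each turn, the machine offers the piece of evidence that its (crude) model of the user's beliefs suggests the user is least aware of, then updates that model.

```python
# Generic sketch of an iterative explanation dialogue (not the actual X-ToM code):
# the machine keeps a simple estimate of what the user believes about its decision
# and, each round, offers the explanation expected to close the largest gap.

machine_decision = {"label": "cat", "evidence": {"ears": 0.9, "whiskers": 0.8, "fur": 0.6}}

def best_explanation(user_beliefs):
    """Pick the evidence the user seems least aware of (largest belief gap)."""
    gaps = {k: v - user_beliefs.get(k, 0.0) for k, v in machine_decision["evidence"].items()}
    return max(gaps, key=gaps.get)

user_beliefs = {}       # the machine's estimate of the user's mental model
for turn in range(3):   # iterative dialogue: explain, observe feedback, update
    feature = best_explanation(user_beliefs)
    print(f"Machine: I predict '{machine_decision['label']}' partly because of '{feature}'.")
    # In a real system the user's reply would drive this update; here we simply
    # assume the user now shares the machine's belief about that feature.
    user_beliefs[feature] = machine_decision["evidence"][feature]
```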
Biography
Song-Chun Zhu received his Ph.D. degree from Harvard University in 1996. He joined UCLA in 2002 and became a full Professor of Statistics and Computer Science and director of the Center for Vision, Learning, Cognition and Autonomy at UCLA in 2006. At the end of 2020, he returned to China to establish a non-profit organization, the Beijing Institute for General Artificial Intelligence (BIGAI), and holds joint appointments as Chair Professor at Peking and Tsinghua Universities. Over the past 30 years, his research has been motivated by the pursuit of a unified foundation for computer vision and, more broadly, for AI. He has published more than 300 papers in vision, learning, cognition, language, robotics, AI, statistics, and applied math. His work has received a number of recognitions, including the Marr Prize in 2003 for image parsing in computer vision and Marr Prize honorary nominations in 1999 for texture modeling and in 2007 for object modeling. He received the Aggarwal Prize from the International Association for Pattern Recognition in 2008 for “contributions to a unified foundation for visual pattern conceptualization, modeling, learning, and inference”, the Helmholtz Test-of-Time Prize at ICCV 2013, and the Computational Modeling Prize from the Cognitive Science Society in 2017. As a junior faculty member, he received a Sloan Fellowship, an NSF CAREER Award, and an ONR Young Investigator Award in 2001. He has been a Fellow of the IEEE Computer Society since 2011. He is the principal investigator leading two consecutive ONR MURI projects on Scene & Event Understanding and Visual Commonsense Reasoning. He has served the community as a general chair for CVPR 2012 and CVPR 2019. He is also the founder and chairman of the AI startup DMAI, Inc.