Keynote Speakers

Prof. Wen-Lian Hsu
Institute of Information Science, Academia Sinica

Speech Title: An Explainable Machine Learning Model


Biography:
Dr. Hsu is currently a Distinguished Research Fellow of the Institute of Information Science, Academia Sinica, Taiwan. He received a B.S. in Mathematics from National Taiwan University and a Ph.D. in Operations Research from Cornell University in 1980. He was a tenured associate professor at Northwestern University before joining the Institute of Information Science at Academia Sinica as a research fellow in 1989. Dr. Hsu’s earlier contributions were in graph algorithms, and he has applied similar techniques to tackle computational problems in biology and natural language. In 1993, he developed GOING, a Chinese input software system that has since revolutionized Chinese input on computers. He later applied similar semantic analysis techniques to question answering systems and chatbots.

Dr. Hsu is particularly interested in applying natural language processing techniques to understanding DNA sequences as well as protein sequences, structures, and functions, and to mining the biological literature.

Dr. Hsu developed a model for natural language understanding that can utilize heterogeneous knowledge representation systems. He has applied it to educational tutoring systems and successfully implemented a system that can understand, solve, and explain how to solve primary school (grade 3) mathematics word problems.

Dr. Hsu received the Outstanding Research Award from the National Science Council in 1991, 1994, and 1996, the first K. T. Li Research Breakthrough Award in 1999, and the Teco Award in 2008, and was elected an IEEE Fellow in 2006. He was the president of the Artificial Intelligence Society in Taiwan from 2001 to 2002 and the president of the Computational Linguistic Society of Taiwan from 2011 to 2012. He was the director of the Institute from 2012 to 2018.


Prof. Ee-Peng Lim  
School of Information Systems, Singapore Management University (SMU)

Speech Title: People Analytics for Improving Social Well-Being


Abstract:
People analytics has often been used in the context of analysing employee data for the purpose of talent recruitment, development, and retention. In a societal context, people analytics can help us better understand citizen needs and profiles for the purpose of improving their social well-being. In this talk, we will cover three people analytics research topics that involve analysing user-generated online content, behaviour, and social network data, with associated downstream socially relevant applications. The three research topics are: (a) user profiling, (b) user identity linkage, and (c) cross-platform user profiling. We will also present an example downstream application that leverages user profiling to guide citizens to jobs that match their career preferences.

Biography:
Dr Ee-Peng Lim is the Lee Kong Chian Professor in the School of Information Systems at the Singapore Management University (SMU). He is also the Director of the Living Analytics Research Centre at SMU, a National Research Foundation (NRF) supported research centre focused on developing personalized and participatory analytics capabilities for smart city and smart nation applications. Dr Lim received his PhD degree from the University of Minnesota. His research expertise covers social media mining, social/urban data analytics, and information retrieval. He has published more than 90 international journal papers and 280 conference papers, many of which appeared in top ACM and IEEE journals and at top conference venues. He is the recipient of the Distinguished Contribution Award at the 2019 Pacific Asia Conference on Knowledge Discovery and Data Mining (PAKDD). He currently serves on Singapore’s Social Science Research Council and the Research Advisory Panel of the Prime Minister’s Office.


Prof. Subbarao Kambhampati
Arizona State University

Speech Title: Synthesizing Explainable Behavior for Human-AI Collaboration


Abstract:
As AI technologies enter our everyday lives at an ever-increasing pace, there is a greater need for AI systems to work synergistically with humans. This requires AI systems to exhibit behavior that is explainable to humans. Synthesizing such behavior requires AI systems to reason not only with their own models of the task at hand, but also about the mental models of their human collaborators. Using several case studies from our ongoing research, I will discuss how such multi-model planning forms the basis for explainable behavior.

Biography:
Subbarao Kambhampati (Rao) is a professor of Computer Science at Arizona State University. He received his B.Tech. in Electrical Engineering (Electronics) from the Indian Institute of Technology, Madras (1983), and his M.S. (1985) and Ph.D. (1989) in Computer Science from the University of Maryland, College Park. Kambhampati studies fundamental problems in planning and decision making, motivated in particular by the challenges of human-aware AI systems. Kambhampati is a fellow of AAAI and AAAS, and was an NSF Young Investigator. He has received multiple teaching awards, including a university last lecture recognition. Kambhampati is a past president of AAAI and was a trustee of IJCAI. He was the program chair for IJCAI 2016, ICAPS 2013, AAAI 2005, and AIPS 2000. He was a founding director of the Partnership on AI. Kambhampati’s research, as well as his views on the progress and societal impacts of AI, has been featured in multiple national and international media outlets.


Prof. Chien-Ju Ho
Department of Computer Science & Engineering, Washington University in St. Louis

Speech Title: Human-in-the-Loop Bandit Learning


Abstract:
Bandit learning is a sequential decision-making framework in which only partial feedback is observable. In standard stochastic bandit settings, the learner chooses an action at each time step and observes a reward drawn independently from a distribution associated with the chosen action. The goal of the learner is to maximize the sum of the rewards obtained from the chosen actions over time. Over the past few decades, an extensive literature on bandit learning has developed, with a wide range of applications. In this talk, I will discuss my recent work investigating the design of bandit algorithms with humans in the loop. In particular, I will focus the discussion on how human biases and incentives influence the design of learning algorithms, and talk about the impacts of learning algorithms on humans and the implications for fairness in AI.
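For readers unfamiliar with the standard stochastic bandit loop described above, the following is a minimal epsilon-greedy sketch in Python. It illustrates only the generic framework (choose an action, observe that action's reward, update an estimate), not the speaker's human-in-the-loop algorithms; the arm reward probabilities, exploration rate, and horizon are illustrative assumptions.

```python
import random

# Minimal epsilon-greedy sketch for a stochastic multi-armed bandit.
# ARM_PROBS, EPSILON, and HORIZON are illustrative assumptions.
ARM_PROBS = [0.2, 0.5, 0.7]      # hidden Bernoulli reward probability of each arm
EPSILON = 0.1                    # probability of exploring a random arm
HORIZON = 10_000                 # number of rounds

counts = [0] * len(ARM_PROBS)    # number of times each arm has been pulled
values = [0.0] * len(ARM_PROBS)  # running mean reward estimate per arm
total_reward = 0.0

for t in range(HORIZON):
    if random.random() < EPSILON:
        arm = random.randrange(len(ARM_PROBS))                      # explore
    else:
        arm = max(range(len(ARM_PROBS)), key=lambda a: values[a])   # exploit
    # Partial feedback: only the chosen arm's reward is observed.
    reward = 1.0 if random.random() < ARM_PROBS[arm] else 0.0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]             # incremental mean update
    total_reward += reward

print(f"total reward: {total_reward:.0f}, estimated arm means: {values}")
```

With these settings the learner's estimates typically converge toward the true arm means, and most pulls concentrate on the best arm; the talk examines how this picture changes when the feedback comes from biased or incentivized humans.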

Biography:
Chien-Ju Ho is an assistant professor in Computer Science & Engineering at Washington University in St. Louis. Previously, he was a postdoctoral associate at Cornell University. He earned his PhD in Computer Science from the University of California, Los Angeles in 2015 and spent three years visiting the EconCS group at Harvard from 2012 to 2015. He is the recipient of the 2015 Google Outstanding Graduate Research Award at UCLA, and his work was nominated for the Best Paper Award at WWW 2015. His research centers on the design and analysis of human-in-the-loop systems, using techniques drawn from machine learning, algorithmic economics, optimization, and online behavioral social science. He is interested in investigating the interactions between AI and humans, including studying how human behavior influences the design of machine learning algorithms and how the outcomes of machine learning impact human welfare.