 

Keynote Lectures

Language Evolution by Autonomous Robots
Luc Steels, Vrije Universiteit Brussel, Belgium

Accountability, Responsibility, Transparency: the ART of AI
Virginia Dignum, Delft University of Technology, Netherlands

Reading Agents that Hunger for Knowledge
Eduard Hovy, Carnegie Mellon University, United States

 

Language Evolution by Autonomous Robots

Luc Steels
Vrije Universiteit Brussel
Belgium
 

Brief Bio
Available soon.


Abstract
Available soon.



 

 

Accountability, Responsibility, Transparency: the ART of AI

Virginia Dignum
Delft University of Technology
Netherlands
 

Brief Bio
Virginia Dignum is Associate Professor of Social Artificial Intelligence at the Faculty of Technology, Policy and Management at TU Delft. Her research focuses on value-sensitive design of intelligent systems and multi-agent organisations, in particular on the ethical and societal impact of AI. She is Executive Director of the Delft Design for Values Institute, secretary of the International Foundation for Autonomous Agents and Multi-agent Systems (IFAAMAS), and a member of the Executive Committee of the IEEE Initiative on Ethics of Autonomous Systems. She was co-chair of ECAI 2016, the European Conference on AI, and vice president of the BNVKI (Benelux AI Association).


Abstract
As robots and other AI systems move from being tools to being teammates, and increasingly make decisions that directly affect society, many questions arise across social, economic, political, technological, legal, ethical and philosophical issues. Can machines make moral decisions? Should artificial systems ever be treated as ethical entities? What are the legal and ethical consequences of human enhancement technologies, or cyber-genetic technologies? What are the consequences of extended government, corporate, and other organisational access to knowledge and predictions concerning citizen behaviour? How can moral, societal and legal values be part of the design process? How and when should governments and the general public intervene?

Answering these and related questions requires a whole new understanding of ethics with respect to control and autonomy in the changing socio-technical reality. Means are needed to integrate moral, societal and legal values with technological developments in Artificial Intelligence, both within the design process and as part of the deliberation algorithms employed by these systems. In this talk I discuss leading ethical theories, propose alternative ways to model ethical reasoning, and examine their consequences for the design of robots and softbots. Depending on the level of autonomy and social awareness of AI systems, different methods for ethical reasoning are needed. Given that ethics depend on the sociocultural context and are often only implicit in deliberation processes, methodologies are needed to elicit the values held by designers and stakeholders and to make them explicit, which can lead to better understanding of, and trust in, artificial autonomous systems.
The urgency of these issues is acknowledged by researchers and policy makers alike. Methodologies are needed to ensure ethical design of AI systems, including means to ensure accountability, responsibility and transparency (ART) in system design.



 

 

Reading Agents that Hunger for Knowledge

Eduard Hovy
Carnegie Mellon University
United States
 

Brief Bio
Eduard Hovy is a professor at the Language Technology Institute in the School of Computer Science at Carnegie Mellon University. He holds adjunct professorships at universities in the US, China, and Canada, and is co-Director of Research for the DHS Center for Command, Control, and Interoperability Data Analytics, a distributed cooperation of 17 universities. Dr. Hovy completed a Ph.D. in Computer Science (Artificial Intelligence) at Yale University in 1987, and was awarded honorary doctorates from the National Distance Education University (UNED) in Madrid in 2013 and the University of Antwerp in 2015. He is one of the initial 17 Fellows of the Association for Computational Linguistics (ACL) and also a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI). From 1989 to 2012 he directed the Human Language Technology Group at the Information Sciences Institute of the University of Southern California. Dr. Hovy’s research addresses several areas in Natural Language Processing, including machine reading of text, question answering, information extraction, automated text summarization, the semi-automated construction of large lexicons and ontologies, and machine translation. His contributions include the co-development of the ROUGE text summarization evaluation method, the BLANC coreference evaluation method, the Omega ontology, the Webclopedia QA Typology, the FEMTI machine translation evaluation classification, the DAP text harvesting method, the OntoNotes corpus, and a model of Structured Distributional Semantics. In November 2016 his Google h-index was 67. Dr. Hovy is the author or co-editor of six books and over 400 technical articles and is a popular invited speaker. In 2001 Dr. Hovy served as President of the ACL, in 2001–03 as President of the International Association of Machine Translation (IAMT), and in 2010–11 as President of the Digital Government Society. Dr. Hovy regularly co-teaches courses and serves on Advisory Boards for institutes and funding organizations in Germany, Italy, the Netherlands, and the USA.


Abstract
True intelligent agenthood (as opposed to mere agency) is characterized by self-driven internal goal creation and prioritization. Few AI systems enjoy the freedom today to autonomously decide what to do next; even robots and planning systems start with a fairly concrete goal and stop acting when they have achieved it. In a small experimental project at CMU we have been exploring what it might mean for a Natural Language text reading engine to experience a ‘hunger for knowledge’ that drives what it chooses to read and learn about next, in an ongoing manner. There is no overall goal other than trying to increase its understanding (coverage and interpretations) of the world as described in Wikipedia. The starting point is a sketchy representation of the Infoboxes of all the people listed in Wikipedia, and the principal criterion for choosing what to read about next is the desire to minimize knowledge gaps and remove inconsistencies. In contrast to Freebase, Knowledge Graphs, and other text mining projects, internal generalization is central to our work. To implement the system we combine traditional AI frame proposition representation for the basic information (to make it readable by humans) with neural networks such as autoencoders to perform generalization and anomaly detection.
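The anomaly-detection idea mentioned at the end of the abstract can be illustrated with a generic toy sketch (not the CMU system itself; the data, dimensions, and training loop below are invented for illustration): an autoencoder is trained to reconstruct typical inputs, and items that violate the learned regularity reconstruct poorly, so a high reconstruction error flags them as anomalous.

```python
import math
import random

random.seed(0)

# Training data: 2-D points lying near the line y = x (the "regularity").
data = [(v + random.uniform(-0.05, 0.05), v + random.uniform(-0.05, 0.05))
        for v in [random.uniform(-1, 1) for _ in range(200)]]

# A minimal linear autoencoder with one hidden unit:
#   encode: h = w1*x + w2*y      decode: (a1*h, a2*h)
w1 = w2 = a1 = a2 = 0.5
lr = 0.05
for _ in range(300):
    for x, y in data:
        h = w1 * x + w2 * y
        ex, ey = a1 * h - x, a2 * h - y      # reconstruction error
        # Stochastic gradient descent on 0.5 * (ex^2 + ey^2)
        gh = ex * a1 + ey * a2
        a1 -= lr * ex * h
        a2 -= lr * ey * h
        w1 -= lr * gh * x
        w2 -= lr * gh * y

def recon_error(point):
    """Anomaly score: distance between the input and its reconstruction."""
    x, y = point
    h = w1 * x + w2 * y
    return math.hypot(a1 * h - x, a2 * h - y)

normal = recon_error((0.7, 0.7))     # fits the learned pattern -> low error
anomaly = recon_error((0.7, -0.7))   # violates the pattern -> high error
```

After training, `recon_error` is low for points consistent with what the model has seen and high for inconsistent ones, which is the mechanism the abstract alludes to for spotting gaps and inconsistencies in accumulated knowledge.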


