Keynote Lecture

Responsible Agency

Carles Sierra
IIIA-CSIC
Spain

Brief Bio
Carles Sierra is a Research Professor at the Artificial Intelligence Research Institute (IIIA-CSIC), in the Barcelona area, and is currently the Director of the Institute. He received his PhD in Computer Science from the Technical University of Barcelona (UPC) in 1989 and has been doing research on Artificial Intelligence ever since. He was a visiting researcher at Queen Mary and Westfield College in London (1996-1997) and at the University of Technology Sydney for extended periods between 2004 and 2012. He is also an Adjunct Professor at Western Sydney University. He has taught postgraduate courses on various AI topics at several universities, including Université Paris Descartes, University of Technology Sydney, Universitat Politècnica de València, and Universitat Autònoma de Barcelona.

He has contributed to agent research in the areas of negotiation, argumentation-based negotiation, computational trust and reputation, team formation, and electronic institutions. These contributions have materialised in more than 300 scientific publications. His current work focuses on the use of AI techniques for education and on social applications of AI. He has served the multiagent systems research community as General Chair of the AAMAS conference in 2009, as Program Chair in 2004, and as Editor-in-Chief of the Journal of Autonomous Agents and Multiagent Systems (2014-2019). He has also served the broader AI community as Local Chair of IJCAI 2011 in Barcelona and as Program Chair of IJCAI 2017 in Melbourne. He has been on the editorial boards of nine journals and has served as an evaluator of numerous calls and a reviewer of many projects within the EU research programmes. He is a EurAI Fellow and was President of the Catalan Association of AI from 1998 to 2002.


Abstract
The main challenge that artificial intelligence research faces today is how to guarantee the development of responsible technology, and, in particular, how to guarantee that autonomy is responsible. Social fears about the actions taken by AI can only be appeased by providing ethical certification and transparency of systems. This, however, is certainly not an easy task. As we know very well in the multiagent systems field, there are limits to how accurately the outcomes of a system can be predicted, because multiagent systems are examples of complex systems. Moreover, AI will be social: there will be thousands of AI systems interacting among themselves and with a multitude of humans; AI will necessarily be multiagent.

Although we cannot provide complete guarantees on outcomes, we must be able to define precisely what autonomous behaviour is acceptable (ethical), to provide repair methods for anomalous behaviour, and to explain the rationale behind AI decisions. Ideally, we should be able to guarantee the responsible behaviour of individual AI systems by construction.

By an ethical AI system I understand one that is capable of deciding which norms are most appropriate, abiding by them, and making them evolve and adapt. The multiagent systems field has developed a number of theoretical and practical tools that, properly combined, can provide a path to developing such systems, that is, a means to build ethical-by-construction systems: agreement technologies to decide on acceptable ethical behaviour, normative frameworks to represent and reason about ethics, and electronic institutions to operationalise ethical interactions. Throughout my career I have contributed tools in these three areas. In this keynote, I will describe a methodology that supports their combination and incorporates some new ideas from law and organisational theory.


