Risks and Benefits of Artificial Intelligence and Robotics


6 Feb 2017

08:50-17:45

7 Feb 2017

09:00-18:15

GMT

By invitation only


Cambridge Judge Business School

Trumpington St

Cambridge

CB2 1AG

United Kingdom

A workshop for media and security sectors

The Cambridge Centre for Risk Studies, in collaboration with the United Nations programme on Journalism and Public Information, presents a workshop on the Risks and Benefits of Artificial Intelligence and Robotics.

From sensing and finance to medicine, transportation and security, a technological revolution is taking place. Artificial Intelligence (AI) has been a feature of science fiction for almost a century, but it is only in recent years that the prospect of autonomous robotics and artificially intelligent systems has become truly viable. While these developments potentially offer great opportunities, they are also likely to have significant impacts on the very functioning of society, posing practical, ethical, legal and security challenges – much of which is not yet fully appreciated or understood.

The media and other sources of public information are central in ensuring that citizens and institutions have a realistic and balanced understanding of such technologies. The media can contribute to shaping a culture of collective responsibility that will support the development and use of these technologies according to stringent values and principles. This workshop will seek to deepen knowledge of the risks and benefits associated with such technological advances across a broad range of potential applications, from day-to-day life to conflict situations. Workshop participants will engage in a series of brainstorming sessions and practical exercises with eminent engineers, academics and policy makers, expanding their professional network in a select, international environment.


Programme

Day 1

Monday 6 February 2017

Locations

  • 08:50-12:45 | Castle Teaching Room (morning sessions)
  • 12:45-13:30 | Common Room (lunch)
  • 13:30-14:15 | Lecture Theatre 1 (live demonstration of Darktrace)
  • 14:15-17:45 | Lecture Theatre 1 (afternoon sessions)

08:50 – 09:00

Welcome and introductions

Cambridge Centre for Risk Studies and UNICRI

09:00 – 09:30

Keynote address

Dr Konstantinos Karachalios, Managing Director of The Institute of Electrical and Electronics Engineers (IEEE) Standards Association and Member of the Management Council of IEEE

09:30 – 10:15

Artificial intelligence and robotics 101: what is it and where are we now?

Professor Noel Sharkey, University of Sheffield, UK, Co-Founder of the Foundation for Responsible Robotics (FRR) and Chairman of the International Committee for Robot Arms Control (ICRAC) (TBC)

10:15 – 10:45

Discussion

Moderated by Professor Noel Sharkey

10:45 – 11:15

Coffee & tea

11:15 – 12:00

Ethics and artificial intelligence

Kay Firth-Butterfield, Barrister-at-Law, Distinguished Scholar, Robert S. Strauss Center for International Security and Law, University of Texas, Austin, and Co-Founder, Consortium for Law and Ethics of Artificial Intelligence and Robotics

12:00 – 12:45

Discussion

Moderated by Kay Firth-Butterfield

12:45 – 13:30

Lunch

13:30 – 14:15

Live demonstration of Darktrace: A Shift to Self-Learning and Self-Defending Digital Businesses

Dave Palmer, Director of Technology, Darktrace

14:15 – 15:00

The cyber-security overlap

The triangle of pain: the role of policy, public and private sectors in mitigating the cyber threat – Professor Daniel Ralph, Academic Director, Cambridge Centre for Risk Studies & Professor of Operations Research, University of Cambridge Judge Business School

Modeling the cost of cyber catastrophes to the global economy – Simon Ruffle, Director of Research & Innovation, Cambridge Centre for Risk Studies

Towards cyber insurance: approaches to data and modeling – Jennifer Copic, Research Associate, Cambridge Centre for Risk Studies

Download slides (pdf, 3.09MB)

15:00 – 15:45

Discussion

Moderated by Dr Michelle Tuveson, Executive Director, Cambridge Centre for Risk Studies

15:45 – 16:15

Coffee & tea

16:15 – 17:00

From fear to accountability – the state of artificial intelligence journalism

John C. Havens, Executive Director of The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems and contributing writer for Mashable

17:00 – 17:45

Discussion

Moderated by John Havens

Day 2

Tuesday 7 February 2017

Location

  • 09:00-18:15 | Castle Teaching Room

09:00 – 09:15

Recap of first day takeaways

Irakli Beridze, Senior Strategy and Policy Advisor, United Nations Interregional Crime and Justice Research Institute (UNICRI)

09:15 – 10:00

Emerging technologies: quantum computing

Dr Natalie Mullin, 1QBit Quantum Computing Software Company

10:00 – 10:30

Discussion

Moderated by Dr Natalie Mullin

10:30 – 11:00

Coffee & tea

11:00 – 11:45

Economic and social implications of robotics and artificial intelligence

Olly Buston, Founding Director, Future Advocacy

11:45 – 12:30

Discussion

Moderated by Olly Buston

12:30 – 14:00

Lunch

14:00 – 14:45

Long-term issues of artificial intelligence and the future of humanity

Kyle Scott, Future of Humanity Institute, University of Oxford

14:45 – 15:30

Discussion

Moderated by Kyle Scott

15:30 – 16:00

Coffee & tea

16:00 – 16:45

Robotics and artificial intelligence at the United Nations

Irakli Beridze, Senior Strategy and Policy Advisor, United Nations Interregional Crime and Justice Research Institute

16:45 – 17:15

Discussion

Moderated by Irakli Beridze

17:15 – 18:15

Panel discussion

Moderated by Irakli Beridze, UNICRI and Dr Michelle Tuveson, Cambridge Centre for Risk Studies

Panellists include:

  • Dr Stephen Cave, Executive Director, Leverhulme Centre for the Future of Intelligence, University of Cambridge
  • Professor Noel Sharkey, University of Sheffield
  • Olly Buston, Founding Director, Future Advocacy
  • Dr Natalie Mullin, 1QBit Quantum Computing Software Company
  • Kyle Scott, Future of Humanity Institute, University of Oxford
  • Kay Firth-Butterfield, Barrister-at-Law, Distinguished Scholar, Robert S. Strauss Center for International Security and Law, University of Texas, Austin, and Co-Founder, Consortium for Law and Ethics of Artificial Intelligence and Robotics

Speakers

Irakli Beridze

Senior Strategy and Policy Advisor, UNICRI

Irakli Beridze is Senior Strategy and Policy Advisor at UNICRI, with more than 18 years of experience leading highly political and complex multilateral negotiations and developing stakeholder engagement programmes and channels of communication with governments, UN agencies, international organisations, think tanks, civil society, foundations, academia, private industry and other partners at the international level.

Prior to joining UNICRI, he served as a special projects officer at the Organisation for the Prohibition of Chemical Weapons (OPCW), undertaking extensive missions in politically sensitive areas around the globe. He was recognised for his contribution when the OPCW was awarded the Nobel Peace Prize in 2013.

Since 2015, he has initiated and headed the first UN programme on Artificial Intelligence and Robotics, and is leading the creation of the UN Centre on AI and Robotics, with the objective of enhancing understanding of the risk-benefit duality of AI through improved coordination, knowledge collection and dissemination, awareness-raising and global outreach activities. He is a member of various international task forces and working groups advising governments and international organisations on numerous issues related to international security, emerging technologies and global political trends.

Kay Firth-Butterfield

Barrister-at-Law, Distinguished Scholar, Robert S. Strauss Center for International Security and Law, University of Texas, Austin

Co-Founder, Consortium for Law and Ethics of Artificial Intelligence and Robotics

Kay Firth-Butterfield is a Barrister-at-Law and part-time Judge in the United Kingdom, where she has also worked as a mediator, arbitrator, business owner and Professor of Law. In the United States, Kay is Executive Director of AI-Austin and a former Chief Officer and member of the Lucid.ai Ethics Advisory Panel (EAP). She is a humanitarian with a strong sense of social justice and has advanced degrees in Law and International Relations.

Kay advises governments, think tanks and non-profits about artificial intelligence, law and policy. Kay co-founded the Consortium for Law and Policy of Artificial Intelligence and Robotics at the University of Texas and, as an adjunct Professor of Law, teaches Artificial Intelligence and Emerging Technologies: Law and Policy. She is a Distinguished Scholar of the Robert S. Strauss Center for International Security and Law.

Kay thinks about and advises on how AI and other emerging technologies will impact business and society, including how business can prepare for that impact in its internal planning and external interaction with customers and other stakeholders and how society will be affected by these technologies. Kay speaks regularly to international audiences addressing many aspects of these challenging changes.

Jennifer Copic

Research Associate, Cambridge Centre for Risk Studies

Jennifer Copic is a Research Associate at the Centre for Risk Studies. Jennifer supports the research on financial and organisational networks. She is particularly excited to work with tools that help visualise complex data sets. She holds a BS in Chemical Engineering from the University of Louisville and an MS in Industrial and Operations Engineering from the University of Michigan.

Prior to joining the Centre for Risk Studies, Jennifer worked as a systems engineer for General Mills at a manufacturing plant. She really enjoys modelling and visualising data in order to help others make more informed decisions.

Dr Konstantinos Karachalios

Managing Director, The Institute of Electrical and Electronics Engineers (IEEE) Standards Association and Member of the Management Council of IEEE

A globally recognised leader in standards development and intellectual property, Dr Ing Konstantinos Karachalios is managing director of the IEEE Standards Association and a member of the IEEE Management Council.

As managing director, he has been enhancing IEEE efforts in global standards development in strategic emerging technology fields, through technical excellence of staff, expansion of global presence and activities and emphasis on inclusiveness and good governance, including reform of the IEEE standards-related patent policy.

As a member of the IEEE Management Council, he has championed the expansion of IEEE influence in key techno-political areas, including consideration of the social and ethical implications of technology, in line with the IEEE mission to advance technology for humanity. Results have been rapid and profound: IEEE is becoming the place to go for debating and building consensus on issues such as a trustworthy and inclusive Internet and ethics in the design of autonomous systems.

Before joining IEEE, Konstantinos played a crucial role in successful French-German cooperation in coordinated research and scenario simulation for large-scale nuclear reactor accidents. At the European Patent Office, his experience included establishing the EPO’s patent academy, the department for delivering technical assistance to developing countries and the public policy department, and serving as an envoy to multiple UN organisations.

Konstantinos earned a PhD in energy engineering (nuclear reactor safety) and a master’s in mechanical engineering from the University of Stuttgart.

Dave Palmer

Director of Technology, Darktrace

Dave Palmer is a cyber security technical expert with over 10 years’ experience at the forefront of government intelligence operations. He has worked across UK intelligence agencies GCHQ and MI5, where he delivered mission-critical infrastructure services, including the replacement and security of entire global networks, the development of operational internet capabilities and the management of critical disaster recovery incidents. At Darktrace, Dave oversees the mathematics and engineering teams and product strategy. He holds a first class degree in Computer Science and Software Engineering from the University of Birmingham.

Simon Ruffle

Director of Technology Research, Cambridge Centre for Risk Studies

Simon’s responsibilities include managing research in the Centre, particularly the TechCat track – solar storm and cyber catastrophe research – and the Cambridge Risk Framework, a platform for analysing multiple global systemic risks through unified modelling software, a common database architecture and information interchange standards.

He is responsible for developing and maintaining partnership relationships with corporations, governments, and other academic centres. He speaks regularly at seminars and conferences.

He is developing methods for storing and applying the Centre’s Stress Test Scenarios and other Risk Assessment Tools to macro-economic analysis, financial markets and insurance loss aggregation. He is researching how network theory can be applied to understanding the impact of catastrophes in a globalised world, including supply chains, insurance and banking.

Originally studying architecture at Cambridge, Simon has spent most of his career in industry, developing software for natural hazards risk. He has worked on risk pricing for primary insurers, catastrophe modelling for reinsurers, and has been involved in placing catastrophe bonds in the capital markets. He has many years of experience in software development, relational databases and geospatial analysis and has worked in a variety of organisations from start-ups to multinationals.

Professor Noel Sharkey

Emeritus Professor of AI and Robotics, University of Sheffield

Co-director of the Foundation for Responsible Robotics and Chairman of the International Committee for Robot Arms Control

Noel Sharkey (PhD, DSc, FIET, FBCS, CITP, FRIN, FRSA) is Emeritus Professor of AI and Robotics at the University of Sheffield, co-director of the Foundation for Responsible Robotics and chair elect of the NGO International Committee for Robot Arms Control (ICRAC). He has moved freely across academic disciplines, lecturing in departments of engineering, philosophy, psychology, cognitive science, linguistics, artificial intelligence, computer science, robotics, ethics, law, art and design, and at military colleges. He has held research and teaching positions in the US (Yale and Stanford) and the UK (Essex, Exeter and Sheffield).

Noel has been working in AI/robotics and related disciplines for more than three decades and is known for his early work on neural computing and genetic algorithms. As well as writing academic articles, he writes for national newspapers and magazines. Noel has created thrilling robotics museum exhibitions and mechanical art installations, frequently appears in the media and has worked on popular tech TV shows, for example as head judge of Robot Wars. His research since 2006 has focused on ethical/legal/human rights issues in robot applications in areas such as the military, child care, elder care, policing, autonomous transport, robot crime, medicine/surgery, border control, sex and civil surveillance. A major part of his current work is advocacy (mainly at the United Nations) on the ethical, legal and technical aspects of autonomous weapons systems.

Olly Buston

Founding Director and CEO, Future Advocacy

Olly Buston is CEO of the think tank and consultancy Future Advocacy, which works on some of the greatest challenges faced by humanity in the 21st century. Olly is the author of the recent report An Intelligent Future?, which focuses on what governments can do to maximise the opportunities and minimise the risks of artificial intelligence.

Previously Olly was Director of the ONE campaign for seven years. He has also run the global anti-slavery movement Walk Free, been an Executive Director of the UK Labour Party, and led Oxfam International’s global education campaign from Washington DC.

Dr Stephen Cave

Executive Director and Senior Associate, Leverhulme Centre for the Future of Intelligence, University of Cambridge

Dr Stephen Cave is Executive Director of the Leverhulme Centre for the Future of Intelligence (CFI) and Senior Research Fellow at the University of Cambridge. Previously, he worked for the British Foreign Office as a policy advisor and diplomat. He has written on a wide range of philosophical and scientific subjects, including for the New York Times, The Atlantic, The Guardian, The Telegraph, The Financial Times and Wired, and has appeared on television and radio around the world. His book Immortality was a New Scientist book of the year. He has a PhD in philosophy from the University of Cambridge.

John C Havens

Executive Director, The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems

John C Havens is Executive Director of The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems. The Initiative is creating a document called Ethically Aligned Design to provide recommendations for values-driven Artificial Intelligence and Autonomous Systems, as well as standards recommendations. Guided by over one hundred thought leaders, the Initiative has a mission of ensuring every technologist is educated, trained and empowered to prioritise ethical considerations in the design and development of autonomous and intelligent systems.

John is also a regular contributor on issues of technology and wellbeing to Mashable, The Guardian, HuffPo and TechCrunch, and is the author of Heartificial Intelligence: Embracing Our Humanity To Maximize Machines and Hacking Happiness: Why Your Personal Data Counts and How Tracking it Can Change the World.

John was an EVP of a top-ten PR firm, a Vice President of a tech startup, and an independent consultant whose clients have included Gillette, P&G, HP, Wal-Mart, Ford, Allstate, Monster, Gallo Wines and Merck. He was also the Founder of The Happathon Project, a non-profit utilising emerging technology and positive psychology to increase human wellbeing.

John has spoken at TEDx, at SXSW Interactive (six times), and as a global keynote speaker for clients such as Cisco, Gillette, IEEE and NXP Semiconductors. John was also a professional actor on Broadway, TV and film for fifteen years.

Dr Natalie Mullin

Mathematician and Quantum Algorithms Researcher, 1QBit

Natalie Mullin is a mathematician and quantum algorithms researcher at 1QBit, the world’s first software company dedicated to quantum computing. Natalie completed her doctorate in Combinatorics and Optimisation at the University of Waterloo. Her research interests include graph theory, machine learning, and operations research.

At 1QBit, Natalie develops optimisation algorithms that utilise quantum annealing. She is currently investigating hybrid classical and quantum combinatorial algorithms that make optimal use of both computational paradigms.

Professor Daniel Ralph

Academic Director, Cambridge Centre for Risk Studies, University of Cambridge Judge Business School & Professor of Operations Research

Professor Daniel Ralph is a Founder and Academic Director of the Centre for Risk Studies, Professor of Operations Research at the University of Cambridge Judge Business School, and a Fellow of Churchill College. Daniel’s research interests include the identification and management of systemic risk, risk aversion in investment, economic equilibrium models and optimisation methods. Management stress testing, via the selection and construction of catastrophe scenarios, is one focus of his work at the Cambridge Centre for Risk Studies; another is the role and expression of risk management within organisations. Daniel engages across scientific and social science academia, a variety of commercial and industrial sectors, and government policy making. He was Editor-in-Chief of Mathematical Programming (Series B) from 2007 to 2013.

Visit Professor Daniel Ralph’s faculty profile

Kyle Scott

Press Officer, Future of Humanity Institute, University of Oxford

Kyle Scott is the Press Officer at the Future of Humanity Institute at the University of Oxford. He began working on artificial intelligence and existential risk considerations through his previous work in effective altruist organisations such as the Centre for Effective Altruism and 80,000 Hours, where he juggled generalist roles spanning research, finance, marketing, web development, office administration and more. Kyle graduated from Whitman College, where he studied philosophy and international development.

Dr Michelle Tuveson

Founder & Executive Director, Cambridge Centre for Risk Studies, Cambridge Judge Business School

Michelle Tuveson is a Founder and Executive Director at the Cambridge Centre for Risk Studies hosted at the University of Cambridge Judge Business School. Her responsibilities include the overall executive leadership at the Centre. This includes developing partnership relationships with corporations, governments, and other academic centres. Dr Tuveson leads the Cambridge CRO Council and she chairs the organising committee for the Cambridge Risk Centre’s Annual Risk Summits. She is one of the lead organisers of the Aspen Crisis and Risk Forum. She is an advisor to the World Economic Forum’s 2015 Global Risk Report and a contributor to the Financial Times Special Report on Risk Management. She is also an advisor to a number of corporations and boards as well as a frequent conference speaker.

Dr Tuveson has worked in corporations within the technology sector with her most recent position in the Emerging Markets Group at Lockheed Martin. Prior to that, she held positions with management strategy firm Booz Allen & Hamilton, and US R&D organisation MITRE Corporation. Dr Tuveson’s academic research focusses on the application of simulation models to study risk governance structures associated with the role of the Chief Risk Officer. She was awarded by the Career Communications Group, Inc. as a Technology Star for Women in Science, Technology, Engineering and Maths (STEM). She earned her BS in Engineering from the Massachusetts Institute of Technology, MS in Applied Math from Johns Hopkins University, and PhD in Engineering from the University of Cambridge. She is a member of Christ’s College Cambridge.
