6 Feb 2017
08:50 – 17:45
7 Feb 2017
09:00 – 18:15
GMT
By invitation only
Cambridge Judge Business School
Trumpington St
Cambridge
CB2 1AG
United Kingdom
The Cambridge Centre for Risk Studies, in collaboration with the United Nations programme on Journalism and Public Information, presents a workshop on the Risks and Benefits of Artificial Intelligence and Robotics.
From sensing and finance to medicine, transportation and security, a technological revolution is taking place. Artificial Intelligence (AI) has been a feature of science fiction for almost a century, but only in recent years has the prospect of autonomous robotics and artificially intelligent systems become truly viable. While these developments will potentially provide great opportunities, they are also likely to have significant impacts on the very functioning of society, posing practical, ethical, legal and security challenges, many of which are not yet fully appreciated or understood.
The media and other sources of public information are central in ensuring that citizens and institutions have a realistic and balanced understanding of such technologies. The media can contribute to shaping a culture of collective responsibility that will support the development and use of these technologies according to stringent values and principles. This workshop will seek to deepen knowledge of the risks and benefits associated with such technological advances across a broad range of potential applications, from day-to-day life to conflict situations. Workshop participants will engage in a series of brainstorming sessions and practical exercises with eminent engineers, academics and policy makers, expanding their professional network in a select, international environment.
Day 1: 6 February 2017
08:50 – 09:00
Cambridge Centre for Risk Studies and UNICRI
09:00 – 09:30
Dr Konstantinos Karachalios, Managing Director of The Institute of Electrical and Electronics Engineers (IEEE) Standards Association and Member of the Management Council of IEEE
09:30 – 10:15
Professor Noel Sharkey, University of Sheffield, UK, Co-Founder of the Foundation for Responsible Robotics (FRR) and Chairman of the International Committee for Robot Arms Control (ICRAC) (TBC)
10:15 – 10:45
Moderated by Professor Noel Sharkey
10:45 – 11:15
11:15 – 12:00
Kay Firth-Butterfield, Barrister-at-Law, Distinguished Scholar, Robert S. Strauss Center for International Security and Law, University of Texas, Austin, and Co-Founder, Consortium for Law and Ethics of Artificial Intelligence and Robotics
12:00 – 12:45
Moderated by Kay Firth-Butterfield
12:45 – 13:30
13:30 – 14:15
Dave Palmer, Director of Technology, Darktrace
14:15 – 15:00
The triangle of pain: the role of policy, public and private sectors in mitigating the cyber threat – Professor Daniel Ralph, Academic Director, Cambridge Centre for Risk Studies & Professor of Operations Research, University of Cambridge Judge Business School
Modeling the cost of cyber catastrophes to the global economy – Simon Ruffle, Director of Research & Innovation, Cambridge Centre for Risk Studies
Towards cyber insurance: approaches to data and modeling – Jennifer Copic, Research Associate, Cambridge Centre for Risk Studies
15:00 – 15:45
Moderated by Dr Michelle Tuveson, Executive Director, Cambridge Centre for Risk Studies
15:45 – 16:15
16:15 – 17:00
John C. Havens, Executive Director of The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems and contributing writer for Mashable
17:00 – 17:45
Moderated by John Havens
Day 2: 7 February 2017
09:00 – 09:15
Irakli Beridze, Senior Strategy and Policy Advisor, United Nations Interregional Crime and Justice Research Institute (UNICRI)
09:15 – 10:00
Dr Natalie Mullin, 1QBit Quantum Computing Software Company
10:00 – 10:30
Moderated by Dr Natalie Mullin
10:30 – 11:00
11:00 – 11:45
Olly Buston, Founding Director, Future Advocacy
11:45 – 12:30
Moderated by Olly Buston
12:30 – 14:00
14:00 – 14:45
Kyle Scott, Future of Humanity Institute, University of Oxford
14:45 – 15:30
Moderated by Kyle Scott
15:30 – 16:00
16:00 – 16:45
Irakli Beridze, Senior Strategy and Policy Advisor, United Nations Interregional Crime and Justice Research Institute
16:45 – 17:15
Moderated by Irakli Beridze
17:15 – 18:15
Moderated by Irakli Beridze, UNICRI and Dr Michelle Tuveson, Cambridge Centre for Risk Studies
Panellists include:
Irakli Beridze is Senior Strategy and Policy Advisor at UNICRI, with more than 18 years of experience in leading highly political and complex multilateral negotiations and developing stakeholder engagement programmes and channels of communication with governments, UN agencies, international organisations, think tanks, civil society, foundations, academia, private industry and other partners at the international level.
Prior to joining UNICRI, he served as a special projects officer at the Organisation for the Prohibition of Chemical Weapons (OPCW), undertaking extensive missions in politically sensitive areas around the globe. He was among those recognised when the OPCW was awarded the Nobel Peace Prize in 2013.
Since 2015, he has initiated and headed the first UN programme on Artificial Intelligence and Robotics, and is leading the creation of the UN Centre on AI and Robotics, with the objective of enhancing understanding of the risk-benefit duality of AI through improved coordination, knowledge collection and dissemination, awareness-raising and global outreach activities. He is a member of various international task forces and working groups advising governments and international organisations on numerous issues related to international security, emerging technologies and global political trends.
Kay Firth-Butterfield is a Barrister-at-Law and part-time Judge in the United Kingdom, where she has also worked as a mediator, arbitrator, business owner and Professor of Law. In the United States, Kay is Executive Director of AI-Austin and former Chief Officer, and member, of the Lucid.ai Ethics Advisory Panel (EAP). She is a humanitarian with a strong sense of social justice and has advanced degrees in Law and International Relations.
Kay advises governments, think tanks and non-profits about artificial intelligence, law and policy. She co-founded the Consortium for Law and Policy of Artificial Intelligence and Robotics at the University of Texas and, as an adjunct Professor of Law, teaches Artificial Intelligence and Emerging Technologies: Law and Policy. She is a Distinguished Scholar of the Robert S. Strauss Center for International Security and Law.
Kay thinks about and advises on how AI and other emerging technologies will impact business and society, including how business can prepare for that impact in its internal planning and external interaction with customers and other stakeholders and how society will be affected by these technologies. Kay speaks regularly to international audiences addressing many aspects of these challenging changes.
Jennifer Copic is a Research Associate at the Centre for Risk Studies. Jennifer supports the research on financial and organisational networks. She is particularly excited to work with tools that help visualise complex data sets. She holds a BS in Chemical Engineering from the University of Louisville and an MS in Industrial and Operations Engineering from the University of Michigan.
Prior to joining the Centre for Risk Studies, Jennifer worked as a systems engineer for General Mills at a manufacturing plant. She really enjoys modelling and visualising data in order to help others make more informed decisions.
A globally recognised leader in standards development and intellectual property, Dr Ing Konstantinos Karachalios is managing director of the IEEE Standards Association and a member of the IEEE Management Council.
As managing director, he has been enhancing IEEE efforts in global standards development in strategic emerging technology fields, through technical excellence of staff, expansion of global presence and activities and emphasis on inclusiveness and good governance, including reform of the IEEE standards-related patent policy.
As a member of the IEEE Management Council, he championed expansion of IEEE influence in key techno-political areas, including consideration of the social and ethical implications of technology, in keeping with the IEEE mission to advance technology for humanity. Results have been rapid and profound: IEEE is becoming the place to go for debating and building consensus on issues such as a trustworthy and inclusive Internet and ethics in the design of autonomous systems.
Before IEEE, Konstantinos played a crucial role in successful French-German cooperation on coordinated research and scenario simulation for large-scale nuclear reactor accidents. At the European Patent Office, his experience included establishing the EPO’s patent academy, the department for delivering technical assistance to developing countries and the public policy department, and serving as an envoy to multiple UN organisations.
Konstantinos earned a PhD in energy engineering (nuclear reactor safety) and a master’s degree in mechanical engineering from the University of Stuttgart.
Dave Palmer is a cyber security technical expert with over 10 years’ experience at the forefront of government intelligence operations. He has worked across UK intelligence agencies GCHQ and MI5, where he delivered mission-critical infrastructure services, including the replacement and security of entire global networks, the development of operational internet capabilities and the management of critical disaster recovery incidents. At Darktrace, Dave oversees the mathematics and engineering teams and product strategy. He holds a first class degree in Computer Science and Software Engineering from the University of Birmingham.
Simon’s responsibilities include managing research in the Centre, particularly the TechCat track (solar storm and cyber catastrophe research) and the Cambridge Risk Framework, a platform for analysing multiple global systemic risks through unified modelling software, a common database architecture and information interchange standards.
He is responsible for developing and maintaining partnership relationships with corporations, governments, and other academic centres. He speaks regularly at seminars and conferences.
He is developing methods for storing and applying the Centre’s Stress Test Scenarios and other Risk Assessment Tools to macro-economic analysis, financial markets and insurance loss aggregation. He is researching how network theory can be applied to understanding the impact of catastrophes in a globalised world, including supply chains, insurance and banking.
Originally studying architecture at Cambridge, Simon has spent most of his career in industry, developing software for natural hazards risk. He has worked on risk pricing for primary insurers, catastrophe modelling for reinsurers, and has been involved in placing catastrophe bonds in the capital markets. He has many years of experience in software development, relational databases and geospatial analysis and has worked in a variety of organisations from start-ups to multinationals.
Noel Sharkey PhD DSc FIET FBCS CITP FRIN FRSA is Emeritus Professor of AI and Robotics at the University of Sheffield, co-director of the Foundation for Responsible Robotics and chair elect of the NGO International Committee for Robot Arms Control (ICRAC). He has moved freely across academic disciplines, lecturing in departments of engineering, philosophy, psychology, cognitive science, linguistics, artificial intelligence, computer science, robotics, ethics, law, art, design and military colleges. He has held research and teaching positions in the US (Yale and Stanford) and the UK (Essex, Exeter and Sheffield).
Noel has been working in AI/robotics and related disciplines for more than three decades and is known for his early work on neural computing and genetic algorithms. As well as writing academic articles, he writes for national newspapers and magazines. Noel has created thrilling robotics museum exhibitions and mechanical art installations, frequently appears in the media, and works on popular tech TV shows, including as head judge of Robot Wars. His research since 2006 has been on ethical, legal and human rights issues in robot applications in areas such as the military, child care, elder care, policing, autonomous transport, robot crime, medicine/surgery, border control, sex and civil surveillance. A major part of his current work is advocacy (mainly at the United Nations) on the ethical, legal and technical aspects of autonomous weapons systems.
Olly Buston is CEO of the think tank and consultancy Future Advocacy, which works on some of the greatest challenges faced by humanity in the 21st century. Olly is author of the recent report An Intelligent Future?, which focuses on what governments can do to maximise the opportunities and minimise the risks of artificial intelligence.
Previously Olly was Director of the ONE campaign for seven years. He has also run the global anti-slavery movement Walk Free, been an Executive Director of the UK Labour Party, and led Oxfam International’s global education campaign from Washington DC.
Dr Stephen Cave is Executive Director of the Leverhulme Centre for the Future of Intelligence (CFI) and Senior Research Fellow at the University of Cambridge. Previously, he worked for the British Foreign Office as a policy advisor and diplomat. He has written on a wide range of philosophical and scientific subjects, including for the New York Times, The Atlantic, The Guardian, The Telegraph, The Financial Times and Wired, and has appeared on television and radio around the world. His book ‘Immortality’ was a New Scientist book of the year. He has a PhD in philosophy from the University of Cambridge.
John C Havens is Executive Director of The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems. The Initiative is creating a document called Ethically Aligned Design to provide recommendations for values-driven Artificial Intelligence and Autonomous Systems, as well as standards recommendations. Guided by over one hundred thought leaders, the Initiative has a mission of ensuring every technologist is educated, trained, and empowered to prioritise ethical considerations in the design and development of autonomous and intelligent systems.
John is also a regular contributor on issues of technology and wellbeing to Mashable, The Guardian, HuffPo and TechCrunch, and is the author of Heartificial Intelligence: Embracing Our Humanity To Maximize Machines and Hacking Happiness: Why Your Personal Data Counts and How Tracking it Can Change the World.
John was an EVP of a Top Ten PR firm, a Vice President of a tech startup, and an independent consultant, working with clients such as Gillette, P&G, HP, Wal-Mart, Ford, Allstate, Monster, Gallo Wines, and Merck. He was also the Founder of The Happathon Project, a non-profit utilising emerging technology and positive psychology to increase human wellbeing.
John has spoken at TEDx, at SXSW Interactive (six times), and as a global keynote speaker for clients such as Cisco, Gillette, IEEE, and NXP Semiconductors. He was also a professional actor on Broadway and in TV and film for fifteen years.
Natalie Mullin is a mathematician and quantum algorithms researcher at 1QBit, the world’s first software company dedicated to quantum computing. Natalie completed her doctorate in Combinatorics and Optimisation at the University of Waterloo. Her research interests include graph theory, machine learning, and operations research.
At 1QBit, Natalie develops optimisation algorithms that utilise quantum annealing. She is currently investigating hybrid classical and quantum combinatorial algorithms that make optimal use of both computational paradigms.
Professor Daniel Ralph is a Founder and Academic Director of the Centre for Risk Studies, Professor of Operations Research at the University of Cambridge Judge Business School, and a Fellow of Churchill College. Daniel’s research interests include the identification and management of systemic risk, risk aversion in investment, economic equilibrium models and optimisation methods. Management stress testing, via the selection and construction of catastrophe scenarios, is one focus of his work in the Cambridge Centre for Risk Studies; another is the role and expression of risk management within organisations. Daniel engages across scientific and social science academia, a variety of commercial and industrial sectors, and government policy making. He was Editor-in-Chief of Mathematical Programming (Series B) from 2007 to 2013.
Kyle Scott is the Press Officer at the Future of Humanity Institute at the University of Oxford. He came to work on artificial intelligence and existential risk through his previous roles in effective altruist organisations such as the Centre for Effective Altruism and 80,000 Hours, where he juggled generalist responsibilities spanning research, finance, marketing, web development, office administration, and more. Kyle graduated from Whitman College, where he studied philosophy and international development.
Michelle Tuveson is a Founder and Executive Director at the Cambridge Centre for Risk Studies, hosted at the University of Cambridge Judge Business School. Her responsibilities include overall executive leadership of the Centre, including developing partnerships with corporations, governments, and other academic centres. Dr Tuveson leads the Cambridge CRO Council and chairs the organising committee for the Cambridge Risk Centre’s Annual Risk Summits. She is one of the lead organisers of the Aspen Crisis and Risk Forum, an advisor to the World Economic Forum’s 2015 Global Risk Report and a contributor to the Financial Times Special Report on Risk Management. She is also an advisor to a number of corporations and boards as well as a frequent conference speaker.
Dr Tuveson has worked in corporations within the technology sector, most recently in the Emerging Markets Group at Lockheed Martin. Prior to that, she held positions with management strategy firm Booz Allen & Hamilton and US R&D organisation MITRE Corporation. Dr Tuveson’s academic research focusses on the application of simulation models to study risk governance structures associated with the role of the Chief Risk Officer. She was named a Technology Star for Women in Science, Technology, Engineering and Maths (STEM) by Career Communications Group, Inc. She earned her BS in Engineering from the Massachusetts Institute of Technology, MS in Applied Mathematics from Johns Hopkins University, and PhD in Engineering from the University of Cambridge. She is a member of Christ’s College Cambridge.