This major one-day Summit from IEEE – the world’s largest professional association dedicated to advancing technological innovation and excellence for the benefit of humanity – considered AI and Ethics and posed the question, ‘Who does the thinking?’ In a series of keynote speeches and lively panel discussions, the summit brought together leading technologists, legal thinkers, philosophers, social scientists, manufacturers and policy makers to:
- Consider the social, technological, legal, and philosophical questions surrounding this ongoing international debate, including the best ways to manage risk and reward.
- Consider proposals to program machines with ethical algorithms that embed human values, since autonomous machines will likely confront situations in which their actions have ethical consequences.
- Consider the social implications of the applications of Artificial Intelligence in fields as diverse as healthcare, education, finance, transportation, and warfare.
The Artificial Intelligence and Ethics Summit 2016 poses the question: "Who does the thinking?" In this video, IEEE European Public Policy Initiative Chair Marko Delimar gives brief welcoming remarks, followed by the opening keynote address by Wojciech Wiewiórowski, Assistant European Data Protection Supervisor.
The advent and increased sophistication of autonomous systems offers significant potential benefits in diverse application domains including manufacturing and transportation, healthcare and financial services, exploration, maintenance and repair. In addition to cost and risk reduction, potential benefits include enhanced productivity, precision and accuracy, better health outcomes, lower mortality and injury rates due to human error, as well as opportunities for greater human creativity. These are counter-balanced by a broad range of ethical, social, philosophical and legal concerns, including further dehumanising warfare, creating existential threats and damaging the fabric of human society.
From the perspective of reducing the likelihood of negative as well as unintended consequences, what is the best way to manage risk and reward? Should those responsible for technological innovation in the domain of autonomous systems be given carte blanche, or what kinds of guiding principles, regulation or even pre-emptive bans should be considered? This panel will discuss social, technological, legal, and philosophical questions surrounding this ongoing international debate.
There have been proposals to program ethical algorithms into machines such as cars or robots: for instance, utilitarian algorithms that weigh the costs and benefits of machine decisions, as well as attempts to build Kant’s Categorical Imperative into systems. These engineering efforts matter because, as cars and medical and care robots continue to proliferate in society, these machines will likely confront situations in which their actions have ethical consequences. But are machines capable of making what humans consider ethical or moral decisions? Should they be making these decisions at all? Or do humans always need to be in the loop? This panel will convene around these important questions.
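To make the idea of a utilitarian algorithm concrete, here is a minimal sketch (not drawn from the summit itself; the scenario, scores, and probabilities are illustrative assumptions): each candidate action is scored by its net expected utility, and the machine selects the action with the highest score.

```python
def expected_utility(action):
    """Net expected utility: sum over outcomes of probability * (benefit - cost)."""
    return sum(p * (benefit - cost) for p, benefit, cost in action["outcomes"])

def choose_action(actions):
    """Pick the action that maximises net expected utility."""
    return max(actions, key=expected_utility)

# Hypothetical example: a delivery robot deciding whether to cross a busy street.
# Each outcome is (probability, benefit, cost) on an arbitrary utility scale.
actions = [
    {"name": "cross_now",
     "outcomes": [(0.95, 10.0, 0.0),     # delivery arrives on time
                  (0.05, 0.0, 100.0)]},  # small chance of a costly collision
    {"name": "wait_for_signal",
     "outcomes": [(1.0, 8.0, 0.0)]},     # slightly late, but no risk
]

best = choose_action(actions)
print(best["name"])  # → wait_for_signal
```

Even this toy example surfaces the panel’s core difficulty: someone must choose the numbers, i.e. decide how to price a collision against a late delivery, and that valuation is itself an ethical judgement that the algorithm merely executes.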
AI techniques are already being applied across a variety of sectors, ranging from autonomous systems to the automation of decision-making activities, affecting people’s lives in fields as diverse as education, finance, healthcare, transportation, and warfare. There are clearly social implications and ethical challenges arising from the use of AI in such rapidly evolving sectors. Even where the technical capability exists, there are often practical challenges in balancing competing demands on finite capacity and resources.
In a context where decisions related to the “suitability” or “prioritisation” of individuals to access services (e.g. health insurance) and technological innovations may increasingly be based on automated assessments of perceived risk factors, how do we ensure that critical decision making continues to incorporate a strong ethical dimension aligned with human values? In an increasingly complex, black-box based decision-making environment, quis custodiet ipsos custodes? (Who watches the watchmen?)