Social Implications: Perils & Promises of AI - IEEE AI & Ethics Summit 2016


AI techniques are already being applied across a variety of sectors, from autonomous systems to the automation of decision making, affecting people’s lives in fields as diverse as education, finance, healthcare, transportation, and warfare. The use of AI in such rapidly evolving sectors clearly raises social implications and ethical challenges. Even where the technical capability exists, practical challenges often remain in balancing competing demands on finite capacity and resources.

In a context where decisions about the “suitability” or “prioritisation” of individuals for access to services (e.g. health insurance) and technological innovations may increasingly rest on automated assessments of perceived risk factors, how do we ensure that critical decision making continues to incorporate a strong ethical dimension aligned with human values? In an increasingly complex, black-box decision making environment, quis custodiet ipsos custodes? (Who watches the watchmen?)

Panelists include:

  • Greg Adamson – Chair of IEEE Technical Activities, Ethics, Society & Technology Initiative
  • Nikolaos Mavridis – Founder and Director of the Interactive Robots and Media Lab (IRML)
  • Paul Nemitz – Director for Fundamental Rights and Union Citizenship, DG Justice, European Commission
  • Aurélie Pols – Data Governance & Privacy Advocate, Krux Digital Inc., and member of the Ethics Advisory Group (EAG) of the European Data Protection Supervisor (EDPS)
