IEEE Digital Reality: AI Biases and Inclusion

This video program is a part of the Premium package:

IEEE Digital Reality: AI Biases and Inclusion


  • IEEE Member: US $10.00
  • Society Member: US $0.00
  • IEEE Student Member: US $10.00
  • Non-IEEE Member: US $20.00

Image analysis algorithms have become indispensable in the modern information ecosystem. Beyond their early use in restricted domains (e.g., military, medical), they are now widely used in consumer applications and social media, enabling functionality that users take for granted.

Recently, image analysis algorithms have become widely available as Cognitive Services. This practice is proving to be a boon to the development of applications that require user modeling, personalization, and adaptation. However, while tagging APIs offer developers an inexpensive and convenient means to add functionality to their creations, most are opaque and proprietary, and there are numerous social and ethical issues surrounding their use in contexts where people can be harmed.
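
To make this integration pattern concrete, the sketch below shows how a developer might send an image to a tagging API and read back its labels. It is a minimal illustration: the endpoint URL, credential, and response shape are assumptions for the example, not the actual interface of Clarifai, Google Vision, or Amazon Rekognition.

    import requests

    # Minimal sketch of calling a cloud image-tagging API over HTTP.
    # The endpoint, credential, and response format are hypothetical
    # placeholders, not the real interface of any particular service.
    TAGGING_ENDPOINT = "https://api.example-vision.invalid/v1/tag"
    API_KEY = "YOUR_API_KEY"  # hypothetical credential

    def tag_image(image_path):
        """Send an image to the tagging service and return its labels."""
        with open(image_path, "rb") as f:
            response = requests.post(
                TAGGING_ENDPOINT,
                headers={"Authorization": "Bearer " + API_KEY},
                files={"image": f},
                timeout=30,
            )
        response.raise_for_status()
        # Assumed response shape: {"tags": [{"label": "person", "confidence": 0.98}, ...]}
        return response.json()["tags"]

    if __name__ == "__main__":
        for tag in tag_image("portrait.jpg"):
            print(tag["label"], tag["confidence"])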

In this talk, Dr. Styliani Kleanthous discussed recent work on analyzing proprietary image tagging services (e.g., Clarifai, Google Vision, Amazon Rekognition) for gender and racial biases when tagging images depicting people. She presented her techniques for discrimination discovery in this domain, along with research efforts to understand users' and developers' perceptions of fairness. In addition, she explored the sources of such biases by comparing human and machine descriptions of the same people and images.
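
As a rough illustration of what a discrimination-discovery analysis in this setting can look like, the sketch below compares how often a tagging service assigns a particular label to images grouped by a demographic attribute; a large gap between groups flags that tag for closer inspection. The data layout, the choice of tag, and the rate-difference measure are assumptions made for this example, not Dr. Kleanthous's actual protocol.

    from collections import defaultdict

    def tag_rate_by_group(records, tag):
        """Fraction of images in each group that received the given tag.

        Each record is assumed to look like
        {"group": "woman", "tags": ["person", "smile"]} -- a toy layout
        used only for illustration.
        """
        totals = defaultdict(int)
        hits = defaultdict(int)
        for rec in records:
            totals[rec["group"]] += 1
            if tag in rec["tags"]:
                hits[rec["group"]] += 1
        return {g: hits[g] / totals[g] for g in totals}

    # Toy data: does the tagger assign "official" at different rates to
    # images that human annotators labeled as depicting men vs. women?
    records = [
        {"group": "man", "tags": ["person", "official", "suit"]},
        {"group": "man", "tags": ["person", "smile"]},
        {"group": "woman", "tags": ["person", "smile"]},
        {"group": "woman", "tags": ["person", "beauty"]},
    ]
    rates = tag_rate_by_group(records, "official")
    print(rates, "disparity:", max(rates.values()) - min(rates.values()))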
