Parallel Session 2: Beyond a Buzzword: The Role of Human Rights in the Governance of Artificial Intelligence
Rapid advances in artificial intelligence (AI) are enabling exciting new applications, but ethical, legal, and policy frameworks are struggling to keep pace. In particular, the integration of international human rights law in AI governance is hotly contested.
The international community is addressing documented and potential future human rights violations stemming from a variety of AI-related technologies. Two applications raise particular concern, owing to their current widespread deployment and their potential to severely impact civil and political rights: the use of AI for facial recognition and for content moderation (ranging from targeted “content filtering” to more blanket forms of censorship). Authoritarian regimes around the world are increasingly employing facial recognition technologies for mass surveillance, violating rights such as freedom of association, assembly, expression, and privacy. In the online context, the use of AI for content moderation reflects a shift from reactive identification of content to proactive filtering by algorithms. Although the private sector has adopted the latter approach in response to the challenge of moderating vast amounts of content across contexts, automated moderation remains an imperfect, opaque, and blunt tool. It raises human rights concerns such as threats to freedom of expression, which encompasses the freedom to seek, receive, and impart information of all kinds.
In this session, panelists discuss approaches to integrating international human rights norms and the rule of law into the governance of AI and its many applications, in order to address both the current problematic use of these technologies and the direction of their future development.