Law, Emotion, Disinformation

Understanding the Legal Challenges of AI-driven Disinformation

The LED research project explores legal mechanisms to address the challenges posed by AI-enabled recognition and exploitation of users’ emotions in the dissemination of online disinformation. As artificial intelligence (AI) becomes more advanced and accessible, concerns are rising about its potential to influence individuals’ perceptions and behaviors. LED seeks to study these emerging risks and propose regulatory solutions that protect fundamental rights, particularly freedom of expression.

Our main research question is thus:

What legal mechanisms should be put in place to effectively regulate the use of AI-enabled recognition and exploitation of users’ emotions and reduce the risk of dissemination of disinformation?

Our Interdisciplinary Approach

AI-driven disinformation is a complex phenomenon that requires expertise from multiple scientific disciplines. To ensure a proper factual ground for our legal analysis, we adopt a strong interdisciplinary approach. The LED project integrates insights from experts in the following disciplines:

  • Psychology – To understand the role of emotions in the phenomenon of disinformation.
  • Communication sciences – To examine the mechanisms behind the spread of online disinformation.
  • Computer sciences – To assess the capabilities and limits of AI technologies in emotion recognition and exploitation.
  • Law – To identify the regulatory gaps in existing legislation and to develop new legal models to counter AI-driven disinformation.

An interdisciplinary panel of experts will guide the project, ensuring a comprehensive and evidence-based approach to the legal analysis.

Research Work Plan

The LED project is structured into five key work packages (WPs):

  • WP1 establishes, through an interdisciplinary expert group, the factual assumptions and conceptual framework regarding online disinformation fueled by AI emotion recognition and exploitation.
  • WP2 examines how the right to quality information can curb online disinformation arising through emotion recognition and exploitation.
  • WP3 investigates alternative methods of regulation, such as self-regulation, co-regulation, and transparency measures, to reduce AI-enabled disinformation.
  • WP4 aims to draft and disseminate guidelines and recommendations for policymakers and other stakeholders.
  • WP5 ensures effective project coordination and an efficient dissemination of research findings to the academic community, stakeholders and the general public.

Other Objectives

Ultimately, LED seeks to bridge legal, technological, and societal perspectives to develop a balanced and effective regulatory framework for AI-driven disinformation arising through emotion recognition and exploitation.

Research centres

This work is supported by the Fonds de la Recherche Scientifique – FNRS.

This website is licensed under the Creative Commons BY-NC-SA 4.0 license.