TRUSTWORTHY AI BY DESIGN: WEBINAR INSIGHTS FROM THE AI, BIG DATA, AND DEMOCRACY TASKFORCE

Experts from the AI, Big Data, and Democracy Taskforce gathered on 1 October 2024 for the "Trustworthy AI by Design" webinar. The taskforce consists of four EU-funded projects that have been cooperating to tackle the challenges of AI and big data while protecting democratic values. Did you miss the webinar? Have a look at the key insights.

Taskforce Goals

The taskforce is a joint initiative of the AI4Gov, KT4Democracy, ITHACA, and ORBIS projects. It supports the European Commission’s vision of creating inclusive and innovative societies. By focusing on how AI and big data affect democracy and civic participation, these projects aim to develop solutions that ensure transparency and safeguard democratic processes.

Webinar Highlights

The event showcased the combined efforts to provide tools helping policymakers and citizens make informed decisions. Each project brings its unique approach to solving the issues related to AI’s impact on democracy, while also considering the needs of different communities and cultures.

  • AI4Gov
    George Manias from the University of Piraeus spoke about how AI systems should be fair and aligned with democratic values. The project aims to reduce biases in AI and ensure that AI-based decisions are transparent and ethical.
  • ORBIS
    Professors Grazia Concilio and Ilaria Mariani from Politecnico di Milano presented the project’s work on developing AI-powered models for inclusive public participation. ORBIS focuses on enhancing engagement in democratic practices while ensuring transparency through ethical guidelines.
  • ITHACA
    Presented by team members from SIMAVI, CERTH, and the University of Patras, the initiative aims to create a digital platform that supports civic engagement. The project focuses on developing AI tools that prioritize fairness, privacy, and security, ensuring trust in democratic processes.

Speakers introduced the ITHACA platform, which enhances civic participation through trustworthy AI. Key points included:

Trust and Ethics
The platform prioritizes technical accuracy, user privacy, safety, legal compliance, fairness, and environmental consciousness to build trust.

Functional Requirements
Stakeholder input helped identify 65 technical and functional requirements, including content management, integration, accessibility, summarization, and security.

Hybrid Architecture
A microservices architecture combined with event-driven messaging ensures scalability and real-time adaptability of the system.
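
To make the event-driven approach concrete, here is a minimal publish/subscribe sketch in Python. The event bus, topic name, and subscribed services are purely hypothetical illustrations of the pattern, not components of the actual ITHACA platform.

    # Minimal in-memory event bus illustrating the publish/subscribe pattern
    # behind an event-driven microservices design. All names are illustrative
    # and are not taken from the ITHACA codebase.
    from collections import defaultdict

    class EventBus:
        def __init__(self):
            self._subscribers = defaultdict(list)  # topic -> list of handlers

        def subscribe(self, topic, handler):
            # Register a handler (e.g. a microservice endpoint) for a topic.
            self._subscribers[topic].append(handler)

        def publish(self, topic, event):
            # Deliver the event to every service subscribed to the topic.
            for handler in self._subscribers[topic]:
                handler(event)

    # Hypothetical services reacting to a citizen comment submitted on the platform.
    def fairness_check(event):
        print(f"[fairness] screening comment {event['id']} for bias")

    def summarize(event):
        print(f"[summarizer] condensing comment {event['id']}")

    bus = EventBus()
    bus.subscribe("comment.submitted", fairness_check)
    bus.subscribe("comment.submitted", summarize)
    bus.publish("comment.submitted", {"id": 42, "text": "Example citizen input"})

Because services only react to published events, new tools can subscribe to an existing topic without changing other components, which is what gives this style of design its scalability and adaptability.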

AI Trustworthiness Tools:

  • The AI Fairness Tool promotes inclusivity by preventing bias.
  • PPML (Privacy-Preserving Machine Learning) safeguards user privacy through differential privacy techniques (see the sketch after this list).
  • The AI Cybersecurity Tool detects and addresses security vulnerabilities in AI systems.
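
As a concrete illustration of the differential privacy techniques mentioned above, the following sketch applies the classic Laplace mechanism to a simple count query. The epsilon value and the example data are hypothetical and do not describe the PPML component's actual configuration.

    # Laplace mechanism: add calibrated noise to a count so that any single
    # participant's presence or absence is statistically masked.
    # Illustrative only; not the PPML component's actual implementation.
    import numpy as np

    def private_count(values, epsilon):
        # A count query has sensitivity 1 (one person changes it by at most 1),
        # so Laplace noise with scale 1/epsilon gives epsilon-differential privacy.
        true_count = sum(1 for v in values if v == 1)
        noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
        return true_count + noise

    # Hypothetical survey responses (1 = supports a proposal, 0 = does not).
    responses = [1, 0, 1, 1, 0, 1, 0, 1]
    print(private_count(responses, epsilon=0.5))

Smaller epsilon values add more noise and therefore stronger privacy protection, at the cost of a less accurate count.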

Visual Interface
Moderators can monitor AI fairness, privacy, and security metrics while maintaining user anonymity.

  • KT4Democracy
    Jennifer Edmond from Trinity College Dublin highlighted the project’s focus on the cultural aspects of AI in democracy. KT4Democracy plans to create digital tools that promote active civic participation, balancing technology with human-centered values.

The taskforce is more than just a group of individual projects; it’s a unified effort to share knowledge, best practices, and strategies. This collaboration strengthens the projects’ ability to address the challenges AI and big data pose to democracy, leading to a broader impact on policymaking and citizen engagement.

The webinar stressed the urgency of developing trustworthy AI systems to support democratic values. The taskforce’s work reflects the EU’s commitment to creating a technology landscape that respects human rights and upholds democratic principles.

You can find the presentation together with the video recording from the webinar here: https://bit.ly/4gVpSsy

Documents:
AI, Big Data & Democracy Taskforce_Post-Webinar Report Oct24

Taskforce Joint Flyer

Funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Executive Agency. Neither the European Union nor the granting authority can be held responsible for them.