The impact of AI on disinformation and democracy

We are proud to support POLITICO’s expanded coverage of artificial intelligence (AI) and elections this year through this dedicated series.
As a global foundation working to safeguard democracy from digital threats — from illegal data collection to algorithmic polarization — we know the role of AI throughout 2024’s global marathon of elections will shape public debates and policy for years to come.
At its best, technology fosters connection, creativity and activism. But whether it’s a single deepfake, a major influence operation deployed with generative AI, or social media feeds made even more addictive through machine learning, the rollout of AI in 2024 will challenge regulators, captivate the public and shape lasting narratives faster than policymakers can catch up. It may take even longer to separate moral panics from the real impact of AI on disinformation, polarization and hate.
The most critical beat in journalism today
Even for a tech-focused philanthropy like ours with expertise at our disposal, the upheavals of AI are triggering challenges that we, like our societies and our governments, are only beginning to reckon with.
This is why high-quality journalism matters.
Independent media will play a huge role in charting and demystifying AI’s civic disruptions in real time, and in holding tech companies accountable for the harms their products cause.
We are experiencing a febrile information age, undermined and eroded by Big Tech corporations that profit from distorting democracy and amplifying outrage. A byproduct of this business model is the extinction-level threat faced by public-interest media, as monolithic and divisive newsfeeds replace and defund robust, adversarial journalism.
In other words, this is a vital beat.
In this series, you’ll read about AI’s impact on disinformation during the 2024 global election cycle, in Europe (with special attention to Germany, France and Poland) and around the world.
While elections in the United States and the United Kingdom are also on many people’s minds, it was important to look beyond those countries’ borders at the impact of AI and disinformation in less-covered contexts, which is why this series will take you to the polls in Mexico and to the fallout from Indonesia’s February ballot.
The risks of AI-fueled disinformation and algorithmic distortion of our civic debates are everywhere, but they are likely to be even more pronounced in non-Western regions of the world where social media corporations are known to underinvest in safeguards.
Europe’s crucial role in tech regulation
AI may not single-handedly sway an election this year — but that is missing the point.
To echo the journalist and Nobel Peace Prize laureate Maria Ressa (we are proud backers of Ressa and Dmitry Muratov’s 10-Point Plan to address the information crisis), what’s at stake here is quite simple: our shared reality.
By partnering with POLITICO in Europe, we also recognize the role of the EU’s leadership in advancing responsible tech regulation. Europe has a unique opportunity to assert oversight of Big Tech corporations — if it can overcome the frequent enforcement challenges that come with its ambitious regulation, whether it’s the GDPR or, increasingly, the Digital Services Act, Digital Markets Act, the AI Act and more.
The European Union’s large market size means that changes to meet EU standards can have global repercussions, as companies adjust their products and policies worldwide.
Yet, despite these favorable conditions, Europe still struggles to rein in the tech industry. The problem lies in misaligned incentives: powerful laws exist, but underfunded regulators and regulatory bottlenecks in certain member countries (notably Ireland) have created an enforcement gap.
Democracy and sovereignty over technology
What’s needed, too, is help to elevate the political importance of Europe’s tech regulation, so that its ambitions don’t become mired in member countries’ bureaucracies.
Like the orderly transition to renewable energy, this is a core question of democratic integrity and sovereignty. However, tech policy is still too rarely treated as such.
The threats posed by AI-driven disinformation are daunting, but not insurmountable. Through rigorous journalism, robust regulation, and a renewed commitment to democratic oversight of our technologies and the corporations that deploy them, we can protect the liberating potential of technology while upholding the principles of strong, resilient and open societies.
For Luminate, this series marks an important step in that journey: We hope it can shine a light on the challenges around us, and galvanize those invested in a safer digital future.