
AI regulation must be multistakeholder

The Hertie School, Centre for Digital Governance and Data Science Lab jointly host a public discussion about the EU AI Act.

Artificial Intelligence (AI) is poised to alter not only industries, but the very fabric of our societies. “It is exactly the fast pace of this technology and its widespread impact that requires a similarly quick and agile reaction from our policymakers, entrepreneurs and civil society organisations.” With these words, Hertie School President Cornelia Woll kicked off “Too smart to regulate? How AI challenges good governance”, a public discussion about the importance – and difficulty – of AI regulation and the EU AI Act. The event took place on 6 March and was hosted by the Hertie School together with its Centre for Digital Governance and Data Science Lab.

The panel included:
•    Carla Hustedt, Director of the Centre for Digital Society, Stiftung Mercator
•    Jan Hiesserich, Vice President Strategy & Communications, Aleph Alpha 
•    Matthias Spielkamp, Co-founder and Executive Director of AlgorithmWatch
•    Kai Zenner, Digital Policy Adviser, European Parliament

The discussion was moderated by Daniela Stockmann, Hertie School Professor of Digital Governance and Director of the Centre for Digital Governance, and Lynn Kaack, Assistant Professor of Computer Science and Public Policy.

Risks of AI – accountability gap and a lack of digital literacy

To begin the panel, the speakers gave their views on the biggest risks of AI as well as which aspects they think need regulation.

Carla Hustedt was especially concerned with accountability in AI use: “We need a better understanding of responsibilities along the value chain,” she argued. She also worried about the concentration of power in the digital market, where “big tech companies are trying to leverage their powers” and “don’t always use AI for good”. Matthias Spielkamp criticised discussions on AI for resting on a false balance between the risks the technology poses and the new opportunities it provides. “We have seen very broad claims being made about the capabilities of AI, and we haven’t really seen those materialise,” he said. “The biggest problems of humanity cannot be solved by technology.”

Jan Hiesserich was especially concerned with digital literacy. Though regulation provides guardrails for the use of AI, he said, it should not be seen “as a reason for people to stop thinking”. He called for a broader societal debate about what technology is supposed to do and where its limits lie. Kai Zenner was equally concerned about the lack of digital literacy in the public. Although he was confident that digital literacy would improve with time, he worried that during the “transitional period”, “bad actors” would find ways to interfere with our democracy, and added that we are not prepared for cyber-attacks. While he viewed the European Digital Services Act and AI Act as progress, he noted that they would probably not be enforced in time for the upcoming EU parliamentary elections.


The EU AI Act is not perfect, but it is progress

The panel discussed in depth the European Union’s AI Act, which was passed earlier this week. According to Zenner, who was involved in the process as adviser to Member of the European Parliament Axel Voss, finding consensus for the legislation was challenging for a number of reasons. The biggest hurdles were the divide between advocates and sceptics of regulation; choosing which stakeholders should comprise the responsible commission, which left certain actors out; and the lack of time to address all issues relevant to AI, leading to some being neglected. “We still managed to produce some good results,” said Zenner, “but there are also a few bad ones.”

Hustedt and Spielkamp both agreed that despite the loopholes in the act, it was crucial to get it passed, as no legislation would mean no regulation. “We wouldn’t have been able to get a better deal later”, Hustedt argued, given that the next EU Parliament could be more opposed to regulation than the current one. Hiesserich, though agreeing on the importance of regulation, criticised the legislation as “rushed”, arguing that the debate took place without a clear idea of what exactly was being regulated.

AI regulation needs to be multistakeholder

To conclude the panel, the speakers were asked how AI regulation can effectively respond to the challenges and opportunities of AI in the future. All four speakers agreed that it is crucial that the debate on AI regulation include a broad range of stakeholders. “This is a process where you have to have many people at the table,” Hiesserich stressed. “We all have to understand better where we’re coming from, and that’s something that I feel is missing in the debate.”

“Too smart to regulate? How AI challenges good governance” was the fifth Hertie Futures Forum event, a series which celebrates the Hertie School’s 20th anniversary. We are grateful to Christ&Company Consulting GmbH for their sponsorship of this event.
