
Olívia Erdélyi on AI governance: consistency, scientific rigour, expertise, a multi-layered regulatory structure

The third keynote speech at the Humans in Charge conference in Budapest was delivered by Prof. Dr. Olívia J. Erdélyi. Olívia is a Senior Lecturer at the University of Canterbury in New Zealand, a Visiting Professor at the University of Bonn in Germany, and a Senior Partner for AI Ethics and Governance at the PHI Institute headed by George Tilesch. In her presentation “AI Regulation Around the Globe – State of Play”, she highlighted the main regulatory problems and possible solutions.

Not everyone needs to reinvent the wheel

According to Olívia Erdélyi, anyone who wants to have a clear picture of the field of AI regulation today will not have it easy.

“If I could pick one word to describe the current situation, which is a nightmare for everyone who has to comply with regulations, it would be inconsistency,” the University of Canterbury lecturer remarked referring to the many competent global, continental and national organisations and the host of implementation plans and practices that come about as a result of regulatory efforts.

“If you attend board meetings or sit in on a debate anywhere in the world, you will find that humanity has something we all have plenty of, and that is ego. Well, the ego wants us to appear to be the best at everything. So we are all prone to wanting to see our own ideas prevail, which is exactly why there are so many organisations working in this domain,” she explained.

Olívia Erdélyi thinks the risk-based approach the European Union applies in designing the upcoming regulation on artificial intelligence is a good example to follow. It is a four-tiered pyramid: the top sets out uses of AI that pose unacceptably high risks (e.g. the use of AI for profiling and the use of such profiles when assessing loan applications or at job interviews), while the bottom covers uses that are, by definition, risk-free (e.g. AI-enabled video games, spam filtering, etc.).
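The four-tier pyramid described above can be sketched as a simple classification scheme. The following is a minimal, purely illustrative Python sketch (not legal advice): the tier names and the placement of the examples follow the talk's simplified description rather than the final legal text, and the obligation labels are assumed shorthand.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative tiers of the EU AI Act's risk pyramid, top to bottom."""
    UNACCEPTABLE = 4  # banned uses
    HIGH = 3          # heavily regulated uses
    LIMITED = 2       # uses with transparency duties
    MINIMAL = 1       # essentially risk-free uses

# Example placements taken from the talk's simplified description
# (hypothetical mapping for illustration only).
EXAMPLE_CLASSIFICATION = {
    "profiling used in loan applications": RiskTier.UNACCEPTABLE,
    "profiling used at job interviews": RiskTier.UNACCEPTABLE,
    "AI-enabled video game": RiskTier.MINIMAL,
    "spam filtering": RiskTier.MINIMAL,
}

def obligations(use_case: str) -> str:
    """Map an example use case to a (simplified) regulatory consequence."""
    tier = EXAMPLE_CLASSIFICATION[use_case]
    return {
        RiskTier.UNACCEPTABLE: "prohibited",
        RiskTier.HIGH: "conformity assessment required",
        RiskTier.LIMITED: "transparency obligations",
        RiskTier.MINIMAL: "no additional obligations",
    }[tier]
```

The point of the structure is exactly what the speaker stresses elsewhere: a shared, consistent taxonomy that downstream implementers can apply mechanically.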

The expert pointed out that not everyone needed to reinvent the wheel.

“So if (a country) is thinking about implementing an AI law and developing some national rules, mirror image rules, then this is an approach (i.e. the EU’s risk-based approach) that it can already rely on. You don’t have to make up your own. You don’t need to produce a new definition of an AI system. Believe me, there are countries that do. So, if there is already a definition of AI systems that has been adopted by the EU in its AI Act, the OECD and the G20, it is unlikely for another definition to conquer the world,” argued Olívia Erdélyi.

The speaker went on to discuss the five value-based AI-related principles of the OECD (Organisation for Economic Co-operation and Development), a major global player, and how they fit in with the principles laid down in EU, US, Canadian and Chinese rules.

She noted that some OECD principles did not have a direct equivalent in the regulations of these countries or the Community, while other principles showed some similarity but used slightly different terminology. However, even these minor differences can matter a great deal in practice.

Consistency and scientific rigour

The Hungarian-born international expert identified “the use of consistent and scientifically accurate terminologies and taxonomies” as the key message of her presentation.

To illustrate her point, she put two definitions on the screen and asked for volunteers from the audience to interpret them; the responses showed that the definitions were vague rather than exact. She then continued:

“When sitting in a room with policymakers, they sometimes say: well, OK, we need to come up with something that is understandable to the public and other policymakers, so we shouldn’t worry so much about technological terms. But that’s not true, because if you think about it, who will enforce these rules? There are high-level guidelines, they trickle down to the companies and end up with the product development teams, the programmers and other technical people, and they have to implement these rules. If you are an IT professional and you see such a definition, you turn around and run out of the room. There is nothing you can do with such a definition. And this is one of the points I would like to illustrate, because most decision-makers are not aware of it. They think like lawyers: well, I’m a regulator, I need a relatively general approach. Which is true, again. Regulations do not need to be very specific; some should remain general, but they must use the right words. Otherwise those to whom the rules apply will be confused and the rules will not be effective.”

She gave another example of how the artificial intelligence committee of the International Organization for Standardization (ISO) had for two years been unable to agree on how to define the concepts of transparency, explainability and interpretability.

“The funny thing is that we have a standard under development with the header ‘transparency’, and another one that we have explicitly called ‘explainability’. These standards are at an advanced stage or halfway there. And we haven’t actually figured out what the definition of these two concepts is, or, as a matter of fact, of three concepts. So these are issues that seem like minor problems from a regulatory point of view, but as we move towards standardisation and even more elaborate implementation stages, they become the elephant in the room.”

Expertise is the key to AI governance

Olívia Erdélyi believes that there are two options for the governance approach:

“Either we choose a special AI regulatory authority to deal with AI issues, or we choose a fairly decentralised system with specialised agencies (ministries – ed.) taking over AI-related tasks,” she explained.

“But in any case, we will need an agency that coordinates between the other agencies because that’s just how regulation goes, and if there’s no coordination, things tend to go wrong. (...) The one thing I would like to stress here is expertise, and not just regulatory expertise, but technical, scientific, machine learning and artificial intelligence expertise. You have to be able to communicate with the technical staff implementing the rules, and again, use the right terms for the right things.”

The devil is in implementation

The visiting professor at the University of Bonn considers it of utmost importance to think in terms of implementation from the moment the regulatory system is defined. Here, she pointed to four levels of abstraction, starting with high-level principles, continuing through more specific but necessarily still general rules (such as, in her opinion, the EU’s AI Act) and relatively detailed international standards, down to specific, precise implementing measures tailored to a particular context (e.g. intra-organisational rules).

“Regulation as such is abstract, while implementation is very concrete,” read Olívia Erdélyi’s slide. She said that balance could be achieved, for example, by establishing multi-level, multi-layered, highly flexible regulatory frameworks, such as the four-level Lamfalussy process in the EU financial markets.

“Finally, I would just like to say that please, follow the rules of common sense, of reason – and not your ego – because there are many good solutions. Adopt what you can, add your own that you have to add, and those who have to comply with your rules will be thankful,” Olívia Erdélyi concluded.

This speaker

Olivia J. Erdélyi

Senior Lecturer, Christchurch (NZ) & Bonn Universities

Dr. Olivia J. Erdelyi is an internationally recognized AI ethics and policy expert and consultant with a multidisciplinary background in computer science, economics, law, and political science.

Her work centers around developing sustainable policies and robust regulatory and governance frameworks to enable beneficial development and societal adoption of emerging technologies, in particular AI.

She is active in international AI policymaking and advises governments and organizations on their journey towards AI readiness.

More speakers

Speeches

Maria Luciana Axente, a renowned AI ethics expert and advocate for children's rights, spoke at the "Humans in Charge" conference, focusing on child protection in the digital age. She explored the opportunities AI offers in education and health but warned of its darker side, including reducing human interaction crucial for childhood development. Axente highlighted notable efforts, such as UNICEF's "AI for Children" initiative, designed to answer emerging ethical questions around AI and children's safety.
Sanchez, Data & AI Lead for Public Sector & Health at Microsoft Spain, highlighted the need for collaboration between government and the private sector in managing AI, and emphasised its transformative potential. He underscored the importance of a cautious yet optimistic approach, referencing Microsoft's own AI regulation framework, and pointed to Spain's progressive national AI strategy and the potential for other EU countries, such as Hungary, to adopt the AI Sandbox model.
At the "Humans in Charge - Steering the AI Age Responsibly" conference, George Tilesch, international expert and founding president of the PHI Institute for Augmented Intelligence, emphasized the convergence of technology, regulation and social inclusion in anticipation of the AI Act's implementation. He stressed the vital need for proactive planning and hoped the conference’s insightful discussions would help Hungary prepare for its upcoming EU presidency.
Koltay, President of the National Media and Infocommunications Authority, emphasized the need for collaboration among researchers, developers, and decision-makers for ethical AI development and usage. While AI has great potential to enhance life quality and human efficiency, it also poses significant challenges, particularly with deepfake technologies eroding faith in digital reality. He called for further exploration of AI's legal implications, data protection, and vital ethical standards.
Brando Benifei, Italian Member of the European Parliament and co-rapporteur of the EU’s Artificial Intelligence (AI) Act, spoke live via video link to the conference participants. He said that the title of the NMHH conference “Humans in Charge – Steering the AI Age Responsibly” encapsulated perfectly what they wanted to achieve with the new Community legislation: a set of human-centred rules that allow strong human oversight, minimise risks and promote the reaping of the benefits.
Italian researcher and AI4GOV founder Gianluca Misuraca spoke at the "Humans in Charge" conference on AI governance and the vital role of the public sector. He emphasized the importance of managing AI's potential benefits and risks for public services and society. Misuraca noted government's role as AI regulator, user and facilitator, and highlighted the challenge of adopting AI in public services while protecting citizens, especially under uncertain outcomes. He also stressed the need to prepare the workforce for increased AI use.

Panel discussions

The panel discussion on 'Responsible AI in Digital Platforms, Telco & Media' focused on AI's role in these sectors, exploring strategies, challenges, and regulatory compliance. The panel comprised experts from Microsoft Spain, the PHI Institute for Augmented Intelligence, OpenAI and T-Systems International.
The panellists – internationally renowned AI experts – discussed AI's power as a constructive force but also its potential threats and risks. The main focus was on creating awareness regarding AI safety and security, protecting vulnerable populations, particularly the youth, and the role of institutions in defending against AI misuse.
The world is nearing consensus on ethical AI, presaged by the anticipated EU AI Act. Questions of creating norms, operationalizing them, and establishing governance structures are central. Leaders are expected to understand AI policy and ethics, and to communicate their implications effectively.
The fourth panel discussed AI-infused government services as a key area for introducing AI in society. Questions revolved around the EU's political readiness for AI, creating trustworthy AI environments, the role of AI sandboxes, and partnerships between public authorities and AI leaders. The participating panel experts hail from a diverse array of AI-related fields.

Photo gallery