Olívia Erdélyi on AI governance: consistency, scientific rigour, expertise, a multi-layered regulatory structure
The third keynote speech at the Humans in Charge conference in Budapest was delivered by Prof. Dr. Olívia J. Erdélyi. Olívia is a Senior Lecturer at the University of Canterbury in New Zealand, a Visiting Professor at the University of Bonn in Germany, and a Senior Partner for AI Ethics and Governance at the PHI Institute headed by George Tilesch. In her presentation “AI Regulation Around the Globe – State of Play”, she highlighted the main regulatory problems and possible solutions.
Not everyone needs to reinvent the wheel
According to Olívia Erdélyi, anyone seeking a clear picture of the field of AI regulation today faces a difficult task.
“If I could pick one word to describe the current situation, which is a nightmare for everyone who has to comply with regulations, it would be inconsistency,” the University of Canterbury lecturer remarked, referring to the many competent global, continental and national organisations and to the host of implementation plans and practices their regulatory efforts produce.
“If you attend board meetings or sit in on debates anywhere in the world, you will find that humanity has something we all have plenty of, and that is ego. Well, the ego wants us to appear to be the best at everything. So we are all prone to wanting our own ideas to prevail, which is exactly why there are so many organisations working in this domain,” she explained.
Olívia Erdélyi thinks the risk-based approach the European Union applies in designing the upcoming regulation on artificial intelligence is a good example to follow. It is a four-tiered pyramid, with the top setting out the uses of AI that pose unacceptably high risks (e.g. the use of AI for profiling and the use of such profiles when assessing loan applications or at job interviews), and the bottom covering minimal-risk uses (e.g. AI-enabled video games or spam filters).
The expert pointed out that not everyone needed to reinvent the wheel.
“So if (a country) is thinking about implementing an AI law and developing some national rules, mirror image rules, then this is an approach (i.e. the EU’s risk-based approach) that it can already rely on. You don’t have to make up your own. You don’t need to produce a new definition of an AI system. Believe me, there are countries that do. So, if there is already a definition of AI systems that has been adopted by the EU in its AI Act, the OECD and the G20, it is unlikely for another definition to conquer the world,” argued Olívia Erdélyi.
The speaker went on to discuss the five value-based AI-related principles of the OECD (Organisation for Economic Co-operation and Development), a major global player, and how they fit in with the principles laid down in EU, US, Canadian and Chinese rules.
She noted that some OECD principles had no direct equivalent in the regulations of these countries or of the EU, while others showed some similarity but used slightly different terminology. However, even these seemingly minor differences matter in practice.
Consistency and scientific rigour
The Hungarian-born international expert identified “the use of consistent and scientifically accurate terminologies and taxonomies” as the key message of her presentation.
To illustrate her point, she displayed two definitions on the screen and asked volunteers from the audience to interpret them; the consensus was that the definitions were vague rather than exact. She then continued:
“When I sit in a room with policymakers, they sometimes say: well, OK, we need to come up with something that is understandable to the public and other policymakers, so that’s perfectly fine, and we shouldn’t worry so much about technological terms. But that’s not true, because if you think about it, who will enforce these rules? Yes, there are high-level guidelines, but they trickle down to the companies and end up with the product development teams, the programmers and other technical people, who have to implement these rules. If you are an IT professional and you see such a definition, you turn around and run out of the room. There is nothing you can do with a definition like that. And this is one of the points I would like to illustrate, because most decision-makers are not aware of it. They think like lawyers: well, I’m a regulator, I need a relatively general approach. Which is true, again: when designing regulations, they do not need to be very specific. Some regulations should remain general, but they must use the right words. Otherwise those to whom the rules apply will be confused and the rules will not be effective.”
She gave another example of how the artificial intelligence committee of the International Organization for Standardization (ISO) had for two years been unable to agree on how to define the concepts of transparency, explainability and interpretability.
“The funny thing is that we have a standard under development with the header ‘transparency’, and another one that we have explicitly called ‘explainability’. These standards are at an advanced stage or halfway there. And we haven’t actually figured out what the definition of these two concepts is, or, as a matter of fact, of three concepts. So these are issues that seem like minor problems from a regulatory point of view, but as we move towards standardisation and even more elaborate implementation stages, they become the elephant in the room.”
Expertise is the key to AI governance
Olívia Erdélyi believes that there are two options for the governance approach:
“Either we choose a special AI regulatory authority to deal with AI issues, or we choose a fairly decentralised system with specialised agencies (ministries – ed.) taking over AI-related tasks,” she explained.
“But in any case, we will need an agency that coordinates between the other agencies because that’s just how regulation goes, and if there’s no coordination, things tend to go wrong. (...) The one thing I would like to stress here is expertise, and not just regulatory expertise, but technical, scientific, machine learning and artificial intelligence expertise. You have to be able to communicate with the technical staff implementing the rules, and again, use the right terms for the right things.”
The devil is in the implementation
The visiting professor at the University of Bonn considers it of utmost importance that we think about implementation from the moment we define the regulatory system. Here, she pointed to four levels of abstraction, starting with high-level principles, continuing through more specific but necessarily still general rules (such as, in her opinion, the EU’s AI Act) and relatively detailed international standards, down to specific, precise implementing measures tailored to a particular context (e.g. intra-organisational rules).
“Regulation as such is abstract, while implementation is very concrete,” read Olívia Erdélyi’s slide, and she said that balance could be achieved, for example, by establishing multi-level, multi-layered, highly flexible regulatory frameworks, such as the four-level Lamfalussy process used in EU financial markets regulation.
“Finally, I would just like to say that please, follow the rules of common sense, of reason – and not your ego – because there are many good solutions. Adopt what you can, add your own that you have to add, and those who have to comply with your rules will be thankful,” Olívia Erdélyi concluded.
This speaker
Olívia J. Erdélyi
Senior Lecturer, Christchurch (NZ) & Bonn Universities
Dr. Olívia J. Erdélyi is an internationally recognized AI ethics and policy expert and consultant with a multidisciplinary background in computer science, economics, law, and political science.
Her work centers around developing sustainable policies and robust regulatory and governance frameworks to enable beneficial development and societal adoption of emerging technologies, in particular AI.
She is active in international AI policymaking and advises governments and organizations on their journey towards AI readiness.