Director of Avenga Labs
As Avenga Labs noted in the article AI Trends Spring 2021, AI is becoming almost invisible and a part of every IT solution for our private lives and businesses. There’s simply no way back to the pre-AI world.
There are existing and upcoming regulations in the EU concerning IT aspects such as privacy, data protection, and Artificial Intelligence (AI).
People keep talking about autonomous cars killing pedestrians and wondering who or what would be responsible for such a tragic accident. The driver? The car manufacturer? The algorithm? The neural network? These discussions usually reveal a lack of interest in what is actually happening in the regulatory space and in what rules and regulations are being planned, which is why Avenga Labs has written this article.
The goal of this article is to briefly introduce the upcoming regulations within the AI space in the European Union (EU), which is one of Avenga’s primary markets as an IT consulting company with European headquarters in Germany and Poland, and partners all over the EU.
I see all the major news outlets covering the regulatory aspects of AI in the EU while omitting the European AI strategy entirely, which is a pity, because the regulations make more sense in the context of that strategy. The EU strategy was presented in 2018, and its goal is to ensure that Europe plays an important role in the development of AI and thus benefits from AI advancements in a safe and ethical way.
The EU strategy focuses on three main aspects: first, AI governance unified across the entire EU; second, access to data (a critical fuel for AI); and third, the requirement for AI to work towards the good of EU citizens.
Regulations apply to all three areas.
For instance, in the area of data requirements, all public institutions will be required to make their data publicly accessible (meaning accessible to everyone) while still complying with privacy laws.
AI is recognized as a great opportunity for the EU economies, and large chunks of the EU budget are to be spent on AI investments and on narrowing the AI-maturity gaps between member states. So it is definitely not the EU against AI; it is the EU supporting AI as a driving force of the economy. Precisely because of this importance, regulations must be put in place to prevent the misuse of AI against the civil rights of EU citizens.
From the social perspective, it’s all about avoiding dystopian visions of the future, where AI-driven corporations are in control of our lives and businesses.
“Today we aim to make Europe world-class in the development of secure, trustworthy, and human-centered artificial intelligence. And, of course, the use of it,” said European Commissioner Margrethe Vestager.
It’s the first such comprehensive AI regulation proposal in the world.
Violating the rules can result in fines of up to 6% of global turnover or 30 million euros (whichever is higher). I start with this to show the financial seriousness of the regulations: the fines are even higher than those under the GDPR (General Data Protection Regulation).
A proposal for a Regulation on the European approach to Artificial Intelligence is available here.
The EU has sped up passing regulations to avoid the fragmentation of laws in different countries so as to maintain uniformity among its member states. In 2017, the EU recognized the need to accelerate legislation activities as AI was becoming an important aspect of businesses and the private lives of its citizens.
“Remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement” seems at first to be a privacy group’s dream come true. However, there are exceptions, such as finding a “perpetrator or suspect”, which may be exploited and overused by law enforcement. Still, I’d consider it a significant step towards a ban on uncontrolled facial recognition in the EU.
It does not, however, amount to a ban on facial recognition and classification for commercial and public entities outside law enforcement. Why not? The devil lies in the details, as usual.
The key word here seems to be “real-time”, so what about recordings? It seems that face and people analysis based on recordings will still be allowed, although only when permitted by the regulatory authorities.
AI-driven bots and chats have to identify themselves as AI before starting a conversation. No more surprises for end users who start conversations believing they are talking to actual people, only to be bewildered by artificial and useless responses.
No fake calls and no fake bots, at least none permitted by law: this is soon going to be our reality in Europe.
Let companies focus on creating better AI bots instead of hiding them behind friendly human names.
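As a minimal sketch of what the self-identification requirement could look like in practice, consider a chat session whose very first message is always an AI disclosure. The class and message text here are hypothetical illustrations, not anything prescribed by the regulation:

```python
class ChatSession:
    """Toy chat session that discloses it is an AI before anything else."""

    DISCLOSURE = "Hi! I am an automated assistant (AI), not a human agent."

    def __init__(self) -> None:
        self.transcript: list[str] = []

    def reply(self, user_message: str) -> str:
        # The very first bot message is always the disclosure,
        # regardless of what the user wrote.
        if not self.transcript:
            self.transcript.append(self.DISCLOSURE)
            return self.DISCLOSURE
        answer = f"You said: {user_message}"  # placeholder for a real model
        self.transcript.append(answer)
        return answer


session = ChatSession()
print(session.reply("Hello"))  # disclosure comes first
print(session.reply("What are your opening hours?"))
```

The point of the sketch is that the disclosure is enforced structurally, not left to the conversational model itself.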
→ Bots eating your web application alive, even when you sleep.
These kinds of AI systems are strictly prohibited:
The third one seems to apply to most of the behavior analysis systems:
There are already discussions about what it means for all those loan scoring systems based on previous banking and social activities. Are they high-risk in the context of the new regulations?
AI systems in high-risk areas, such as critical infrastructure, the military, and law enforcement, are to be governed by specialized and much stricter rules than in other industries, and they may not be used before regulatory institutions grant authorization.
In case of any doubt, the definition of high-risk AI systems can be extended to include virtually any system, as long as “they pose a high risk of harm to the health and safety or the fundamental rights of persons”. I like this kind of extension, which closes potential loopholes that could otherwise be exploited.
And in the case of biometric systems, it covers all of them: both ‘real-time’ and ‘post’ remote biometric identification systems are to be classified as high-risk.
AI systems that are used to support administrations as they make decisions about the benefits and support for EU citizens, and those supporting the decision process, are also considered high-risk as they may impair the rights of the people because of errors and inaccuracies in their implementations or data.
And, all of these high-risk systems are required to have “human oversight”, so no HAL 9000 scenario is allowed under these regulations. Risk management frameworks are obligatory for all such systems.
High-risk AI systems are obliged to keep traces and logs of all their activities in order to detect and correct any improper activity (record-keeping requirements).
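What such record-keeping might look like at the code level is sketched below: every automated decision is written out as a timestamped, structured audit record. The field names (model_version, input_features, decision) are my own illustrative assumptions, not terms taken from the regulation:

```python
import json
import logging
from datetime import datetime, timezone

# Append-only audit log; in a real system this would go to durable,
# tamper-evident storage rather than a local file.
logging.basicConfig(filename="ai_audit.log", level=logging.INFO,
                    format="%(message)s")


def log_decision(model_version: str, input_features: dict, decision: str) -> dict:
    """Record one traceable entry per automated decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_features": input_features,
        "decision": decision,
    }
    logging.info(json.dumps(record))
    return record


record = log_decision("credit-scorer-1.3",
                      {"income": 52000, "tenure_years": 4},
                      "approved")
print(record["decision"])  # prints "approved"
```

Keeping the inputs alongside the output is what makes it possible to later reconstruct why a given decision was made.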
And who is responsible for accidents and their consequences? It’s clear and unambiguous: the “specific natural or legal person, defined as the provider, takes the responsibility for the placing on the market or putting into service of a high-risk AI system”. So there’s no doubt, not anymore. It’s not the fault of the algorithms or any other IT abstraction, or of their physical representations such as robots (no matter how human they may look); it’s the responsibility of the actual people or companies who put the devices and software on the market.
A quality management system must be put in place as an obligatory requirement. Certifications are foreseen as one of the indicators of product conformity with the regulations.
All high-risk AI systems will be listed in the EU database for stand-alone high-risk systems.
Information about incidents and malfunctioning is to be shared with the authorities, who are granted access to reports and raw data from devices and IT systems.
Both physical and psychological harm to the people caused by artificial intelligence is prohibited and to be avoided at all costs. Let me quote:
“Those rights include the right to human dignity, respect for private and family life, protection of personal data, freedom of expression and information, freedom of assembly and of association, and non-discrimination, consumer protection, workers’ rights, rights of persons with disabilities, right to an effective remedy and to a fair trial, right of defence and the presumption of innocence, right to good administration. In addition to those rights, it is important to highlight that children have specific rights as enshrined in Article 24 of the EU Charter and in the United Nations Convention on the Rights of the Child (further elaborated in the UNCRC General Comment No. 25 as regards the digital environment), both of which require consideration of the children’s vulnerabilities and provision of such protection and care as necessary for their well-being”.
Regulations always limit what companies and research groups are allowed to do. So what about innovation? Fortunately, the regulatory draft recognizes this problem and provides a solution: “Member States should be encouraged to establish artificial intelligence regulatory sandboxes to facilitate the development and testing of innovative AI systems under strict regulatory oversight before these systems are placed on the market or otherwise put into service.”
Privacy focused groups demand stronger and more unambiguous regulations to ban facial recognition technologies regardless of the industry and country.
On the other hand, many startups and (you guessed it) US-based internet giants are protesting and trying to influence the legislation to make it less inconvenient for them.
On the other side of the pond, there’s growing hope that the US will also embrace more regulation of privacy and AI, and, among AI-driven companies, a growing fear that it will.
It’s a very important first step and there will be more in the future.
Regulations limit what is allowed by law. From the business perspective, however, it’s better to know them in advance, as that gives more time to prepare, which may even include starting new initiatives in different areas and winding down those that would violate the new rules.
Businesses have enough volatility and want to know the boundaries in which they can operate and build their market success.
Contrary to popular negative sentiment, regulations can also be perceived as one of the strongest driving forces for IT innovation. They cannot be worked around or avoided, and they must be implemented within a given time frame. As always, there’s a lot of anxiety about their precision and the usual lack of concrete technical guidelines, which have to be figured out while planning the changes to IT systems.
Let’s just recall the introduction of the GDPR and its local implementations in the member countries. It required tons of work in IT systems, with the Right to be Forgotten arguably the most difficult requirement to implement across all of them.
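To illustrate why the Right to be Forgotten was so much work: personal data typically lives in many stores at once, and every one of them needs an erasure hook that a single request fans out to. The store names and registry design below are purely hypothetical:

```python
from typing import Callable, Dict, List

# Registry of erasure hooks, one per data store holding personal data.
erasure_hooks: Dict[str, Callable[[str], None]] = {}


def register_store(name: str, erase: Callable[[str], None]) -> None:
    """Each store that holds personal data must register an erasure hook."""
    erasure_hooks[name] = erase


def forget_user(user_id: str) -> List[str]:
    """Fan one erasure request out to every registered store."""
    purged = []
    for name, erase in erasure_hooks.items():
        erase(user_id)
        purged.append(name)
    return purged


# Example: two in-memory "stores" holding data about the same user.
crm = {"u42": {"name": "Jane"}}
analytics = {"u42": {"clicks": 17}}
register_store("crm", lambda uid: crm.pop(uid, None))
register_store("analytics", lambda uid: analytics.pop(uid, None))

print(sorted(forget_user("u42")))  # prints ['analytics', 'crm']
print(crm, analytics)              # prints {} {}
```

The hard part in practice is not this dispatch loop but finding every store that must register a hook, including backups, caches, and analytics pipelines.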
A similar situation happened in the banking sector with the famous regulation number 9.
It is about to happen in AI/ML systems everywhere in Europe.
We all want to trust AI, that it will be ethical, fair and explainable. And so does the European Commission.
→ Explainable AI is what business needs in its path towards Responsible AI
I’m really glad to see the European Commission recognizing both our human rights and our needs as customers in the AI-driven present and future.
For all the data scientists targeting EU countries with their solutions, this means taking the new goals and regulations into account in order to deliver the best technologically possible solutions without violating them.
→ Have a look at how Avenga can help