The Brussels Effect: what you need to know about the EU AI Act


In April 2021, the European Union published its proposal for a regulation on Artificial Intelligence (the EU AI Act).

Why do we need a law for Artificial Intelligence (AI)?

Nobody doubts that AI brings many benefits to companies and society. It has great potential to help humanity in many ways, but it also has the potential to cause serious harm. With great power comes great responsibility.

AI makes decisions for us, and about us, but not always with us.

AI can decide the advertising we see, the opinions we read, the news and information accessible to us, whether we are eligible for a loan, or what the best medical treatment for us is.

For all these reasons, the European Union (EU) intends to create a law that regulates AI systems, with the aim of minimizing their risks while multiplying their benefits for society. In addition, by establishing a stable legal framework, it will encourage investment in the sector within the EU.

What is considered AI, and who is affected by this law?

As always happens when we talk about AI, the definition is broad: “Artificial intelligence is a set of technologies that, by improving prediction, optimizing operations and resource allocation, and customizing service provision, can generate a wide range of economic and social benefits in all sectors and social activities”.

Likewise, its scope of application will be broad. It covers all European companies, organizations and citizens, and will also be mandatory for AI providers established outside the EU whose products and services are used within the EU. It even reaches providers established in a third country whose AI systems generate output that is used within the Union. For example, if a provider has developed and trained an image-recognition algorithm for the early detection of tumors outside the EU, but it is used for European patients, the provider will still have to comply with the law.

What does the bill say?

The proposal takes a risk-based approach: AI systems are classified into four categories according to their potential risk.

Unacceptable risk. These uses of AI are completely prohibited.

This category includes all systems that restrict people's free will, exercise manipulation, or implement some form of social scoring, as well as biometric identification systems in public spaces.

Source: Digital Future Society / Mobile World Capital Barcelona (https://digitalfuturesociety.com/, https://mobileworldcapital.com/)

There are some exceptions, such as military applications, which fall outside the scope of this proposal.

High risk. These are systems that could potentially compromise human rights.

This category includes the safety components of regulated products, such as medical equipment, machinery, vehicles and aviation. It also covers stand-alone AI systems, such as critical-infrastructure management systems and support systems for granting credit, public aid, or legal penalties, as well as the use of AI for biometric identification (with some exceptions, such as counter-terrorism and the investigation of human trafficking or child pornography, among others).

In these cases, AI systems must comply with rules on traceability, transparency and robustness. In addition, to ensure transparency, these systems must be registered in a European Commission database.

Low risk. For example, the use of bots. Here, users must be informed that they are communicating (verbally or in writing) with a bot, and must be free to decide whether or not to continue the conversation. It is similar to the GDPR requirement that websites using cookies inform users, who can then decide whether to accept them.

This category also includes deep fakes (manipulated images or videos), which are not prohibited but must be labeled as such so that users are aware of them.

And, finally, if emotion-recognition or biometric systems are being used, this must also be made public, much like the security cameras in our streets, which are required to display a warning sign.

In summary, this category essentially requires transparency and public disclosure.

Minimal risk. These are uses of AI that cannot in any way cause harm to the user, for example, anti-spam filters.

In this category, no requirements apply, although it is recommended that some kind of code of conduct or ethics be voluntarily established (AI Ethics).
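The four-tier scheme above can be sketched as a simple lookup. This is purely an illustrative data structure, not part of the proposal; the tier names and obligation lists are paraphrased from the summary above, and the example uses are the ones mentioned in this article:

```python
# Illustrative sketch of the proposal's four risk tiers and the
# obligations each one carries, as summarized in this article.
# The real legal text is far more nuanced than this toy mapping.
from enum import Enum


class Risk(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # strict requirements + EU registration
    LOW = "low"                    # transparency / disclosure duties
    MINIMAL = "minimal"            # no requirements; voluntary codes


OBLIGATIONS = {
    Risk.UNACCEPTABLE: ["prohibited"],
    Risk.HIGH: ["traceability", "transparency", "robustness",
                "registration in EU database"],
    Risk.LOW: ["inform users (e.g. that they are talking to a bot)",
               "label deep fakes"],
    Risk.MINIMAL: ["none (voluntary code of conduct recommended)"],
}

# Example uses drawn from the article, mapped to their tier.
EXAMPLES = {
    "social scoring": Risk.UNACCEPTABLE,
    "credit scoring": Risk.HIGH,
    "chatbot": Risk.LOW,
    "spam filter": Risk.MINIMAL,
}


def obligations_for(use: str) -> list[str]:
    """Return the obligations attached to one of the example uses."""
    return OBLIGATIONS[EXAMPLES[use]]


print(obligations_for("chatbot"))
```

As the article notes further down, the hard part in practice is not reading off the obligations but deciding which tier a given use actually belongs to.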


It is not always self-evident how to decide the risk level of an AI system, since the danger lies not in the system itself but in its use. A knife is useful for eating and cooking, for example, but it can also be used for killing.

An anti-spam filter is very useful for blocking unsolicited advertising, but the same technology could be used so that your social network feed only shows you posts with a certain bias, which undermines your freedom (something similar to the Cambridge Analytica case). In such cases, these systems should arguably be considered high risk (threats to human rights). This will surely be one of the points under discussion in the coming months.

Another important topic for discussion will be the accountability of AI systems: who is responsible in case of damage?

Let us imagine a vehicle operating in autonomous mode, whether fully autonomous (level 5) or semi-autonomous (levels 3-4). In the event of an accident, who is responsible? The car manufacturer? The developer of the AI driving system? The manufacturer of the sensor, for example the lidar, that provided the key data at that moment? The telecommunications provider? The owner of the car?

We face two or three years of intense negotiations before the proposed regulation becomes law, which is expected in 2025.

What is the Brussels effect?

The law will have a European legal scope, but its effects are expected to reach far beyond the borders of the European Union. This is what is known as the Brussels Effect, as has already happened with the GDPR and data privacy.

When a large market is the first to legislate on a new subject, two things happen: 1) it serves as an example for the legislation of other countries, and 2) it influences the design and production chains of global companies. For reasons of scalability and productivity, any company prefers globalized products suitable for as many markets as possible. So when an important market imposes legal requirements, the provider tends to build those same requirements into its products and services regardless of the market they are destined for.

LinkedIn is a global platform without geographical barriers. Some of its algorithms would fall into the high-risk category, because they decide the visibility of each user and their posts within the network, as well as the job offers users receive or have access to.

LinkedIn (and its parent company, Microsoft) has no interest in developing one set of algorithms for the European Union and a different one for the rest of the countries where it has users, or worse, developing and maintaining specific algorithms for each legal framework in which it operates.

For this reason, LinkedIn, which must comply with European law, will prefer to develop a single platform that complies with it, and will want other countries to adopt legislation compatible with Europe's, so that a single platform serves everyone (one size fits all).

To achieve this, Microsoft (LinkedIn) can be expected to take advantage of the law's current review phase to influence the final version in the way that best suits its interests. And once it is approved, they will try to influence other countries to legislate in a way that is “compatible” with European law.

This is the Brussels Effect that the European Union is pursuing by being the first to legislate on AI.

And in this way, the European law on AI aspires to become a global standard.