The New EU AI Act

Our research member and Center Representative for the Center for Digital Welfare, Cancan Wang, published an opinion piece in Børsen on the recent AI Act put forward by the EU.

Below, we’d like to share the piece with you in full length and English translation. Enjoy the read!


EU’s AI Act doesn’t hinder innovation but accelerates safe and responsible growth

The EU’s recently adopted AI Act has been met with scepticism by the business sector in Denmark, which fears it will hamper development. In its critical response to the act, however, it fails to see the growth potential of a standardised global AI market, which will pave the way for responsible innovation and investment opportunities.

By Cancan Wang, lecturer and center representative, Center for Digital Welfare, IT University of Copenhagen.

The AI Act, which is intended to regulate artificial intelligence on the European market, has been met with some resistance from the business sector. However, it is a pronounced misconception that EU regulation will necessarily hinder the innovation of artificial intelligence in Europe.

Although regulations on artificial intelligence impose some limitations, they also function as a roadmap for the safe and responsible development of the technology. AI systems that do not violate EU citizens’ fundamental rights, such as personal assistants, task automation, and AI in advertising, will largely not be regulated. Meanwhile, systems such as CV scanners used to screen candidates for a position before they reach an actual desk will be subject to special legislation to prevent bias and discrimination. Systems that pose a particularly high risk, such as AI used for facial recognition or for assessing citizens’ risk of criminal activity, will be completely prohibited.

The AI Act may create investment opportunities and ensure that global development is based on European values, while also serving as a roadmap for the safe and responsible use of artificial intelligence.

Europe’s influence on the global market
The AI Act was adopted by the European Parliament in March 2024 and is the first of its kind in the world. However, this is not the first time that the EU has legislated in the digital realm. The regulation falls under a wide range of other initiatives aimed at cybersecurity, social media, the internet, and data protection law, such as the GDPR, with which most people are familiar.

It is therefore not new for the Union to impose strict legislative requirements on digitization, and there is a very specific reason for it. When the EU imposes requirements on, for example, artificial intelligence, they apply not only to companies in Europe but to all companies worldwide that place AI systems on the European market. In other words, if you want to trade with and in Europe, you must meet a wide range of stringent requirements, which most global companies are willing to do, as the Union constitutes the largest single market in the world.

Thus, by being the first and most stringent to regulate artificial intelligence, the EU can influence global development so that it is based on European values and ensures the fundamental rights of EU citizens.

An example of this came when the GDPR was adopted in 2016, imposing restrictions on how companies may collect and use data about EU citizens. Although the legislation only applies to the European market, many companies have chosen to adopt the GDPR globally, not only because of the distinctly high value of market access, but also because the costs of adapting requirements to different regions are higher than implementing the more stringent regulations as a global standard. In parallel, many states, such as Australia, Brazil, Canada, South Korea, India, and Japan, have developed similar data regulations.

This means that there is little incentive for companies to relocate their business outside the Union, as the EU legislation will likely be adopted over the next few years by regions whose markets are currently less regulated. It may be true, as many businesses, organizations and leaders have expressed concern, that the legislation will impact startup companies more than the large global tech giants, who have more resources for compliance. While there is no conclusive assessment of the actual compliance costs of the AI Act for SMEs, a forward-looking perspective suggests the regulation also opens up new investment opportunities for companies.

Indeed, it is not only the EU that imposes requirements on private companies; many other actors do as well, such as customers, partners, and investors. Many investment funds, in particular, have strict guidelines for which companies they can and will invest in. For example, many states, including Denmark, invest heavily in tech companies. However, without global guidelines, it is difficult for companies seeking investments to understand which requirements they need to meet. In this way, the AI Act can open up new investment opportunities because companies that comply with the regulation are likely to also meet the requirements of investment funds.

Guidance is needed to promote growth
While artificial intelligence holds great potential for addressing future challenges, it is also crucial that we take control of a technology that will affect large parts of society. Several public and private actors in Denmark are still hesitant to implement the technology. Only about 44 percent of Danish companies use AI systems, and they explain their hesitation with, among other things, a lack of employee skills and knowledge about the technology.

As businesses rush to board the AI train, a limited understanding of how the technology works is creating an “AI trust gap” among both consumers and businesses, who are wary of risks such as disinformation, bias, safety and security. Meanwhile, examples from public authorities paint a similar picture of implementation happening too quickly. For example, the municipality of Aarhus tried to use AI to evaluate parental competences, but had to close the project because it influenced the case workers’ decisions too much.
Companies and organizations can benefit from the AI Act and use it as a roadmap to help them navigate around pitfalls and obstacles. This does not mean that we are done building. However, it ensures that we have a common direction and value base that can steer growth and the use of artificial intelligence in a responsible manner.

The EU is not as far along in digital development as regions such as the US and Asia, but equating this with EU regulation of digitization is misleading. Economists and legal scholars have suggested that the fragmentation of Europe’s languages and cultures, and, surprisingly, the lack of common law, may be the culprits behind the limited development of Europe’s tech industry.

We may ask ourselves whether Europe’s approach to tech should be measured by market size at all, when other factors such as ethics, sustainability and welfare should also be weighed in. And just as importantly, we must be aware that one does not necessarily exclude the other, as I have tried to argue in this opinion piece.

While we wait for the regulations to be finalized, the European Parliament elections are just around the corner, so we can conveniently ask ourselves that very question in the meantime, especially in Denmark. Can we both protect the rights of EU citizens and take the lead in the development and implementation of artificial intelligence at the same time?