Stricter rules will soon apply to the use of artificial intelligence (AI) in the European Union (EU). After lengthy negotiations with the Council of the EU, representatives of the European Parliament reached a consensus on the main features of the AI Act.
The proposed legislation is intended to ensure that AI is safe in Europe, that fundamental rights and democracy are respected, and that companies derive the maximum benefit from the technology. European Commissioner for Internal Market Thierry Breton referred to the agreement on X, formerly Twitter, as "historic." Commission President Ursula von der Leyen described it as a "global first."
New Technological Risks
As rapid advances are made in the field of AI, criticism of the technology has grown (as Medscape Medical News has reported). Critics highlight its dangers and are calling for regulation.
The European Commission first published a legislative proposal for the AI Act in April 2021. But the negotiations recently almost broke down over the question of how to regulate so-called foundation models. These are very powerful AI models that are trained on a broad range of data. They can serve as the basis for many other applications, including ChatGPT.
Provisions in the proposed legislation focus on the potential risks posed by AI and the extent of its effects. The greater the potential dangers of an application, the stricter the requirements. Particularly risky AI applications could even be prohibited.
Transparency rules will apply to large AI companies such as OpenAI, Microsoft, and Google. They must disclose what data are used to train the technology and how copyright law is upheld.
What the Act Includes
The most important aspects of the AI Act are the following:
- Protective measures for general AI
- Restriction on the use of biometric identification systems by law enforcement agencies
- Ban on social scoring and on AI systems that are used to manipulate users or exploit their weaknesses
- Users' right to submit complaints and receive meaningful explanations
- Financial penalties ranging from €35 million or 7% of global turnover to €7.5 million or 1.5% of turnover, depending on the infringement and the size of the company
Because language models like ChatGPT do not fit into this risk-based classification, they are governed by their own set of rules. Germany, France, and Italy had resisted the regulation of language models for fear of slowing innovation.
Because large language models serve no single defined purpose, the draft legislation refers to them as general purpose AI (GPAI). For GPAI models that pose a systemic risk, the Parliament's chief negotiator secured similarly strict conditions.
Facial Recognition
On the question of which AI technologies will be illegal, the European Parliament agreed on several points. Systems that scan faces from publicly available data on a large scale will be prohibited. AI software that attempts to recognize emotions in the workplace and in educational institutions will also be prohibited.
Some EU member states insisted on exceptions to some of these bans so that investigative authorities can use certain technologies. For example, real-time facial recognition is to be prohibited as a matter of principle but permitted in exceptional cases, such as to prevent terrorism or to search for suspects in murder or abduction cases.
The agreed text must be formally adopted by the Parliament and the Council before it becomes EU law.
This article was translated from the Medscape German edition.