
Have you already used the famous ChatGPT for work, or to search for information for personal use? If so, you've probably noticed that its speed and efficiency are as fascinating as they are unsettling. If this technology can already deliver such results while still in its "infancy", imagine what it will be like in a few years, when a single tool may be capable of doing all the work and the market becomes a red ocean full of sharks competing for an AI oligopoly.
To head off this almost apocalyptic scenario, and in view of the collateral problems already emerging around privacy, data, and the disappearance of many current jobs, the EU has drafted a law to regulate the development of AI.
Before diving into the foundations and motivations behind this proposed law, it’s important to first clarify the difference between conventional Artificial Intelligence and Generative Artificial Intelligence, since all the controversy has arisen with the latter.
While Artificial Intelligence focuses on analyzing and interpreting data, Generative Artificial Intelligence can create original and unique content from scratch. The fact that this technology has entered the world with a creative component represents an unexpected leap forward—but also a threat. Until now, everyone felt safer thanks to the barrier that human creativity imposed between technology and people. Well, that barrier no longer exists.
To bring some clarity to the uncertainty spreading among companies and users worldwide, the European Union aims to protect its citizens from data theft and privacy violations online with the new Artificial Intelligence Act. Want to know what it’s about? Let’s break it down point by point below!
The priority of this law is to ensure that AI systems used within the EU are safe, traceable, transparent, and inclusive. To achieve this, AI systems are analyzed and classified according to the level of risk they pose to users. Depending on the degree of danger, different levels of regulation will apply—ranging from unacceptable risk to high risk, generative AI, and limited risk. Once approved, these will become the world’s first AI-specific regulations. Let’s take a closer look at what each level means.
Unacceptable Risk: AI systems considered to pose an unacceptable risk will be banned, as they represent a direct threat to people. These include systems involving cognitive manipulation of behavior—especially targeting vulnerable groups—and the classification of people based on behavior, social status, or personal characteristics. Real-time biometric identification systems are also included.
High Risk: These are AI systems that may infringe on fundamental rights. They fall into two groups: AI systems used in products subject to EU product safety legislation, and AI systems used in eight specific areas (such as education, employment, law enforcement, and the administration of justice), which must be registered in a public database.
Generative AI: Generative AI systems may continue to operate as long as they comply with transparency requirements, such as disclosing that content was generated by AI, designing models to prevent them from producing illegal content, and publishing summaries of the copyrighted data used for training.
Limited Risk: These systems must meet minimal transparency requirements, ensuring that users are always informed. For example, users must be notified when content is a deepfake or has otherwise been manipulated.
The legal treatment of AI varies from country to country; however, certain practices are prohibited across virtually all jurisdictions, including the following:
Unfair Discrimination: The law may prohibit the use of AI algorithms that perpetuate discrimination based on characteristics such as race, gender, religion, or sexual orientation. This includes automated decision-making in areas such as hiring, credit, and criminal justice.
Mass Surveillance Without Consent: The law may impose restrictions on the use of AI technologies for the mass surveillance of individuals without their consent. This includes recording or tracking people without a legal basis or adequate safeguards to protect privacy.
Information Manipulation: The use of AI to deliberately spread false or misleading information to influence public opinion or individual behavior may be prohibited. This includes manipulating recommendation algorithms or creating user profiles that generate filter bubbles or informational bias.
Risks to Safety and Human Life: The law will establish specific regulations to ensure safety and minimize the risks associated with the use of AI systems. This may include banning autonomous systems that pose unacceptable risks to human life, or requiring safeguards in sectors such as autonomous transport and AI-assisted healthcare.
If your company is already one of the pioneers using AI in its operations—or if you believe it could greatly enhance your team’s workflow—don’t hesitate to subscribe to Educa.Pro, where you’ll find all kinds of training and information on this topic. We’ll be waiting for you!