The impact that Artificial Intelligence, and Generative AI in particular, is having on our lives is staggering, which is why it calls for precise regulation.
The debate is ongoing. In Italy, for instance, the use of ChatGPT was blocked by the Italian Data Protection Authority (the Garante) in March 2023 and reinstated a month later.
It is in this context that the AI Act comes into play: the first European regulation on Artificial Intelligence, aimed at governing its use to ensure better conditions for the development and adoption of this technology.
What is the purpose of AI legislation?
The European Parliament aims to ensure that AI systems are safe, transparent, traceable, environmentally friendly, non-discriminatory, and subject to human oversight, thus balancing innovation with protection in the field of Artificial Intelligence.
The EU’s goal is to make the European Union “a global hub for AI” and to ensure that AI is human-centered and trustworthy. This goal translates into the European approach of excellence and trust, pursued through concrete rules and actions. [https://digital-strategy.ec.europa.eu/it/policies/european-approach-artificial-intelligence]
What are the rules of the AI Act?
The rules outlined in the AI Act impose different obligations on providers and users based on the level of risk associated with Artificial Intelligence.
Risks have been classified into various types:
Specific Risk for Transparency
This category includes General Purpose Artificial Intelligence systems, such as chatbots like ChatGPT, whose users should be aware that they are interacting with a machine.
Contents generated by AI must be labeled as such, and providers must design systems so that content is clearly marked as generated or manipulated by AI.
Unacceptable Risk
This category encompasses Artificial Intelligence systems considered a threat to individuals; such systems will be prohibited. Examples include:
- cognitive behavioral manipulation of vulnerable individuals or groups (e.g., toys using voice assistance to encourage dangerous behaviors in minors)
- social scoring
- biometric identification and categorization of individuals
High Risk
This category includes Artificial Intelligence systems that negatively impact safety or fundamental rights. All high-risk systems will be specifically monitored and assessed before entering the market and throughout their lifecycle.
High-risk systems must meet specific requirements such as risk mitigation systems, high-quality data sets, activity logging, human oversight, detailed documentation, etc.
High-risk examples include critical infrastructure (such as the water, gas, and electricity sectors), systems that determine access to educational institutions or recruitment, and systems for biometric identification, categorization, and emotion recognition.
Limited Risk
AI systems in the limited-risk category must meet minimum transparency requirements that allow users to make informed decisions: after interacting with an application, users can decide whether they wish to continue using it.
Minimal Risk
This category includes most AI systems, such as recommendation systems and spam filters. Given the minimal risk they pose, these systems will face no obligations; however, companies may voluntarily commit to codes of conduct for their AI systems.
Where are we now?
Work on AI regulation began in 2021 and continues to this day.
On December 9, 2023, the Parliament reached a provisional agreement with the Council on the AI Act. To become official European Union law, the agreed text must be formally adopted by both the Parliament and the Council; it will then enter into force 20 days after publication in the Official Journal. The regulation will become applicable two years after its entry into force, except for the bans, which will apply after only 6 months.
To address the transitional period before the regulation becomes fully effective, the Commission will launch an AI Pact. The pact will serve to encourage and support businesses in planning ahead for the measures provided by the AI Act.
What will happen in the future?
Companies that fail to comply with the rules will face fines ranging from €7.5 million or 1.5% of annual turnover up to €35 million or 7% of annual turnover. Proportionate caps will be provided for SMEs and startups.
Among the obligations for General Purpose systems will be publishing a list of the materials used for algorithm training and making AI-generated content recognizable.