What’s in the EU’s first law on artificial intelligence?

The European Parliament on Wednesday gave final approval to far-reaching rules on artificial intelligence, which the EU hopes will drive innovation and defend against harm.

Known as the ‘AI Act’, the law was first proposed by the European Commission in April 2021.

The AI Act is being touted as the world’s first comprehensive legal framework on artificial intelligence, addressing the technology’s risks and positioning Europe to play a leading role on the global stage.

The purpose of the Artificial Intelligence Act is to set clear requirements and obligations for artificial intelligence developers and deployers regarding specific uses of the technology.

But it wasn’t until the Microsoft-backed ChatGPT went public in late 2022 that real competition in artificial intelligence began, and with it the race to regulate it.

China and the US introduced regulations on artificial intelligence last year, but Europe’s law is the most comprehensive.

The EU will implement this law in a phased manner.

A full ban on the highest-risk forms of artificial intelligence will take effect later this year, rules on systems like ChatGPT will apply 12 months after the law enters into force, and the remaining provisions come into force in 2026.

AI Models

While EU negotiators were debating the text, internal tensions and outside lobbying over how to regulate general-purpose artificial intelligence models, such as those behind chatbots, were at their height.

Creators of such models must provide details about what material (such as text or images) they used to train their systems and comply with EU copyright law.

For example, higher expectations have been placed on OpenAI’s latest GPT-4 and Google’s Gemini, which the EU says pose ‘systemic risks’.

These risks can include causing serious accidents, being misused for far-reaching cyber attacks or promoting harmful prejudices online.

Companies offering these technologies are required to assess and mitigate risks, track and report serious incidents such as fatalities to the Commission, take measures to ensure cybersecurity, and report on the energy consumption of their models.

The Commission has already established the AI Office, which will enforce the rules on general-purpose AI.

A risk-based approach

The EU views some artificial intelligence systems as a potential threat to democracy, public health, rights and the rule of law.

High-risk uses, such as AI in medical devices, in education or in critical infrastructure systems, will face greater obligations to mitigate any risk.

For example, suppliers of high-risk systems must develop them with quality data, ensure human oversight and maintain appropriate documentation.

Suppliers will remain subject to close monitoring even after placing their products on the market.

EU citizens will have the right to complain about artificial intelligence systems, while public bodies will have to register the high-risk AI systems they deploy in a public EU database.

Breaking the rules can cost these companies dearly.

The EU can impose fines on artificial intelligence providers of between 7.5 and 35 million euros ($8.2 million and $38.2 million), or between 1.5 and 7 percent of the company’s global turnover, depending on the severity of the violation.
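As a rough illustration of how such penalty tiers work, the sketch below computes the maximum possible fine for a given tier. It assumes (as is typical of EU regulations, though the article does not spell this out) that for large companies the applicable ceiling is whichever is higher, the fixed cap or the turnover percentage; the tier figures are taken from the ranges quoted above.

```python
# Illustrative sketch only: assumes the fine ceiling is the HIGHER of
# the fixed cap and the turnover percentage, which the article does
# not state explicitly. Tier figures come from the ranges quoted above.

def max_fine_eur(global_turnover_eur: float, cap_eur: float, turnover_pct: float) -> float:
    """Return the maximum possible fine for one violation tier."""
    return max(cap_eur, turnover_pct * global_turnover_eur)

# Most serious tier: up to EUR 35M or 7% of global turnover.
# For a company with EUR 1bn turnover, the 7% ceiling dominates:
print(max_fine_eur(1_000_000_000, 35_000_000, 0.07))  # 70000000.0

# For a smaller company (EUR 100M turnover), the fixed cap dominates:
print(max_fine_eur(100_000_000, 35_000_000, 0.07))  # 35000000.0
```

This is why the rules express fines both as an absolute amount and as a share of turnover: the percentage scales the penalty up for the largest providers, while the fixed cap keeps it meaningful for smaller ones.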

The rules also stipulate that citizens must be informed that they are dealing with artificial intelligence.

For example, deepfake images generated using AI should be clearly labeled, while chatbots should disclose that they are powered by AI.

Restrictions

Some forms of artificial intelligence are banned by the European Union because the risks they pose are considered too great.


These include predictive policing, emotion recognition systems in workplaces or schools, and social scoring systems, which evaluate individuals based on their behavior.

The law also prohibits police from using facial recognition technology, although an exemption can be granted when they are searching for someone convicted or suspected of a serious crime such as rape or terrorism.

Police can ask to use the technology to locate victims of kidnapping or trafficking, but they would need approval from a judge or other judicial authority and would be limited to a certain time and place.

The AI Act

The European Commission says on its website that the act is part of a wider package of policy measures to support the development of reliable artificial intelligence, including the Artificial Intelligence Innovation Package and the Coordinated Plan on Artificial Intelligence. These measures will guarantee the safety and fundamental rights of people and businesses.

Accordingly, the new rules aim to promote trustworthy artificial intelligence in Europe by ensuring that AI systems respect fundamental rights, safety and ethical principles, and by addressing the risks of very powerful and capable AI models.

Why are rules on artificial intelligence needed?

The Artificial Intelligence Act ensures that Europeans can trust what artificial intelligence has to offer. While most AI systems pose little to no risk and can contribute to solving many societal challenges, some AI systems create risks that must be addressed to avoid undesirable outcomes.

For example, it is often not possible to determine why an artificial intelligence system made a particular decision or prediction, or why it took a particular action, so it can be difficult to assess whether someone has been unfairly disadvantaged, such as in a hiring decision or in an application for a public benefit scheme.

The Commission says that while current legislation provides some protection, it is insufficient to address the specific challenges that artificial intelligence systems can bring.

High risk

High-risk uses include: critical infrastructure (e.g. transport), which may endanger the lives and health of citizens; education or vocational training, which may determine access to education and a person’s professional course in life (e.g. exam scoring); safety components of products (e.g. artificial intelligence applications in robot-assisted surgery); employment, workforce management and access to self-employment (e.g. CV-sorting software for recruitment procedures); essential private and public services (e.g. credit scoring denying citizens access to credit); law enforcement that may interfere with people’s fundamental rights (e.g. evidence review); migration, asylum and border-control management (e.g. automated examination of visa applications); and the administration of justice and democratic processes (e.g. artificial intelligence solutions for searching court decisions).

These types of high-risk artificial intelligence systems will be subject to stricter obligations before being brought to market.

Limited risk

Limited risk refers to the risks associated with a lack of transparency in the use of artificial intelligence. The Artificial Intelligence Act introduces specific transparency obligations to ensure that humans are informed when necessary, fostering trust.

For example, when using artificial intelligence systems like chatbots, humans must be made aware that they are interacting with a machine so that they can make an informed decision to continue or withdraw. Providers must also ensure that AI-generated content is identifiable.

In addition, AI-generated text published for the purpose of informing the public on matters of public interest must be labeled ‘artificially generated’. This also applies to audio and video content that constitutes a deepfake.

Implementation and enforcement

The European Artificial Intelligence Office, established within the Commission in February 2024, oversees the implementation of the Artificial Intelligence Act with member states. It aims to create an environment where artificial intelligence technologies respect human dignity, rights and trust.

It also promotes collaboration, innovation and research in artificial intelligence among various stakeholders. Additionally, it has engaged in international dialogue and cooperation on artificial intelligence issues, recognizing the need for global alignment on AI governance.

Through these efforts, the European Artificial Intelligence Office seeks to position Europe as a leader in the ethical and sustainable development of artificial intelligence technologies.



2024-07-24 23:47:43
