The new law clearly defines the risk levels of AI technologies and identifies who is liable when violations are found. A KPMG expert explained what organizations should do immediately so that the newly adopted law does not catch them unprepared, and which risk management practices will ensure long-term compliance with its requirements.
Growing mistrust led to action
When the chatbot ChatGPT stunned the world at the end of 2022 with its ability to write texts in seconds, process large volumes of data and perform other tasks that are time-consuming or difficult for humans, the public reaction was not only astonishment but also waves of anxiety over whether the technology is being used properly and whether it violates human rights.
A comprehensive 2023 public opinion survey by KPMG and the University of Queensland revealed that, of the more than 17,000 people surveyed across 17 countries, three out of five do not trust AI systems, and 71 percent stressed the importance of regulation.
According to KPMG expert Edvinas Žukauskas, growing public distrust of AI systems and concerns over data security led to the European Parliament’s decision to adopt the AI Act. The law will enter fully into force in 2026, but certain provisions will start to apply earlier, in stages.
“Innovative businesses that aim for higher productivity, better profit indicators and quality services or products for their customers are rapidly implementing AI technologies in their operational processes. They are forced to do so in order to remain competitive in a dynamic environment and to gain or maintain a strong position in the market. The new law, which enters into force in 2026, will affect business processes, but it will also ensure that the development and use of the technology meet the set standards for transparency, security and the protection of human rights, the very issues that prompted public concern,” comments the expert.
Significant fines are expected
The new Artificial Intelligence (AI) Act introduces four risk categories for identifying and classifying AI systems according to their level of risk. The low-risk category covers systems that pose no threat and can be used without restrictions.
The moderate-risk category covers systems that may have some impact on user security or privacy, but not enough to require very strict regulatory measures. High-risk systems, which may have a significant impact, require additional security measures. The fourth category, unacceptable risk, covers systems that may violate human rights or EU regulations; such systems are prohibited.
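For illustration only, the four tiers can be thought of as a simple classification structure. The Python sketch below is a hypothetical rendering of the taxonomy described above; the category names, the obligations attached to them and the helper function are the author’s assumptions, not part of the Act or of any official tooling.

```python
from enum import Enum

class AIRiskCategory(Enum):
    """Illustrative model of the four risk tiers described in the article."""
    LOW = "low"                    # poses no threat; usable without restrictions
    MODERATE = "moderate"          # some impact on security or privacy; lighter measures
    HIGH = "high"                  # significant impact; additional security measures required
    UNACCEPTABLE = "unacceptable"  # may violate human rights or EU rules; prohibited

# Hypothetical mapping from each tier to the consequence sketched in the text.
OBLIGATIONS = {
    AIRiskCategory.LOW: "use without restrictions",
    AIRiskCategory.MODERATE: "lighter regulatory measures",
    AIRiskCategory.HIGH: "additional security measures and oversight",
    AIRiskCategory.UNACCEPTABLE: "prohibited; fines for continued use",
}

def allowed(category: AIRiskCategory) -> bool:
    """A system may be deployed unless it falls into the prohibited tier."""
    return category is not AIRiskCategory.UNACCEPTABLE
```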
“National authorities will supervise compliance with the provisions of the AI Act, assisted by the European Commission’s AI Office. If a company is found to be using AI systems classified as an unacceptable risk, it may be punished with financial fines,” warns E. Žukauskas.
The new law stipulates that companies deploying the highest-risk AI systems can be fined up to 35 million euros or up to 7 percent of their total global annual turnover for the previous financial year, depending on the type of infringement and the size of the company. Smaller violations can draw fines of up to 7.5 million euros or 1.5 percent of annual turnover.
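As a rough worked example of how these caps scale with turnover: the figures below are illustrative only, and the sketch assumes the commonly described rule that the higher of the two caps applies to most companies (the Act applies the lower cap to SMEs); the actual fine depends on the infringement and the regulator’s assessment.

```python
def max_fine_eur(annual_turnover_eur: float,
                 flat_cap_eur: float = 35_000_000,
                 turnover_share: float = 0.07) -> float:
    """Illustrative upper bound on a fine: the flat cap or the share of
    global annual turnover, whichever is higher (assumption based on how
    the most serious infringements are described; not legal advice)."""
    return max(flat_cap_eur, turnover_share * annual_turnover_eur)

# A company with EUR 1 billion in global annual turnover:
# 7% of turnover = EUR 70M, which exceeds the EUR 35M flat cap.
print(max_fine_eur(1_000_000_000))  # 70000000.0

# The lower tier from the article (EUR 7.5M or 1.5% of turnover):
print(max_fine_eur(1_000_000_000, flat_cap_eur=7_500_000,
                   turnover_share=0.015))  # 15000000.0
```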
Businesses must assess the risks immediately
The law, which takes full effect in 2026, will push businesses that have implemented or are using artificial intelligence (AI) technologies in their operations to conduct a risk assessment immediately, if they have not already done so.
“The first step a business must take now that the law has been passed is to assess its existing AI systems and categorize them according to risk levels. Depending on which risk category (low, moderate, high or unacceptable) the AI system falls into, significant changes to processes and operations may be required. This is an important step in order to avoid violations of safety and ethical norms and the sanctions provided for in the law,” says expert E. Žukauskas.
Another step that E. Žukauskas urges companies to take immediately is implementing risk management systems. These can mitigate the impact of risks on operational efficiency and reduce the likelihood of compliance violations.
“The adopted law requires companies to ensure that their AI technologies and data processing comply with the highest data protection standards. It will therefore be important not only to raise employees’ awareness of data security, but also to teach them how to recognize and respond to potential threats. In addition, security protocols will need to be updated constantly, and resilience plans prepared, in order to deal effectively with emerging challenges,” notes E. Žukauskas.
He adds that to maximize the potential of AI, it is important to find a balance between automation and human competencies. This means that human oversight should be integrated into the operation of AI systems, ensuring that decisions are made ethically and responsibly.
“Human supervision is a necessary part of AI systems, guaranteeing data security and compliance with ethical standards. It is recommended to involve external experts who can ensure that AI decisions are thoroughly reviewed and, where necessary, adjusted to avoid incorrect or harmful outcomes,” advises E. Žukauskas.
In addition, according to the expert, it is necessary to inform the company’s employees and partners about how AI systems operate, the opportunities they provide and the risks they pose. This will help create a responsible and ethical work environment and strengthen trust both within the company and with external partners.