2023-12-04 19:07:53
Overall, there is a desire to better understand and be better informed about the consequences of generative AI, particularly at work. In addition, 70% of French people believe that generative AI should remain under human control, with its output proofread and verified after each use.
Towards trustworthy AI…
The study reveals only partial confidence among the French regarding AI: 30% express a pressing need to develop “trustworthy” AI. This means pairing appropriate control and regulation with innovation and progress, so that Artificial Intelligence meets citizens’ expectations while remaining secure.
68% of employees use ChatGPT at work behind their employer’s back.
Since ChatGPT is a public, unprotected application, many companies face a real control issue. The question of ethics, although complex and dependent on socio-cultural context, quickly arises alongside that of legal responsibility. To answer crucial questions of ethics and data protection, and to limit the controversies and incidents linked to generative AI, trustworthy AI must be built.
Two-thirds of French people call for trusted AI
According to an IBM study, ethical AI is deployed in business step by step. First, its use is placed in the context of the company’s overall strategic vision. Next, governance is established to ensure its implementation. Finally, it is integrated into the business cycle by engaging stakeholders and organizing a dedicated structure. A policy must also be defined, supported by culture management and internal training. A rigorous methodology and rigorous processes are essential.
GDPR and AI, what balance?
Today, AI technologies feed on massive quantities of data, much of it personal and even sensitive. A balance must therefore be found between protecting that data, complying with legal rules, and developing the technology. AI brings its share of GDPR complexity, with risks that can lead to discrimination, data theft, privacy violations, or abusive handling of information.
In response to these issues, the CNIL has published a number of public resources addressing the challenges posed by AI, including: how to secure a system, how to ensure transparency, how to learn about the issues surrounding the use of AI, how to self-assess your AI system, etc.
A large-scale example of these efforts is the AI Act, the European regulation that aims to govern uses of artificial intelligence by classifying applications according to their level of ethical risk: minimal, limited, high, or unacceptable. Its aim is to ensure that AI systems placed on the European market are safe and respect citizens’ fundamental rights and EU values, while guaranteeing legal certainty and strengthening security requirements.
Data protection and end-to-end encryption
To avoid infringing user rights, it is necessary to respect certain fundamental rules of the GDPR, namely:
- Data compliance and legitimacy of use
- Collection of, and compliance with, consent
- Limitation of data use to what is necessary
- Assurance of data security
- Information and transparency obligations
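The consent and data-minimization rules above can be sketched as a simple gate in code. This is a hypothetical illustration, not a compliance implementation: the `ALLOWED_FIELDS` schema, the `collect` helper, and the field names are all assumptions.

```python
# Illustrative sketch: collect only the fields needed for a stated
# purpose (data minimization), and refuse collection without consent.
ALLOWED_FIELDS = {"email", "name"}  # purpose-limited schema (hypothetical)

def collect(record: dict, consent_given: bool) -> dict:
    """Keep only necessary fields; raise if no consent is recorded."""
    if not consent_given:
        raise PermissionError("no consent recorded for this data subject")
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

result = collect({"email": "a@b.fr", "name": "Ana", "ssn": "123"},
                 consent_given=True)
print(result)  # → {'email': 'a@b.fr', 'name': 'Ana'} — the SSN is never stored
```

A real system would also log the consent record itself, since the GDPR requires being able to demonstrate that consent was given.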
To comply with the GDPR, businesses can opt for end-to-end encryption, a security technique that ensures only the sender and the recipient can access the content of the data exchanged, even when it passes through third-party networks. For artificial intelligence and data protection, end-to-end encryption plays a crucial role: it helps ensure the confidentiality and security of the sensitive information used in AI systems.
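The end-to-end principle can be shown with a minimal standard-library sketch: a one-time pad, where only the two endpoints holding the shared key can recover the plaintext. This is illustrative only; production systems should use a vetted authenticated-encryption scheme (e.g. AES-GCM) from an audited library.

```python
# Toy end-to-end encryption sketch using a one-time pad (XOR).
# Only parties holding the shared key can recover the plaintext.
import secrets

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    """XOR each plaintext byte with the corresponding key byte."""
    return bytes(k ^ p for k, p in zip(key, plaintext))

decrypt = encrypt  # XOR is its own inverse

plaintext = b"sensitive personal data"
key = secrets.token_bytes(len(plaintext))  # shared out-of-band by the endpoints
ciphertext = encrypt(key, plaintext)       # safe to transit third-party networks

assert decrypt(key, ciphertext) == plaintext  # only the key holder can read it
```

Intermediaries relaying `ciphertext` learn nothing useful without `key`, which is exactly the guarantee end-to-end encryption provides.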
Implement security audits
The stakes around security audits are significant: businesses today are more exposed to cyber attacks than ever before. It is therefore essential to run a regular, dynamic process to verify that the data protection measures in force are reliable and up to date. Otherwise, companies risk losing confidential data, with financial, human, or reputational consequences of varying severity.
Given the massive data exchanges linked to AI, it is essential for companies to protect their information systems effectively. Developing trusted AI should therefore include these audits, in order to detect and prevent potential security flaws.
They can take various forms depending on the objectives and needs specific to the company:
- Technical diagnostic
- Vulnerability test
- Strategic audit
- Stress test
- Social engineering test
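One recurring piece of such audits, checking a system's configuration against a security policy, can be sketched in a few lines. The policy names, thresholds, and `audit` helper below are hypothetical, chosen only to show the shape of an automated check.

```python
# Hypothetical sketch: flag configuration settings that violate a
# security policy during an automated audit pass.
POLICY = {
    "tls_min_version": "1.2",
    "password_min_length": 12,
    "encryption_at_rest": True,
}

def audit(config: dict) -> list:
    """Return the list of policy violations found in `config`."""
    findings = []
    if config.get("tls_min_version", "1.0") < POLICY["tls_min_version"]:
        findings.append("TLS version below 1.2")
    if config.get("password_min_length", 0) < POLICY["password_min_length"]:
        findings.append("password policy too weak")
    if not config.get("encryption_at_rest", False):
        findings.append("data at rest is not encrypted")
    return findings

print(audit({"tls_min_version": "1.0", "password_min_length": 8}))
```

Running such checks on every deployment, rather than once a year, is what makes the audit process "dynamic" in the sense described above.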
5 direct actions for your business
Transparency and traceability: Provide users with clear explanations of how data is collected, used, and processed, as well as of the decisions made by AI systems.
Data protection: Make data security a priority when developing algorithms and AI models. Stay up to date with the GDPR, the AI Act, and any other applicable laws.
Responsible data management: Implement robust methodology and governance protocols around data management.
Security and continuous assessment: Run regular protocols to assess risks and the effectiveness of measures already in place. Stay up to date and certified by professional third parties.
Training and awareness: Train your teams on ethical issues and best practices in data protection. Encourage a security-focused culture within your organization, recruiting a Chief Ethics Officer if necessary.