“French cyber policy has shown itself to be far-sighted and pragmatic” (James Hodge, Splunk)

2023-09-28 12:46:00

LA TRIBUNE – British Prime Minister Rishi Sunak has repeatedly affirmed his desire to make his country an international champion of artificial intelligence. This fall he is organizing a major summit on the technology at Bletchley Park, where the Enigma code was cracked. Where does AI regulation stand in the UK today?

JAMES HODGE – A white paper was published this summer proposing a "pro-innovation" approach to AI regulation, which could pave the way for a law next year. It includes the idea of regulating sector by sector, rather than treating the technology monolithically. The aim is to learn from certain mistakes made with the GDPR, such as every website asking for your cookie preferences even when this matters little to the Internet user.

The same applies to AI: take on one side a chatbot I use to book a medical appointment, to which I therefore transmit personal health information, and on the other a chatbot on a household-appliance manufacturer's website that I ask a few questions because I am trying to understand why my washing machine no longer works. The technology is the same, but the privacy implications are completely different.


The same goes for facial recognition. Checking the identity of passengers boarding a train and helping an anti-terrorist unit spot a man suspected of planning an attack are not the same thing.

The approach advocated by the white paper is therefore, in my opinion, very pragmatic. However, it differs from that of the European AI Act, which places the majority of compliance obligations on the suppliers of the technology rather than on the different industries that use it. A sector-by-sector approach is, in my view, more relevant: otherwise we end up with very general legislation that, by trying to cover too many possible cases, risks not working, hindering innovation, or even missing certain risks. We see this with the GDPR approach to cookies: most Internet users just click through without reading the details.

There has been a lot of talk recently about the damage that the wave of generative AI could cause in the hands of hackers. How do you assess the risk posed by AI in terms of cybersecurity?

Since a program like ChatGPT is capable of writing lines of code, it seems only natural that it is also capable of writing malware, and thus lending a hand to malicious actors. And indeed, the cybersecurity company CyberArk recently demonstrated that it had succeeded in getting ChatGPT to generate a polymorphic virus by skillfully rephrasing its prompts.

However, the virus in question was more of an experiment than anything else. Just because an AI managed to write it does not mean it is effective. We must also ask whether it works correctly once deployed, and whether it can evade antivirus software and other protective barriers. It takes a great deal of knowledge to tick all these boxes, and generative AI is, for the moment, not there yet.



My view, shared by many other players in the sector, is that the main risk posed by generative AI lies rather in manipulation: by allowing hackers to express themselves fluently, and without mistakes, in a language that is not their own, it helps them lull the vigilance of victims and provoke the human errors that remain the source of the vast majority of cyberattacks today.

The good news is that companies are getting better at spotting malware. A recent Splunk study shows that it now takes a company an average of two months to detect such software in its systems; five years ago, it was nine months. French companies are particularly well positioned: 29% of French organizations reported breaches over the last two years, compared with 61% in the rest of Europe.

What explains this French success, in your opinion?

France has a more qualified population in the field of cybersecurity than the rest of Europe: 10% of French companies report difficulty recruiting qualified cybersecurity staff, compared with 23% at the European level. Overall, French cyber policy has shown itself to be far-sighted and pragmatic. We have notably seen major initiatives such as the "Cyber Campus", which brings together the entire French cybersecurity community, from private companies to public institutions, including teachers and students.


The lack of talent is one of the main obstacles facing cybersecurity policies and companies today. However, we believe that generative AI can provide solutions here by allowing more workers to handle complex cyber defense tools. This is why at Splunk we recently launched an "AI assistant", which lets you formulate cyber queries in natural language, for example "Were there any security breaches in my firewall today?". This lowers the barrier to asking questions and obtaining answers about the security of the organization.
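The core idea behind such an assistant — turning a natural-language security question into a formal log-search query — can be sketched in a few lines. The keyword rules and query syntax below are invented for illustration; a real assistant such as Splunk's would rely on a large language model and the platform's own query language, not this toy matcher.

```python
# Illustrative sketch only: maps a natural-language security question to a
# hypothetical log-search query via keyword matching. The rule table and
# query strings are assumptions made up for this example.

RULES = [
    # (keywords that must all appear in the question, query to run)
    (("firewall", "breach"), "index=firewall action=blocked OR action=denied"),
    (("login", "failed"), "index=auth result=failure"),
    (("malware",), "index=endpoint signature=*malware*"),
]

def question_to_query(question: str) -> str:
    """Return the first query whose keywords all appear in the question."""
    q = question.lower()
    for keywords, query in RULES:
        if all(k in q for k in keywords):
            return query
    # Fallback when no rule matches: show a handful of recent events.
    return "index=* | head 10"

if __name__ == "__main__":
    print(question_to_query("Were there any security breaches in my firewall today?"))
```

An LLM-backed assistant generalizes this pattern: instead of a fixed rule table, the model is prompted to emit a query in the target language, which is then validated before execution.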
