How cybercriminals are increasingly using AI – and how you can protect yourself

2023-11-17 12:06:00

AI currently plays only a minor role in online fraud, but educating the public is now the most effective preventative measure to protect against future damage. The number of fraud offenses committed on the Internet is rising steadily; according to AI experts, the reasons are increasing internet use and technological progress.

In 2022, reported cybercrime offenses increased by 30 percent to more than 60,000, covering both sexual offenses and property crimes. Fraud offenses rose by 23 percent to more than 27,600 cases, with damage amounting to 700 million euros – although the number of unreported cases is likely to be much higher, experts from the Federal Criminal Police Office (BK) and the Austrian Road Safety Board (KFV) warned at a press conference in Vienna. “One reason for the rapid increase is constant technological progress. In addition, the perpetrators often operate from abroad, which makes it more difficult to trace the crimes and to reach the perpetrators and the stolen assets,” explained Manuel Scherscher, head of the department for economic crime and fraud at the BK. Artificial intelligence now represents a new challenge in this area.

One example is so-called deepfake clips. These are manipulated audio or video recordings in which AI is used to integrate a person’s face and voice in real time to claim something that is false or never happened. “In order to protect yourself effectively against deepfakes (…), you should know what technical options already exist and how you can protect yourself from them,” explained Sven Kurras from the company “Risk Ident”, which uses intelligent software in the fight against online fraud. The AI expert showed that faces, voices, videos and even entire dialogues can already be artificially generated, although some of these methods are currently still error-prone.

How to detect deepfakes

There are a few points to pay attention to in order to detect deepfakes, says Kurras. “Blurred transitions between faces and the background are very suspicious, as are asymmetrical glasses. If parts of images or videos have different resolutions, you should also be on the lookout.” Gut feeling is also important: Is the other person behaving out of character? Are there any abnormalities in facial expressions, mouth movements, teeth, blinking or lip synchronization? According to the AI expert, a different pronunciation, emphasis, choice of words or dialect than usual can also be an alarm signal. If you become suspicious during a live video call, you can ask the other person to carry out specific tests, such as singing, to unmask these text-to-speech models.
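These cues are aimed mainly at human observers, but some of them can be roughly automated. Below is a minimal sketch, assuming Python with OpenCV and its bundled Haar face cascade, of one such check: comparing the sharpness of the detected face region with that of the whole frame, since a strong mismatch can hint at the “different resolutions” cue Kurras mentions. The file name and threshold are purely illustrative and not part of any tooling described in the article.

```python
# Minimal sketch (illustrative only): flag frames where the face region and the
# rest of the frame differ strongly in sharpness, a rough proxy for the
# "different resolutions" cue. Requires opencv-python; the threshold below is
# an assumption, not a validated value.
import cv2

def sharpness(gray):
    # Variance of the Laplacian is a common proxy for image sharpness.
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def face_frame_sharpness_gap(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None  # no face detected, nothing to compare
    x, y, w, h = faces[0]
    return abs(sharpness(gray[y:y + h, x:x + w]) - sharpness(gray))

if __name__ == "__main__":
    # Hypothetical still image saved from a video call.
    frame = cv2.imread("call_frame.jpg")
    if frame is not None:
        gap = face_frame_sharpness_gap(frame)
        if gap is not None and gap > 500:  # illustrative threshold
            print("Face and background sharpness differ noticeably - inspect manually.")
```

A check like this can only raise a flag for manual review; it cannot prove or rule out a deepfake on its own.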

However, AI does not only have dark sides; it also offers a lot of positive potential, for example in simplifying work and increasing efficiency, the experts said. But it is often unclear what is permitted and at what point one is committing a criminal offense. As a recent survey by the KFV’s property protection department shows, only just under ten percent of those surveyed said they had comprehensive knowledge of AI, 52 percent had only basic knowledge, 35 percent classified their understanding as limited and three percent said they had no knowledge at all in this area. Armin Kaltenegger, head of the property protection department and the law and standards department at the KFV, explained that the use of artificial intelligence can also become a criminal offense unknowingly and without bad intentions, for example when copyrights are violated or AI-controlled algorithms are trusted negligently.
