Deepfakes, which use AI and machine learning to create sophisticated fake content, can be used for fun, such as generating portraits of people who do not exist or services that let anyone appear in a movie trailer. But the U.S. FBI has warned that deepfakes are also being used for fraud, for example in remote job interviews. Given this, is there any way to tell whether the person on a video call is using a deepfake?
From July to August 2022, the Better Business Bureau, a U.S. and Canadian nonprofit that promotes marketplace trust, warned that scammers are using deepfakes to deceive consumers, for example by faking celebrity product endorsements or fabricating dramatic before-and-after results for diet products. As a countermeasure against deepfakes, it suggested examining images and videos carefully for telltale blurry regions.
On the other hand, unlike a stored image or video, a real-time video call can easily show choppy movement or blurring depending on the connection or camera quality, so such artifacts are less useful as evidence. Perhaps for that reason, real-time deepfakes in video calls went largely unexamined until very recently. Deepfakes do, however, have a fatal weakness that makes them easy to spot: they are bad at rendering a face in profile. So if you suspect the person on a video call is using a deepfake, it is a good idea to ask them to turn their head to the side.
In fact, deepfakes of famous people such as Sylvester Stallone, Ryan Reynolds, and Elon Musk are all convincing from the front, but once the face turns to the side, the flaws are obvious at a glance: the surface of the face breaks apart, the eyes are not rendered, or the generated facial geometry visibly fails.
The profile collapses because the model never learns enough about the face's outline as seen from the side. Since a deepfake model is trained mostly on near-frontal views and obtains little data about the regions around the temples, it is essentially left to invent the shape of the face in profile.
Deepfake face-swapping software detects landmark points on the face so that its algorithm can match face orientation and position. As shown in a 2015 paper, the number of detectable landmarks for major facial features such as the eyebrows, eyes, nose, and mouth drops by 50 to 60% in profile compared with a frontal view. As a result, the area the AI must fill in without landmark guidance grows much larger in profile, which is why deepfake-generated profiles look inconsistent.
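To see why a side view starves the algorithm of landmarks, consider a toy sketch (not a real detector): treat each of the 68 landmarks used by common face-alignment models as a point with an outward-facing direction, rotate the head, and count how many points still face the camera. The coordinates and the occlusion rule here are simplified assumptions chosen purely for illustration.

```python
import math

def visible_landmarks(yaw_deg, landmarks):
    """Count landmarks whose outward normal still faces the camera (+z)
    after rotating the head by yaw_deg about the vertical axis."""
    yaw = math.radians(yaw_deg)
    visible = 0
    for nx, nz in landmarks:
        # Rotate the (nx, nz) normal in the horizontal plane by the yaw angle.
        rotated_z = -nx * math.sin(yaw) + nz * math.cos(yaw)
        if rotated_z > 0:  # still pointing toward the camera
            visible += 1
    return visible

# Hypothetical face: half of the 68 landmarks lean toward the left side
# of the face (nx = -0.7), half toward the right (nx = +0.7); all point
# mostly at the camera when viewed head-on (nz = 0.7).
face = [(-0.7, 0.7)] * 34 + [(0.7, 0.7)] * 34

frontal = visible_landmarks(0, face)   # full face toward the camera
profile = visible_landmarks(90, face)  # head turned fully sideways

print(frontal, profile)  # prints "68 34"
```

In this simplified model, half the landmarks self-occlude at a 90-degree turn, in the same ballpark as the 50 to 60% reduction reported above; the points that vanish are exactly the ones the deepfake must then hallucinate.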
On the other hand, because abundant video footage exists for Hollywood actors and TV personalities, including many profile shots, deepfakes of them can reproduce the side of the face reasonably well. For example, one convincing deepfake video of a popular TV personality kept the profile and facial surface intact, but it reportedly required 66 hours of footage. Ordinary people are rarely filmed in profile that extensively, so a deepfake impersonating them has difficulty reproducing a side view.
An official from an AI security company suggested a practical safeguard for video conference calls: have participants submit a profile photo in advance as an identity check, then ask them to turn sideways during the call for comparison; if the rendering falls apart when they turn, a deepfake is likely. One expert likewise pointed out that the profile is a major weakness of current deepfake technology: deepfakes work quite well on frontal faces but poorly on side views, leaving the AI to do a certain amount of guesswork.
Of course, a new generation of 3D landmark positioning systems could improve deepfake performance. But in the end, such systems still depend on acquiring profile data to reflect in the model. Unless the target is a celebrity with extensive video exposure, there is usually little profile data available, so this alone will not solve the deepfake's profile problem.
However, in a paper published by a Taipei University research team, samples were shown in which less-inconsistent profile views were generated even from frontal photographs in which the profile is barely visible.
Touching the face or waving a hand in front of it also degrades deepfake quality, but techniques for handling objects overlapping the deepfaked face are advancing, so for now the profile remains the hardest flaw to fix. Of course, as deep learning approaches such as NeRF mature, it is only a matter of time before this changes, and it is questionable whether asking someone to turn sideways will still be a valid test five years from now. The fact that a profile view reveals a deepfake reflects only the current state of the technology.
The biggest threat from deepfake spoofing and fraud is that we do not expect it.