EU warns TikTok over illegal content and misinformation

The warning, and the demand that Chinese-owned TikTok provide details within 24 hours about how it is combating misinformation online, echoes similar warnings Thierry Breton sent this week to the social network X and to Facebook parent company Meta.

“Given that your platform is widely used by children and teenagers, you have a special responsibility to protect them from violent content depicting hostage-taking and other gruesome videos that are reported to be widely distributed on your platform without adequate safeguards,” Breton wrote in a letter to TikTok CEO Shou Zi Chew.

Breton stressed that TikTok and other major internet platforms must comply with the Digital Services Act (DSA), which gives the EU the power to fine platforms up to 6 percent of their global annual turnover if they fail to combat illegal content.

As with Elon Musk, the owner of the X platform, and Meta chief Mark Zuckerberg, Breton told Shou Zi Chew that his request was urgent and required a response within 24 hours.

Breton published the full text of the letter on his X account and on an account he created this week on Bluesky, a new social network that competes with X.

According to Bellingcat, an investigative journalism website that verifies, investigates and debunks claims, misinformation surrounding Hamas's attack on Israel last weekend has increased dramatically.

It documented examples of false or misleading videos being posted on X, TikTok and other platforms.

Bellingcat also found several such videos on Telegram, which is not yet subject to the DSA. In February, however, the DSA will come into full force and apply to smaller platforms as well.


How can social media platforms better protect younger users from the spread of misinformation and violent content?

**Interview with Dr. Emily Vargas, Digital Media Analyst**

**Editor:** Thank you for joining us today, Dr. Vargas. Recent criticisms from EU Commissioner Thierry Breton highlight serious concerns regarding misinformation and violent content on platforms like TikTok. What are your thoughts on the urgency of these warnings?

**Dr. Vargas:** Thank you for having me. The warnings from Commissioner Breton reflect a growing recognition of the unique role social media platforms play in shaping information dissemination, especially given their popularity among younger audiences. His demand that TikTok provide details on its strategies to combat misinformation within a very short timeframe shows the seriousness of the situation. Misinformation spreads rapidly on platforms like TikTok and has significant implications for public discourse and safety [1].

**Editor:** Breton specifically mentioned the need for platforms to protect children and teenagers from violent content. How do you see this responsibility shaping the policies of such companies?

**Dr. Vargas:** That’s a critical point. Given that TikTok primarily attracts younger users, the responsibility to filter out harmful content is even more pronounced. Companies must establish comprehensive moderation policies and invest in technology that can effectively identify and eliminate not just misinformation but graphic content as well. This is especially important as studies, including one from the Integrity Institute, show that misinformation is more likely to be shared than factual content [[1](https://www.nytimes.com/2022/10/13/technology/misinformation-integrity-institute-report.html)].

**Editor:** Can you elaborate on how the frameworks for policing content might change in light of these pressures?

**Dr. Vargas:** Certainly! We are likely to see more stringent regulations and accountability measures put in place by governments. Platforms may be forced to adopt more transparent content moderation practices and provide regular reports on their efforts to combat misinformation. The EU's demand is a precursor to possibly larger-scale actions aimed at safeguarding users, which may include hefty fines for lax content management. This not only protects users but also compels companies to prioritize safety and accuracy in their algorithms [[1](https://www.nytimes.com/2022/10/13/technology/misinformation-integrity-institute-report.html)].

**Editor:** What can users do to protect themselves and their communities from misinformation and violent content while using these platforms?

**Dr. Vargas:** Users have a role to play as well. They should be vigilant and critical of the content they consume and share. Educational initiatives around media literacy can empower users, especially younger ones, to question sources and verify claims before sharing. Additionally, advocating for stronger content controls on these platforms can amplify the call for safer online environments. It’s about creating a culture of responsibility on both the user side and the platform side.

**Editor:** Thank you, Dr. Vargas, for your insights on this pressing issue. It will be interesting to see how these platforms respond to the scrutiny they’re under.

**Dr. Vargas:** Thank you for having me! The next few months will be pivotal in shaping the future of social media governance.
