Vienna, November 6, 2024
At the Haus der Industrie, Eva Weissenberger (WKO) moderated the panel with Claudia Zettel, editor-in-chief of “futurezone”, Eugenia Stamboliev, media scholar and philosopher of technology at the University of Vienna, and Maimuna Mosser, CEO of Google Austria.
Given increasing disinformation and the growing role of AI, public trust in the media is declining. Zettel sees the problem less in the declining trust itself than in the fact that people are increasingly turning to other sources such as YouTube or X. “I think the basic problem is that there are two worlds. (…) Our understanding of facts differs from that of other groups,” Zettel explained. Stamboliev also sees a crisis of trust, as technology increasingly shapes the education system: “We are destroying our relationships of trust.” She emphasized that many technologies on the market are not scientifically grounded, which is why users must independently verify AI-generated content.
Mosser emphasized Google’s responsibility to prevent disinformation: “Our tools are intended to provide qualified information and increase the visibility of such content.” Google maintains strict guidelines on AI-generated content, disinformation and hate speech. “AI is not anchored at Google without guidelines,” Mosser clarified.
A question of responsibility
At another panel, with Mubashara Akhtar, researcher at King’s College London, Eva Wackenreuther, fact-checker at ORF, and Valerie Schmid, editor at APA, the discussion centered on who bears responsibility for identifying “fake news”.
Schmid observed a “great general uncertainty” among readers, one that “media alone is not enough” to counteract. She criticized the insufficient efforts of social media platforms, which fail to live up to their “massive responsibilities” regarding AI and disinformation. Wackenreuther likewise argued that responsibility for detecting AI-generated content and fakes does not lie with the media alone. Akhtar located it in science as well: there is a need for a “sensitization of the general public” in dealing with AI-generated data, for “industry-specific methods, standards and specific guidelines”, and for a stronger focus on research so that “the AI tools can comply with certain standards”.
Pauline Severin
OTS ORIGINAL TEXT PRESS RELEASE UNDER THE EXCLUSIVE RESPONSIBILITY OF THE SENDER FOR CONTENT – WWW.OTS.AT | NEF
**Interview: Panel Discussion on Misinformation and Trust in Media**
**Date:** November 6, 2024
**Location:** Haus der Industrie, Vienna
**Moderator:** Eva Weissenberger (WKO)
**Guests:** Claudia Zettel (Editor-in-Chief, Futurezone), Eugenia Stamboliev (Media Scholar and Philosopher of Technology, University of Vienna), Maimuna Mosser (CEO, Google Austria)
---
**Eva Weissenberger:** Thank you all for joining us today to discuss the critical issue of misinformation, especially in the context of the growing influence of generative AI. Claudia, you’ve spoken about the shift in how people consume information. Can you elaborate on the implications of this shift for traditional media?
**Claudia Zettel:** Absolutely, Eva. The challenge lies in the fact that many people increasingly rely on platforms like YouTube or X, which present information in very engaging ways. It also means, however, that our understanding of facts can differ significantly from that of the communities that get their news from these alternative sources. The problem isn’t just the decline in trust; it’s the emergence of parallel realities in which what constitutes a “fact” varies dramatically.
**Eva Weissenberger:** Fascinating point. Eugenia, you mentioned a crisis of trust due to technology’s role in shaping education. How do you see this impacting media consumption among younger generations?
**Eugenia Stamboliev:** Yes, the implications are profound. As technology becomes more integrated into our educational systems, the way students interact with content changes. If they rely primarily on digital platforms for information, it can erode their ability to discern credible sources, fostering a culture where misinformation thrives. We’re not just seeing a decline in trust in media; we’re seeing a dismantling of the foundational relationships that build trust in general.
**Eva Weissenberger:** Maimuna, from a tech perspective, how is Google addressing the challenge of misinformation in the age of generative AI?
**Maimuna Mosser:** At Google, we recognize the dual role of technology as both a facilitator and a challenge in countering misinformation. We are investing heavily in improving our algorithms to prioritize credible sources and enhance digital literacy among users. Our goal is to support informed decision-making, but it’s also crucial that users develop critical thinking skills when interacting with AI-generated content.
**Eva Weissenberger:** Given these insights, where do you see the future of media and trust headed? Claudia, let’s start with you.
**Claudia Zettel:** I believe we must find a way to bridge the two worlds. Traditional media needs to adapt and meet audiences where they are while maintaining journalistic standards. Building partnerships with these platforms could be key to regaining trust.
**Eugenia Stamboliev:** Agreed. Education and media literacy must go hand in hand with technological advancements. We need to reevaluate how we teach young people to engage with what they read and hear, fostering critical thinking skills.
**Maimuna Mosser:** Ultimately, technology can help us create a safer information ecosystem, but it requires collaboration between tech companies, media organizations, and educational institutions to build that trust back with the public.
**Eva Weissenberger:** Thank you all for your valuable insights. It’s clear that addressing misinformation and rebuilding trust requires a collective effort from all stakeholders.