AI: after the excitement, time for choices in the editorial offices

2024-04-20 06:07:44

Simple aid or lever for transformation: many journalists are experimenting with artificial intelligence (AI), and newsrooms are feeling their way through the issues it raises, from employment to ethics.

What will become of my job?

The question is on everyone’s minds at the International Journalism Festival in Perugia, Italy, which runs until Sunday.

AI tools, which imitate human intelligence, are commonly used in newsrooms around the world to transcribe audio files, summarize texts, translate… So much so that in Germany, the Axel Springer group announced in early 2023 job cuts at its dailies Bild and Die Welt, on the grounds that AI could now “replace” certain journalists.

Over the past year and a half, generative AI, which can produce text and images from simple prompts in everyday language, has opened the way to new uses and raised new fears. Voices and faces can, for example, be cloned to produce a podcast or present a TV news program. Last year, the Filipino news site Rappler created a brand aimed at young people by converting its long articles into comics, graphics and even videos.

Media players agree that the journalist’s profession will increasingly concentrate on tasks with higher added value. “It’s you who do the work” and “the tools we produce are assistants,” insisted the head of Google News, Shailesh Prakash, in Perugia.

A follow-up question

The cost of entry into generative AI has fallen with the arrival of ChatGPT in November 2022. The tool created by the American start-up OpenAI is now accessible to small newsrooms.

Inspired by this, and with the help of engineers, the Colombian investigative outlet Cuestión Pública has developed its own tool that can comb through its archives for background material when news breaks. “This can be edited immediately on our application,” stresses Claudia Baez, general director of Cuestión Pública.

However, “many media companies do not produce their own language models,” the foundation of AI interfaces, observes Natali Helberger, professor at the University of Amsterdam, who stresses the need for “safe and trustworthy” technologies.

Information/disinformation

According to an estimate by EveryPixel Journal last summer, as many images were created by AI in a single year as in 150 years of photography. Faced with this tidal wave of synthetic content, how can genuine information cut through?

Faced with “deepfakes” (sophisticated doctored audio and video), media and tech players are joining forces, notably through the Coalition for Content Provenance and Authenticity (C2PA), which seeks to define shared standards.

“The heart of our work remains information gathering and field reporting. We will rely on human reporters for a long time to come,” perhaps with support from artificial intelligence, says Sophie Huet, recently appointed deputy news director in charge of AI at AFP.

From the law of the jungle to regulation

The NGO Reporters Without Borders (RSF), which has broadened its remit to defending reliable information, initiated the Paris Charter on AI and Journalism at the end of 2023. “It emphasizes transparency. To what extent will publishers have to disclose when they have used generative AI?” asks Anya Schiffrin, director of training at Columbia University (United States). The debate is ongoing at Swedish public radio: “Should each piece of content be labeled, or should users trust the brand?” reports its AI manager, Olle Zachrison.

Regulation is still in its infancy in the face of constantly evolving technology. A pioneer, the European Parliament adopted a text in mid-March regulating AI models while seeking not to stifle innovation. Within newsrooms themselves, charters and good-practice guides are spreading. “We change our guidelines every three months,” notes Ritu Kapur, head of Quintillion Media in India, where no article may be written by AI and generated images may not depict reality.

How to avoid looting?

AI models need data to feed on. But should they pay the suppliers? In December, The New York Times filed a copyright-infringement lawsuit against OpenAI, the creator of ChatGPT, and Microsoft, its main investor. Other press groups, by contrast, have signed agreements with OpenAI, including Germany’s Axel Springer, the American news agency AP and, more recently, the French daily Le Monde and the Spanish group Prisa Media (El País, As).

At a time when the press is short of resources, the temptation to collaborate is strong, notes Emily Bell, professor at Columbia’s journalism school. She senses a form of external pressure: “Get on board, don’t miss the train!”

