American researchers point out a major flaw in the way AI is trained

2023-11-28 09:30:00

Image : gremlin/Getty Images.

Following an experiment on how artificial intelligence models are designed, a team of computer science researchers from the University of Toronto and MIT warns that the design of AI models presents a problem which, if not remedied quickly, might have disastrous consequences for human beings.

In short, all AI models must be trained on large amounts of data, and their investigation indicates that the current way of doing so is deeply flawed.

AI is already influencing our daily lives

Look around and observe how AI has already made its way into our daily lives and into society: Alexa reminds you of your appointments, machines diagnose the cause of your fatigue, algorithms suggest prison sentences, and so on. Not to mention that certain professions use AI tools, for example to filter bank loan applications. In 10 years, virtually everything we do will be controlled by an algorithm.

So if you have submitted rental applications, loan applications, or applied for jobs that seemed made for you and received only refusals, it may not simply be bad luck. These negative results may actually be due to the poor training of artificial intelligence algorithms.

More specifically, as Aparna Balagopalan, David Madras, David H. Yang, Dylan Hadfield-Menell, Gillian Hadfield and Marzyeh Ghassemi point out in their article published in Science, AI systems trained on descriptive data invariably make decisions that are much more severe than those that humans would make.

And if this flaw is not corrected, these AI systems might wreak havoc in areas involving important decision-making.

A notable difference in judgment

Normative or descriptive?

As part of a project investigating how AI models justify their predictions, the aforementioned scientists found that humans in the study sometimes gave different answers when asked to assign descriptive or normative labels to the same data.

A "normative" statement implies a value judgment, because it indicates what should be. For example: "He should work harder to pass his exam." A descriptive statement, for its part, is objective, since it describes what is. For example: "The rose is red."

The team, perplexed, decided to explore the question further with another experiment; this time, it gathered four datasets in order to test more configurations.

A judgment that takes context into account

Among the datasets, the scientists chose one containing photos of dogs, together with a regulation prohibiting aggressive dogs from entering an apartment. They then asked several groups to apply normative or descriptive labels, using a process that reflects how training data is typically assembled. This is where things get interesting.

The people in charge of "descriptive" labels had to decide whether certain characteristics were factually present or not: aggressiveness, poor hygiene, and so on, without knowing the context. If they answered yes, these people were unknowingly indicating that the rule had been violated and that the dog should be banned from the apartment. At the same time, another group was tasked with applying "normative" labels to the same images, after being informed of the rule on aggressive dogs.

In this study, it turned out that participants were more likely to condemn dogs when they were unaware of the rules.

The difference in judgment is also significant: the group assigning "descriptive" labels unknowingly condemned 20% more dogs than the group that was aware of the regulation.
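The mechanism is easier to see in code. Below is a minimal, hypothetical Python sketch, not the researchers' actual protocol or data, of how the two labelling pipelines can diverge: descriptive annotators tick factual traits without context and the rule is then applied mechanically, while normative annotators judge the rule directly. The traits and decision functions are invented.

```python
# Purely illustrative sketch (not the study's protocol or data): a
# "descriptive" pipeline silently turns factual feature tags into rule
# violations, while "normative" labellers judge the rule with context.

from dataclasses import dataclass

@dataclass
class DogPhoto:
    looks_aggressive: bool  # trait a context-free annotator might tick
    borderline: bool        # ambiguous case where context changes the call

def ban_via_descriptive(photo: DogPhoto) -> bool:
    # The annotator only answers "is the trait factually present?";
    # borderline cases tend to be flagged because no stakes are visible.
    # The apartment rule is then applied mechanically to that tag.
    return photo.looks_aggressive or photo.borderline

def ban_via_normative(photo: DogPhoto) -> bool:
    # The annotator knows the rule ("aggressive dogs are banned") and its
    # consequences, and gives borderline dogs the benefit of the doubt.
    return photo.looks_aggressive and not photo.borderline

photos = [DogPhoto(a, b) for a in (True, False) for b in (True, False)] * 25

print("banned via descriptive labels:", sum(map(ban_via_descriptive, photos)))
print("banned via normative labels:  ", sum(map(ban_via_normative, photos)))
```

In this toy setup, the gap comes entirely from the borderline cases: without context they are flagged, whereas with the rule in mind they are given the benefit of the doubt.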

Towards a reproduction of inequalities?

AIs have biases…

The results of this experiment may have serious consequences for our daily lives, especially for the less privileged, all the more so if we add this flaw to the already known biases of artificial intelligence.

For example, consider the danger of a "machine learning loop" feeding an algorithm designed to evaluate doctoral applications. Fed with thousands of previous applications and with data selected by those who supervise it, the algorithm learns which candidate profiles are generally selected: candidates with good grades, a good record, a good school... and who are white.
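To make the loop concrete, here is a minimal, hypothetical Python sketch (the data, group names, and acceptance rule are all invented; this is not the admissions system described in the article). It shows how a model fitted only to past decisions ends up scoring otherwise identical candidates differently:

```python
# Hypothetical sketch of the "machine learning loop" described above: a
# model fitted to past decisions simply learns who was accepted before,
# so any historical skew is reproduced when scoring new candidates.

import random
random.seed(0)

# Historical applications: (grades, group, accepted?). Past committees
# accepted strong candidates, but those in group "A" far more often.
history = []
for _ in range(1000):
    grades = random.random()
    group = random.choice(["A", "B"])
    accepted = grades > 0.5 and (group == "A" or random.random() < 0.3)
    history.append((grades, group, accepted))

# A naive "model": the estimated acceptance rate per group among strong
# candidates, i.e. exactly what the historical data teaches it.
learned_score = {}
for g in ("A", "B"):
    strong = [acc for gr, grp, acc in history if grp == g and gr > 0.5]
    learned_score[g] = sum(strong) / len(strong)

# Two new candidates with identical grades now receive different scores,
# purely because of the group they belong to.
print("score for group A candidate:", round(learned_score["A"], 2))
print("score for group B candidate:", round(learned_score["B"], 2))
```

The "model" here is just a per-group acceptance-rate estimate, but a more sophisticated classifier trained on the same history would pick up the same skew.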

This does not mean that the algorithm is racist, but that the data used to train it is biased. The lawyer Francisco de Abreu Duarte draws a parallel with the situation of people in poverty when faced with credit: "Poor people do not have access to credit because they are poor. And since they are not given credit, they remain poor."

Nowadays, this bias problem is omnipresent in technologies that use machine learning. It concerns not only biases that might be described as racist, but also discrimination relating to gender, age, or disability, for example.

…and judge more harshly

"Most researchers interested in artificial intelligence and machine learning take into account that human judgments are biased [because they are tainted by prejudice], but this result reveals something much worse," warns Marzyeh Ghassemi, assistant professor of electrical engineering and computer science at MIT.

If human judgments are already biased, these models not only reproduce the already problematic biases, they go even further. Indeed, the data on which they are trained presents a flaw: human beings do not characterize a situation or a person in the same way when they know that their opinion will be used as part of a judgment.

Ultimately, artificial intelligence turns out to be much harsher than human beings, even when their prejudices are taken into account. This is why its use to classify data might ultimately be a ticking time bomb, particularly if AI models are not trained properly.

Source : ZDNet.com

