AI-Powered Lie Detectors Could Lead to a Storm of False Accusations

Although people lie a lot, they generally refrain from accusing others of lying, owing to social norms against false accusations and a basic respect for politeness. But artificial intelligence could shake up these conventions.

In a study recently published in iScience, researchers have shown that individuals are more likely to accuse others of lying when the accusation is first made by an AI. According to the authors, these findings illustrate the social implications of using AI systems to detect lies, and could help decision-makers weighing whether to deploy such techniques.

“Our society has strong and well-established norms when it comes to accusations of lying,” says the study’s lead author, Nils Köbis, a behavioral scientist at the University of Duisburg-Essen in Germany.

“It would take a lot of courage and evidence to publicly accuse others of lying. But our study shows that AI can become an excuse that people can hide behind, to avoid being held accountable for the consequences of their accusations.”

As the researchers point out, human society has long operated on the idea of trust by default, which explains why people tend to assume that what they hear is the truth. Because of this tendency to trust others, humans are ill-equipped to spot lies: previous studies have shown that individuals do no better than chance when asked to separate fact from fiction.

Köbis and his team wanted to determine whether the presence of artificial intelligence would change these established social norms and behaviors around accusations of lying.


To find out, the researchers asked 986 people to write two descriptions of what they planned to do over the weekend: one true and one false. The team then trained an algorithm on this data, producing an AI model that identified false statements 66 percent of the time, well above the roughly chance-level accuracy humans achieve.
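The article does not describe the model's architecture. For a concrete picture of how such a classifier can be built, here is a minimal sketch in Python using scikit-learn, with a simple TF-IDF representation and logistic regression standing in for whatever the researchers actually used; the file statements.csv and its columns text and is_false are hypothetical.

```python
# Minimal sketch of training a binary "lie detector" on labeled statements.
# The study's actual model is not described in this article; TF-IDF plus
# logistic regression is a simple stand-in. "statements.csv" is hypothetical.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

df = pd.read_csv("statements.csv")  # columns: "text", "is_false" (0/1)
X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["is_false"], test_size=0.2, random_state=42
)

# Bag-of-words features feeding a linear classifier.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(X_train, y_train)

# The study reports 66% accuracy; anything clearly above the 50%
# chance level already beats typical human performance.
print(f"Accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```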

The team then recruited more than 2,000 participants to act as judges, each reading a statement and deciding whether it was true or false. These people were divided into four groups: “control,” “forced,” “blocked,” and “choice.”

In the control group, participants answered “true” or “false” without the help of artificial intelligence. In the forced group, they always received the algorithm's prediction before making a decision. In the other two groups, participants had the option of requesting the prediction: those in the “blocked” group who asked for it never received it, while those in the “choice” group did. The logic of the four conditions is sketched below.
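For concreteness, here is how the four conditions determine who sees a prediction; the function and condition names are ours, not the paper's.

```python
# Who sees the AI's true/false prediction, by experimental condition.
# Names are illustrative, not taken from the paper.
def show_ai_prediction(condition: str, requested: bool) -> bool:
    """Return True if the participant sees the AI's prediction."""
    if condition == "control":
        return False          # never offered a prediction
    if condition == "forced":
        return True           # always shown a prediction
    if condition == "blocked":
        return False          # may request one, but never receives it
    if condition == "choice":
        return requested      # receives it only if requested
    raise ValueError(f"unknown condition: {condition}")
```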

The researchers found that participants in the control group were 46 percent accurate in deciding whether a statement was true or false. Only 19 percent of people in this group labeled the statements they read as false, even though they knew half of them were. This confirms that people usually avoid calling someone out for lying.

In the forced group, where the AI made a prediction whether participants wanted one or not, more than a third of participants labeled statements as false. This rate was significantly higher than in the control group and the “blocked” group, whose members could not get a prediction from the AI.


When the AI predicted that a statement was true, only 13 percent of participants said it was false. But when the same program claimed the statement was false, more than 40 percent of participants took the same view.

Algorithmic Risks

Additionally, among those who asked for and received predictions from the AI, 84 percent embraced the prediction and made an accusation of lying when the AI claimed the statement was false.

“This shows that once people have access to such an algorithm, they will rely on it and can change their behavior. If the algorithm says something is a lie, people will be willing to agree with it. This is worrying, and shows that we need to be careful with this technology,” says Köbis.

Interestingly, participants seemed reluctant to use AI as a lie-detection tool: in the “blocked” and “choice” groups, only a third of participants asked for the algorithm's prediction.

This surprised the researchers, because they had told participants beforehand that the algorithm could detect lies more effectively than humans. “This may stem from a particularly robust effect that we see in many studies: people are overconfident in their ability to recognize lies, even though humans are really bad at it,” Köbis said.

AI is known for its frequent errors and its tendency to reinforce biases. Based on the study's findings, Köbis suggests that policymakers reassess their willingness to use the technology in sensitive areas, such as asylum decisions at borders.


“There’s a lot of excitement around AI, and a lot of people think that algorithms are really effective, or even objective. I’m really worried that this could lead people to become overly reliant on these systems, even if they don’t work very well.”
