Hallucinations of neural networks: what they are, why they happen and what to do about it

Almost every day there is news about the achievements of neural networks. ChatGPT, Midjourney and other tools are among the most popular queries in Google Trends. It may seem that neural networks can already do everything, and that people should prepare for unemployment in the near future.

Machines do solve many problems, but their “brains” are far from perfect, and genuine creativity is still out of their reach. Frequent hallucinations make artificial intelligence an unreliable replacement for humans, which is especially dangerous in areas where human life and health are at stake.

Let’s figure out why and how neural networks make mistakes, and how to reduce their hallucinations at the user level.

How neural networks make mistakes and why

Opacity of results

It is usually impossible to explain exactly why a neural network produced a particular result. Although the general principles of its operation are known, the model’s internal “thinking” remains a black box. For example, you show a neural network a picture of a car, and it answers that it is a flower. It is useless to ask why the result came out that way.
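To make the point concrete, here is a minimal sketch in Python with NumPy. The class names, dimensions and random weights are invented for illustration and do not come from any real model. The network happily returns a label and a confidence score, but nothing in its weight matrices tells you why it chose that label:

```python
import numpy as np

# Toy two-layer classifier with fixed random weights, standing in for a
# trained network. The classes and sizes are hypothetical.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(16, 64))   # input -> hidden weights
W2 = rng.normal(size=(64, 3))    # hidden -> output weights
classes = ["car", "flower", "cat"]

def predict(x):
    h = np.maximum(x @ W1, 0)            # hidden layer with ReLU
    logits = h @ W2
    probs = np.exp(logits - logits.max())  # numerically stable softmax
    probs /= probs.sum()
    return classes[int(probs.argmax())], float(probs.max())

x = rng.normal(size=16)                  # stand-in for image features
label, confidence = predict(x)
print(f"prediction: {label} ({confidence:.0%} confident)")
# All you get back is a label and a number. The "reason" is smeared
# across thousands of weights in W1 and W2, none of which is readable
# as an explanation on its own.
```

Real models have millions or billions of such weights, which is exactly why the answer to “why a flower and not a car?” cannot simply be read off the model.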

For an ordinary user such an error is just a funny incident, but for business processes this unpredictability can lead to unpleasant consequences.

Such opacity is a problem for any business that wants to communicate openly with customers and explain the reasons behind refusals and other company decisions. Social networks and other platforms that entrust moderation entirely to robots are especially at risk.

For example, in 2021 Facebook’s* artificial intelligence (which uses a neural network to process data) deleted user posts containing a historical photo of a Soviet soldier with the Victory Banner over the Reichstag, judging it a violation of community rules. The company later apologized, explaining that because of the pandemic most of its content moderators were not working, and the automated check that replaced them led to the error.

YouTube has also reported shortcomings in AI moderation. During the pandemic, its neural networks overdid it so much that they removed twice as many videos as usual for inappropriate content (eleven million), of which 320 thousand were appealed and reinstated. For now, the digital world cannot cope without people.

Discrimination by neural networks

Sometimes artificial neural networks behave unfairly towards people of a certain nationality, gender or race, both in business and in law enforcement.

In 2014, Amazon used AI to evaluate job applications, and it turned out that the system consistently gave women lower scores. As a result, female candidates were more likely to be rejected, even though there were no objective reasons for it.

In the US, facial recognition systems have misidentified a large percentage of African Americans, some of whom were subsequently unjustly arrested.
