The need for large amounts of data
Neural networks are very “gluttonous”: they require a huge amount of information for training, sometimes including personal data. Collecting and storing that data is neither easy, cheap, nor fast, and when there is not enough information for training, the neural network starts to make mistakes.
In 2022, for example, drivers of right-hand-drive cars began receiving fines because of AI. The neural network simply could not tell the difference between a person in a seat and an empty seat, so fines were issued for not wearing a seat belt. The developers attributed the problem to a lack of data: the model needs to be shown many right-hand-drive cars before it stops making such mistakes. Even that is no guarantee of accuracy, because the AI can also be confused by glare and reflections on the cameras.
Sometimes these failures are even comical. Tesla’s autopilot, for example, failed to recognize a vehicle rare in our time: a horse-drawn cart on the road. And sometimes a neural network’s glitches are scary. Once, the same Tesla “saw” a person ahead of it in an empty cemetery, which frightened the driver.
There are no clear answers
Neural networks sometimes cannot cope with a task that a five-year-old child would solve easily. Show a child a circle and a square and ask which is which, and the child will answer with certainty. A neural network will instead say that the image is 95% a circle and 5% a square.
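This “95% circle” behavior comes from the softmax layer most classifiers end with: the network produces raw scores, and softmax turns them into probabilities rather than a hard yes/no answer. A minimal sketch (the class names and score values here are invented for illustration):

```python
import numpy as np

def softmax(logits):
    """Convert raw model scores into probabilities that sum to 1."""
    exp = np.exp(logits - np.max(logits))  # subtract max for numerical stability
    return exp / exp.sum()

# Hypothetical raw scores from a shape classifier for ["circle", "square"]
logits = np.array([3.0, 0.06])
probs = softmax(logits)
print(dict(zip(["circle", "square"], probs.round(2))))  # → {'circle': 0.95, 'square': 0.05}
```

Even a very confident network never outputs exactly 100%: the exponentials in softmax are always positive, so every class keeps some nonzero probability.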
Risk of hacking and deception of the neural network
Neural networks are susceptible to hacking just like other systems. If you know a neural network’s weak points, you can change how it behaves: for example, making a car’s autopilot treat a red light as a green one.
Scientists from Israel and Japan tried to fool a neural network in an experiment: using makeup alone, they reduced the accuracy of face recognition to a minimum. Such systems had already been shown to fail against camouflage, glasses, or bright elements in the frame, but those disguises are very noticeable. The point of the experiment was therefore to check whether makeup alone, designed with the help of another neural network, could confuse the AI. Ordinary, haphazardly applied makeup did not cause such failures.
Another method is an adversarial attack, in which a very small amount of noise is added to an image. The neural network then begins to misidentify what is in the picture: a table instead of a car, a tree instead of a shopping cart, and so on. Even after ten years of research, there is no complete solution to this problem.
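The mechanics of such an attack can be sketched on a toy model. Below, a made-up linear “classifier” separates “car” from “table”; the attack nudges each input value by a small fixed step against the sign of the model’s weights (the idea behind the fast gradient sign method). All weights, inputs, and class names here are invented for illustration:

```python
import numpy as np

# Toy linear "classifier": positive score -> "car", negative -> "table".
# Weights and input are invented for illustration only.
w = np.array([0.9, -0.5, 0.3, 0.8, -0.2])
x = np.array([0.5, -0.2, 0.1, 0.4, 0.0])  # an input the model scores as "car"

def score(v):
    return float(w @ v)

def predict(v):
    return "car" if score(v) > 0 else "table"

# Adversarial perturbation: a small step per coordinate against the score,
# along the sign of its gradient w.r.t. the input (for a linear model, just w).
eps = 0.4
x_adv = x - eps * np.sign(w)

print(predict(x))      # "car"
print(predict(x_adv))  # the same model now says "table"
```

Each coordinate moves by only ±0.4, yet the small shifts all push the score in the same direction and the label flips. In a real image classifier the gradient plays the role of `w`, and the perturbation can be small enough to be invisible to a human.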