
Neural Network Hallucinations


Neural networks can quickly find the information you need, process documents, and even write a thesis. The problem is that, alongside truthful answers, they may also present made-up events as facts.

What Are Neural Network Hallucinations

These are AI “fantasies” unconnected to reality. Artificial intelligence can confuse or fabricate historical dates, alter the biographies of famous people, and invent scientific research that was never conducted. And it does so very convincingly.

For instance, when asked whether there were good places to dine on Beaufort Island in Hong Kong, ChatGPT answered yes and even named a restaurant, advising the user to book a table in advance. But Beaufort Island is uninhabited, since it is actually located in Antarctica; the chatbot simply invented a story aimed at tourists.

AI also generated stories for more bizarre queries, such as a hypothetical alien invasion in the Ural Mountains. The text contained a brief description of the events, precise dates, and conclusions about the impact of this invasion on humanity and the scientific community.

There have also been comical situations involving object-recognition systems. Tesla’s autopilot, for instance, couldn’t identify a horse-drawn carriage: the unfamiliar vehicle baffled the artificial intelligence.

Sometimes neural network hallucinations are alarming: the same Tesla onboard computer detected a person on an empty road. It’s easy to imagine how the driver felt at that moment. And Snapchat’s My AI chatbot posted a strange photo to stories, even though the bot had no story-posting feature. This inexplicable, unsettling “intelligent” behavior led some users to delete the app.

Still, none of this is critically harmful as long as it doesn’t involve tasks that demand precision: typically multi-stage problems in which a single error at any step leads to a wrong final answer.

Why Does This Happen

Since AI is not a searchable database with adjustable algorithms, the likelihood of errors is high.

Yandex developers identify two main reasons:

  • Pretraining on large datasets that do not always contain accurate information.
  • The working principles of the language model itself.

Let’s start with the first point. During training, AI “devours” a huge amount of information, including false, contradictory, unverified, and outdated data. Even so, that data isn’t always sufficient: ChatGPT, for example, currently has no internet access and is unaware of recent world events.

AI is built on the principle of neural connections in the human brain, but unlike humans it lacks critical thinking and opinions of its own. It cannot doubt, draw conclusions, or cross-check facts. It looks for answers in its associative memory, and if it can’t find them, it fabricates its own.

As for how the model is built, truthfulness is simply not part of a language model’s objective. Upon receiving a query, the program merely guesses the next word in the sequence, using statistical knowledge, until a coherent text is produced. And to improve this guessing, the AI needs to be fed even more data.
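To make this “guessing” concrete, here is a toy sketch of next-token sampling. The vocabulary and scores below are invented for illustration and have nothing to do with ChatGPT’s real weights; the point is only that the model always returns some statistically plausible token, true or not.

```python
import numpy as np

# Toy next-token scores for the prompt "The capital of France is".
# These numbers are invented for illustration only.
vocab = ["Paris", "Lyon", "a", "the", "Madrid"]
logits = np.array([4.0, 1.5, 0.5, 0.3, 1.0])

def sample_next_token(logits, temperature=1.0, rng=None):
    """Sample one token index from a softmax over the logits."""
    rng = rng or np.random.default_rng()
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# The model always produces *some* continuation -- the most statistically
# plausible one -- whether or not it happens to be factually true.
idx = sample_next_token(logits, temperature=0.8)
print(vocab[idx])
```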

What Can Be Done to Minimize Hallucinations

  • Be specific in queries – AI lacks abstract thinking, so set the task clearly; you can even include examples, which models respond to well.
  • Avoid rare words – this reduces the likelihood of incorrect results.
  • Set patterns – correct the model as you go and guide it step by step (this works well for programming tasks).
  • Add context – give introductory information up front: if you need an article about a film, for example, include the release year and cast (see the prompt sketch after this list).
  • Fact-check – AI can make logical errors, change the wording of quotes, mix up dates and, alongside useful text, generate nonsense.
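To show how these tips combine in practice, here is a minimal sketch of a request built along those lines, using the official openai Python package. The model name, system instruction, and prompt wording are illustrative assumptions, not a prescribed recipe.

```python
from openai import OpenAI  # assumes the official openai package is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A query rewritten along the lines above: explicit task, added context
# (release year, cast), and a clear pattern for the desired output.
messages = [
    {"role": "system",
     "content": "You are a film critic. If you are not sure of a fact, "
                "say so instead of guessing."},
    {"role": "user",
     "content": ("Write a 100-word review of the film 'Oppenheimer' (2023), "
                 "starring Cillian Murphy and Robert Downey Jr. "
                 "Follow this structure: one sentence of plot, two sentences "
                 "of analysis, one sentence of verdict.")},
]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat model; this name is an assumption
    messages=messages,
)
print(response.choices[0].message.content)
```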

Developers are also trying to solve the hallucination problem. OpenAI has decided to change strategy and “reward” the algorithm for each correct reasoning step rather than only for the final right answer. How well the new incentive system works will become clear after ChatGPT’s next update.
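As a rough conceptual sketch (not OpenAI’s actual training code), the difference between the two reward schemes can be written as two scoring functions, where step_is_valid stands in for a human label or a learned verifier:

```python
# Conceptual sketch only: function names and scoring rules are
# illustrative assumptions, not OpenAI's implementation.

def outcome_reward(final_answer: str, correct_answer: str) -> float:
    """Outcome supervision: reward depends only on the final answer."""
    return 1.0 if final_answer == correct_answer else 0.0

def process_reward(steps: list[str], step_is_valid) -> float:
    """Process supervision: every sound reasoning step earns credit,
    so flawed reasoning scores poorly even if the answer looks right."""
    if not steps:
        return 0.0
    return sum(1.0 for step in steps if step_is_valid(step)) / len(steps)

# Example: a solution whose middle step is wrong.
steps = ["2 + 2 = 4", "4 * 3 = 13", "13 - 1 = 12"]
valid = lambda s: s != "4 * 3 = 13"  # stand-in for a human label or verifier
print(process_reward(steps, valid))  # ~0.67: partial credit, not full reward
```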
