Reaction to our blog on AI Hallucinations:

There were two reactions to our blog. One agreed with us that there must always be human oversight of information gathered by AI: a human should routinely “kick the tyres” on the information, i.e. verify the nature and accuracy of the individual databases, and not slant or “spin” any reports.

The other reaction was that our post was obviously misinformation, because computers are perfect and do not put “spin” on information the way humans do.

The following excerpts are taken from articles by IBM, Google Cloud, WIRED and CNN.

IBM: Rely on human oversight

“Making sure a human being is validating and reviewing AI outputs is a final backstop measure to prevent hallucinations. Involving human oversight ensures that, if the AI hallucinates, a human will be available to filter and correct it. A human reviewer can also offer subject matter expertise that enhances their ability to evaluate AI content for accuracy and relevance to the task.”
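To make that recommendation a little more concrete, here is a minimal sketch in Python of what a human review gate might look like. The names and workflow (the Draft record, request_human_review, publish) are purely our own illustration, not IBM's or any vendor's API; the only point is that nothing the AI produces goes out until a named person has checked and approved it.

```python
# Minimal sketch of a human-in-the-loop review gate (illustrative only).
# The record fields and function names are assumptions, not any product's API.

from dataclasses import dataclass


@dataclass
class Draft:
    question: str
    ai_answer: str
    approved: bool = False
    reviewer: str = ""
    notes: str = ""


def request_human_review(draft: Draft) -> Draft:
    """Show the AI output to a person and record their verdict."""
    print(f"Question:  {draft.question}")
    print(f"AI answer: {draft.ai_answer}")
    verdict = input("Approve this answer for publication? (y/n): ").strip().lower()
    draft.approved = verdict == "y"
    draft.reviewer = input("Reviewer name: ").strip()
    if not draft.approved:
        draft.notes = input("What is wrong or unverified? ").strip()
    return draft


def publish(draft: Draft) -> None:
    # Hard rule: nothing AI-generated goes out without a human sign-off.
    if not draft.approved:
        print(f"Held back: {draft.notes or 'reviewer did not approve'}")
        return
    print(f"Published, signed off by {draft.reviewer}.")


if __name__ == "__main__":
    draft = Draft(
        question="When was our company founded?",
        ai_answer="The company was founded in 1987 in Leeds.",
    )
    publish(request_human_review(draft))
```

However the gate is implemented, the design choice is the same one IBM describes: the human review is the final backstop, so the AI's answer is treated as a draft until someone with subject matter knowledge signs it off.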

Google Cloud: What are AI Hallucinations?

“AI hallucinations are incorrect or misleading results that AI models generate. These errors can be caused by a variety of factors, including insufficient training data, incorrect assumptions made by the model, or biases in the data used to train the model. AI hallucinations can be a problem for AI systems that are used to make important decisions, such as medical diagnoses or financial trading.”

WIRED: In Defense of AI Hallucinations

“It’s a big problem when chatbots spew untruths. But we should also celebrate these hallucinations as prompts for human creativity and a barrier to machines taking over.”

CNN Business: AI tools make things up a lot, and that’s a huge problem

“The reality, Venkatasubramanian said, is that large language models — the technology underpinning AI tools like ChatGPT — are simply trained to “produce a plausible sounding answer” to user prompts. “So, in that sense, any plausible-sounding answer, whether it’s accurate or factual or made up or not, is a reasonable answer, and that’s what it produces,” he said. “There is no knowledge of truth there.”

The AI researcher said that a better behavioural analogy than hallucinating or lying, which carries connotations of something being wrong or having ill-intent, would be comparing these computer outputs to the way his young son would tell stories at age four. “You only have to say, ‘And then what happened?’ and he would just continue producing more stories,” Venkatasubramanian said. “And he would just go on and on.””
