January 21, 2025

Why Only Asking AI For Answers Is Not A Good Idea

The problem with prompting an AI large language model with a question is that the results frequently contain fabricated information.

Gone are the days when we took the time to search for something on Google and then scrolled through various sources to find an answer.

Now it’s far too tempting to simply ask the likes of ChatGPT for a quick and comprehensive answer. Be warned, though: there is a huge caveat to this approach.

The problem with prompting an artificial intelligence (AI) large language model (LLM) with a question is that the answers often contain false information the model has fabricated. This made-up information is known as an AI hallucination.

Yep, even ChatGPT gets a little delulu, as the kids say these days. (That’s short for delusional).

MyBroadband notes that these hallucinations occur for multiple reasons, one being that the LLM has not been trained on enough data relative to the range of information it is expected to generate for users. In those cases the model effectively falls back on guessing, which means you can end up with some off answers.

If the model is trained on biased data, that is, limited data that paints an inaccurate picture, it will produce less accurate predictions than one trained on a more complete data set.

Then there is also the small problem of AI not having the real-world context and history that a human might have, which can confuse its reasoning.

A lack of grounding, or the inability of an AI to understand real-world knowledge and factual information, can also cause a model to generate information that seems plausible but is factually incorrect.

For example, when the first demo of Google’s Bard LLM was asked, "What new discoveries from the James Webb Space Telescope can I tell my 9-year-old about?", it answered incorrectly that the telescope "took the first pictures of a planet outside of our own solar system."

Astronomers soon pointed out that the first photo of an exoplanet was actually taken back in 2004, while the James Webb Space Telescope was only launched in 2021. Rookie error, AI.

Another factor is whether or not the data used to train the LLM is accurate.

The internet is full of false claims, anecdotal evidence and straight-up inaccurate reporting, so when a model is trained on data from websites – like Wikipedia or Reddit – that may contain inaccurate information, it is more likely to hallucinate.

Another problem is that LLMs have no way to fact-check the information they present: when generating an answer, they simply produce the next word or phrase that seems most likely to fit.
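
To make that last point concrete, here is a minimal Python sketch (not from the MyBroadband piece, and assuming the freely available Hugging Face transformers library and the small gpt2 model) of what "producing the next word" looks like. The model only ranks which words are statistically likely to come next; nowhere does it check whether any of them are true.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load a small, publicly available language model (assumption: gpt2,
# downloaded via the Hugging Face hub).
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The first picture of a planet outside our solar system was taken by"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # Scores for the single next token, given everything in the prompt so far.
    next_token_logits = model(**inputs).logits[0, -1]

# The model's "answer" is simply its highest-probability continuations,
# whether or not they happen to be factually correct.
probs = next_token_logits.softmax(dim=-1)
top = torch.topk(probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {prob:.1%}")

Whichever names come out on top are just plausible-sounding continuations, which is exactly why a confident-sounding answer can still be wrong.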

AI system developers have their work cut out for them – trying to improve the accuracy of the outputs produced by their LLMs – but hallucinations still happen often enough that you should think twice before trusting what ChatGPT (or the like) puts down for you.

[Source: MyBroadband]