The huge genre of fiction spans everything from fantasy and romance to science fiction and historical fiction. Somehow, fiction is much of what people want to read and write.
Are users provoking AI to make things up, too?
It seems that even though people complain about computers hallucinating, the results might better be described as confabulations or fantasies. Confabulations fill the spaces between known facts with plausible connector stories. Fantasies are made-up possibilities in an imaginary world, ranging from the idea of dating someone to the strange “powers and forces” of the fantasy genre.
Like much of what people say, Generative AI (GenAI) results are based mostly on truth. As when listening to a friend’s story at a bar, you want to check it against other reliable sources before completely accepting it or passing it along to others. The same process is critical for what we learn from GenAI.
In the beginning, computer scientists used computers to calculate accurate trajectories or add up columns of accounting spreadsheets; both are definitive calculations. People still expect computers to give accurate, specific results. But all the while, AI researchers were striving to make computers produce creative results about things that might be. When GenAI interprets instructions about how to combine constraints of topic, question, and style of presentation, that is not a deterministic act. When it says something to please you, that is a creative act. When it lists possible solutions, that is a creative act. And people are drawn to test the limits of things, including GenAI.
So inquisitive users provoke GenAIs by asking questions with dubious premises or difficult-to-find answers. People can be skeptical of the new, so they probe it for weaknesses. Maybe people shouldn’t be surprised, then, when GenAI stretches the truth as it prioritizes a readable, plausible, or persuasive response to their prompts. And there is money to be made by influencers who find, and show off, troubling confabulations that GenAI presents as fact.
For most of a decade, large language models (LLMs) were useful to computer scientists for analyzing language and other communication. Then came GenAI, listening to user requests for the approach, tone, and format of a response. It created fluid and persuasive answers. It delivered LLMs as powerfully attractive AI, accessible to all. By 2025, this new GenAI front end to LLMs had already reached some 500 million daily users.
While people love the continuity of GenAI responses, they are still perplexed when computers produce anything but correct answers. GenAI’s focus on story and continuity has enchanted users; drawn in by its rhetorical approach, they forget that perfect stories need to be questioned.
People placing more belief in GenAI results can be good, too. One study particularly impressed me: when people who believed in conspiracy theories listened to a person trying to debunk their unfounded ideas, their views didn’t change. But when the debunking came from GenAI, which showed them material contradicting their theories, their conspiracy beliefs declined (“Durably reducing conspiracy beliefs through dialogues with AI,” Science, Sept. 13, 2024). People seem to default to believing computers are factual and unbiased.
Being confused by confabulations is not good, however. As with stories told by people, how much can we tolerate a system that prioritizes making things memorable over saying only what it is confident is true? Often the mistakes come from the tone we ask it to assume. The curious user may even taunt an AI with questions we know are not yet resolved, or about knowledge we suspect it has no access to.
Still, the field is working to make AI more accurate, and commercial companies now supply tools that review GenAI results and check for real sources, making fact-checking AI their reason for being. But maybe it is more important to learn from AI’s mistakes and omissions, reflecting on what is unusual in our own questions.
I look forward to a world where we don’t just delegate to machines, but also learn from them.

Ted Selker is a computer scientist and student of interfaces.