OpenAI warns of AI hallucination challenges


CALIFORNIA (Kashmir English): OpenAI has raised fresh concerns about the trustworthiness of AI chatbots, warning that hallucinations (coherent but incorrect answers) remain a significant problem despite advances in the technology.

According to a recently released research paper, large language models such as GPT-5 and ChatGPT produce plausible but false information because of shortcomings in the pretraining process and in how models are evaluated.

The paper gives examples in which a chatbot repeatedly supplied false answers about a researcher’s scholarly work and personal details.

The researchers argue that existing accuracy-based evaluation systems encourage models to “guess” rather than acknowledge uncertainty, since a guess can only raise an accuracy score while admitting ignorance never does.

They propose new evaluation approaches that penalize confident errors and reward appropriate expressions of uncertainty, which they argue would make AI outputs more reliable.
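To see why scoring rules change the incentive to guess, consider the expected score of answering versus abstaining. The sketch below is illustrative and not taken from the paper; the `expected_score` helper and the specific penalty values are assumptions chosen to make the incentive visible.

```python
# A minimal sketch (illustrative, not from the paper) of how grading rules
# shape a model's incentive to guess versus abstain.

def expected_score(p_correct: float, wrong_penalty: float) -> float:
    """Expected score for answering, given the probability of being correct.
    A correct answer scores 1, a wrong answer scores -wrong_penalty,
    and abstaining ("I don't know") scores 0."""
    return p_correct * 1.0 - (1.0 - p_correct) * wrong_penalty

for p in (0.9, 0.5, 0.2):
    accuracy_only = expected_score(p, wrong_penalty=0.0)  # status quo: errors cost nothing
    penalized = expected_score(p, wrong_penalty=2.0)      # confident errors are penalized
    print(f"p={p:.1f}  accuracy-only={accuracy_only:+.2f}  penalized={penalized:+.2f}")

# Under accuracy-only grading the expected score of answering is never
# negative, so guessing never hurts. With a penalty of 2, answering only
# beats abstaining when p exceeds wrong_penalty / (1 + wrong_penalty),
# i.e. 2/3 here; below that threshold, "I don't know" scores higher.
```

Under this kind of rule, a model unsure of the answer does better in expectation by abstaining, which is the behavior the researchers want evaluations to reward.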
