The500Feed.Live

Everything going on in AI - updated daily from 500+ sources

📄 Research · May 14, 2026

When Answers Stray from Questions: Hallucination Detection via Question-Answer Orthogonal Decomposition

Hallucination detection in large language models (LLMs) requires balancing accuracy, efficiency, and robustness to distribution shift. Black-box consistency methods are effective but demand repeated inference; single-pass white-box probes are efficient yet treat answer representations in isolation...


Source

http://arxiv.org/abs/2605.14449v1