Asaduzzaman, Md, Giorgi, Ioanna, Masala, Giovanni Luca (2025) Filtering hallucinations and omissions in Large Language Models through a cognitive architecture. In: 2025 IEEE Symposium on Computational Intelligence in Natural Language Processing and Social Media (CI-NLPSoMe Companion). . pp. 1-5. IEEE (doi:10.1109/CI-NLPSoMeCompanion65206.2025.10977857) (KAR id:110125)
PDF: Author's Accepted Manuscript (574kB). Language: English.
Official URL: https://doi.org/10.1109/CI-NLPSoMeCompanion65206.2...
Abstract
While Large Language Models (LLMs) have outpaced other recent technological advances, challenges such as hallucinations and omissions persist across all LLMs due to their underlying architecture and model training. Hallucinations refer to instances where the model generates incorrect, fabricated, or ungrounded information. Omissions occur when the model fails to provide certain details or skips relevant information in its response.
This paper proposes a novel hybrid methodology to mitigate these phenomena by integrating an LLM (GPT-3.5) with an external brain-inspired cognitive architecture. Unlike classical approaches, our hybrid system leverages mechanisms for long-term memory, structured reasoning, and multi-modal learning, and presents further opportunities for improving LLMs with continuous learning, multilingual skills, and focus of attention, without ad hoc fine-tuning. The hybrid system was tested and compared with two standalone LLMs (GPT-3.5, Gemini) through simulated open dialogues that mimic daily conversations. These tests involved implicit conversational questions or statements on topics like social contexts and basic knowledge, e.g., discussing animals or comparing numbers.
Our proposed model reduced hallucinations and omissions compared to the standalone LLMs on the same benchmark dataset. Specifically, reductions were observed in (i) hallucinations: 33.85% over GPT-3.5 and 37.48% over Gemini; (ii) omissions: 29.80% over GPT-3.5 and 27.20% over Gemini; (iii) instruction loss: 8.13% over GPT-3.5 and 4.68% over Gemini.
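The abstract reports only percentage reductions, not raw error counts. As a minimal sketch of how such a relative reduction is typically computed (an assumption about the metric, not the authors' stated formula, and with hypothetical counts chosen purely for illustration):

```python
def relative_reduction(baseline_errors: float, hybrid_errors: float) -> float:
    """Percentage reduction of the hybrid system's error count
    relative to a standalone baseline LLM (assumed metric)."""
    return (baseline_errors - hybrid_errors) / baseline_errors * 100

# Hypothetical counts: 100 hallucinations for the baseline vs. 66.15
# for the hybrid would yield the 33.85% figure quoted over GPT-3.5.
print(round(relative_reduction(100, 66.15), 2))  # 33.85
```

Any pair of counts in the same ratio produces the same percentage, so this formula alone does not recover the underlying benchmark totals.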
| Item Type: | Conference or workshop item (Proceeding) |
|---|---|
| DOI/Identification number: | 10.1109/CI-NLPSoMeCompanion65206.2025.10977857 |
| Uncontrolled keywords: | LLM, hallucinations, omissions, cognitive models |
| Subjects: | Q Science > QA Mathematics (inc Computing science) > QA 76 Software, computer programming > QA76.87 Neural computers, neural networks |
| Institutional Unit: | Schools > School of Computing |
| Former Institutional Unit: | There are no former institutional units. |
| Depositing User: | Ioanna Giorgi |
| Date Deposited: | 30 May 2025 09:55 UTC |
| Last Modified: | 22 Jul 2025 09:23 UTC |
| Resource URI: | https://kar.kent.ac.uk/id/eprint/110125 (The current URI for this page, for reference purposes) |
ORCID: https://orcid.org/0000-0001-9583-6959