Stedman, James, Giorgi, Ioanna, Masala, Giovanni Luca (2026) A Cognitive Approach to Resolving Semantic Ambiguity in Quantifier Interpretation for Machine Translation. In: International Conference Proceedings. (In press) (Access to this publication is currently restricted. You may be able to access a copy if URLs are provided) (KAR id:114285)
PDF: Author's Accepted Manuscript (Language: English; restricted to repository staff only)
Official URL: https://www.icrconf.com/
Abstract
This paper presents the first cognitive modelling approach to addressing scope ambiguity in machine translation. Quantifier scope ambiguity arises when multiple quantifiers appear in a single sentence and the intended interpretation depends on their relative scope rather than merely their surface order or linear structure. This presents a challenge when translating between languages that differ in how scope relations are grammatically expressed. We study this problem in English-Japanese translation: English allows scope ambiguity, whereas Japanese typically requires scope to be made explicit, so a single English sentence may correspond to multiple candidate Japanese translations, determined by contextual cues. We train a cognitive architecture to perform scope interpretation and translation through procedural neural operations that manipulate contextual representations, rather than relying on surface-level statistical associations. The architecture learns the task from a small set of examples, using substantially less parallel data than conventional neural translation approaches. We evaluate the architecture on a controlled dataset of 259 test cases against Large Language Models (LLMs), which represent the current state of the art in related ambiguity tasks. In scope interpretation, the cognitive architecture achieves 85% accuracy, outperforming all tested LLMs, including deepseek-r1:8b (71%), gpt-oss:20b (69%), llama3.1:8b (53%), and gemma3:4b (14%). In translation, the architecture reaches 63% accuracy, whereas the LLMs range between 23% and 45%. Moreover, the architecture achieves near-maximum translation entropy, indicating that it successfully distributes translations across alternative interpretations. In contrast, the LLMs produce substantially lower entropy scores, revealing a strong bias towards a fixed interpretation.
The results demonstrate that cognitive modelling may provide a data-efficient and robust approach to handling scope ambiguity in machine translation.
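The translation-entropy metric mentioned above can be sketched as follows. This is an illustrative assumption, not the paper's published formulation: `normalized_entropy`, its arguments, and the normalization by log2 of the number of candidate interpretations are all hypothetical choices, showing only why even spreading across interpretations yields a value near 1 and a fixed-interpretation bias yields a value near 0.

```python
import math
from collections import Counter

def normalized_entropy(chosen_translations, n_interpretations):
    """Shannon entropy of a system's translation choices for one
    scope-ambiguous source sentence, normalized by the maximum
    possible entropy (log2 of the number of candidate interpretations).

    Returns 1.0 when choices are spread evenly across interpretations,
    and 0.0 when the system always picks the same interpretation.
    """
    counts = Counter(chosen_translations)
    total = len(chosen_translations)
    h = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return h / math.log2(n_interpretations)

# Two candidate Japanese renderings of one ambiguous English sentence:
# even spread -> entropy 1.0; fixed bias -> entropy 0.0.
print(normalized_entropy(["surface-scope", "inverse-scope"] * 2, 2))  # 1.0
print(normalized_entropy(["surface-scope"] * 4, 2))                   # 0.0
```

Under this reading, the architecture's "near-maximum entropy" means its translation choices track the context-determined interpretation rather than defaulting to one reading.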
| Item Type: | Conference proceeding |
|---|---|
| Subjects: | Q Science > QA Mathematics (inc Computing science) > QA 76 Software, computer programming, |
| Institutional Unit: | Schools > School of Computing |
| Former Institutional Unit: | There are no former institutional units. |
| Funders: | University of Kent (https://ror.org/00xkeyj56) |
| Depositing User: | Giovanni Masala |
| Date Deposited: | 01 May 2026 13:40 UTC |
| Last Modified: | 05 May 2026 15:33 UTC |
| Resource URI: | https://kar.kent.ac.uk/id/eprint/114285 |
ORCID: https://orcid.org/0000-0001-9583-6959