Kent Academic Repository

People defer to AI moral advice, but not blindly

Landes, Ethan, Francis, Kathryn, Everett, Jim A.C. (2026) People defer to AI moral advice, but not blindly. Cognition, 272 . Article Number 106504. ISSN 0010-0277. E-ISSN 1873-7838. (doi:10.1016/j.cognition.2026.106504) (KAR id:113194)

Official URL:
https://doi.org/10.1016/j.cognition.2026.106504

Abstract

As AI large language models (LLMs) become increasingly embedded in everyday technologies, should we be concerned about their capacity to influence human beliefs, particularly in the moral domain? Being persuaded because one is convinced by LLM-generated reasons can support the moral and intellectual growth of users, while being persuaded because one defers to the LLM can prevent, or even reverse, growth and understanding. In three studies, we investigate whether and how people revise their moral judgments after receiving advice from LLMs. In Study 1, we find that despite rating human advisors as more trustworthy, participants were equally persuaded by LLMs in everyday moral dilemmas. In Study 2, we used a methodologically realistic paradigm in which participants interacted with a genuine LLM, finding that the LLM's past performance and judged trustworthiness did not affect its persuasiveness in everyday moral dilemmas. In Study 3, participants interacted with an LLM that defended its moral recommendation with good reasons, no reasons, or bad (i.e., patently absurd) reasons. While high-quality reasons did not increase persuasion relative to no reasons, bad reasons may actively undermine it. Our findings suggest that users defer to the LLM on a response-by-response basis, not on the basis of past performance or the presence of high-quality reasons alone. That people defer to AI moral advice, even if not blindly, raises concerns about the effects of AI moral advisors: a heuristic of "this advice seems good enough" is not how we should approach moral advice.

Item Type: Article
DOI/Identification number: 10.1016/j.cognition.2026.106504
Uncontrolled keywords: artificial moral advisors; AI persuasion; experimental epistemology; everyday moral dilemmas
Subjects: B Philosophy. Psychology. Religion > BF Psychology
Institutional Unit: Schools > School of Psychology
Funders: Engineering and Physical Sciences Research Council (https://ror.org/0439y7842)
Leverhulme Trust (https://ror.org/012mzw131)
Depositing User: Jim Everett
Date Deposited: 26 Feb 2026 09:40 UTC
Last Modified: 25 Mar 2026 03:51 UTC
Resource URI: https://kar.kent.ac.uk/id/eprint/113194 (The current URI for this page, for reference purposes)

University of Kent Author Information

Landes, Ethan.

Creator's ORCID: https://orcid.org/0000-0002-1186-1717
CRediT Contributor Roles:

Everett, Jim A.C.

Creator's ORCID: https://orcid.org/0000-0003-2801-5426
CRediT Contributor Roles: Formal analysis, Writing - review and editing, Conceptualisation, Project administration, Funding acquisition
