Kent Academic Repository

People defer to AI moral advice, but not blindly

Landes, Ethan, Francis, Kathryn, Everett, Jim A.C. (2026) People defer to AI moral advice, but not blindly. Cognition. ISSN 0010-0277. E-ISSN 1873-7838. (In press) (Access to this publication is currently restricted. You may be able to access a copy if URLs are provided) (KAR id:113194)

PDF Author's Accepted Manuscript
Language: English

Restricted to Repository staff only


Abstract

As AI large language models (LLMs) become increasingly embedded in everyday technologies, should we be concerned about their capacity to influence human beliefs - particularly in the moral domain? Being persuaded because one is convinced by LLM-generated reasons can support the moral and intellectual growth of users, while being persuaded because one defers to the LLM can prevent, or even reverse, growth and understanding. In three studies, we investigate whether and how people revise their moral judgments after receiving advice from LLMs. In Study 1, we find that despite rating human advisors as more trustworthy, participants were equally persuaded by LLMs in everyday moral dilemmas. In Study 2, we used a methodologically realistic paradigm in which participants interacted with a genuine LLM, finding that the LLM's past performance and judged trustworthiness had no effect on its persuasiveness in everyday moral dilemmas. In Study 3, participants interacted with an LLM that defended its moral recommendation with good reasons, no reasons, or bad (i.e., patently absurd) reasons. While high-quality reasons did not increase persuasion relative to no reasons, bad reasons may actively undermine it. Our findings suggest that users defer to the LLM on a response-by-response basis, not based on past performance or the presence of high-quality reasons alone. That people defer to AI moral advice, even if not blindly, raises concerns about the effects of AI moral advisors - a heuristic of "this advice seems good enough" is not the way we should approach moral advice.

Item Type: Article
Uncontrolled keywords: artificial moral advisors; AI persuasion; experimental epistemology; everyday moral dilemmas
Subjects: B Philosophy. Psychology. Religion > BF Psychology
Institutional Unit: Schools > School of Psychology
Funders: Engineering and Physical Sciences Research Council (https://ror.org/0439y7842)
Leverhulme Trust (https://ror.org/012mzw131)
Depositing User: Jim Everett
Date Deposited: 26 Feb 2026 09:40 UTC
Last Modified: 02 Mar 2026 15:44 UTC
Resource URI: https://kar.kent.ac.uk/id/eprint/113194 (The current URI for this page, for reference purposes)

University of Kent Author Information

Everett, Jim A.C.

Creator's ORCID: https://orcid.org/0000-0003-2801-5426
CRediT Contributor Roles: Project administration, Conceptualisation, Writing - review and editing, Funding acquisition, Formal analysis
