Gupta, Shivangi, Arief, Budi, de Lemos, Rogério (2026) Agentic AI vs Non-Agentic AI: Motivation, Security Implications, and Research Foundations. In: 2026 21st European Dependable Computing Conference Companion Proceedings (EDCC-C). IEEE CPS (In press) (KAR id:114337)
PDF (Pre-print, 173kB) | Language: English
Abstract
A significant change in the development and application of artificial intelligence (AI) systems is the transition from non-agentic to agentic AI. Non-agentic AI systems, such as prompt-based language models and classical machine learning models, operate reactively, producing outputs only in response to inputs; they lack long-term memory, long-term goals, and the capacity to act independently. Agentic AI systems, on the other hand, are designed to act more autonomously through goal-setting, multi-step planning, tool use, memory storage, and action execution in physical or virtual environments. This paper aims to explain the emergence of agentic AI, distinguish it from non-agentic AI, and examine the new security and governance challenges arising from this novel mode of operation. The approach draws on research about intelligent agents, large language model (LLM)-based agents, AI security, and governance frameworks. The paper also highlights how autonomous behaviour enlarges the AI attack surface, shifting security concerns from isolated model errors to risks involving decision-making logic, persistent memory, delegated permissions, and long-running agent behaviour. Finally, the paper argues that although agentic AI may increase the threat surface, it can be deployed responsibly provided that appropriate system-level safeguards are in place. This highlights the need for new or extended security and authorisation frameworks focusing specifically on agentic AI.
| Item Type: | Conference proceeding |
|---|---|
| Uncontrolled keywords: | Agentic AI, Autonomous systems, Trusted Delegation, AI safety, ML models |
| Subjects: | Q Science > QA Mathematics (inc Computing science) > QA 76 Software, computer programming, > QA76.87 Neural computers, neural networks |
| Institutional Unit: | Schools > School of Computing; Institutes > Institute of Cyber Security for Society |
| Former Institutional Unit: | There are no former institutional units. |
| Funders: | University of Kent (https://ror.org/00xkeyj56) |
| Depositing User: | Rogerio De Lemos |
| Date Deposited: | 05 May 2026 02:53 UTC |
| Last Modified: | 07 May 2026 13:41 UTC |
| Resource URI: | https://kar.kent.ac.uk/id/eprint/114337 (The current URI for this page, for reference purposes) |
| ORCID: | https://orcid.org/0000-0002-1830-1587 |