
How the OWASP Top 10 Risks for LLMs Evolved from 2023 to 2025: Lessons and Implications

The new 2025 OWASP Top 10 Risks for Large Language Models (LLMs) highlights critical shifts in AI security. Here's a summary of the new, expanded, and updated risks that are particularly interesting.

The release of the updated OWASP Top 10 Risks for Large Language Models (LLMs) 2025 reflects both how widely LLMs are now deployed and the vulnerabilities that have emerged in real-world deployments.

As someone deeply involved in AI security research, my observation is that the updates between the 2023 and 2025 versions are not just a reflection of technological shifts, but also a call to action for developers and security professionals. Below, I have summarized some of the larger changes and their respective implications for the industry.



Key Changes from 2023 to 2025

From Denial of Service to Unbounded Consumption

In 2023, Denial of Service (DoS) was narrowly defined as resource exhaustion caused by malicious queries. By 2025, it has evolved into Unbounded Consumption, a broader risk encompassing unexpected operational costs and resource mismanagement. This expansion reflects observed cases in which LLMs have inadvertently incurred significant infrastructure costs due to misaligned resource allocation, or in which inputs were deliberately crafted to induce a high resource load—an issue of particular concern for LLM deployment at scale.
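One common mitigation is to cap what any single request or user can consume. The sketch below is illustrative only: the `UsageGuard` class, the limit values, and the method names are assumptions for demonstration, not part of the OWASP guidance.

```python
# Illustrative per-user budget guard against unbounded consumption.
# Limits and class design are hypothetical examples.
from dataclasses import dataclass, field

MAX_TOKENS_PER_REQUEST = 2_000   # hard cap on a single completion
DAILY_TOKEN_BUDGET = 50_000      # per-user ceiling per day

@dataclass
class UsageGuard:
    used: dict = field(default_factory=dict)  # user_id -> tokens spent today

    def authorize(self, user_id: str, requested_tokens: int) -> int:
        """Return the token allowance for this request, or raise if the budget is spent."""
        spent = self.used.get(user_id, 0)
        remaining = DAILY_TOKEN_BUDGET - spent
        if remaining <= 0:
            raise RuntimeError(f"daily budget exhausted for {user_id}")
        return min(requested_tokens, MAX_TOKENS_PER_REQUEST, remaining)

    def record(self, user_id: str, tokens_spent: int) -> None:
        self.used[user_id] = self.used.get(user_id, 0) + tokens_spent

guard = UsageGuard()
allowance = guard.authorize("alice", requested_tokens=10_000)
guard.record("alice", allowance)
print(allowance)  # capped at 2000 by the per-request limit
```

The same pattern generalizes to dollar budgets or GPU-seconds; the point is that the ceiling is enforced before the model is called, not discovered on the invoice.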

System Prompt Leakage: A New Entry in 2025

Many systems were built on the assumption that system prompts were securely isolated; incidents of sensitive prompt data being inadvertently exposed proved otherwise. Some providers got ahead of the problem by publicly releasing their system prompts. However, several system prompts have been found to contain sensitive information, and leaked prompts have been used for reconnaissance to identify a model's blind spots and its susceptibility to other risks such as prompt injection and jailbreaking.
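A simple defensive habit is to treat the system prompt as public and scan it for embedded secrets before deployment. The sketch below is a minimal illustration with made-up patterns; real secret scanners cover far more credential formats.

```python
# Minimal pre-deployment check that a system prompt contains no obvious
# secrets. Patterns are illustrative assumptions, not an exhaustive list.
import re

SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),           # common API-key shape
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),  # inline key assignments
    re.compile(r"(?i)password\s*[:=]\s*\S+"),
]

def prompt_leak_findings(system_prompt: str) -> list[str]:
    """Return substrings of the prompt that look like embedded secrets."""
    findings = []
    for pattern in SECRET_PATTERNS:
        findings.extend(pattern.findall(system_prompt))
    return findings

safe = "You are a helpful support assistant for ExampleCorp."
risky = "You are a billing bot. api_key=sk-AAAAAAAAAAAAAAAAAAAAAAAA"
print(prompt_leak_findings(safe))   # []
print(prompt_leak_findings(risky))  # non-empty: key-shaped strings flagged
```

The deeper fix is architectural: credentials and business logic belong in the serving layer, not in text the model can be tricked into repeating.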

Expanded Focus on Excessive Agency

Initially identified in 2023 as a nascent concern, the Excessive Agency risk now highlights the unintended consequences of using LLMs as software agents to perform actions. With the proliferation of agent-based systems, this entry underscores the dangers of granting LLMs unchecked permissions, ranging from unauthorized purchases to altering system states without oversight.
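One standard way to rein in agent permissions is a default-deny allowlist combined with human approval for high-risk actions. The action names and the low/high-risk split below are hypothetical examples, not a prescribed taxonomy.

```python
# Illustrative allowlist plus human-approval gate for agent tool calls.
# Action names and the risk classification are assumptions for the example.
LOW_RISK_ACTIONS = {"search_docs", "summarize"}
HIGH_RISK_ACTIONS = {"make_purchase", "delete_record", "send_email"}

def execute_action(action: str, human_approved: bool = False) -> str:
    if action in LOW_RISK_ACTIONS:
        return f"executed {action}"
    if action in HIGH_RISK_ACTIONS:
        if human_approved:
            return f"executed {action} (human approved)"
        return f"blocked {action}: awaiting human approval"
    return f"rejected {action}: not on allowlist"  # default-deny for unknowns

print(execute_action("search_docs"))                       # runs unattended
print(execute_action("make_purchase"))                     # blocked
print(execute_action("make_purchase", human_approved=True))
print(execute_action("format_disk"))                       # rejected outright
```

The key design choice is that anything not explicitly classified is rejected, so a model inventing a new tool call fails closed rather than open.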

Vector and Embedding Weaknesses

The 2025 list introduces targeted guidance for Retrieval-Augmented Generation (RAG) and embedding-based methods. These techniques are increasingly used to ground LLM outputs in external data, but they also expose systems to novel attack vectors. The guidance helps mitigate risks such as embedding manipulation.
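One basic hardening step for RAG pipelines is to filter retrieved chunks by similarity before they reach the prompt, so an unrelated or poisoned document is less likely to be injected. This is a minimal sketch; the threshold value and the toy two-dimensional embeddings are assumptions for illustration.

```python
# Illustrative retrieval filter: drop candidates whose cosine similarity
# to the query embedding falls below a cutoff. Threshold is an assumption.
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

MIN_SIMILARITY = 0.75  # illustrative cutoff, tuned per embedding model

def filter_retrieved(query_vec, candidates):
    """Keep only (doc, embedding) pairs similar enough to the query."""
    return [doc for doc, emb in candidates
            if cosine(query_vec, emb) >= MIN_SIMILARITY]

query = [1.0, 0.0]
docs = [("relevant", [0.9, 0.1]), ("off-topic", [0.0, 1.0])]
print(filter_retrieved(query, docs))  # ['relevant']
```

A threshold alone will not stop a well-crafted embedding attack, but it removes the cheapest class of manipulation and pairs naturally with provenance checks on the document store.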

 

The Data Speaks for Itself

Metrics from the field illustrate why these updates are critical:

Cost Overruns: In 2024, enterprises reported that 15% of operational costs for LLMs stemmed from uncontrolled resource usage, a figure that aligns with the expanded definition of Unbounded Consumption.


Breach Statistics: There were over 30 documented cases in 2024 involving System Prompt Leakage, exposing sensitive data such as API keys and operational workflows.


Incident Growth: Security teams have seen a 40% increase in attacks targeting RAG pipelines, particularly through compromised embeddings.

 

Implications for the Industry

The evolution of this list is a reminder that securing LLMs is not a one-time activity but an ongoing effort. The 2025 updates call for proactive measures:

Continuous Security Testing and Red Teaming: Simulating attacks such as prompt injections or embedding manipulation should be standard practice.


Establishing More Rigorous Cost Control Mechanisms: Monitoring and throttling resource-intensive operations can prevent runaway expenses.


Human-in-the-Loop Systems: Critical decisions and high-risk operations must involve human oversight to mitigate the risks associated with excessive agency.


A Community Effort: The 2025 Top 10 is a result of extensive collaboration across sectors, reflecting the collective wisdom of red teamers, developers, researchers, and security professionals. 

 

A Milestone and Roadmap

The 2025 OWASP Top 10 serves as both a milestone and a roadmap. By adopting its recommendations, we can foster safer AI applications and build trust in LLM technologies. I would encourage everyone involved in AI development and deployment to review the updated list and integrate its insights into your security and risk strategies.

Download the 2025 OWASP Top 10 here. 
