Agentic AI in DevOps: Redefining Autonomy, Risk and Resilience in 2025
- marcus670
- May 7
Agentic AI is fast becoming one of the most transformative forces in DevOps. These autonomous systems go beyond traditional automation by interpreting intent, planning actions, and dynamically responding to feedback. In 2025, agentic AI is no longer theoretical. It is actively deployed in software pipelines, handling tasks such as infrastructure provisioning, compliance checking, and incident response.
This evolution is not just about tools — it challenges how we structure responsibility, design systems, and balance control with autonomy. For DevOps professionals, the arrival of agentic AI demands a rethinking of how core principles like flow, feedback, and learning are implemented and maintained.

Reaffirming DevOps Principles in the Age of Autonomy
Rooted in systems thinking, DevOps is guided by three foundational principles: improving the flow of work, establishing fast feedback loops, and cultivating a culture of continual experimentation and learning. These tenets, championed by Gene Kim, Jez Humble, and Nicole Forsgren, provide the lens through which agentic AI should be adopted. The goal is not just to add intelligence but to ensure it aligns with a holistic, cross-functional approach to delivering value safely and quickly.
From Automation to Agency: AI as a Responsible Actor
AI agents today are being integrated into CI/CD workflows to assist with deployments, enforce policies, and even initiate rollbacks. Their ability to reason and act independently makes them qualitatively different from conventional automation. However, this also means they require accountability.
These agents must be treated as first-class participants in operational systems — complete with scoped permissions, audit trails, and defined responsibilities. Security experts at the 2025 RSA Conference called for rigorous identity and access management frameworks to ensure AI operates transparently and within trusted boundaries. This emphasis on governance supports the feedback loops DevOps relies on to detect, learn from, and adapt to system behaviour.
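The idea of an agent as a first-class participant can be sketched in a few lines: give each agent an explicit identity with scoped permissions, and record every attempted action (permitted or not) in an audit trail. This is an illustrative Python sketch, not any particular IAM product's API; the class and field names are hypothetical.

```python
import time
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    """An AI agent treated as a first-class principal with scoped permissions."""
    name: str
    allowed_actions: set = field(default_factory=set)

class AuditedExecutor:
    """Checks an agent's scope before acting and records an audit trail."""
    def __init__(self):
        self.audit_log = []

    def execute(self, agent, action, target):
        permitted = action in agent.allowed_actions
        # Every attempt is logged, including denied ones, so feedback
        # loops can detect agents probing beyond their boundaries.
        self.audit_log.append({
            "timestamp": time.time(),
            "agent": agent.name,
            "action": action,
            "target": target,
            "permitted": permitted,
        })
        if not permitted:
            raise PermissionError(f"{agent.name} is not scoped for '{action}'")
        return f"{action} on {target} executed"

# Example: a deployment agent may deploy and roll back, but nothing else.
deployer = AgentIdentity("deploy-agent", {"deploy", "rollback"})
executor = AuditedExecutor()
executor.execute(deployer, "rollback", "payments-service")
```

The point of logging denied attempts as well as successful ones is that the audit trail itself becomes a feedback signal about agent behaviour, not just a compliance artefact.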
Building Adaptive Infrastructure with Pulumi
The rise of agentic AI coincides with the growing popularity of infrastructure-as-code tools like Pulumi, which allow engineers to define and manage cloud infrastructure using languages such as TypeScript and Python. This makes infrastructure not only programmable but also accessible to intelligent agents.
Pulumi’s policy-as-code framework, CrossGuard, lets teams encode guardrails and compliance rules that apply to both humans and AI. As agents interact with infrastructure, these policies prevent configuration drift and security violations, enabling autonomy without sacrificing control. The result is a system that supports rapid, safe delivery — true to DevOps’ First Way.
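The core of the policy-as-code idea can be shown in plain Python (this is a self-contained sketch of the pattern, not the actual `pulumi_policy` CrossGuard SDK; resource shapes and rule names are invented for illustration): each guardrail inspects a proposed resource and reports a violation before the change is applied, regardless of whether a human or an agent proposed it.

```python
def no_public_buckets(resource):
    """Guardrail: object storage must not be publicly readable."""
    if resource["type"] == "storage-bucket" and resource.get("acl") == "public-read":
        return "bucket must not be public"
    return None

def required_owner_tag(resource):
    """Guardrail: every resource must declare an owner for accountability."""
    if "owner" not in resource.get("tags", {}):
        return "missing 'owner' tag"
    return None

GUARDRAILS = [no_public_buckets, required_owner_tag]

def validate(resources):
    """Run every guardrail against every proposed resource; collect violations."""
    violations = []
    for res in resources:
        for rule in GUARDRAILS:
            message = rule(res)
            if message:
                violations.append((res["name"], message))
    return violations

# The same rules apply whether the change comes from a human or an AI agent.
proposed = [
    {"name": "logs", "type": "storage-bucket", "acl": "public-read", "tags": {}},
    {"name": "vm-1", "type": "instance", "tags": {"owner": "platform-team"}},
]
print(validate(proposed))  # → [('logs', 'bucket must not be public'), ('logs', "missing 'owner' tag")]
```

In CrossGuard proper, rules like these are packaged as a policy pack and evaluated during `pulumi preview` and `pulumi up`, so violations block a deployment before drift or exposure occurs.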
Securing by Design: IriusRisk and Real-Time Threat Modelling
Security in the age of agentic AI must start earlier in the lifecycle. IriusRisk brings threat modelling into the development pipeline, automatically generating architecture-based risk profiles as systems evolve. When integrated with AI agents, this allows for real-time detection of security concerns and even automated remediation proposals.
Instead of bottlenecking delivery, security becomes a continuous, collaborative process. Agents can trigger model updates, assess risks, and suggest improvements — enabling fast, meaningful feedback while supporting a culture of learning and shared responsibility.
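A toy version of this assess-and-propose loop might look as follows. This is a hypothetical sketch of the pattern, not IriusRisk's API: the risk factors, weights, and threshold are all invented for illustration.

```python
# Hypothetical severity weights for common architectural risk factors.
RISK_WEIGHTS = {
    "internet_facing": 5,
    "stores_pii": 4,
    "no_encryption": 3,
    "shared_credentials": 4,
}

def assess(component):
    """Score a component's risk from its declared factors; flag high scores."""
    score = sum(RISK_WEIGHTS.get(f, 1) for f in component["factors"])
    return {
        "component": component["name"],
        "score": score,
        "needs_review": score >= 7,  # illustrative threshold
    }

def remediation_proposals(component):
    """An agent-style suggestion list derived from the detected factors."""
    suggestions = {
        "no_encryption": "enable encryption at rest and in transit",
        "internet_facing": "place behind an authenticated gateway",
    }
    return [suggestions[f] for f in component["factors"] if f in suggestions]

api = {"name": "public-api", "factors": ["internet_facing", "stores_pii"]}
print(assess(api))                 # score 9 → flagged for review
print(remediation_proposals(api))  # → ['place behind an authenticated gateway']
```

The value of the pattern is the feedback timing: the risk profile updates as the architecture changes, so a proposal reaches the developer while the change is still cheap to make.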
Platform Engineering: The Operating Environment for AI Agents
Internal developer platforms (IDPs) provide the scaffolding that allows AI agents to operate safely at scale. These platforms enforce standardisation, embed compliance, and accelerate developer feedback — all while giving agents the controlled environments they need to perform tasks reliably.
By integrating Pulumi and IriusRisk within these platforms, organisations ensure that agents and developers alike are aligned under the same governance and policy structures. This reduces risk and reinforces DevOps’ systems-thinking ethos, where tools, people, and processes work in tandem.
Meeting the Compliance Challenge
With the rise of agentic systems, regulatory frameworks like the UK’s Cyber Security and Resilience Bill are placing greater emphasis on explainability and accountability. DevOps teams must now monitor not just what AI agents do, but why they do it.
This demands new levels of observability. Policies, actions, and learning models must be transparent and traceable. For teams already practising DevOps at a high level, this is a natural extension of the commitment to system-wide visibility and continuous improvement.
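One concrete way to make the "why" traceable is a structured decision record emitted alongside every agent action. The schema below is a minimal, hypothetical sketch (field names and the example scenario are invented), showing how rationale, inputs, and the policies consulted can travel with the action itself.

```python
import json
import time

def decision_record(agent, action, rationale, inputs, policy_refs):
    """A traceable record capturing what an agent did and why."""
    return {
        "timestamp": time.time(),
        "agent": agent,
        "action": action,
        "rationale": rationale,           # the 'why', in auditable form
        "inputs": inputs,                 # evidence the agent acted on
        "policies_consulted": policy_refs,
    }

# Example: an incident-response agent explains a rollback it initiated.
record = decision_record(
    agent="incident-agent",
    action="rollback deployment v142",
    rationale="error rate exceeded 5% threshold for 10 minutes",
    inputs={"error_rate": 0.08, "window_minutes": 10},
    policy_refs=["rollback-policy-v3"],
)
print(json.dumps(record, indent=2))
```

Shipped to the same observability stack as metrics and traces, records like this let auditors and regulators reconstruct not just the sequence of actions but the reasoning and policy context behind each one.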
Working Alongside Intelligence: A Shared Future
As agentic AI becomes more capable, the role of DevOps engineers is shifting from operators to designers of intelligent systems. These engineers will increasingly collaborate with agents — delegating operational tasks while retaining strategic oversight.
The key to success lies in aligning these agents with DevOps values. That means embedding them in environments that support transparency, enforcing policies that promote safe behaviour, and continuously refining both the systems and the agents themselves.
The future of DevOps is not just about speed. It is about adaptability, trust, and shared autonomy. By staying grounded in its principles, DevOps can thrive in an era where intelligence is no longer just artificial — it is agentic.