AI, Automation, and the Risk Gap: Why 71% say reducing risk is hard – and how agentic AI can help
Despite years of investment in vulnerability management tools and processes, most security teams still struggle to reduce real risk.
A new study from Enterprise Strategy Group (ESG), presented by Principal Analyst Tyler Shields and sponsored by Nucleus Security, sheds light on why.
The findings suggest that the underlying model of how we discover, prioritize, and fix exposures is no longer adequate – and in many cases, actively working against defenders.
Exposure vs. Vulnerability: The Language Gap That’s Costing Us
Security professionals often use the terms “vulnerability” and “exposure” interchangeably, but Shields argues that this imprecision is contributing to operational failure.
- Vulnerabilities refer to known weaknesses in software or systems, typically documented as CVEs.
- Exposures include any condition that increases risk – whether or not there’s a known CVE. Misconfigurations, identity misuse, and externally accessible assets all fall into this broader category.
The shift toward exposure management reflects the realities of modern infrastructure. Risk isn’t confined to known bugs. Security teams need to account for everything that creates an opening – regardless of whether it shows up in a scanner report.
Why Risk Reduction Is Getting Harder, Not Easier
According to ESG’s survey, 71% of organizations say risk reduction has become more difficult over the past two years. Several reasons emerged:
- Cloud complexity: Assets are now dynamic, short-lived, and harder to inventory. Traditional models of asset management no longer apply.
- Tool sprawl: Enterprises often run dozens of disconnected security tools. Each surfaces findings, but few offer a unified view of what matters most.
- Fixing is the bottleneck: Detection is abundant. Remediation is not. Most teams are overwhelmed by volume and default to triaging only the highest-severity items – often without sufficient context.
Exposure Cycles Are Too Slow to Matter
ESG asked respondents how often they complete a full threat and exposure analysis across their environment. The results were stark:
- Only 3% reported continuous exposure management.
- 80% said they conduct analysis monthly or less frequently.
For attackers operating on timelines measured in hours, these cycles are far too slow. Shields notes that unless organizations find ways to identify and address risk continuously, they will remain at a tactical disadvantage.
Prioritization Is Still Shallow and Largely Context-Blind
While the volume of identified issues continues to grow, prioritization practices have not kept pace. Most organizations still rely on severity scores, exploitability, and reachability as their primary triage methods. Very few are incorporating context such as:
- Business impact
- Asset criticality
- Environmental risk factors
This disconnect leads to inefficient workflows – treating every CVE the same, regardless of where it resides or what it affects.
Without integrated business and asset context, prioritization remains guesswork.
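To make the difference concrete, here is a minimal sketch of what context-aware scoring might look like in Python. The weights, fields, and example findings are illustrative assumptions, not the scoring model of any particular platform.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    cvss: float             # base severity, 0-10
    exploited: bool         # known exploitation in the wild
    reachable: bool         # reachable from an untrusted network zone
    asset_criticality: int  # 1 (lab box) .. 5 (revenue-critical system)
    business_impact: int    # 1 (negligible) .. 5 (regulatory/financial exposure)

def contextual_score(f: Finding) -> float:
    """Blend technical severity with business and asset context.
    Weights are illustrative; a real program would tune them."""
    technical = f.cvss / 10
    if f.exploited:
        technical = min(1.0, technical + 0.3)
    if not f.reachable:
        technical *= 0.5  # unreachable issues drop in priority, not to zero
    context = (f.asset_criticality + f.business_impact) / 10
    return round(100 * (0.6 * technical + 0.4 * context), 1)

findings = [
    Finding("CVE-2024-0001", 9.8, False, False, 1, 1),  # critical CVSS, lab asset
    Finding("CVE-2024-0002", 7.5, True, True, 5, 5),    # exploited, crown-jewel asset
]
for f in sorted(findings, key=contextual_score, reverse=True):
    print(f.cve_id, contextual_score(f))
# CVE-2024-0002 100.0
# CVE-2024-0001 37.4
```

Under a model like this, an actively exploited medium-severity flaw on a revenue-critical system outranks a critical-severity flaw on an isolated lab machine, which is exactly the inversion that severity-only triage misses.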
DIY Approaches Dominate But Often Fail
Many security teams attempt to build their own exposure management workflows using spreadsheets, SIEMs, CMDBs, or generic data platforms. Shields refers to this as the “DIY fabric” – stitching together data sources in-house to create context.
While technically possible, this path is expensive and fragile. Common problems include:
- Poor data normalization and deduplication
- Constant integration maintenance
- Lack of staff with the necessary data engineering skills
- High long-term cost relative to commercial platforms
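Even the smallest slice of that fabric, normalizing and deduplicating output from two scanners, shows why the maintenance burden compounds. The export formats below are hypothetical, but the pattern is representative:

```python
# Two scanners report the same issue in different shapes; a DIY fabric
# must normalize both into one schema and deduplicate on a shared key.
# These field names are invented export formats, not real product schemas.
scanner_a = [{"plugin": "CVE-2023-12345", "ip": "10.0.0.5", "sev": "High"}]
scanner_b = [{"vuln_id": "CVE-2023-12345", "host": "10.0.0.5", "score": 8.1}]

SEVERITY_MAP = {"Low": 3.0, "Medium": 5.0, "High": 8.0, "Critical": 9.5}

def normalize(record: dict) -> dict:
    """Map vendor-specific fields onto one canonical schema."""
    return {
        "cve": record.get("plugin") or record.get("vuln_id"),
        "asset": record.get("ip") or record.get("host"),
        "score": record.get("score") or SEVERITY_MAP[record["sev"]],
    }

deduped = {}
for record in scanner_a + scanner_b:
    finding = normalize(record)
    key = (finding["cve"], finding["asset"])
    # When two tools report the same exposure, keep the higher score
    if key not in deduped or finding["score"] > deduped[key]["score"]:
        deduped[key] = finding

print(list(deduped.values()))
# [{'cve': 'CVE-2023-12345', 'asset': '10.0.0.5', 'score': 8.1}]
```

Every new tool means another branch in normalize(), and every vendor schema change silently breaks one. Multiply that by dozens of tools and the "expensive and fragile" verdict follows directly.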
Platform Adoption Is Still Low, But That’s Changing
Only 26% of surveyed organizations have adopted a dedicated threat and exposure management platform. The main blockers?
- Integration complexity (49%)
- Implementation burden (47%)
Shields believes this will shift rapidly as buyers seek ways to scale exposure management without building from scratch. The platforms that succeed will focus less on dashboards and more on practical integration, normalization, and operational support.
What Role Does Automation Actually Play?
Much of the discussion around AI and automation is speculative or overly broad. Shields takes a more measured view: automation is essential, but must be anchored in real process improvement. The two most pressing areas for automation are:
- Prioritization: Automating how context is applied to exposures so analysts focus on the right issues.
- Remediation: Streamlining or directly executing fixes where appropriate.
Notably, 94% of respondents said they want autonomous remediation. Yet very few have adopted automated patching across critical systems.
The gap reflects a lack of trust in automation – not a lack of interest. Shields expects a gradual adoption curve, starting with low-risk systems and building confidence over time.
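A plausible way to walk that curve is a policy gate: fixes execute automatically only where the blast radius is small, and everything else queues for human approval. The tiers and threshold here are illustrative assumptions, not a recommended configuration:

```python
# Hypothetical policy gate for autonomous remediation. Start where
# failure is cheap; widen the gate as confidence grows.
AUTO_REMEDIATE_TIERS = {"dev", "test"}   # low-risk environments only, for now
AUTO_REMEDIATE_MAX_SCORE = 70            # above this, require human sign-off

def route_fix(asset_tier: str, risk_score: float) -> str:
    if asset_tier in AUTO_REMEDIATE_TIERS and risk_score <= AUTO_REMEDIATE_MAX_SCORE:
        return "auto-remediate"
    return "queue-for-approval"

print(route_fix("dev", 45.0))    # auto-remediate
print(route_fix("prod", 45.0))   # queue-for-approval
```

The appeal of encoding the policy this way is that expanding trust becomes a one-line change to the tier set rather than a re-architecture.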
What Comes Next: Merging Pre- and Post-Breach Intelligence
The final section of the webinar looked ahead to where the market is heading. Shields anticipates a convergence of left-of-boom (prevention) and right-of-boom (detection and response) data.
For example:
- Post-breach forensics could inform pre-breach prioritization logic.
- Historical exploit paths could shape more intelligent risk scoring.
- Real-world attacker behavior could refine remediation strategies.
This integration will enable a more accurate, adaptive view of organizational risk – one that reflects both theoretical exposure and observed exploitation.
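A minimal sketch of one such feedback loop, with invented incident data and an arbitrary boost factor:

```python
# Hypothetical feedback loop: exploitation observed in incident response
# (right of boom) boosts prioritization scores (left of boom).
observed_exploited = {"CVE-2023-12345", "CVE-2024-9999"}  # e.g., from IR cases

def feedback_adjusted(cve: str, base_score: float) -> float:
    """Boost exposures that real attackers have already used against us."""
    if cve in observed_exploited:
        return min(100.0, base_score * 1.5)
    return base_score

print(feedback_adjusted("CVE-2023-12345", 60.0))  # 90.0
print(feedback_adjusted("CVE-2025-0001", 60.0))   # 60.0
```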
Recommendations for Security Leaders
If you’re still managing exposures through spreadsheets, or relying on basic vuln scanners, Shields recommends the following steps:
- Start with asset inventory: You can’t manage what you can’t define.
- Evaluate platforms that integrate and normalize your data: Prioritize ease of deployment and extensibility.
- Stop prioritizing in isolation: Add business and infrastructure context.
- Automate small, low-risk fixes first: Build automation gradually.
- Track actual risk reduction outcomes: Measure risk retired, not just CVE counts (see the sketch below).
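On that last point, a toy example shows why counts mislead; every number here is fabricated for illustration:

```python
# Aggregate contextual risk of open exposures, snapshotted monthly.
# A rising finding count can still mean falling risk.
snapshots = {
    "2025-01": [92.0, 88.0, 40.0, 35.0],
    "2025-02": [88.0, 35.0, 30.0, 28.0, 25.0],
}

for month, scores in snapshots.items():
    print(month, "| open findings:", len(scores), "| aggregate risk:", sum(scores))
# 2025-01 | open findings: 4 | aggregate risk: 255.0
# 2025-02 | open findings: 5 | aggregate risk: 206.0  <- more findings, less risk
```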
Closing Thoughts
Threat exposure management is not a new concept, but it’s only now beginning to get the attention it deserves as a distinct discipline. Shields’ research shows that most security programs are still operating under outdated assumptions – over-relying on severity scores, underutilizing context, and exhausting their teams with manual work.
Fixing that requires more than a new tool. It demands a shift in how organizations define, measure, and operationalize risk.
See Nucleus in Action
Discover how unified, risk-based automation can transform your vulnerability management.