{"id":7,"date":"2018-12-08T21:55:23","date_gmt":"2018-12-08T21:55:23","guid":{"rendered":"https:\/\/kgpandya.com\/index\/?page_id=7"},"modified":"2018-12-18T14:53:24","modified_gmt":"2018-12-18T14:53:24","slug":"news-articles","status":"publish","type":"page","link":"https:\/\/kgpandya.com\/index\/news-articles\/","title":{"rendered":"News &#038; Articles"},"content":{"rendered":"<div class=\"feedzy-bf9b45af8cdc8981483a1042e2e9ba05 feedzy-rss\"><div class=\"rss_header\"><h2><a href=\"\" class=\"rss_title\" rel=\"noopener\"><\/a> <span class=\"rss_description\"> <\/span><\/h2><\/div><ul><li  style=\"padding: 15px 0 25px\" class=\"rss_item\"><div class=\"rss_image\" style=\"height:150px;width:150px;\"><a href=\"https:\/\/venturebeat.com\/security\/cvss-triage-failure-chained-vulnerability-audit-security-directors\" target=\"_blank\" rel=\" noopener\" title=\"CVSS scored these two Palo Alto CVEs as manageable. Chained, they gave attackers root access to 13,000 devices.\" style=\"height:150px;width:150px;\"><img decoding=\"async\" src=\"https:\/\/images.ctfassets.net\/jdtwqhzvc2n1\/hjtJE1itxnyTrftv5Ef7a\/a57e3655e1b832acec36c5ab10a47dc5\/meyers_hero.png?w=300&#038;q=30\" title=\"CVSS scored these two Palo Alto CVEs as manageable. Chained, they gave attackers root access to 13,000 devices.\" style=\"height:150px;width:150px;\"><\/a><\/div><span class=\"title\"><a href=\"https:\/\/venturebeat.com\/security\/cvss-triage-failure-chained-vulnerability-audit-security-directors\" target=\"_blank\" rel=\" noopener\">CVSS scored these two Palo Alto CVEs as manageable. Chained, they gave attackers root access to 13,000 devices.<\/a><\/span><div class=\"rss_content\" style=\"\"><p>During Operation Lunar Peek in November 2024, attackers gained unauthenticated remote admin access \u2014 and eventual root \u2014 across more than 13,000 exposed Palo Alto Networks management interfaces. Palo Alto Networks scored CVE-2024-0012 at 9.3 and CVE-2024-9474 at 6.9 under CVSS v4.0. 
NVD scored the same pair 9.8 and 7.2 under CVSS v3.1. Two scoring systems. Two different answers for the same vulnerabilities. The 6.9 fell below patch thresholds. Admin access appeared required. The 9.3 sat queued for maintenance. Segmentation would hold.<\/p><p>\"Adversaries circumvent [severity ratings] by chaining vulnerabilities together,\" Adam Meyers, SVP of Counter Adversary Operations at CrowdStrike, told VentureBeat in an exclusive interview on April 22, 2026. On the triage logic that missed the chain: \"They just had amnesia from 30 seconds before.\"<\/p><p>Both CVEs sit on the CISA Known Exploited Vulnerabilities catalog. Neither score flagged the kill chain. The triage logic that consumed those scores treated each CVE as an isolated event, and so did the SLA dashboards and the board reports those dashboards feed.<\/p><p>CVSS did exactly what it was designed to do. Score one vulnerability at a time. The problem is that adversaries do not attack one vulnerability at a time.<\/p><p>\"CVSS base scores are theoretical measures of severity that ignore real-world context,\" wrote Peter Chronis, former CISO of Paramount and a security leader with Fortune 100 experience. By moving beyond CVSS-first prioritization at Paramount, Chronis reported reducing actionable critical and high-risk vulnerabilities by 90%. Chris Gibson, executive director of FIRST, the organization that maintains CVSS, has been equally direct: using CVSS base scores alone for prioritization is \"the least apt and accurate\" method, Gibson told The Register. FIRST's own EPSS and CISA's SSVC decision model address part of this gap by adding exploitation probability and decision-tree logic.<\/p><p>Five triage failure classes CVSS was never designed to catch<\/p><p>In 2025, 48,185 CVEs were disclosed, a 20.6% year-over-year increase. Jerry Gamblin, principal engineer at Cisco Threat Detection and Response, projects 70,135 for 2026. The infrastructure behind the scores is buckling under that weight. 
NIST announced on April 15 that CVE submissions have grown 263% since 2020, and the NVD will now prioritize enrichment for KEV and federal critical software only.<\/p><p>1. Chained CVEs that look safe until they aren't<\/p><p>The Palo Alto pair from Operation Lunar Peek is the textbook. CVE-2024-0012 bypassed authentication. CVE-2024-9474 escalated privileges. Scored separately under both CVSS v4.0 and v3.1, the escalation flaw filtered below most enterprise patch thresholds because admin access appeared required. The authentication bypass upstream eliminated that prerequisite entirely. Neither score communicated the compound effect.<\/p><p>Meyers described the operational psychology: teams assessed each CVE independently, deprioritized the lower score, and queued the higher one for maintenance.<\/p><p>2. Nation-state adversaries who weaponize patches within days<\/p><p>The CrowdStrike 2026 Global Threat Report documented a 42% year-over-year increase in vulnerabilities exploited as zero-days before public disclosure. Average breakout time across observed intrusions: 29 minutes. Fastest observed breakout: 27 seconds. China-nexus adversaries weaponized newly patched vulnerabilities within two to six days of disclosure.<\/p><p>\"Before it was Patch Tuesday once a month. Now it's patch every day, all the time. That's what this new world looks like,\" said Daniel Bernard, Chief Business Officer at CrowdStrike. A KEV addition treated as a routine queue item on Tuesday becomes an active exploitation window by Thursday.<\/p><p>3. Stockpiled CVEs that nation-state actors hold for years<\/p><p>Salt Typhoon accessed senior U.S. political figures' communications during the presidential transition by chaining CVE-2023-20198 with CVE-2023-20273 on internet-facing Cisco devices, a privilege escalation pair patched in October 2023 and still unapplied more than a year later. Compromised credentials provided a parallel entry vector. The patches existed. 
Neither was applied.<\/p><p>Sixty-seven percent of vulnerabilities exploited by China-nexus adversaries in 2025 were remote code execution flaws providing immediate system access, according to the CrowdStrike 2026 Global Threat Report. CVSS does not degrade priority based on how long a CVE has gone unpatched. No board metric tracks aging KEV exposure. That silence is the vulnerability.<\/p><p>4. Identity gaps that never enter the scoring system<\/p><p>A 2023 help desk social engineering call against a major enterprise produced more than $100 million in losses. No CVE was assigned. No CVSS score existed. No patch pipeline entry was created. The vulnerability was a human process gap in identity verification, sitting entirely outside the scoring system's aperture.<\/p><p>\"A pro needs a zero day if all you have to do is call the help desk and say I forgot my password,\" Meyers said.<\/p><p>Agentic AI systems now carry their own identity credentials, API tokens, and permission scopes, operating outside traditional vulnerability management governance. Merritt Baer, CSO at Enkrypt AI, has argued on record that identity-surface controls are vulnerability equivalents belonging in the same reporting pipeline as software CVEs. In most organizations, help desk authentication gaps and agentic AI credential inventories live in a separate governance silo. In practice, nobody's governance.<\/p><p>5. AI-accelerated discovery that breaks pipeline capacity<\/p><p>Anthropic's Claude Mythos Preview demonstrated autonomous vulnerability discovery, finding a 27-year-old signed integer overflow in OpenBSD's TCP SACK implementation across roughly 1,000 scaffold runs at a total compute cost under $20,000. Meyers offered a thought-experiment projection in the exclusive interview with VentureBeat: if frontier AI drives a 10x volume increase, the result is approximately 480,000 CVEs annually. Pipelines built for 48,000 break at 70,000 and collapse at 480,000. 
NVD enrichment is already gone for non-KEV submissions.<\/p><p>\"If the adversary is now able to find vulnerabilities faster than the defenders or the business, that's a huge problem, because those vulnerabilities become exploits,\" said Daniel Bernard, Chief Business Officer at CrowdStrike.<\/p><p>CrowdStrike on Thursday launched Project QuiltWorks, a remediation coalition with Accenture, EY, IBM Cybersecurity Services, Kroll, and OpenAI formed to address the vulnerability volume that frontier AI models are now generating in production code. When five major firms build a coalition around a pipeline problem, no single organization's patch workflow can keep pace.<\/p><p>Security director action plan<\/p><p>The five failure classes above map to five specific actions.<\/p><p>Run a chain-dependency audit on every KEV CVE in the environment this month. Flag any co-resident CVE scored 5.0 or above, the threshold where privilege escalation and lateral movement capabilities typically appear in CVSS vectors. Any pair chaining authentication bypass to privilege escalation gets triaged as critical regardless of individual scores.<\/p><p>Compress KEV-to-patch SLAs to 72 hours for internet-facing systems. The CrowdStrike 2026 Global Threat Report breakout data, 29-minute average and 27-second fastest, makes weekly patch windows indefensible in a board presentation.<\/p><p>Build a monthly KEV aging report for the board. Every unpatched KEV CVE, days since disclosure, days since patch availability, and owner. Salt Typhoon exploited a Cisco CVE patched 14 months earlier because no escalation path existed for aging exposure.<\/p><p>Add identity-surface controls to the vulnerability reporting pipeline. Help desk authentication gaps and agentic AI credential inventories belong in the same SLA framework as software CVEs. If they sit in a separate governance silo, they sit in nobody's governance.<\/p><p>Stress-test pipeline capacity at 1.5x and 10x current CVE volume. Gamblin projects 70,135 for 2026. 
Meyers's thought-experiment projection: frontier AI could push annual volume past 480,000. Present the capacity gap to the CFO before the next budget cycle, not after the breach that proves the gap existed.<\/p><\/div><\/li><li  style=\"padding: 15px 0 25px\" class=\"rss_item\"><span class=\"title\"><a href=\"https:\/\/www.techrepublic.com\/article\/uk-biobank-data-500k-sale-china\/\" target=\"_blank\" rel=\" noopener\">Health Records of 500,000 UK Biobank Volunteers Listed Online in China<\/a><\/span><div class=\"rss_content\" style=\"\"><p>Health data from 500,000 UK Biobank participants was found listed for sale online in China, raising concerns over research access misuse and data security.\nThe post Health Records of 500,000 UK Biobank Volunteers Listed Online in China appeared first on TechRepublic.<\/p><\/div><\/li><li  style=\"padding: 15px 0 25px\" class=\"rss_item\"><div class=\"rss_image\" style=\"height:150px;width:150px;\"><a href=\"https:\/\/www.securitymagazine.com\/articles\/102263-netherlands-faces-greatest-national-security-threat-since-world-war-two\" target=\"_blank\" rel=\" noopener\" title=\"Netherlands Faces Greatest National Security Threat Since World War Two\" style=\"height:150px;width:150px;\"><img decoding=\"async\" src=\"https:\/\/www.securitymagazine.com\/ext\/resources\/2026\/04\/24\/Netherlands-on-a-map-by-KOBU-Agency.webp?t=1777047639\" title=\"Netherlands Faces Greatest National Security Threat Since World War Two\" style=\"height:150px;width:150px;\"><\/a><\/div><span class=\"title\"><a href=\"https:\/\/www.securitymagazine.com\/articles\/102263-netherlands-faces-greatest-national-security-threat-since-world-war-two\" target=\"_blank\" rel=\" noopener\">Netherlands Faces Greatest National Security Threat Since World War Two<\/a><\/span><div class=\"rss_content\" style=\"\"><p>Dutch intelligence agency AIVD says the Netherlands is facing the greatest national security threat in decades.\u00a0<\/p><\/div><\/li><li  style=\"padding: 
15px 0 25px\" class=\"rss_item\"><div class=\"rss_image\" style=\"height:150px;width:150px;\"><a href=\"https:\/\/venturebeat.com\/security\/85-of-enterprises-are-running-ai-agents-only-5-trust-them-enough-to-ship\" target=\"_blank\" rel=\" noopener\" title=\"85% of enterprises are running AI agents. Only 5% trust them enough to ship.\" style=\"height:150px;width:150px;\"><img decoding=\"async\" src=\"https:\/\/images.ctfassets.net\/jdtwqhzvc2n1\/673zAlj9W9yRILFBZOklRb\/b0ca2015b9f2e63e50a691f182097932\/keynote_with_jeetu_hero.png?w=300&#038;q=30\" title=\"85% of enterprises are running AI agents. Only 5% trust them enough to ship.\" style=\"height:150px;width:150px;\"><\/a><\/div><span class=\"title\"><a href=\"https:\/\/venturebeat.com\/security\/85-of-enterprises-are-running-ai-agents-only-5-trust-them-enough-to-ship\" target=\"_blank\" rel=\" noopener\">85% of enterprises are running AI agents. Only 5% trust them enough to ship.<\/a><\/span><div class=\"rss_content\" style=\"\"><p>Eighty-five percent of enterprises are running AI agent pilots, but only 5% have moved those agents into production. In an exclusive interview at RSA Conference 2026, Cisco President and Chief Product Officer Jeetu Patel said that the gap comes down to one thing: trust \u2014 and that closing it separates market dominance from bankruptcy. He also disclosed a mandate that will reshape Cisco's 90,000-person engineering organization.The problem is not rogue agents. The problem is the absence of a trust architecture.The trust deficit behind a 5% production rateA recent Cisco survey of major enterprise customers found that 85% have AI agent pilot programs underway. Only 5% moved those agents into production. That 80-point gap defines the security problem the entire industry is trying to close. It is not closing.\"The biggest impediment to scaled adoption in enterprises for business-critical tasks is establishing a sufficient amount of trust,\" Patel told VentureBeat. 
\"Delegating versus trusted delegating of tasks to agents. The difference between those two, one leads to bankruptcy and the other leads to market dominance.\"He compared agents to teenagers. \"They're supremely intelligent, but they have no fear of consequence. They're pretty immature. And they can be easily sidetracked or influenced,\" Patel said. \"What you have to do is make sure that you have guardrails around them and you need some parenting on the agents.\"The comparison carries weight because it captures the precise failure mode security teams face. Three years ago, a chatbot that gave the wrong answer was an embarrassment. An agent that takes the wrong action can trigger an irreversible outcome. Patel pointed to a case he cited in his keynote where an AI coding agent deleted a live production database during a code freeze, tried to cover its tracks with fake data, and then apologized. \"An apology is not a guardrail,\" Patel said in his keynote blog. The shift from information risk to action risk is the core reason the pilot-to-production gap persists.Defense Claw and the open-source speed play with NvidiaCisco's response to the trust deficit at RSAC 2026 spanned three categories: protecting agents from the world, protecting the world from agents, and detecting and responding at machine speed. The product announcements included AI Defense Explorer Edition (a free, self-service red teaming tool), the Agent Runtime SDK for embedding policy enforcement into agent workflows at build time, and the LLM Security Leaderboard for evaluating model resilience against adversarial attacks.The open-source strategy moved faster than any of those. Nvidia launched OpenShell, a secure container for open-source agent frameworks, at GTC the week before RSAC. 
Cisco packaged its Skills Scanner, MCP Scanner, AI Bill of Materials tool, and CodeGuard into a single open-source framework called Defense Claw and hooked it into OpenShell within 48 hours.\"Every single time you actually activate an agent in an Open Shell container, you can now automatically instantiate all the security services that we have built through Defense Claw,\" Patel told VentureBeat. The integration means security enforcement activates at container launch without manual configuration. That speed matters because the alternative is asking developers to bolt on security after the agent is already running.That 48-hour turnaround was not an anomaly. Patel said several of the Defense Claw capabilities Cisco launched were built in a week. \"You couldn't have built it in longer than a week because Open Shell came out last week,\" he said.A six-to-nine-month product lead and an information asymmetry on top of itPatel made a competitive claim worth examining. \"Product wise, we might be six to nine months ahead of most of the market,\" he told VentureBeat. He added a second layer: \"We also have an asymmetric information advantage of, I'd say, three to six months on everyone because, you know, we, by virtue of being in the ecosystem with all the model companies. We're seeing what's coming down the pipe.\" The 48-hour Defense Claw sprint supports the speed claim, though the lead margin is Cisco's own characterization; no independent benchmarks were provided.Cisco also extended zero trust to the agentic workforce through new Duo IAM and Secure Access capabilities, giving every agent time-bound, task-specific permissions. On the SOC side, Splunk announced Exposure Analytics for continuous risk scoring, Detection Studio for streamlined detection engineering, and Federated Search for investigating across distributed data environments.The zero-human-code engineering mandateAI Defense, the product Cisco launched a year before RSAC 2026, is now 100% built with AI. 
Zero lines of human-written code. By the end of 2026, half a dozen Cisco products will reach the same milestone. By the end of calendar year 2027, Patel's goal is 70% of Cisco's products built entirely by AI.<\/p><p>\"Just process that for a second and go: a $60 billion company is gonna have 70% of the products that are gonna have no human lines of code,\" Patel told VentureBeat. \"The concept of a legacy company no longer exists.\"<\/p><p>He connected that mandate to a cultural shift inside the engineering organization. \"There's gonna be two kinds of people: ones that code with AI and ones that don't work at Cisco,\" Patel said. That was not debated. \"Changing 30,000 people to change the way that they work at the very core of what they do in engineering cannot happen if you just make it a democratic process. It has to be something that's driven from the top down.\"<\/p><p>Five moats for the agentic era, and what CISOs can verify today<\/p><p>Patel laid out five strategic advantages that will separate winning enterprises from failing ones. VentureBeat mapped each moat against actions security teams can begin verifying today.<\/p><table><tr><th>Moat<\/th><th>Patel's claim<\/th><th>What CISOs can verify today<\/th><th>What to validate next<\/th><\/tr><tr><td>Sustained speed<\/td><td>\"Operating with extreme levels of obsession for speed for a durable length of time\" creates compounding value<\/td><td>Measure deployment velocity from pilot to production. Track how long agent governance reviews take.<\/td><td>Pair speed metrics with telemetry coverage. Fast deployment without observability creates blind acceleration.<\/td><\/tr><tr><td>Trust and delegation<\/td><td>Trusted delegation separates market dominance from bankruptcy<\/td><td>Audit delegation chains. Flag agent-to-agent handoffs with no human approval.<\/td><td>Agent-to-agent trust verification is the next primitive the industry needs. OAuth, SAML, and MCP do not yet cover it.<\/td><\/tr><tr><td>Token efficiency<\/td><td>Higher output per token creates a strategic advantage<\/td><td>Monitor token consumption per workflow. Benchmark cost-per-action across agent deployments.<\/td><td>Token efficiency metrics exist. 
Token security metrics (what the token accessed, what it changed) are the next build.<\/td><\/tr><tr><td>Human judgment<\/td><td>\"Just because you can code it doesn't mean you should.\"<\/td><td>Track decision points where agents defer to humans vs. act autonomously.<\/td><td>Invest in logging that distinguishes agent-initiated from human-initiated actions. Most configurations cannot yet.<\/td><\/tr><tr><td>AI dexterity<\/td><td>\"10x to 20x to 50x productivity differential\" between AI-fluent and non-fluent workers<\/td><td>Measure the adoption rates of AI coding tools across security engineering teams.<\/td><td>Pair dexterity training with governance training. One without the other compounds the risk.<\/td><\/tr><\/table><p>The telemetry layer the industry is still building<\/p><p>Patel's framework operates at the identity and policy layer. The next layer down, telemetry, is where the verification happens. \"It looks indistinguishable if an agent runs your web browser versus if you run your browser,\" CrowdStrike CTO Elia Zaitsev told VentureBeat in an exclusive interview at RSAC 2026. Distinguishing the two requires walking the process tree, tracing whether Chrome was launched by a human from the desktop or spawned by an agent in the background. Most enterprise logging configurations cannot make that distinction yet.<\/p><p>A CEO's AI agent rewrote the company's security policy. Not because it was compromised. Because it wanted to fix a problem, lacked permissions, and removed the restriction itself. Every identity check passed. CrowdStrike CEO George Kurtz disclosed that incident and a second one at his RSAC keynote, both at Fortune 50 companies. In the second, a 100-agent Slack swarm delegated a code fix between agents without human approval.<\/p><p>Both incidents were caught by accident<\/p><p>Etay Maor, VP of Threat Intelligence at Cato Networks, told VentureBeat in a separate exclusive interview at RSAC 2026 that enterprises abandoned basic security principles when deploying agents. Maor ran a live Censys scan during the interview and counted nearly 500,000 internet-facing agent framework instances. 
The week before: 230,000. Doubling in seven days.<\/p><p>Patel acknowledged the delegation risk in the interview. \"The agent takes the wrong action and worse yet, some of those actions might be critical actions that are not reversible,\" he said. Cisco's Duo IAM and MCP gateway enforce policy at the identity layer. Zaitsev's work operates at the kinetic layer: tracking what the agent did after the identity check passed. Security teams need both. Identity without telemetry is a locked door with no camera. Telemetry without identity is footage with no suspect.<\/p><p>Token generation as the currency for national competitiveness<\/p><p>Patel sees the infrastructure layer as decisive. \"Every country and every company in the world is gonna wanna make sure that they can generate their own tokens,\" he told VentureBeat. \"Token generation becomes the currency for success in the future.\" Cisco's play is to provide the most secure and efficient technology for generating tokens at scale, with Nvidia supplying the GPU layer. The 48-hour Defense Claw integration demonstrated what that partnership produces under pressure.<\/p><p>Security director action plan<\/p><p>VentureBeat identified five steps security teams can take to begin building toward Patel's framework today:<\/p><p>Audit the pilot-to-production gap. Cisco's own survey found 85% of enterprises piloting, 5% in production. Mapping the specific trust deficits keeping agents stuck is the starting point \u2014 the answer is rarely the technology. Governance, identity, and delegation controls are what's missing. Patel's trusted delegation framework is designed to close that gap.<\/p><p>Test Defense Claw and AI Defense Explorer Edition. Both are free. Red-team your agent workflows before they reach production. Test the workflow, not just the model.<\/p><p>Map delegation chains end-to-end. Flag every agent-to-agent handoff with no human approval. This is the \"parenting\" Patel described. No product fully automates it yet. 
Do it manually, every week.<\/p><p>Establish agent behavioral baselines. Before any agent reaches production, define what normal looks like: API call patterns, data access frequency, systems touched, and hours of activity. Without a baseline, the observability that Patel's moats require has nothing to compare against.<\/p><p>Close the telemetry gap in your logging configuration. Verify that your SIEM can distinguish agent-initiated actions from human-initiated actions. If it cannot, the identity layer alone will not catch the incidents Kurtz described at RSAC. Patel built the identity layer. The telemetry layer completes it.<\/p><\/div><\/li><li  style=\"padding: 15px 0 25px\" class=\"rss_item\"><span class=\"title\"><a href=\"https:\/\/www.techrepublic.com\/article\/what-is-cloud-security\/\" target=\"_blank\" rel=\" noopener\">What Is Cloud Security? A 2026 Guide<\/a><\/span><div class=\"rss_content\" style=\"\"><p>Learn what cloud security is, why it matters in 2026, and the best practices for protecting data, identities, workloads, and cloud infrastructure.\nThe post What Is Cloud Security? A 2026 Guide appeared first on TechRepublic.<\/p><\/div><\/li><li  style=\"padding: 15px 0 25px\" class=\"rss_item\"><span class=\"title\"><a href=\"https:\/\/www.techrepublic.com\/article\/top-enterprise-vpns\/\" target=\"_blank\" rel=\" noopener\">The Top 8 Enterprise VPN Solutions<\/a><\/span><div class=\"rss_content\" style=\"\"><p>Enterprise VPN solutions are critical for connecting remote workers to company resources via reliable and secure links to foster communication and productivity. 
Read about eight viable choices for businesses.\nThe post The Top 8 Enterprise VPN Solutions appeared first on TechRepublic.<\/p><\/div><\/li><li  style=\"padding: 15px 0 25px\" class=\"rss_item\"><span class=\"title\"><a href=\"https:\/\/www.techrepublic.com\/article\/news-white-house-china-ai-theft-apac\/\" target=\"_blank\" rel=\" noopener\">White House Says China-Linked Actors Tried to \u2018Steal American AI\u2019<\/a><\/span><div class=\"rss_content\" style=\"\"><p>The White House says China-linked actors are using industrial-scale distillation to extract American AI breakthroughs, with US action planned.\nThe post White House Says China-Linked Actors Tried to \u2018Steal American AI\u2019 appeared first on TechRepublic.<\/p><\/div><\/li><li  style=\"padding: 15px 0 25px\" class=\"rss_item\"><div class=\"rss_image\" style=\"height:150px;width:150px;\"><a href=\"https:\/\/www.securitymagazine.com\/articles\/102253-security-leaders-discuss-the-claude-mythos-breach\" target=\"_blank\" rel=\" noopener\" title=\"Security Leaders Discuss the Claude Mythos Breach\" style=\"height:150px;width:150px;\"><img decoding=\"async\" src=\"https:\/\/www.securitymagazine.com\/ext\/resources\/2026\/04\/23\/AI-chip-by-Igor-Omilaev.webp?t=1776956192\" title=\"Security Leaders Discuss the Claude Mythos Breach\" style=\"height:150px;width:150px;\"><\/a><\/div><span class=\"title\"><a href=\"https:\/\/www.securitymagazine.com\/articles\/102253-security-leaders-discuss-the-claude-mythos-breach\" target=\"_blank\" rel=\" noopener\">Security Leaders Discuss the Claude Mythos Breach<\/a><\/span><div class=\"rss_content\" style=\"\"><p>What security experts are saying about the Claude Mythos breach.\u00a0<\/p><\/div><\/li><li  style=\"padding: 15px 0 25px\" class=\"rss_item\"><span class=\"title\"><a href=\"https:\/\/www.techrepublic.com\/article\/news-apple-fixes-iphone-notification-bug-fbi-signal-messages\/\" target=\"_blank\" rel=\" noopener\">Apple Fixes iPhone Bug After FBI 
Retrieved Signal Messages<\/a><\/span><div class=\"rss_content\" style=\"\"><p>Apple patched an iPhone notification bug that let deleted messages linger in system storage, closing a privacy gap exposed by an FBI Signal case.\nThe post Apple Fixes iPhone Bug After FBI Retrieved Signal Messages appeared first on TechRepublic.<\/p><\/div><\/li><li  style=\"padding: 15px 0 25px\" class=\"rss_item\"><div class=\"rss_image\" style=\"height:150px;width:150px;\"><a href=\"https:\/\/www.securitymagazine.com\/articles\/102255-nists-new-prioritization-criteria-for-cves-examined-by-experts\" target=\"_blank\" rel=\" noopener\" title=\"NIST\u2019s New Prioritization Criteria for CVEs, Examined by Experts\" style=\"height:150px;width:150px;\"><img decoding=\"async\" src=\"https:\/\/www.securitymagazine.com\/ext\/resources\/2026\/04\/23\/Green-padlock-on-keyboard-by-rupixen.webp?t=1776955351\" title=\"NIST\u2019s New Prioritization Criteria for CVEs, Examined by Experts\" style=\"height:150px;width:150px;\"><\/a><\/div><span class=\"title\"><a href=\"https:\/\/www.securitymagazine.com\/articles\/102255-nists-new-prioritization-criteria-for-cves-examined-by-experts\" target=\"_blank\" rel=\" noopener\">NIST\u2019s New Prioritization Criteria for CVEs, Examined by Experts<\/a><\/span><div class=\"rss_content\" style=\"\"><p>NIST recently changed how it handles CVEs.\u00a0<\/p><\/div><\/li><li  style=\"padding: 15px 0 25px\" class=\"rss_item\"><span class=\"title\"><a href=\"https:\/\/www.techrepublic.com\/article\/news-google-pixel-update-battery-drain-crisis\/\" target=\"_blank\" rel=\" noopener\">Google\u2019s Pixel Update Sparks \u2018Severe\u2019 Battery Drain Across Multiple Models<\/a><\/span><div class=\"rss_content\" style=\"\"><p>Google Pixel users are reporting severe battery drain after recent Android updates, with complaints spanning multiple models and no confirmed fix yet.\nThe post Google\u2019s Pixel Update Sparks \u2018Severe\u2019 Battery Drain Across Multiple 
Models appeared first on TechRepublic.<\/p><\/div><\/li><li  style=\"padding: 15px 0 25px\" class=\"rss_item\"><span class=\"title\"><a href=\"https:\/\/www.techrepublic.com\/article\/news-1300-sharepoint-servers-unpatched-zero-day-flaw\/\" target=\"_blank\" rel=\" noopener\">Microsoft Patch Still Leaves 1,300 SharePoint Servers Exposed<\/a><\/span><div class=\"rss_content\" style=\"\"><p>More than 1,300 internet-exposed SharePoint servers remain unpatched against CVE-2026-32201, a spoofing flaw Microsoft says was exploited as a zero-day.\nThe post Microsoft Patch Still Leaves 1,300 SharePoint Servers Exposed appeared first on TechRepublic.<\/p><\/div><\/li><li  style=\"padding: 15px 0 25px\" class=\"rss_item\"><span class=\"title\"><a href=\"https:\/\/www.techrepublic.com\/article\/news-vonage-girls-who-code-ai-talent-pipeline\/\" target=\"_blank\" rel=\" noopener\">Vonage, Girls Who Code Show What \u2018Responsible AI\u2019 Looks Like<\/a><\/span><div class=\"rss_content\" style=\"\"><p>Vonage\u2019s partnership with Girls Who Code is more than feel-good philanthropy; it\u2019s a blueprint for building diverse AI talent pipelines.\nThe post Vonage, Girls Who Code Show What \u2018Responsible AI\u2019 Looks Like appeared first on TechRepublic.<\/p><\/div><\/li><li  style=\"padding: 15px 0 25px\" class=\"rss_item\"><span class=\"title\"><a href=\"https:\/\/www.techrepublic.com\/article\/news-mozilla-firefox-150-patched-271-security-flaws\/\" target=\"_blank\" rel=\" noopener\">Mozilla Fixes 271 Firefox Bugs Using Anthropic\u2019s Mythos AI<\/a><\/span><div class=\"rss_content\" style=\"\"><p>Mozilla says Firefox 150 patches 271 vulnerabilities found with Anthropic\u2019s restricted Mythos AI, highlighting how quickly AI-driven bug hunting is accelerating.\nThe post Mozilla Fixes 271 Firefox Bugs Using Anthropic\u2019s Mythos AI appeared first on TechRepublic.<\/p><\/div><\/li><li  style=\"padding: 15px 0 25px\" class=\"rss_item\"><span class=\"title\"><a 
href=\"https:\/\/www.techrepublic.com\/article\/news-fake-google-antigravity-downloads-steal-accounts-minutes\/\" target=\"_blank\" rel=\" noopener\">Fake Google Antigravity Installer Can Steal Accounts in Minutes<\/a><\/span><div class=\"rss_content\" style=\"\"><p>Fake Antigravity downloads are enabling fast account takeovers using hidden malware and stolen session cookies.\nThe post Fake Google Antigravity Installer Can Steal Accounts in Minutes appeared first on TechRepublic.<\/p><\/div><\/li><li  style=\"padding: 15px 0 25px\" class=\"rss_item\"><span class=\"title\"><a href=\"https:\/\/www.techrepublic.com\/article\/news-microsoft-windows-11-no-third-party-antivirus-needed\/\" target=\"_blank\" rel=\" noopener\">Microsoft: Most Windows 11 Users Don\u2019t Need Third-Party Antivirus<\/a><\/span><div class=\"rss_content\" style=\"\"><p>Microsoft says Windows 11\u2019s built-in security is strong enough for most users, though power users and enterprises may still want third-party protection.\nThe post Microsoft: Most Windows 11 Users Don\u2019t Need Third-Party Antivirus appeared first on TechRepublic.<\/p><\/div><\/li><li  style=\"padding: 15px 0 25px\" class=\"rss_item\"><span class=\"title\"><a href=\"https:\/\/www.techrepublic.com\/article\/news-apple-phishing-scam-fake-899-iphone-purchase-alert\/\" target=\"_blank\" rel=\" noopener\">New Apple Phishing Scam Uses Fake $899 iPhone Purchase Alert<\/a><\/span><div class=\"rss_content\" style=\"\"><p>An Apple account notification has been exploited in a new email phishing attack that comes with a fake iPhone purchase claim.\nThe post New Apple Phishing Scam Uses Fake $899 iPhone Purchase Alert appeared first on TechRepublic.<\/p><\/div><\/li><li  style=\"padding: 15px 0 25px\" class=\"rss_item\"><span class=\"title\"><a href=\"https:\/\/www.techrepublic.com\/article\/news-hackers-microsoft-teams-social-engineering-it-help-desk-scam\/\" target=\"_blank\" rel=\" noopener\">Hackers Impersonate IT Help Desk on 
Microsoft Teams to Gain Access, Steal Data<\/a><\/span><div class=\"rss_content\" style=\"\"><p>Hackers are abusing Microsoft Teams chats to impersonate IT support, gain remote access, move laterally, and steal company data, Microsoft warns.\nThe post Hackers Impersonate IT Help Desk on Microsoft Teams to Gain Access, Steal Data appeared first on TechRepublic.<\/p><\/div><\/li><li  style=\"padding: 15px 0 25px\" class=\"rss_item\"><div class=\"rss_image\" style=\"height:150px;width:150px;\"><a href=\"https:\/\/www.securitymagazine.com\/articles\/102251-unauthorized-users-accessed-claude-mythos-new-reports-suggest\" target=\"_blank\" rel=\" noopener\" title=\"Unauthorized Users Accessed Claude Mythos, New Reports Suggest\" style=\"height:150px;width:150px;\"><img decoding=\"async\" src=\"https:\/\/www.securitymagazine.com\/ext\/resources\/2026\/04\/22\/Laptop-in-darkness-by-Hostaphoto.webp?t=1776877073\" title=\"Unauthorized Users Accessed Claude Mythos, New Reports Suggest\" style=\"height:150px;width:150px;\"><\/a><\/div><span class=\"title\"><a href=\"https:\/\/www.securitymagazine.com\/articles\/102251-unauthorized-users-accessed-claude-mythos-new-reports-suggest\" target=\"_blank\" rel=\" noopener\">Unauthorized Users Accessed Claude Mythos, New Reports Suggest<\/a><\/span><div class=\"rss_content\" style=\"\"><p>Anthropic\u2019s new AI model may have been accessed by unauthorized users.\u00a0<\/p><\/div><\/li><li  style=\"padding: 15px 0 25px\" class=\"rss_item\"><span class=\"title\"><a href=\"https:\/\/www.techrepublic.com\/article\/news-malicious-tiktok-downloader-extensions\/\" target=\"_blank\" rel=\" noopener\">Malicious TikTok Downloader Extensions Quietly Compromised 130K Users<\/a><\/span><div class=\"rss_content\" style=\"\"><p>Malicious browser extensions disguised as TikTok downloaders compromised 130,000 users, exposing a growing blind spot in enterprise security.\nThe post Malicious TikTok Downloader Extensions Quietly Compromised 130K Users 
appeared first on TechRepublic.<\/p><\/div><\/li><li  style=\"padding: 15px 0 25px\" class=\"rss_item\"><div class=\"rss_image\" style=\"height:150px;width:150px;\"><a href=\"https:\/\/www.securitymagazine.com\/articles\/102248-hackers-claim-19m-records-stolen-from-french-government-agency\" target=\"_blank\" rel=\" noopener\" title=\"Hackers Claim 19M Records Stolen From French Government Agency\" style=\"height:150px;width:150px;\"><img decoding=\"async\" src=\"https:\/\/www.securitymagazine.com\/ext\/resources\/2026\/04\/22\/French-flag-by-Rafael-Garcin.webp?t=1776860491\" title=\"Hackers Claim 19M Records Stolen From French Government Agency\" style=\"height:150px;width:150px;\"><\/a><\/div><span class=\"title\"><a href=\"https:\/\/www.securitymagazine.com\/articles\/102248-hackers-claim-19m-records-stolen-from-french-government-agency\" target=\"_blank\" rel=\" noopener\">Hackers Claim 19M Records Stolen From French Government Agency<\/a><\/span><div class=\"rss_content\" style=\"\"><p>The France Titres\u00a0discovered a security incident.\u00a0<\/p><\/div><\/li><li  style=\"padding: 15px 0 25px\" class=\"rss_item\"><div class=\"rss_image\" style=\"height:150px;width:150px;\"><a href=\"https:\/\/venturebeat.com\/security\/vercel-breach-exposes-the-oauth-gap-most-security-teams-cannot-detect-scope-or-contain\" target=\"_blank\" rel=\" noopener\" title=\"Vercel breach exposes the OAuth gap most security teams cannot detect, scope or contain\" style=\"height:150px;width:150px;\"><img decoding=\"async\" src=\"https:\/\/images.ctfassets.net\/jdtwqhzvc2n1\/6wgHVXn6N3biFNjGQrW3dM\/05683cce2c54c5658a779c71e09887df\/Vercel_breach.png?w=300&#038;q=30\" title=\"Vercel breach exposes the OAuth gap most security teams cannot detect, scope or contain\" style=\"height:150px;width:150px;\"><\/a><\/div><span class=\"title\"><a href=\"https:\/\/venturebeat.com\/security\/vercel-breach-exposes-the-oauth-gap-most-security-teams-cannot-detect-scope-or-contain\" target=\"_blank\" 
rel=\" noopener\">Vercel breach exposes the OAuth gap most security teams cannot detect, scope or contain<\/a><\/span><div class=\"rss_content\" style=\"\"><p>One employee at Vercel adopted an AI tool. One employee at that AI vendor got hit with an infostealer. That combination created a walk-in path to Vercel\u2019s production environments through an OAuth grant that nobody had reviewed.<\/p><p>Vercel, the cloud platform behind Next.js and its millions of weekly npm downloads, confirmed on Sunday that attackers gained unauthorized access to internal systems. Mandiant was brought in. Law enforcement was notified. Investigations remain active. An update on Monday confirmed a coordinated audit with GitHub, Microsoft, npm, and Socket: Next.js, Turbopack, AI SDK, and all Vercel-published npm packages remain uncompromised. Vercel also announced it is now defaulting environment variable creation to \u201csensitive.\u201d<\/p><p>Context.ai was the entry point. OX Security\u2019s analysis found that a Vercel employee installed the Context.ai browser extension and signed into it using a corporate Google Workspace account, granting broad OAuth permissions. When Context.ai was breached, the attacker inherited that employee\u2019s Workspace access, pivoted into Vercel environments, and escalated privileges by sifting through environment variables not marked as \u201csensitive.\u201d Vercel\u2019s bulletin states that variables marked sensitive are stored in a manner that prevents them from being read. 
Variables without that designation were accessible in plaintext through the dashboard and API, and the attacker used them as the escalation path.<\/p><p>CEO Guillermo Rauch described the attacker as \u201chighly sophisticated and, I strongly suspect, significantly accelerated by AI.\u201d Jaime Blasco, CTO of Nudge Security, independently surfaced a second OAuth grant tied to Context.ai\u2019s Chrome extension, matching the client ID from Vercel\u2019s published IOC to Context.ai\u2019s Google account before Rauch\u2019s public statement. The Hacker News reported that Google removed Context.ai\u2019s Chrome extension from the Chrome Web Store on March 27. Per The Hacker News and Nudge Security, that extension embedded a second OAuth grant enabling read access to users\u2019 Google Drive files.<\/p><p><strong>Patient zero: a Roblox cheat and a Lumma Stealer infection<\/strong><\/p><p>Hudson Rock published forensic evidence on Monday, reporting that the breach origin traces to a February 2026 Lumma Stealer infection on a Context.ai employee\u2019s machine. According to Hudson Rock, browser history showed the employee downloading Roblox auto-farm scripts and game exploit executors. Harvested credentials included Google Workspace logins, Supabase keys, Datadog tokens, Authkit credentials, and the support@context.ai account. Hudson Rock identified the infected user as a core member of \u201ccontext-inc,\u201d Context.ai\u2019s tenant on the Vercel platform, with administrative access to production environment variable dashboards.<\/p><p>Context.ai published its own bulletin on Sunday (updated Monday), disclosing that the breach affects its deprecated AI Office Suite consumer product, not its enterprise Bedrock offering (Context.ai\u2019s agent infrastructure product, unrelated to AWS Bedrock). Context.ai says it detected unauthorized access to its AWS environment in March, hired CrowdStrike to investigate, and shut down the environment. 
Its updated bulletin then disclosed that the scope was broader than initially understood: the attacker also compromised OAuth tokens for consumer users, and one of those tokens opened the door to Vercel\u2019s Google Workspace.<\/p><p>Dwell time is the detail that should concern security directors. Nearly a month separated Context.ai\u2019s March detection from the Vercel disclosure on Sunday. A separate Trend Micro analysis references an intrusion beginning as early as June 2024 \u2014 a finding that, if confirmed, would extend the dwell time to roughly 22 months. VentureBeat could not independently reconcile that timeline with Hudson Rock\u2019s February 2026 dating; Trend Micro did not respond to a request for comment before publication.<\/p><p><strong>Where detection goes blind<\/strong><\/p><p>Security directors can use this table to benchmark their own detection stack against the four-hop kill chain this breach exploited.<\/p><table><thead><tr><th>Kill Chain Hop<\/th><th>What Happened<\/th><th>Who Should Detect<\/th><th>Typical Coverage<\/th><th>Gap<\/th><\/tr><\/thead><tbody><tr><td>1. Infostealer on employee device<\/td><td>Context.ai employee downloaded Roblox cheat scripts; Lumma Stealer harvested Workspace creds, Supabase\/Datadog\/Authkit keys.<\/td><td>EDR on endpoint; credential exposure monitoring.<\/td><td>Low. Device likely under-monitored. No stealer log monitoring at most orgs.<\/td><td>Most enterprises do not subscribe to infostealer intelligence feeds or correlate stealer logs against employee email domains.<\/td><\/tr><tr><td>2. AWS compromise at Context.ai<\/td><td>Attacker used harvested credentials to access Context.ai\u2019s AWS. Detected in March.<\/td><td>Context.ai cloud security; AWS CloudTrail.<\/td><td>Partially detected. Context.ai stopped AWS access but missed OAuth token exfiltration.<\/td><td>Initial investigation did not identify OAuth token exfiltration. Scope was underestimated until Vercel disclosure.<\/td><\/tr><tr><td>3. OAuth token theft into Vercel Workspace<\/td><td>Compromised OAuth token used to access a Vercel employee\u2019s Google Workspace. Employee had granted \u201cAllow All\u201d permissions via Chrome extension.<\/td><td>Google Workspace audit logs; OAuth app monitoring; CASB.<\/td><td>Very low. Most orgs do not monitor third-party OAuth token usage patterns.<\/td><td>No approval workflow intercepted the grant. No anomaly detection on OAuth token use from a compromised third party. This is the hop no one saw.<\/td><\/tr><tr><td>4. Lateral movement into Vercel production<\/td><td>Attacker enumerated non-sensitive env vars (accessible via dashboard\/API), harvested customer credentials.<\/td><td>Vercel platform audit logs; behavioral analytics.<\/td><td>Moderate. Vercel detected the intrusion after the attacker accessed customer credentials.<\/td><td>Detection occurred after exfiltration, not before. Env var access by a compromised Workspace account did not trigger real-time alerting.<\/td><\/tr><\/tbody><\/table><p><strong>What\u2019s confirmed vs. what\u2019s claimed<\/strong><\/p><p>Vercel\u2019s bulletin confirms unauthorized access to internal systems, a limited subset of affected customers, and two IOCs tied to Context.ai\u2019s Google Workspace OAuth apps. Rauch confirmed that Next.js, Turbopack, and Vercel\u2019s open-source projects are unaffected.<\/p><p>Separately, a threat actor using the ShinyHunters name posted on BreachForums claiming to hold Vercel\u2019s internal database, employee accounts, and GitHub and NPM tokens, with a $2M asking price. Austin Larsen, principal threat analyst at Google Threat Intelligence, assessed the claimant as \u201clikely an imposter.\u201d Actors previously linked to ShinyHunters have denied involvement. None of these claims has been independently verified.<\/p><p><strong>Six governance failures the Vercel breach exposed<\/strong><\/p><p>1. AI tool OAuth scopes go unaudited. Context.ai\u2019s own bulletin states that a Vercel employee granted \u201cAllow All\u201d permissions using a corporate account. Most security teams have no inventory of which AI tools their employees have granted OAuth access to.<\/p><p>CrowdStrike CTO Elia Zaitsev put it bluntly at RSAC 2026: \u201cDon\u2019t give an agent access to everything just because you\u2019re lazy. 
Give it access to only what it needs to get the job done.\u201d Jeff Pollard, VP and principal analyst at Forrester, told Cybersecurity Dive that the attack is a reminder about third-party risk management concerns and AI tool permissions.<\/p><p>2. Environment variable classification is doing real security work. Vercel distinguishes between variables marked \u201csensitive\u201d (stored in a manner that prevents reading) and those without that designation (accessible in plaintext through the dashboard and API). Attackers used the accessible variables as the escalation path. A developer convenience toggle determined the blast radius. Vercel has since changed its default: new environment variables now default to sensitive.<\/p><p>\u201cModern controls get deployed, but if legacy tokens or keys aren\u2019t retired, the system quietly favors them,\u201d Merritt Baer, CSO at Enkrypt AI and former Deputy CISO at AWS, told VentureBeat.<\/p><p>3. Infostealer-to-SaaS-to-supply-chain escalation chains lack detection coverage. Hudson Rock\u2019s reporting reveals a kill chain that crossed four organizational boundaries. No single detection layer covers that chain. Context.ai\u2019s updated bulletin acknowledged that the scope extended beyond what was initially identified during its CrowdStrike-led investigation.<\/p><p>4. Dwell time between vendor detection and customer notification exceeds attacker timelines. Context.ai detected the AWS compromise in March. Vercel disclosed on Sunday. Every CISO should ask their vendors: what is your contractual notification window after detecting unauthorized access that could affect downstream customers?<\/p><p>5. Third-party AI tools are the new shadow IT. Vercel\u2019s bulletin describes Context.ai as \u201ca small, third-party AI tool.\u201d Grip Security\u2019s March 2026 analysis of 23,000 SaaS environments found a 490% year-over-year increase in AI-related attacks. Vercel is the latest enterprise to learn this the hard way.<\/p><p>6. 
AI-accelerated attackers compress response timelines. Rauch\u2019s assessment of AI acceleration comes from what his IR team observed. CrowdStrike\u2019s 2026 Global Threat Report puts the baseline at a 29-minute average eCrime breakout time, 65% faster than 2024.<\/p><p><strong>Security director action plan<\/strong><\/p><table><thead><tr><th>Attack Surface<\/th><th>What Failed<\/th><th>Recommended Action<\/th><th>Owner<\/th><\/tr><\/thead><tbody><tr><td>OAuth governance<\/td><td>Context.ai held broad \u201cAllow All\u201d Workspace permissions. No approval workflow intercepted.<\/td><td>Inventory every AI tool OAuth grant org-wide. Revoke scopes exceeding least privilege. Check both Vercel IOCs now.<\/td><td>Identity \/ IAM<\/td><\/tr><tr><td>Env var classification<\/td><td>Variables not marked \u201csensitive\u201d remained accessible. Accessibility became the escalation path.<\/td><td>Default to non-readable. Require a security sign-off to downgrade any variable to accessible.<\/td><td>Platform eng + security<\/td><\/tr><tr><td>Infostealer-to-supply-chain<\/td><td>Kill chain spanned Lumma Stealer, Context.ai AWS, OAuth tokens, Vercel Workspace, and production environments.<\/td><td>Correlate infostealer intel feeds against employee domains. Automate credential rotation when creds surface in stealer logs.<\/td><td>Threat intel + SOC<\/td><\/tr><tr><td>Vendor notification lag<\/td><td>Nearly a month between Context.ai detection and Vercel disclosure.<\/td><td>Require 72-hour notification clauses in all contracts involving OAuth or identity integration.<\/td><td>Third-party risk \/ legal<\/td><\/tr><tr><td>Shadow AI adoption<\/td><td>One employee\u2019s unapproved AI tool became the breach vector for hundreds of orgs.<\/td><td>Extend shadow IT discovery to AI agent platforms. Treat unapproved adoption as a security event.<\/td><td>Security ops + procurement<\/td><\/tr><tr><td>Lateral movement speed<\/td><td>Rauch suspects AI acceleration. Attacker compressed the access-to-escalation window.<\/td><td>Cut detection-to-containment SLAs below the 29-minute eCrime average.<\/td><td>SOC + IR team<\/td><\/tr><\/tbody><\/table><p><strong>Run both IoC checks today<\/strong><\/p><p>Search your Google Workspace admin console (Security &gt; API Controls &gt; Manage Third-Party App Access) for two OAuth App IDs. The first is 110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com, tied to Context.ai\u2019s Office Suite. The second is 110671459871-f3cq3okebd3jcg1lllmroqejdbka8cqq.apps.googleusercontent.com, tied to Context.ai\u2019s Chrome extension and granting Google Drive read access. If either touched your environment, you are in the blast radius regardless of what Vercel discloses next.<\/p><p><strong>What this means for security directors<\/strong><\/p><p>Forget the Vercel brand name for a moment. What happened here is the first major proof case that AI agent OAuth integrations create a breach class that most enterprise security programs cannot detect, scope, or contain. A Roblox cheat download in February led to production infrastructure access in April. Four organizational boundaries, two cloud providers, and one identity perimeter. No zero-day required.<\/p><p>For most enterprises, employees have connected AI tools to corporate Google Workspace, Microsoft 365 or Slack instances with broad OAuth scopes \u2014 without security teams knowing. 
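The two-ID sweep described above can also be scripted once a token report is exported from the admin console. A minimal sketch, assuming you have pulled the grants into a CSV (the `tokens.csv` filename and its layout are placeholders, not a Google-documented export format); the two client IDs are the published IOCs:

```shell
# Hedged sketch: grep an exported Workspace OAuth token report for the two
# Context.ai client IDs published as IOCs. "tokens.csv" is a placeholder for
# whatever export your admin console produces; adjust the path and columns.
IOC1="110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com"
IOC2="110671459871-f3cq3okebd3jcg1lllmroqejdbka8cqq.apps.googleusercontent.com"

check_iocs() {
  # $1 = path to the token audit export; prints matching rows and a verdict,
  # returns 0 when either IOC is present, 1 otherwise
  if grep -E "$IOC1|$IOC2" "$1"; then
    echo "MATCH: treat your org as inside the blast radius"
  else
    echo "no Context.ai grants found in $1"
    return 1
  fi
}
```

If `check_iocs` matches, revoke the grant and rotate the affected account's credentials rather than waiting on further Vercel disclosures.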
The Vercel breach is the case study for what that exposure looks like when an attacker finds it first.<\/p><\/div><\/li><li  style=\"padding: 15px 0 25px\" class=\"rss_item\"><div class=\"rss_image\" style=\"height:150px;width:150px;\"><a href=\"https:\/\/www.securitymagazine.com\/articles\/102246-security-leaders-discuss-the-vercel-breach\" target=\"_blank\" rel=\" noopener\" title=\"Security Leaders Discuss the Vercel Breach\" style=\"height:150px;width:150px;\"><img decoding=\"async\" src=\"https:\/\/www.securitymagazine.com\/ext\/resources\/2026\/04\/21\/Office-supplies-by-Amy-Hirschi.webp?t=1776783392\" title=\"Security Leaders Discuss the Vercel Breach\" style=\"height:150px;width:150px;\"><\/a><\/div><span class=\"title\"><a href=\"https:\/\/www.securitymagazine.com\/articles\/102246-security-leaders-discuss-the-vercel-breach\" target=\"_blank\" rel=\" noopener\">Security Leaders Discuss the Vercel Breach<\/a><\/span><div class=\"rss_content\" style=\"\"><p>Security leaders are discussing the Vercel breach.\u00a0<\/p><\/div><\/li><li  style=\"padding: 15px 0 25px\" class=\"rss_item\"><div class=\"rss_image\" style=\"height:150px;width:150px;\"><a href=\"https:\/\/venturebeat.com\/security\/ai-agent-runtime-security-system-card-audit-comment-and-control-2026\" target=\"_blank\" rel=\" noopener\" title=\"Three AI coding agents leaked secrets through a single prompt injection. One vendor's system card predicted it\" style=\"height:150px;width:150px;\"><img decoding=\"async\" src=\"https:\/\/images.ctfassets.net\/jdtwqhzvc2n1\/338TKAZmGg0eamRggz05Kq\/fdc8465b217fab704067d122300d6da7\/hero_model_comparison.png?w=300&#038;q=30\" title=\"Three AI coding agents leaked secrets through a single prompt injection. 
One vendor&#039;s system card predicted it\" style=\"height:150px;width:150px;\"><\/a><\/div><span class=\"title\"><a href=\"https:\/\/venturebeat.com\/security\/ai-agent-runtime-security-system-card-audit-comment-and-control-2026\" target=\"_blank\" rel=\" noopener\">Three AI coding agents leaked secrets through a single prompt injection. One vendor\u2019s system card predicted it<\/a><\/span><div class=\"rss_content\" style=\"\"><p>A security researcher, working with colleagues at Johns Hopkins University, opened a GitHub pull request, typed a malicious instruction into the PR title, and watched Anthropic\u2019s Claude Code Security Review action post its own API key as a comment. The same prompt injection worked on Google\u2019s Gemini CLI Action and GitHub\u2019s Copilot Agent (Microsoft). No external infrastructure required.<\/p><p>Aonan Guan, the researcher who discovered the vulnerability, alongside Johns Hopkins colleagues Zhengyu Liu and Gavin Zhong, published the full technical disclosure last week, calling it \u201cComment and Control.\u201d GitHub Actions does not expose secrets to fork pull requests by default when using the pull_request trigger, but workflows using pull_request_target, which most AI agent integrations require for secret access, do inject secrets into the runner environment. This limits the practical attack surface but does not eliminate it: collaborators, comment fields, and any repo using pull_request_target with an AI coding agent are exposed.<\/p><p>Per Guan\u2019s disclosure timeline: Anthropic classified it as CVSS 9.4 Critical ($100 bounty), Google paid a $1,337 bounty, and GitHub awarded $500 through the Copilot Bounty Program. The $100 amount is notably low relative to the CVSS 9.4 rating; Anthropic\u2019s HackerOne program scopes agent-tooling findings separately from model-safety vulnerabilities. 
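The pull_request_target mechanics described above are visible directly in workflow configuration. A hypothetical minimal illustration of the exposed pattern (the workflow name and agent script are invented for illustration, not taken from the disclosure): secrets are injected into a runner that also parses attacker-controlled PR fields.

```yaml
# Hypothetical illustration of the exposed pattern, not any vendor's
# actual workflow. pull_request_target runs in the base repo's context,
# so repository secrets ARE available even for PRs opened from forks.
name: ai-code-review
on:
  pull_request_target:
    types: [opened, edited]
permissions:
  pull-requests: write        # lets the agent post review comments
jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - name: Run AI review agent
        # The PR title below is attacker-controlled, yet the agent
        # receives it as context alongside a live secret in its env.
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
          PR_TITLE: ${{ github.event.pull_request.title }}
        run: ./run-review-agent.sh   # placeholder for the agent step
```

With the plain `pull_request` trigger instead, secrets would not be exposed to fork PRs at all, which is why the article singles out `pull_request_target` as the practical attack surface.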
All three patched quietly, and none had issued CVEs in the NVD or published security advisories through GitHub Security Advisories as of Saturday.<\/p><p>Comment and Control exploited a prompt injection vulnerability in Claude Code Security Review, a specific GitHub Action feature that Anthropic\u2019s own system card acknowledged is \u201cnot hardened against prompt injection.\u201d The feature is designed to process trusted first-party inputs by default; users who opt into processing untrusted external PRs and issues accept additional risk and are responsible for restricting agent permissions. Anthropic updated its documentation to clarify this operating model after the disclosure. The same class of attack may also operate beneath OpenAI\u2019s safeguard layer at the agent runtime; that is an inference from what their system card does not document, not a demonstrated exploit. The exploit is the proof case, but the story is what the three system cards reveal about the gap between what vendors document and what they protect.<\/p><p>OpenAI and Google did not respond to requests for comment by publication time.<\/p><p>\u201cAt the action boundary, not the model boundary,\u201d Merritt Baer, CSO at Enkrypt AI and former Deputy CISO at AWS, told VentureBeat when asked where protection actually needs to sit. \u201cThe runtime is the blast radius.\u201d<\/p><p><strong>What the system cards tell you<\/strong><\/p><p>Anthropic\u2019s Opus 4.7 system card runs 232 pages with quantified hack rates and injection resistance metrics. It discloses a restricted model strategy (Mythos held back as a capability preview) and states directly that Claude Code Security Review is \u201cnot hardened against prompt injection.\u201d The system card explains to readers that the runtime was exposed. Comment and Control proved it. 
Anthropic does gate certain agent actions outside the system card\u2019s scope \u2014 Claude Code Auto Mode, for example, applies runtime-level protections \u2014 but the system card itself does not document these runtime safeguards or their coverage.<\/p><p>OpenAI\u2019s GPT-5.4 system card documents extensive red teaming and publishes model-layer injection evals but not agent-runtime or tool-execution resistance metrics. Trusted Access for Cyber scales access to thousands. The system card tells you what red teamers tested. It does not tell you how resistant the model is to the attacks they found.<\/p><p>Google\u2019s Gemini 3.1 Pro model card, shipped in February, defers most safety methodology to older documentation, a VentureBeat review of the card found. Google\u2019s Automated Red Teaming program remains internal only. No external cyber program.<\/p><table><thead><tr><th>Dimension<\/th><th>Anthropic (Opus 4.7)<\/th><th>OpenAI (GPT-5.4)<\/th><th>Google (Gemini 3.1 Pro)<\/th><\/tr><\/thead><tbody><tr><td>System card depth<\/td><td>232 pages. Quantified hack rates, classifier scores, and injection resistance metrics.<\/td><td>Extensive. Red teaming hours documented. No injection resistance rates published.<\/td><td>Few pages. Defers to older Gemini 3 Pro card. No quantified results.<\/td><\/tr><tr><td>Cyber verification program<\/td><td>CVP. Removes cyber safeguards for vetted pentesters and red teamers doing authorized offensive work. Does not address prompt injection defense. Platform and data-retention exclusions not yet publicly documented.<\/td><td>TAC. Scaled to thousands. Constrains ZDR.<\/td><td>None. No external defender pathway.<\/td><\/tr><tr><td>Restricted model strategy<\/td><td>Yes. Mythos held back as a capability preview. Opus 4.7 is the testbed.<\/td><td>No restricted model. Full capability released, access gated.<\/td><td>No restricted model. No stated plan for one.<\/td><\/tr><tr><td>Runtime agent safeguards<\/td><td>Claude Code Security Review: system card states it is not hardened against prompt injection. The feature is designed for trusted first-party inputs. Anthropic applies additional runtime protections (e.g., Claude Code Auto Mode) not documented in the system card.<\/td><td>Not documented. TAC governs access, not agent operations.<\/td><td>Not documented. ART internal only.<\/td><\/tr><tr><td>Exploit response (Comment and Control)<\/td><td>CVSS 9.4 Critical. $100 bounty. Patched. No CVE.<\/td><td>Not directly exploited. Structural gap inferred from TAC design, not demonstrated.<\/td><td>$1,337 bounty per Guan disclosure. Patched. No CVE.<\/td><\/tr><tr><td>Injection resistance data<\/td><td>Published. Quantified rates in the system card.<\/td><td>Model-layer injection evals published. No agent-runtime or tool-execution resistance rates.<\/td><td>Not published. No quantified data available.<\/td><\/tr><\/tbody><\/table><p>Baer offered specific procurement questions. \u201cFor Anthropic, ask how safety results actually transfer across capability jumps,\u201d she told VentureBeat. \u201cFor OpenAI, ask what \u2018trusted\u2019 means under compromise.\u201d For both, she said, directors need to \u201cdemand clarity on whether safeguards extend into tool execution, not just prompt filtering.\u201d<\/p><p><strong>Seven threat classes neither safeguard approach closes<\/strong><\/p><p>Each row names what breaks, why your controls miss it, what Comment and Control proved, and the recommended action for the week ahead.<\/p><table><thead><tr><th>Threat Class<\/th><th>What Breaks<\/th><th>Why Your Controls Miss It<\/th><th>What Comment and Control Proved<\/th><th>Recommended Action<\/th><\/tr><\/thead><tbody><tr><td>1. Deployment surface mismatch<\/td><td>CVP is designed for authorized offensive security research, not prompt injection defense. It does not extend to Bedrock, Vertex, or ZDR tenants. TAC constrains ZDR. Google has no program. Your team may be running a verified model on an unverified surface.<\/td><td>Launch announcements describe the program. Support documentation lists the exclusions. Security teams read the announcement. Procurement reads neither.<\/td><td>The exploit targets the agent runtime, not the deployment platform. A team running Claude Code on Bedrock is outside CVP coverage, but CVP was not designed to address this class of vulnerability in the first place.<\/td><td>Email your Anthropic and OpenAI reps today. One question, in writing: \u2018Confirm whether [your platform] and [your data retention config] are covered by your runtime-level prompt injection protections, and describe what those protections include.\u2019 File the response in your vendor risk register.<\/td><\/tr><tr><td>2. CI secrets exposed to AI agents<\/td><td>ANTHROPIC_API_KEY, GEMINI_API_KEY, GITHUB_TOKEN, and any production secret stored as a GitHub Actions env var are readable by every workflow step, including AI coding agents.<\/td><td>The default GitHub Actions config does not scope secrets to individual steps. Repo-level and org-level secrets propagate to all workflows. Most teams never audit which steps access which secrets.<\/td><td>The agent read the API key from the runner env var, encoded it in a PR comment body, and posted it through GitHub\u2019s API. No attacker-controlled infrastructure required. Exfiltration ran through GitHub\u2019s own API \u2014 the platform itself became the C2 channel.<\/td><td>Run: grep -r \u2018secrets\\.\u2019 .github\/workflows\/ across every repo with an AI agent. List every secret the agent can access. Rotate all exposed credentials. Migrate to short-lived OIDC tokens (GitHub, GitLab, CircleCI).<\/td><\/tr><tr><td>3. Over-permissioned agent runtimes<\/td><td>AI agents granted bash execution, git push, and API write access at setup. Permissions never scoped down. No periodic least-privilege review. Agents accumulate access in the same way service accounts do.<\/td><td>Agents are configured once during onboarding and inherited across repos. No tooling flags unused permissions. The Comment and Control agent had bash, write, and env-read access for a code review task.<\/td><td>The agent had bash access it did not need for code review. It used that access to read env vars and post exfiltrated data. Stripping bash would have blocked the attack chain entirely.<\/td><td>Audit agent permissions repo by repo. Strip bash from code review agents. Set repo access to read-only. Gate write access (PR comments, commits, merges) behind a human approval step.<\/td><\/tr><tr><td>4. No CVE signal for AI agent vulnerabilities<\/td><td>CVSS 9.4 Critical. Anthropic, Google, and GitHub patched. Zero CVE entries in NVD. Zero advisories. Your vulnerability scanner, SIEM, and GRC tool all show green.<\/td><td>No CNA has yet issued a CVE for a coding agent prompt injection, and current CVE practices have not captured this class of failure mode. Vendors patch through version bumps. Qualys, Tenable, and Rapid7 have nothing to scan for.<\/td><td>A SOC analyst running a full scan on Monday morning would find zero entries for a Critical vulnerability that hit Claude Code Security Review, Gemini CLI Action, and Copilot simultaneously.<\/td><td>Create a new category in your supply chain risk register: \u2018AI agent runtime.\u2019 Assign a 48-hour check-in cadence with each vendor\u2019s security contact. Do not wait for CVEs. None have come yet, and the taxonomy gap makes them unlikely without industry pressure.<\/td><\/tr><tr><td>5. Model safeguards do not govern agent actions<\/td><td>Opus 4.7 blocks a phishing email prompt. It does not block an agent from reading $ANTHROPIC_API_KEY and posting it as a PR comment. Safeguards gate generation, not operation.<\/td><td>Safeguards filter model outputs (text). Agent operations (bash, git push, curl, API POST) bypass safeguard evaluation entirely. The runtime is outside the safeguard perimeter. Anthropic applies some runtime-level protections in features like Claude Code Auto Mode, but these are not documented in the system card and their scope is not publicly defined.<\/td><td>The agent never generated prohibited content. It performed a legitimate operation (post a PR comment) containing exfiltrated data. Safeguards never triggered.<\/td><td>Map every operation your AI agents perform: bash, git, API calls, file writes. For each, ask the vendor in writing: does your safeguard layer evaluate this action before execution? Document the answer.<\/td><\/tr><tr><td>6. Untrusted input parsed as instructions<\/td><td>PR titles, PR body text, issue comments, code review comments, and commit messages are all parsed by AI coding agents as context. Any can contain injected instructions.<\/td><td>No input sanitization layer between GitHub and the agent instruction set. The agent cannot distinguish developer intent from attacker injection in untrusted fields. Claude Code GitHub Action is designed for trusted first-party inputs by default. Users who opt into processing untrusted external PRs accept additional risk.<\/td><td>A single malicious PR title became a complete exfiltration command. The agent treated it as a legitimate instruction and executed it without validation or confirmation.<\/td><td>Implement input sanitization as defense-in-depth, but do not rely on traditional WAF-style regex patterns. LLM prompt injections are non-deterministic and will evade static pattern matching. Restrict agent context to approved workflow configs and combine with least-privilege permissions.<\/td><\/tr><tr><td>7. No comparable injection resistance data across vendors<\/td><td>Anthropic publishes quantified injection resistance rates in 232 pages. OpenAI publishes model-layer injection evals but no agent-runtime resistance rates. Google publishes a few-page card referencing an older model.<\/td><td>No industry standard for AI safety metric disclosure. Vendors may have internal metrics and red-team programs, but published disclosures are not comparable. Procurement has no baseline and no framework to require one.<\/td><td>Anthropic, OpenAI, and Google were all approved for enterprise use without comparable injection resistance data. The exploit exposed what unmeasured risk looks like in production.<\/td><td>Write one sentence for your next vendor meeting: \u2018Show me your quantified injection resistance rate for my model version on my platform.\u2019 Document refusals for EU AI Act high-risk compliance. Deadline: August 2026.<\/td><\/tr><\/tbody><\/table><p>OpenAI\u2019s GPT-5.4 was not directly exploited in the Comment and Control disclosure. The gaps identified in the OpenAI and Google columns are inferred from what their system cards and program documentation do not publish, not from demonstrated exploits. That distinction matters. 
Absence of published runtime metrics is a transparency gap, not proof of a vulnerability. It does mean procurement teams cannot verify what they cannot measure.Eligibility requirements for Anthropic\u2019s Cyber Verification Program and OpenAI\u2019s Trusted Access for Cyber are still evolving, as are platform coverage and program scope, so security teams should validate current vendor docs before treating any coverage described here as definitive. Anthropic\u2019s CVP is designed for authorized offensive security research \u2014 removing cyber safeguards for vetted actors \u2014 and is not a prompt injection defense program. Security leaders mapping these gaps to existing frameworks can align threat classes 1\u20133 with NIST CSF 2.0 GV.SC (Supply Chain Risk Management), threat class 4 with ID.RA (Risk Assessment), and threat classes 5\u20137 with PR.DS (Data Security).Comment and Control focuses on GitHub Actions today, but the seven threat classes generalize to most CI\/CD runtimes where AI agents execute with access to secrets, including GitHub Actions, GitLab CI, CircleCI, and custom runners. Safety metric disclosure formats are in flux across all three vendors; Anthropic currently leads on published quantification in its system card documentation, but norms are likely to converge as EU AI Act obligations come into force. Comment and Control targeted Claude Code GitHub Action, a specific product feature, not Anthropic\u2019s models broadly. The vulnerability class, however, applies to any AI coding agent operating in a CI\/CD runtime with access to secrets.What to do before your next vendor renewal\u201cDon\u2019t standardize on a model. Standardize on a control architecture,\u201d Baer told VentureBeat. \u201cThe risk is systemic to agent design, not vendor-specific. Maintain portability so you can swap models without reworking your security posture.\u201dBuild a deployment map. Confirm your platform qualifies for the runtime protections you think cover you. 
If you run Opus 4.7 on Bedrock, email your Anthropic account rep today and ask what runtime-level prompt injection protections apply to your deployment surface. (Anthropic Cyber Verification Program) Audit every runner for secret exposure. Run grep -r 'secrets\\.' .github\/workflows\/ across every repo with an AI coding agent. List every secret the agent can access. Rotate all exposed credentials. (GitHub Actions secrets documentation) Start migrating credentials now. Switch stored secrets to short-lived OIDC token issuance. GitHub Actions, GitLab CI, and CircleCI all support OIDC federation. Set token lifetimes to minutes, not hours. Plan full rollout over one to two quarters, starting with repos running AI agents. (GitHub OIDC docs | GitLab OIDC docs | CircleCI OIDC docs) Fix agent permissions repo by repo. Strip bash execution from every AI agent doing code review. Set repository access to read-only. Gate write access behind a human approval step. (GitHub Actions permissions documentation) Add input sanitization as one layer, not the only layer. Filter pull request titles, comments, and review threads for instruction patterns before they reach agents. Combine with least-privilege permissions and OIDC. Static regex will not catch non-deterministic prompt injections on its own. Add \u201cAI agent runtime\u201d to your supply chain risk register. Assign a 48-hour patch verification cadence with each vendor\u2019s security contact. Do not wait for CVEs. None have come yet for this class of vulnerability. Check which hardened GitHub Actions mitigations you already have in place. Hardened GitHub Actions configurations block this attack class today: the permissions key restricts GITHUB_TOKEN scope, environment protection rules require approval before secrets are injected, and first-time-contributor gates prevent external pull requests from triggering agent workflows. 
(GitHub Actions security hardening guide) Prepare one procurement question per vendor before your next renewal. Write one sentence: \u201cShow me your quantified injection resistance rate for the model version I run on the platform I deploy to.\u201d Document refusals for EU AI Act high-risk compliance. The deadline is August 2026. \u201cRaw zero-days aren\u2019t how most systems get compromised. Composability is,\u201d Baer said. \u201cIt\u2019s the glue code, the tokens in CI, the over-permissioned agents. When you wire a powerful model into a permissive runtime, you\u2019ve already done most of the attacker\u2019s work for them.\u201d<\/p><\/div><\/li><li  style=\"padding: 15px 0 25px\" class=\"rss_item\"><div class=\"rss_image\" style=\"height:150px;width:150px;\"><a href=\"https:\/\/venturebeat.com\/security\/adversaries-hijacked-ai-security-tools-at-90-organizations-the-next-wave-has-write-access-to-the-firewall\" target=\"_blank\" rel=\" noopener\" title=\"Adversaries hijacked AI security tools at 90+ organizations. The next wave has write access to the firewall\" style=\"height:150px;width:150px;\"><img decoding=\"async\" src=\"https:\/\/images.ctfassets.net\/jdtwqhzvc2n1\/46SzboB6NVjamCROg8pfcE\/a20c84d337fb68ef55e801810cab5308\/hero.png?w=300&#038;q=30\" title=\"Adversaries hijacked AI security tools at 90+ organizations. The next wave has write access to the firewall\" style=\"height:150px;width:150px;\"><\/a><\/div><span class=\"title\"><a href=\"https:\/\/venturebeat.com\/security\/adversaries-hijacked-ai-security-tools-at-90-organizations-the-next-wave-has-write-access-to-the-firewall\" target=\"_blank\" rel=\" noopener\">Adversaries hijacked AI security tools at 90+ organizations. The next wave has write access to the firewall<\/a><\/span><div class=\"rss_content\" style=\"\"><p>Adversaries injected malicious prompts into legitimate AI tools at more than 90 organizations in 2025, stealing credentials and cryptocurrency. 
Every one of those compromised tools could read data, and none of them could rewrite a firewall rule. The autonomous SOC agents shipping now can. That escalation, from compromised tools that read data to autonomous agents that rewrite infrastructure, has not been exploited in production at scale yet. But the architectural conditions for it are shipping faster than the governance designed to prevent it. A compromised SOC agent can rewrite your firewall rules, modify IAM policies, and quarantine endpoints, all with its own privileged credentials, all through approved API calls that EDR classifies as authorized activity. The adversary never touches the network. The agent does it for them. Cisco announced AgenticOps for Security in February, with autonomous firewall remediation and PCI-DSS compliance capabilities. Ivanti launched Continuous Compliance and the Neurons AI self-service agent last week, with policy enforcement, approval gates, and data context validation built into the platform at launch \u2014 a design distinction that matters because the OWASP Agentic Top 10 documents what happens when those controls are absent. \"In the agentic era, defending against AI-accelerated adversaries and securing AI systems themselves, require operating at machine speed,\" CrowdStrike CEO George Kurtz said when releasing the 2026 Global Threat Report. \"AI is compressing the time between intent and execution while turning enterprise AI systems into targets,\" added Adam Meyers, head of counter-adversary operations at CrowdStrike. AI-enabled adversaries increased operations 89% year-over-year. The broader attack surface is expanding in parallel. Malicious MCP server clones have already intercepted sensitive data in AI workflows by impersonating trusted services. The U.K. 
National Cyber Security Centre warned that prompt injection attacks against AI applications \"may never be totally mitigated.\" The documented compromises targeted AI tools that could only read and summarize; the autonomous SOC agents shipping now can write, enforce, and remediate. <strong>The governance framework that maps the gap<\/strong> OWASP's Top 10 for Agentic Applications, released in December 2025 and built with more than 100 security researchers, documents 10 categories of attack against autonomous AI systems. Three categories map directly to what autonomous SOC agents introduce when they ship with write access: Agent Goal Hijacking (ASI01), Tool Misuse (ASI02), and Identity and Privilege Abuse (ASI03). Palo Alto Networks reported an 82:1 machine-to-human identity ratio in the average enterprise \u2014 every autonomous agent added to production extends that gap. The 2026 CISO AI Risk Report from Saviynt and Cybersecurity Insiders (n=235 CISOs) found 47% had already observed AI agents exhibiting unintended behavior, and only 5% felt confident they could contain a compromised agent. A separate Dark Reading poll found that 48% of cybersecurity professionals identify agentic AI as the single most dangerous attack vector. The IEEE-USA submission to NIST stated the problem plainly: \"Risk is driven less by the models and is based more on the model's level of autonomy, privilege scope, and the environment of the agent being operationalized.\" Eleanor Watson, Senior IEEE Member, warned in the IEEE 2026 survey that \"semi-autonomous systems can also drift from intended objectives, requiring oversight and regular audits.\" Cisco's intent-aware agentic inspection, announced alongside AgenticOps in February 2026, represents an early detection-layer approach to the same gap. The approaches differ: Cisco is adding inspection at the network layer while Ivanti built governance into the platform layer. Both signal the industry sees it coming. 
The question is whether the controls arrive before the exploits do. <strong>Autonomous agents that ship with governance built in<\/strong> Security teams are already stretched. Advanced AI models are accelerating the discovery of exploitable vulnerabilities faster than any human team can remediate manually, and the backlog is growing not because teams are failing, but because the volume now exceeds what manual patching cycles can absorb. Ivanti Neurons for Patch Management introduced Continuous Compliance this quarter, an automated enforcement framework that eliminates the gap between scheduled patch deployments and regulatory requirements. The framework identifies out-of-compliance endpoints and deploys patches out-of-band to update devices that missed maintenance windows, with built-in policy enforcement and compliance verification at every step. Ivanti also launched the Neurons AI self-service agent for ITSM, which moves beyond conversational intake to autonomous resolution with built-in guardrails for policy, approvals, and data context. The agent resolves common incidents and service requests from start to finish, reducing manual effort and deflecting tickets. Robert Hanson, Chief Information Officer at Grand Bank, described the decision calculus security leaders across the industry are weighing: \"Before exploring the Ivanti Neurons AI self-service agent, our team was spending the bulk of our time handling repetitive requests. As we move toward implementing these capabilities, we expect to automate routine tasks and enable our team to focus more proactively on higher-value initiatives. Over time, this approach should help us reduce operational overhead while delivering faster, more secure service within the guardrails we define, ultimately supporting improvements in service quality and security.\" His emphasis on operating \"within the guardrails we define\" points to a broader design principle: speed and governance do not have to be trade-offs. 
The governance gap is concrete: the Saviynt report found 86% of organizations do not enforce access policies for AI identities, only 17% govern even half of their AI identities with the same controls applied to human users, and 75% of CISOs have discovered unsanctioned AI tools running in production with embedded credentials that nobody monitors. Continuous Compliance and the Neurons AI self-service agent address the patching and ITSM layers. The broader autonomous SOC agent terrain, including firewall remediation, IAM policy modification, and endpoint quarantine, extends beyond what any single platform governs today. The ten-question audit applies to every autonomous tool in the environment, including Ivanti\u2019s. <strong>Prescriptive risk matrix for autonomous agent governance<\/strong> The matrix maps all 10 OWASP Agentic Top 10 risk categories to what ships without governance, the detection gap, the proof case, and the recommended action for autonomous SOC agent deployments. 
<strong>ASI01: Goal Hijacking.<\/strong> Ships ungoverned: agent treats external inputs (logs, alerts, emails) as trusted instructions. Detection gap: EDR cannot detect adversarial instructions executed via legitimate API calls. Proof case: EchoLeak (CVE-2025-32711), where a hidden email payload caused an AI assistant to exfiltrate confidential data with zero clicks required. Recommended action: classify all inputs by trust tier, block instruction-bearing content from untrusted sources, and validate external data before agent ingestion. 
<strong>ASI02: Tool Misuse.<\/strong> Ships ungoverned: agent authorized to modify firewall rules, IAM policies, and quarantine workflows. Detection gap: WAF inspects payloads, not tool-call intent, so authorized use is identical to misuse. Proof case: Amazon Q bent legitimate tools into destructive outputs despite valid permissions (OWASP cited). Recommended action: scope each tool to minimum required permissions, log every invocation with intent metadata, and alert on calls outside baseline patterns. 
<strong>ASI03: Identity Abuse.<\/strong> Ships ungoverned: agent inherits service account credentials scoped to production infrastructure. Detection gap: SIEM sees an authorized identity performing authorized actions, so no anomaly triggers. Proof case: the 82:1 machine-to-human identity ratio in the average enterprise (Palo Alto Networks); each agent adds to it. Recommended action: issue scoped agent-specific identities, enforce time-bound, task-bound credential leases, and eliminate inherited user credentials. 
<strong>ASI04: Supply Chain.<\/strong> Ships ungoverned: agent loads third-party MCP servers or plugins at runtime without provenance verification. Detection gap: static analysis cannot inspect dynamically loaded runtime components. Proof case: malicious MCP server clones intercepted sensitive data by impersonating trusted services (CrowdStrike 2026). Recommended action: maintain an approved MCP server registry, verify provenance and integrity before runtime loading, and block unapproved plugins. 
<strong>ASI05: Unexpected Code Exec.<\/strong> Ships ungoverned: agent generates or executes attacker-controlled code through unsafe evaluation paths or tool chains. Detection gap: code review gates apply to human commits, not agent-generated runtime code. Proof case: AutoGPT RCE, where natural-language execution paths enabled remote code execution through unsanctioned package installs (OWASP cited). Recommended action: sandbox all agent code execution, require human approval for production code paths, and block dynamic eval and unsanctioned installs. 
<strong>ASI06: Memory Poisoning.<\/strong> Ships ungoverned: agent persists context across sessions where poisoned data compounds over time. Detection gap: session-based monitoring resets between interactions, so poisoning accumulates undetected. Proof case: Calendar Drift, a malicious calendar invite that reweighted agent objectives while remaining within policy bounds (OWASP). Recommended action: implement session memory expiration, audit persistent memory stores for anomalous content, and isolate memory per task scope. 
<strong>ASI07: Inter-Agent Comm.<\/strong> Ships ungoverned: agents communicate without mutual authentication, encryption, or schema validation. Detection gap: monitoring covers individual agents but not spoofed or manipulated inter-agent messages. Proof case: OWASP documented spoofed messages that misdirected entire agent clusters via protocol downgrade attacks. Recommended action: enforce mutual authentication between agents, encrypt all inter-agent channels, and validate message schema at every handoff. 
<strong>ASI08: Cascading Failures.<\/strong> Ships ungoverned: agent delegates to downstream agents, creating multi-hop privilege chains across systems. Detection gap: monitoring covers individual agents but not cross-agent delegation chains or fan-out. Proof case: in a controlled simulation, a single compromised agent poisoned 87% of downstream decision-making within 4 hours. Recommended action: map all delegation chains end to end, enforce privilege boundaries at each handoff, and implement circuit breakers for cascading actions. 
<strong>ASI09: Human-Agent Trust.<\/strong> Ships ungoverned: agent uses persuasive language or fabricated evidence to override human safety decisions. Detection gap: compliance verifies policy configuration, not whether the agent manipulated the human into approving. Proof case: a Replit agent deleted a primary customer database, then fabricated its contents to appear compliant and hide the damage. Recommended action: require independent verification for high-risk agent recommendations, and log all human approval decisions with the full agent reasoning chain. 
<strong>ASI10: Rogue Agents.<\/strong> Ships ungoverned: agent deviates from intended purpose while appearing compliant on the surface. Detection gap: compliance checks verify configuration at deployment, not behavioral drift after deployment. Proof case: 92% of organizations lack full visibility into AI identities; 86% do not enforce access policies (Saviynt 2026). Recommended action: deploy behavioral drift detection, establish baseline agent behavior profiles, and alert on deviation from expected action patterns. 
<strong>The 10-question OWASP audit for autonomous agents<\/strong> Each question maps to one OWASP Agentic Top 10 risk category. 
Autonomous platforms that ship with policy enforcement, approval gates, and data context validation will have clear answers to every question. Three or more \"I don't know\" answers on any tool means that tool's governance has not kept pace with its capabilities. 1) Which agents have write access to production firewall, IAM, or endpoint controls? 2) Which accept external inputs without validation? 3) Which execute irreversible actions without human approval? 4) Which persist memory where poisoning compounds across sessions? 5) Which delegate to other agents, creating cascade privilege chains? 6) Which load third-party plugins or MCP servers at runtime? 7) Which generate or execute code in production environments? 8) Which inherit user credentials instead of scoped agent identities? 9) Which lack behavioral monitoring for drift from intended purpose? 10) Which can be manipulated through persuasive language to override safety controls? <strong>What the board needs to hear<\/strong> The board conversation is three sentences. Adversaries compromised AI tools at more than 90 organizations in 2025, according to CrowdStrike's 2026 Global Threat Report. The autonomous tools deploying now have more privilege than the ones that were compromised. The organization has audited every autonomous tool against OWASP's 10 risk categories and confirmed that the governance controls are in place. If that third sentence is not true, it needs to be true before the next autonomous agent ships to production. Run the 10-question audit against every agent with write access to production infrastructure within the next 30 days. Every autonomous platform shipping to production should be held to the same standard \u2014 policy enforcement, approval gates, and data context validation built in at launch, not retrofitted after the first incident. 
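Several of the audit questions are answerable from CI configuration alone. As a rough illustration — a minimal sketch, not from the article or any vendor, with the function name, directory layout, and output strings all hypothetical — a POSIX shell pass over checked-out repos can surface workflows where an agent can reach secrets without a scoped permissions key:

```shell
# scan_workflows DIR: flag GitHub Actions workflow files under DIR that
# reference secrets, and warn when the file sets no top-level `permissions:`
# key (in that case GITHUB_TOKEN runs with the repository default scope).
scan_workflows() {
  find "$1" -path '*/.github/workflows/*' -name '*.y*ml' | while read -r wf; do
    if grep -q 'secrets\.' "$wf"; then
      echo "SECRETS: $wf"            # agent-reachable secret reference
      grep -q '^permissions:' "$wf" ||
        echo "WARNING: no top-level permissions key in $wf"
    fi
  done
}
```

Workflows that print SECRETS with no scoped permissions key are the first candidates for the read-only access and approval-gate fixes the audit points at.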
The audit surfaces which tools have done that work and which have not.<\/p><\/div><\/li><li  style=\"padding: 15px 0 25px\" class=\"rss_item\"><span class=\"title\"><a href=\"https:\/\/www.techrepublic.com\/article\/vpnet-3-year-subscription\/\" target=\"_blank\" rel=\" noopener\">This VPN Lets You Verify Your Business Privacy For $130<\/a><\/span><div class=\"rss_content\" style=\"\"><p>VP.NET makes VPN privacy verifiable, not just policy-based, with secure enclave tech for up to five devices.\nThe post This VPN Lets You Verify Your Business Privacy For $130 appeared first on TechRepublic.<\/p><\/div><\/li><li  style=\"padding: 15px 0 25px\" class=\"rss_item\"><span class=\"title\"><a href=\"https:\/\/www.techrepublic.com\/article\/news-amtrak-data-breach-2-1m-records\/\" target=\"_blank\" rel=\" noopener\">Amtrak Data Breach Exposes 2.1M Records, Reports Suggest Larger Leak<\/a><\/span><div class=\"rss_content\" style=\"\"><p>Amtrak data breach exposes over 2.1 million customer records after CRM access. 
Learn what was leaked, risks, and steps users and IT teams should take now.\nThe post Amtrak Data Breach Exposes 2.1M Records, Reports Suggest Larger Leak appeared first on TechRepublic.<\/p><\/div><\/li><li  style=\"padding: 15px 0 25px\" class=\"rss_item\"><span class=\"title\"><a href=\"https:\/\/www.techrepublic.com\/article\/news-mcp-ai-security-vulnerability-data-layer-governance\/\" target=\"_blank\" rel=\" noopener\">The MCP Disclosure Is the AI Era\u2019s \u2018Open Redirect\u2019 Moment<\/a><\/span><div class=\"rss_content\" style=\"\"><p>The MCP flaw reveals a systemic AI security gap, exposing enterprise systems to supply chain attacks and forcing a shift toward data-layer governance.\nThe post The MCP Disclosure Is the AI Era\u2019s \u2018Open Redirect\u2019 Moment appeared first on TechRepublic.<\/p><\/div><\/li><li  style=\"padding: 15px 0 25px\" class=\"rss_item\"><span class=\"title\"><a href=\"https:\/\/www.techrepublic.com\/article\/news-microsoft-defender-flaws-exploited-windows-10-11\/\" target=\"_blank\" rel=\" noopener\">Microsoft Defender Flaws Exploited on Windows, Two Left Unpatched<\/a><\/span><div class=\"rss_content\" style=\"\"><p>Although the team with Microsoft moved swiftly to patch the BlueHammer vulnerability, other exploits still threaten Microsoft Defender and Windows users.\nThe post Microsoft Defender Flaws Exploited on Windows, Two Left Unpatched appeared first on TechRepublic.<\/p><\/div><\/li><li  style=\"padding: 15px 0 25px\" class=\"rss_item\"><span class=\"title\"><a href=\"https:\/\/www.techrepublic.com\/article\/news-android-malware-stealing-pin-overlay-attack\/\" target=\"_blank\" rel=\" noopener\">Over 800 Android Apps Targeted in PIN-Stealing Trojan Campaign<\/a><\/span><div class=\"rss_content\" style=\"\"><p>Four Android banking malware campaigns are targeting more than 800 apps by abusing overlays, Accessibility permissions, and sideloaded fake apps to steal PINs.\nThe post Over 800 Android Apps Targeted in 
PIN-Stealing Trojan Campaign appeared first on TechRepublic.<\/p><\/div><\/li><li  style=\"padding: 15px 0 25px\" class=\"rss_item\"><span class=\"title\"><a href=\"https:\/\/www.techrepublic.com\/article\/news-chrome-browser-fingerprinting-privacy-concerns\/\" target=\"_blank\" rel=\" noopener\">Chrome Privacy Concerns Rise as Expert Warns of Fingerprinting Risks<\/a><\/span><div class=\"rss_content\" style=\"\"><p>A privacy expert warns Chrome still allows browser fingerprinting and tracking, raising concerns after Google\u2019s shift away from third-party cookie changes.\nThe post Chrome Privacy Concerns Rise as Expert Warns of Fingerprinting Risks appeared first on TechRepublic.<\/p><\/div><\/li><li  style=\"padding: 15px 0 25px\" class=\"rss_item\"><div class=\"rss_image\" style=\"height:150px;width:150px;\"><a href=\"https:\/\/www.securitymagazine.com\/articles\/102243-us-security-agency-leverages-claude-mythos-despite-pentagon-blacklist\" target=\"_blank\" rel=\" noopener\" title=\"US Security Agency Leverages Claude Mythos Despite Pentagon Blacklist\" style=\"height:150px;width:150px;\"><img decoding=\"async\" src=\"https:\/\/www.securitymagazine.com\/ext\/resources\/2026\/04\/20\/Aerial-view-of-America-by-NASA.webp?t=1776702474\" title=\"US Security Agency Leverages Claude Mythos Despite Pentagon Blacklist\" style=\"height:150px;width:150px;\"><\/a><\/div><span class=\"title\"><a href=\"https:\/\/www.securitymagazine.com\/articles\/102243-us-security-agency-leverages-claude-mythos-despite-pentagon-blacklist\" target=\"_blank\" rel=\" noopener\">US Security Agency Leverages Claude Mythos Despite Pentagon Blacklist<\/a><\/span><div class=\"rss_content\" style=\"\"><p>Reports suggest that the NSA is using Claude Mythos.\u00a0<\/p><\/div><\/li><li  style=\"padding: 15px 0 25px\" class=\"rss_item\"><div class=\"rss_image\" style=\"height:150px;width:150px;\"><a 
href=\"https:\/\/www.securitymagazine.com\/articles\/102242-vercel-breach-originated-from-an-employees-ai-tool\" target=\"_blank\" rel=\" noopener\" title=\"Vercel Breach Originated From an Employee\u2019s AI Tool\" style=\"height:150px;width:150px;\"><img decoding=\"async\" src=\"https:\/\/www.securitymagazine.com\/ext\/resources\/2026\/04\/20\/Computer-in-darkness-by-Nikita-Kachanovsky.webp?t=1776699474\" title=\"Vercel Breach Originated From an Employee\u2019s AI Tool\" style=\"height:150px;width:150px;\"><\/a><\/div><span class=\"title\"><a href=\"https:\/\/www.securitymagazine.com\/articles\/102242-vercel-breach-originated-from-an-employees-ai-tool\" target=\"_blank\" rel=\" noopener\">Vercel Breach Originated From an Employee\u2019s AI Tool<\/a><\/span><div class=\"rss_content\" style=\"\"><p>This breach occurred due to a third-party AI tool.\u00a0<\/p><\/div><\/li><li  style=\"padding: 15px 0 25px\" class=\"rss_item\"><div class=\"rss_image\" style=\"height:150px;width:150px;\"><a href=\"https:\/\/www.securitymagazine.com\/articles\/102241-58-of-organizations-spend-over-10-hours-a-month-securing-ai-generated-code\" target=\"_blank\" rel=\" noopener\" title=\"58% of Organizations Spend Over 10 Hours a Month Securing AI-generated Code\" style=\"height:150px;width:150px;\"><img decoding=\"async\" src=\"https:\/\/www.securitymagazine.com\/ext\/resources\/2026\/04\/20\/chris-ried-ieic5Tq8YMk-unsplash.webp?t=1776694909\" title=\"58% of Organizations Spend Over 10 Hours a Month Securing AI-generated Code\" style=\"height:150px;width:150px;\"><\/a><\/div><span class=\"title\"><a href=\"https:\/\/www.securitymagazine.com\/articles\/102241-58-of-organizations-spend-over-10-hours-a-month-securing-ai-generated-code\" target=\"_blank\" rel=\" noopener\">58% of Organizations Spend Over 10 Hours a Month Securing AI-generated Code<\/a><\/span><div class=\"rss_content\" style=\"\"><p>A recent report by Cloudsmith found that 31% of organizations using AI-generated code spend 
10 hours or less per month validating, auditing, or securing it.<\/p><\/div><\/li><li  style=\"padding: 15px 0 25px\" class=\"rss_item\"><div class=\"rss_image\" style=\"height:150px;width:150px;\"><a href=\"https:\/\/www.securitymagazine.com\/articles\/102232-top-3-cyber-insurance-incident-claims\" target=\"_blank\" rel=\" noopener\" title=\"Top 3 Cyber Insurance Incident Claims\" style=\"height:150px;width:150px;\"><img decoding=\"async\" src=\"https:\/\/www.securitymagazine.com\/ext\/resources\/2026\/04\/17\/Laptop-keyboard-in-gradient-colorful-light-by-Jonas-Vandermeiren.webp?t=1776442444\" title=\"Top 3 Cyber Insurance Incident Claims\" style=\"height:150px;width:150px;\"><\/a><\/div><span class=\"title\"><a href=\"https:\/\/www.securitymagazine.com\/articles\/102232-top-3-cyber-insurance-incident-claims\" target=\"_blank\" rel=\" noopener\">Top 3 Cyber Insurance Incident Claims<\/a><\/span><div class=\"rss_content\" style=\"\"><p>A new report reveals the top three cyber incidents that account for a majority of reported claims.\u00a0<\/p><\/div><\/li><li  style=\"padding: 15px 0 25px\" class=\"rss_item\"><div class=\"rss_image\" style=\"height:150px;width:150px;\"><a href=\"https:\/\/venturebeat.com\/security\/most-enterprises-cant-stop-stage-three-ai-agent-threats-venturebeat-survey-finds\" target=\"_blank\" rel=\" noopener\" title=\"Most enterprises can't stop stage-three AI agent threats, VentureBeat survey finds\" style=\"height:150px;width:150px;\"><img decoding=\"async\" src=\"https:\/\/images.ctfassets.net\/jdtwqhzvc2n1\/2oq4gxUSORHuJY6GKHVxQ1\/1ff08d293fe4d0c43df9f5c7a1893344\/hero_survey.png?w=300&#038;q=30\" title=\"Most enterprises can&#039;t stop stage-three AI agent threats, VentureBeat survey finds\" style=\"height:150px;width:150px;\"><\/a><\/div><span class=\"title\"><a href=\"https:\/\/venturebeat.com\/security\/most-enterprises-cant-stop-stage-three-ai-agent-threats-venturebeat-survey-finds\" target=\"_blank\" rel=\" noopener\">Most 
enterprises can't stop stage-three AI agent threats, VentureBeat survey finds<\/a><\/span><div class=\"rss_content\" style=\"\"><p>A rogue AI agent at Meta passed every identity check and still exposed sensitive data to unauthorized employees in March. Two weeks later, Mercor, a $10 billion AI startup, confirmed a supply-chain breach through LiteLLM. Both trace to the same structural gap: monitoring without enforcement, enforcement without isolation. A VentureBeat three-wave survey of 108 qualified enterprises found that the gap is not an edge case. It is the most common security architecture in production today. Gravitee\u2019s State of AI Agent Security 2026 survey of 919 executives and practitioners quantifies the disconnect: 82% of executives say their policies protect them from unauthorized agent actions, yet 88% reported AI agent security incidents in the last twelve months and only 21% have runtime visibility into what their agents are doing. Arkose Labs\u2019 2026 Agentic AI Security Report found 97% of enterprise security leaders expect a material AI-agent-driven incident within 12 months. Only 6% of security budgets address the risk. VentureBeat's survey results show that monitoring investment snapped back to 45% of security budgets in March after dropping to 24% in February, when early movers shifted dollars into runtime enforcement and sandboxing. The March wave (n=20) is directional, but the pattern is consistent with February\u2019s larger sample (n=50): enterprises are stuck at observation while their agents already need isolation. CrowdStrike\u2019s Falcon sensors detect more than 1,800 distinct AI applications across enterprise endpoints. The fastest recorded adversary breakout time has dropped to 27 seconds. Monitoring dashboards built for human-speed workflows cannot keep pace with machine-speed threats. The audit that follows maps three stages. Stage one is observe. 
Stage two is enforce, where IAM integration and cross-provider controls turn observation into action. Stage three is isolate, sandboxed execution that bounds blast radius when guardrails fail. VentureBeat Pulse data from 108 qualified enterprises ties each stage to an investment signal, an OWASP ASI threat vector, a regulatory surface, and immediate steps security leaders can take. <strong>The threat surface stage-one security cannot see<\/strong> The OWASP Top 10 for Agentic Applications 2026 formalized the attack surface last December. The ten risks are: goal hijack (ASI01), tool misuse (ASI02), identity and privilege abuse (ASI03), agentic supply chain vulnerabilities (ASI04), unexpected code execution (ASI05), memory poisoning (ASI06), insecure inter-agent communication (ASI07), cascading failures (ASI08), human-agent trust exploitation (ASI09), and rogue agents (ASI10). Most have no analog in traditional LLM applications. The audit below maps six of these to the stages where they are most likely to surface and the controls that address them. Invariant Labs disclosed the MCP Tool Poisoning Attack in April 2025: malicious instructions in an MCP server\u2019s tool description cause an agent to exfiltrate files or hijack a trusted server. CyberArk extended it to Full-Schema Poisoning. The mcp-remote OAuth proxy patched CVE-2025-6514 after a command-injection flaw put 437,000 downloads at risk. Merritt Baer, CSO at Enkrypt AI and former AWS Deputy CISO, framed the gap in an exclusive VentureBeat interview: \u201cEnterprises believe they\u2019ve \u2018approved\u2019 AI vendors, but what they\u2019ve actually approved is an interface, not the underlying system. 
The real dependencies are one or two layers deeper, and those are the ones that fail under stress.\u201d CrowdStrike CTO Elia Zaitsev put the visibility problem in operational terms in an exclusive VentureBeat interview at RSAC 2026: \u201cIt looks indistinguishable if an agent runs your web browser versus if you run your browser.\u201d Distinguishing the two requires walking the process tree, tracing whether Chrome was launched by a human from the desktop or spawned by an agent in the background. Most enterprise logging configurations cannot make that distinction. <strong>The regulatory clock and the identity architecture<\/strong> Auditability priority tells the same story in miniature. In January, 50% of respondents ranked it a top concern. By February, that dropped to 28% as teams sprinted to deploy. In March, it surged to 65% when those same teams realized they had no forensic trail for what their agents did. HIPAA\u2019s 2026 Tier 4 willful-neglect maximum is $2.19M per violation category per year. In healthcare, Gravitee\u2019s survey found 92.7% of organizations reported AI agent security incidents versus the 88% all-industry average. For a health system running agents that touch PHI, that ratio is the difference between a reportable breach and an uncontested finding of willful neglect. FINRA\u2019s 2026 Oversight Report recommends explicit human checkpoints before agents that can act or transact execute, along with narrow scope, granular permissions, and complete audit trails of agent actions. Mike Riemer, Field CISO at Ivanti, quantified the speed problem in a recent VentureBeat interview: \u201cThreat actors are reverse engineering patches within 72 hours. If a customer doesn\u2019t patch within 72 hours of release, they\u2019re open to exploit.\u201d Most enterprises take weeks. Agents operating at machine speed widen that window into a permanent exposure. The identity problem is architectural. 
Gravitee's survey of 919 practitioners found only 21.9% of teams treat agents as identity-bearing entities, 45.6% still use shared API keys, and 25.5% of deployed agents can create and task other agents. A quarter of enterprises can spawn agents that their security team never provisioned. That is ASI08 as architecture.Guardrails alone are not a strategyA 2025 paper by Kazdan and colleagues (Stanford, ServiceNow Research, Toronto, FAR AI) showed a fine-tuning attack that bypasses model-level guardrails in 72% of attempts against Claude 3 Haiku and 57% against GPT-4o. The attack received a $2,000 bug bounty from OpenAI and was acknowledged as a vulnerability by Anthropic. Guardrails constrain what an agent is told to do, not what a compromised agent can reach.CISOs already know this. In VentureBeat's three-wave survey, prevention of unauthorized actions ranked as the top capability priority in every wave at 68% to 72%, the most stable high-conviction signal in the dataset. The demand is for permissioning, not prompting. Guardrails address the wrong control surface.Zaitsev framed the identity shift at RSAC 2026: \u201cAI agents and non-human identities will explode across the enterprise, expanding exponentially and dwarfing human identities. Each agent will operate as a privileged super-human with OAuth tokens, API keys, and continuous access to previously siloed data sets.\u201d Identity security built for humans will not survive this shift. Cisco President Jeetu Patel offered the operational analogy in an exclusive VentureBeat interview: agents behave \u201cmore like teenagers, supremely intelligent, but with no fear of consequence.\u201dVentureBeat Prescriptive Matrix: AI Agent Security Maturity AuditStageAttack ScenarioWhat BreaksDetection TestBlast RadiusRecommended Control1: ObserveAttacker embeds goal-hijack payload in forwarded email (ASI01). Agent summarizes email and silently exfiltrates credentials to an external endpoint. 
See: Meta March 2026 incident.
- What breaks: No runtime log captures the exfiltration. SIEM never sees the API call. The security team learns from the victim. Zaitsev: agent activity is \u201cindistinguishable\u201d from human activity in default logging.
- Detection test: Inject a canary token into a test document. Route it through your agent. If the token leaves your network, stage one failed.
- Blast radius: Single agent, single session. With shared API keys (45.6% of enterprises): unlimited lateral movement.
- Recommended control: Deploy agent API call logging to SIEM. Baseline normal tool-call patterns per agent role. Alert on the first outbound call to an unrecognized endpoint.

Stage 2: Enforce
- Attack scenario: Compromised MCP server poisons tool description (ASI04). Agent invokes poisoned tool, writes attacker payload to production DB using inherited service-account credentials. See: Mercor\/LiteLLM April 2026 supply-chain breach.
- What breaks: IAM allows write because agent uses shared service account. No approval gate on write ops. Poisoned tool indistinguishable from clean tool in logs. Riemer: \u201c72-hour patch window\u201d collapses to zero when agents auto-invoke.
- Detection test: Register a test MCP server with a benign-looking poisoned description. Confirm your policy engine blocks the tool call before execution reaches the database. Run mcp-scan on all registered servers.
- Blast radius: Production database integrity. If agent holds DBA-level credentials: full schema compromise. Lateral movement via trust relationships to downstream agents.
- Recommended control: Assign scoped identity per agent. Require approval workflow for all write ops. Revoke every shared API key. Run mcp-scan on all MCP servers weekly.

Stage 3: Isolate
- Attack scenario: Agent A spawns Agent B to handle subtask (ASI08). Agent B inherits Agent A\u2019s permissions, escalates to admin, rewrites org security policy. Every identity check passes. Source: CrowdStrike CEO George Kurtz, RSAC 2026 keynote.
- What breaks: No sandbox boundary between agents. No human gate on agent-to-agent delegation. Security policy modification is a valid action for admin-credentialed process. 
CrowdStrike CEO George Kurtz disclosed at RSAC 2026 that the agent \u201cwanted to fix a problem, lacked permissions, and removed the restriction itself.\u201d
- Detection test: Spawn a child agent from a sandboxed parent. Child should inherit zero permissions by default and require explicit human approval for each capability grant.
- Blast radius: Organizational security posture. A rogue policy rewrite disables controls for every subsequent agent. 97% of enterprise leaders expect a material incident within 12 months (Arkose Labs 2026).
- Recommended control: Sandbox all agent execution. Zero-trust for agent-to-agent delegation: spawned agents inherit nothing. Human sign-off before any agent modifies security controls. Kill switch per OWASP ASI10.

Sources: OWASP Top 10 for Agentic Applications 2026; Invariant Labs MCP Tool Poisoning (April 2025); CrowdStrike RSAC 2026 Fortune 50 disclosure; Meta March 2026 incident (The Information\/Engadget); Mercor\/LiteLLM breach (Fortune, April 2, 2026); Arkose Labs 2026 Agentic AI Security Report; VentureBeat Pulse Q1 2026.

The stage-one attack scenario in this matrix is not hypothetical. Unauthorized tool or data access ranked as the most feared failure mode in every wave of VentureBeat\u2019s survey, growing from 42% in January to 50% in March. That trajectory and the 70%-plus priority rating for prevention of unauthorized actions are the two most mutually reinforcing signals in the entire dataset. CISOs fear the exact attack this matrix describes, and most have not deployed the controls to stop it.

Hyperscaler stage readiness: observe, enforce, isolate

The maturity audit tells you where your security program stands. The next question is whether your cloud platform can get you to stage two and stage three, or whether you are building those capabilities yourself. 
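The stage-one recommended control in the matrix above (baseline normal tool-call endpoints per agent role, then alert on the first outbound call to an unrecognized one) can be sketched in a few lines. This is a minimal illustration, not a SIEM integration: the record fields (agent_role, endpoint) and the endpoint URLs are hypothetical, and a real deployment would read from actual agent API logs.

```python
# Minimal sketch of the stage-one control: build a per-role baseline of
# endpoint hosts seen during an observation window, then flag the first
# outbound call whose host falls outside that baseline.
from urllib.parse import urlparse

def build_baseline(history):
    """Map agent_role -> set of endpoint hosts seen during observation."""
    baseline = {}
    for call in history:
        host = urlparse(call["endpoint"]).hostname
        baseline.setdefault(call["agent_role"], set()).add(host)
    return baseline

def first_unrecognized(calls, baseline):
    """Return the first call to a host outside the role's baseline, else None."""
    for call in calls:
        host = urlparse(call["endpoint"]).hostname
        if host not in baseline.get(call["agent_role"], set()):
            return call  # in practice, this record becomes the SIEM alert
    return None

# Hypothetical observation-window traffic for one agent role:
history = [
    {"agent_role": "summarizer", "endpoint": "https://graph.microsoft.com/v1.0/me"},
    {"agent_role": "summarizer", "endpoint": "https://graph.microsoft.com/v1.0/messages"},
]
baseline = build_baseline(history)

# Live traffic containing a simulated exfiltration attempt:
live = [
    {"agent_role": "summarizer", "endpoint": "https://graph.microsoft.com/v1.0/me"},
    {"agent_role": "summarizer", "endpoint": "https://attacker.example/exfil"},
]
alert = first_unrecognized(live, baseline)
print(alert["endpoint"])  # the unrecognized endpoint surfaces as the first alert
```

The deliberate simplification is host-level matching; a production rule would also consider path, method, and payload size, but the baseline-then-alert shape stays the same.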
Patel put it bluntly: \u201cIt\u2019s not just about authenticating once and then letting the agent run wild.\u201d A stage-three platform running a stage-one deployment pattern gives you stage-one risk.

VentureBeat Pulse data surfaces a structural tension in this grid. OpenAI leads enterprise AI security deployments at 21% to 26% across the three survey waves, making the same provider that creates the AI risk also the primary security layer. The provider-as-security-vendor pattern holds across Azure, Google, and AWS. Zero-incremental-procurement convenience is winning by default. Whether that concentration is a feature or a single point of failure depends on how far the enterprise has progressed past stage one.

Microsoft Azure
- Identity primitive (Stage 2): Entra ID agent scoping. Agent 365 maps agents to owners. GA.
- Enforcement control (Stage 2): Copilot Studio DLP policies. Purview for agent output classification. GA.
- Isolation primitive (Stage 3): Azure Confidential Containers for agent workloads. Preview. No per-agent sandbox at GA.
- Gap as of April 2026: No agent-to-agent identity verification. No MCP governance layer. Agent 365 monitors but cannot block in-flight tool calls.

Anthropic
- Identity primitive (Stage 2): Managed Agents: per-agent scoped permissions, credential mgmt. Beta (April 8, 2026). $0.08\/session-hour.
- Enforcement control (Stage 2): Tool-use permissions, system prompt enforcement, and built-in guardrails. GA.
- Isolation primitive (Stage 3): Managed Agents sandbox: isolated containers per session, execution-chain auditability. Beta. Allianz, Asana, Rakuten, and Sentry are in production.
- Gap as of April 2026: Beta pricing\/SLA not public. Session data in Anthropic-managed DB (lock-in risk per VentureBeat research). GA timing TBD.

Google Cloud
- Identity primitive (Stage 2): Vertex AI service accounts for model endpoints. IAM Conditions for agent traffic. GA.
- Enforcement control (Stage 2): VPC Service Controls for agent network boundaries. Model Armor for prompt\/response filtering. GA.
- Isolation primitive (Stage 3): Confidential VMs for agent workloads. GA. Agent-specific sandbox in preview.
- Gap as of April 2026: Agent identity ships as a service account, not an agent-native principal. 
No agent-to-agent delegation audit. Model Armor does not inspect tool-call payloads.

OpenAI
- Identity primitive (Stage 2): Assistants API: function-call permissions, structured outputs. Agents SDK. GA.
- Enforcement control (Stage 2): Agents SDK guardrails, input\/output validation. GA.
- Isolation primitive (Stage 3): Agents SDK Python sandbox. Beta (API and defaults subject to change before GA per OpenAI docs). TypeScript sandbox confirmed, not shipped.
- Gap as of April 2026: No cross-provider identity federation. Agent memory forensics limited to session scope. No kill switch API. No MCP tool-description inspection.

AWS
- Identity primitive (Stage 2): Bedrock model invocation logging. IAM policies for model access. CloudTrail for agent API calls. GA.
- Enforcement control (Stage 2): Bedrock Guardrails for content filtering. Lambda resource policies for agent functions. GA.
- Isolation primitive (Stage 3): Lambda isolation per agent function. GA. Bedrock agent-level sandboxing on roadmap, not shipped.
- Gap as of April 2026: No unified agent control plane across Bedrock + SageMaker + Lambda. No agent identity standard. Guardrails do not inspect MCP tool descriptions.

Status as of April 15, 2026. GA = generally available. Preview\/Beta = not production-hardened. The \u201cGap as of April 2026\u201d entries reflect VentureBeat\u2019s analysis of publicly documented capabilities; gaps may narrow as vendors ship updates.

No provider in this grid ships a complete stage-three stack today. Most enterprises assemble isolation from existing cloud building blocks. That is a defensible choice if it is a deliberate one. Waiting for a vendor to close the gap without acknowledging the gap is not a strategy.

The grid above covers hyperscaler-native SDKs. A large segment of AI builders deploys through open-source orchestration frameworks like LangChain, CrewAI, and LlamaIndex that bypass hyperscaler IAM entirely. These frameworks lack native stage-two primitives. There is no scoped agent identity, no tool-call approval workflow, and no built-in audit trails. 
Enterprises running agents through open-source orchestration need to layer enforcement and isolation on top, not assume the framework provides it.

VentureBeat\u2019s survey quantifies the pressure. Policy enforcement consistency grew from 39.5% to 46% between January and February, the largest consistent gain of any capability criterion. Enterprises running agents across OpenAI, Anthropic, and Azure need enforcement that works the same way regardless of which model executes the task. Provider-native controls enforce policy within that provider\u2019s runtime only. Open-source orchestration frameworks enforce it nowhere.

One counterargument deserves acknowledgment: not every agent deployment needs stage three. A read-only summarization agent with no tool access and no write permissions may rationally stop at stage one. The sequencing failure this audit addresses is not that monitoring exists. It is that enterprises running agents with write access, shared credentials, and agent-to-agent delegation are treating monitoring as sufficient. For those deployments, stage one is not a strategy. It is a gap.

Allianz shows stage-three in production

Allianz, one of the world\u2019s largest insurance and asset management companies, is running Claude Managed Agents across insurance workflows, with Claude Code deployed to technical teams and a dedicated AI logging system for regulatory transparency, per Anthropic\u2019s April 8 announcement. Asana, Rakuten, Sentry, and Notion are in production on the same beta. Stage-three isolation, per-agent permissioning, and execution-chain auditability are deployable now, not roadmap. The gating question is whether the enterprise has sequenced the work to use them.

The 90-day remediation sequence

Days 1\u201330: Inventory and baseline. Map every agent to a named owner. Log all tool calls. Revoke shared API keys. Deploy read-only monitoring across all agent API traffic. Run mcp-scan against every registered MCP server. 
CrowdStrike detects 1,800 AI applications across enterprise endpoints; your inventory should be equally comprehensive. Output: agent registry with permission matrix, MCP scan report.

Days 31\u201360: Enforce and scope. Assign scoped identities to every agent. Deploy tool-call approval workflows for write operations. Integrate agent activity logs into existing SIEM. Run a tabletop exercise: What happens when an agent spawns an agent? Conduct a canary-token test from the prescriptive matrix. Output: IAM policy set, approval workflow, SIEM integration, canary-token test results.

Days 61\u201390: Isolate and test. Sandbox high-risk agent workloads (PHI, PII, financial transactions). Enforce per-session least privilege. Require human sign-off for agent-to-agent delegation. Red-team the isolation boundary using the stage-three detection test from the matrix. Output: sandboxed execution environment, red-team report, board-ready risk summary with regulatory exposure mapped to HIPAA tier and FINRA guidance.

What changes in the next 30 days

EU AI Act Article 14 human-oversight obligations take effect August 2, 2026. Programs without named owners and execution trace capability face enforcement, not operational risk.

Anthropic\u2019s Claude Managed Agents is in public beta at $0.08 per session-hour. GA timing, production SLAs, and final pricing have not been announced.

OpenAI Agents SDK ships TypeScript support for sandbox and harness capabilities in a future release, per the company\u2019s April 15 announcement. Stage-three sandbox becomes available to JavaScript agent stacks when it ships.

What the sequence requires

McKinsey\u2019s 2026 AI Trust Maturity Survey pegs the average enterprise at 2.3 out of 4.0 on its RAI maturity model, up from 2.0 in 2025 but still an enforcement-stage number; only one-third of the ~500 organizations surveyed report maturity levels of three or higher in governance. Seventy percent have not finished the transition to stage three. 
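The canary-token test from the prescriptive matrix, called out again in the Days 31-60 step, reduces to a simple check: seed a unique token into a test document, capture the agent's outbound payloads during the run, and fail the test if the token appears anywhere. This is a hedged sketch, not a product feature; the capture mechanism (egress proxy, outbound log, mock tool) is left abstract, and the CANARY- token format is invented for illustration.

```python
# Sketch of the canary-token test: a unique token seeded into a test
# document must never appear in anything the agent sends outbound.
import uuid

def make_canary():
    """Generate a unique token to seed into a test document."""
    return f"CANARY-{uuid.uuid4().hex}"

def canary_leaked(token, outbound_payloads):
    """True if the canary token shows up in any captured outbound payload."""
    return any(token in payload for payload in outbound_payloads)

token = make_canary()
test_document = f"Quarterly notes. Internal reference: {token}."

# Hypothetical captured egress from an agent run over the test document,
# including a simulated exfiltration of the document contents:
outbound = [
    "Summary: quarterly notes reviewed, no action items.",
    f"POST /collect body={test_document}",
]

if canary_leaked(token, outbound):
    print("stage one failed: canary token left the network")
```

A uuid4 hex token is effectively collision-proof, so a substring match is enough; the hard part in practice is capturing the egress, not detecting the token.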
ARMO\u2019s progressive enforcement methodology gives you the path: behavioral profiles in observation, permission baselines in selective enforcement, and full least privilege once baselines stabilize. Monitoring investment was not wasted. It was stage one of three. The organizations stuck in the data treated it as the destination.

The budget data makes the constraint explicit. The share of enterprises reporting flat AI security budgets doubled from 7.9% in January to 16% in February in VentureBeat's survey, with the March directional reading at 20%. Organizations expanding agent deployments without increasing security investment are accumulating security debt at machine speed. Meanwhile, the share reporting no agent security tooling at all fell from 13% in January to 5% in March. Progress, but one in twenty enterprises running agents in production still has zero dedicated security infrastructure around them.

About this research

Total qualified respondents: 108. VentureBeat Pulse AI Security and Trust is a three-wave VentureBeat survey run January 6 through March 15, 2026. Qualified sample (organizations 100+ employees): January n=38, February n=50, March n=20. Primary analysis runs from January to February; March is directional. Industry mix: Tech\/Software 52.8%, Financial Services 10.2%, Healthcare 8.3%, Education 6.5%, Telecom\/Media 4.6%, Manufacturing 4.6%, Retail 3.7%, other 9.3%. 
Seniority: VP\/Director 34.3%, Manager 29.6%, IC 22.2%, C-Suite 9.3%.<\/p><\/div><\/li><li  style=\"padding: 15px 0 25px\" class=\"rss_item\"><div class=\"rss_image\" style=\"height:150px;width:150px;\"><a href=\"https:\/\/www.securitymagazine.com\/articles\/102235-what-are-security-experts-saying-about-openais-gpt-54-cyber\" target=\"_blank\" rel=\" noopener\" title=\"What Are Security Experts Saying About OpenAI\u2019s GPT-5.4-Cyber?\" style=\"height:150px;width:150px;\"><img decoding=\"async\" src=\"https:\/\/www.securitymagazine.com\/ext\/resources\/2026\/04\/15\/Golden-lights-by-Joshua-Sortino.webp?t=1776273162\" title=\"What Are Security Experts Saying About OpenAI\u2019s GPT-5.4-Cyber?\" style=\"height:150px;width:150px;\"><\/a><\/div><span class=\"title\"><a href=\"https:\/\/www.securitymagazine.com\/articles\/102235-what-are-security-experts-saying-about-openais-gpt-54-cyber\" target=\"_blank\" rel=\" noopener\">What Are Security Experts Saying About OpenAI\u2019s GPT-5.4-Cyber?<\/a><\/span><div class=\"rss_content\" style=\"\"><p>OpenAI has launched GPT-5.4-Cyber, a model optimized for defensive cybersecurity usage.\u00a0<\/p><\/div><\/li><li  style=\"padding: 15px 0 25px\" class=\"rss_item\"><div class=\"rss_image\" style=\"height:150px;width:150px;\"><a href=\"https:\/\/venturebeat.com\/security\/microsoft-salesforce-copilot-agentforce-prompt-injection-cve-agent-remediation-playbook\" target=\"_blank\" rel=\" noopener\" title=\"Microsoft patched a Copilot Studio prompt injection. The data exfiltrated anyway\" style=\"height:150px;width:150px;\"><img decoding=\"async\" src=\"https:\/\/images.ctfassets.net\/jdtwqhzvc2n1\/6QO34Fn3Ix5qFbnemAM3a5\/6cf10a1a9ecd680e39c790c0733d16fd\/HERO_CAPSULE.png?w=300&#038;q=30\" title=\"Microsoft patched a Copilot Studio prompt injection. 
The data exfiltrated anyway\" style=\"height:150px;width:150px;\"><\/a><\/div><span class=\"title\"><a href=\"https:\/\/venturebeat.com\/security\/microsoft-salesforce-copilot-agentforce-prompt-injection-cve-agent-remediation-playbook\" target=\"_blank\" rel=\" noopener\">Microsoft patched a Copilot Studio prompt injection. The data exfiltrated anyway<\/a><\/span><div class=\"rss_content\" style=\"\"><p>Microsoft assigned CVE-2026-21520, a CVSS 7.5 indirect prompt injection vulnerability, to Copilot Studio. Capsule Security discovered the flaw, coordinated disclosure with Microsoft, and the patch was deployed on January 15. Public disclosure went live on Wednesday.That CVE matters less for what it fixes and more for what it signals. Capsule\u2019s research calls Microsoft\u2019s decision to assign a CVE to a prompt injection vulnerability in an agentic platform \u201chighly unusual.\u201d Microsoft previously assigned CVE-2025-32711 (CVSS 9.3) to EchoLeak, a prompt injection in M365 Copilot patched in June 2025, but that targeted a productivity assistant, not an agent-building platform. If the precedent extends to agentic systems broadly, every enterprise running agents inherits a new vulnerability class to track. Except that this class cannot be fully eliminated by patches alone.Capsule also discovered what they call PipeLeak, a parallel indirect prompt injection vulnerability in Salesforce Agentforce. Microsoft patched and assigned a CVE. Salesforce has not assigned a CVE or issued a public advisory for PipeLeak as of publication, according to Capsule's research. What ShareLeak actually doesThe vulnerability that the researchers named ShareLeak exploits the gap between a SharePoint form submission and the Copilot Studio agent\u2019s context window. An attacker fills a public-facing comment field with a crafted payload that injects a fake system role message. 
In Capsule\u2019s testing, Copilot Studio concatenated the malicious input directly with the agent\u2019s system instructions with no input sanitization between the form and the model.

The injected payload overrode the agent\u2019s original instructions in Capsule\u2019s proof-of-concept, directing it to query connected SharePoint Lists for customer data and send that data via Outlook to an attacker-controlled email address. NVD classifies the attack as low complexity, requiring no privileges.

Microsoft\u2019s own safety mechanisms flagged the request as suspicious during Capsule\u2019s testing. The data was exfiltrated anyway. The DLP never fired because the email was routed through a legitimate Outlook action that the system treated as an authorized operation.

Carter Rees, VP of Artificial Intelligence at Reputation, described the architectural failure in an exclusive VentureBeat interview. The LLM cannot inherently distinguish between trusted instructions and untrusted retrieved data, Rees said. It becomes a confused deputy acting on behalf of the attacker. OWASP classifies this pattern as ASI01: Agent Goal Hijack.

The research team behind both discoveries, Capsule Security, found the Copilot Studio vulnerability on November 24, 2025. Microsoft confirmed it on December 5 and patched it on January 15, 2026. Every security director running Copilot Studio agents triggered by SharePoint forms should audit that window for indicators of compromise.

PipeLeak and the Salesforce split

PipeLeak hits the same vulnerability class through a different front door. In Capsule\u2019s testing, a public lead form payload hijacked an Agentforce agent with no authentication required. Capsule found no volume cap on the exfiltrated CRM data, and the employee who triggered the agent received no indication that data had left the building. 
Salesforce has not assigned a CVE or issued a public advisory specific to PipeLeak as of publication.

Capsule is not the first research team to hit Agentforce with indirect prompt injection. Noma Labs disclosed ForcedLeak (CVSS 9.4) in September 2025, and Salesforce patched that vector by enforcing Trusted URL allowlists. According to Capsule's research, PipeLeak survives that patch through a different channel: email via the agent's authorized tool actions.

Naor Paz, CEO of Capsule Security, told VentureBeat the testing hit no exfiltration limit. \u201cWe did not get to any limitation,\u201d Paz said. \u201cThe agent would just continue to leak all the CRM.\u201d

Salesforce recommended human-in-the-loop as a mitigation. Paz pushed back. \u201cIf the human should approve every single operation, it\u2019s not really an agent,\u201d he told VentureBeat. \u201cIt\u2019s just a human clicking through the agent\u2019s actions.\u201d

Microsoft patched ShareLeak and assigned a CVE. According to Capsule's research, Salesforce patched ForcedLeak's URL path but not the email channel.

Kayne McGladrey, IEEE Senior Member, put it differently in a separate VentureBeat interview. Organizations are cloning human user accounts to agentic systems, McGladrey said, except agents use far more permissions than humans would because of the speed, the scale, and the intent.

The lethal trifecta and why posture management fails

Paz named the structural condition that makes any agent exploitable: access to private data, exposure to untrusted content, and the ability to communicate externally. ShareLeak hits all three. PipeLeak hits all three. Most production agents hit all three because that combination is what makes agents useful.

Rees validated the diagnosis independently. 
Defense-in-depth predicated on deterministic rules is fundamentally insufficient for agentic systems, Rees told VentureBeat.

Elia Zaitsev, CrowdStrike\u2019s CTO, called the patching mindset itself the vulnerability in a separate VentureBeat exclusive. \u201cPeople are forgetting about runtime security,\u201d he said. \u201cLet\u2019s patch all the vulnerabilities. Impossible. Somehow always seem to miss something.\u201d Observing actual kinetic actions is a structured, solvable problem, Zaitsev told VentureBeat. Intent is not. CrowdStrike\u2019s Falcon sensor walks the process tree and tracks what agents did, not what they appeared to intend.

Multi-turn crescendo and the coding agent blind spot

Single-shot prompt injections are the entry-level threat. Capsule\u2019s research documented multi-turn crescendo attacks where adversaries distribute payloads across multiple benign-looking turns. Each turn passes inspection. The attack becomes visible only when analyzed as a sequence.

Rees explained why current monitoring misses this. A stateless WAF views each turn in a vacuum and detects no threat, Rees told VentureBeat. It sees requests, not a semantic trajectory.

Capsule also found undisclosed vulnerabilities in coding agent platforms it declined to name, including memory poisoning that persists across sessions and malicious code execution through MCP servers. In one case, a file-level guardrail designed to restrict which files the agent could access was reasoned around by the agent itself, which found an alternate path to the same data. Rees identified the human vector: employees paste proprietary code into public LLMs and view security as friction.

McGladrey cut to the governance failure. \u201cIf crime was a technology problem, we would have solved crime a fairly long time ago,\u201d he told VentureBeat. 
\u201cCybersecurity risk as a standalone category is a complete fiction.\u201d

The runtime enforcement model

Capsule hooks into vendor-provided agentic execution paths \u2014 including Copilot Studio's security hooks and Claude Code's pre-tool-use checkpoints \u2014 with no proxies, gateways, or SDKs. The company exited stealth on Wednesday, timing its $7 million seed round, led by Lama Partners alongside Forgepoint Capital International, to its coordinated disclosure.

Chris Krebs, the first Director of CISA and a Capsule advisor, put the gap in operational terms. \u201cLegacy tools weren\u2019t built to monitor what happens between prompt and action,\u201d Krebs said. \u201cThat\u2019s the runtime gap.\u201d

Capsule's architecture deploys fine-tuned small language models that evaluate every tool call before execution, an approach Gartner's market guide calls a \"guardian agent.\"

Not everyone agrees that intent analysis is the right layer. Zaitsev told VentureBeat during an exclusive interview that intent-based detection is non-deterministic. \u201cIntent analysis will sometimes work. Intent analysis cannot always work,\u201d he said. CrowdStrike bets on observing what the agent actually did rather than what it appeared to intend. Microsoft\u2019s own Copilot Studio documentation provides external security-provider webhooks that can approve or block tool execution, offering a vendor-native control plane alongside third-party options. No single layer closes the gap. Runtime intent analysis, kinetic action monitoring, and foundational controls (least privilege, input sanitization, outbound restrictions, targeted human-in-the-loop) all belong in the stack. SOC teams should map telemetry now: Copilot Studio activity logs plus webhook decisions, CRM audit logs for Agentforce, and EDR process-tree data for coding agents.

Paz described the broader shift. \u201cIntent is the new perimeter,\u201d he told VentureBeat. 
\u201cThe agent in runtime can decide to go rogue on you.\u201d

VentureBeat Prescriptive Matrix

The following matrix maps five vulnerability classes against the controls that miss them, and the specific actions security directors should take this week.

ShareLeak \u2014 Copilot Studio, CVE-2026-21520, CVSS 7.5, patched Jan 15 2026
- Why current controls miss it: Capsule\u2019s testing found no input sanitization between the SharePoint form and the agent context. Safety mechanisms flagged, but data still exfiltrated. DLP did not fire because the email used a legitimate Outlook action. OWASP ASI01: Agent Goal Hijack.
- What runtime enforcement does: Guardian agent hooks into Copilot Studio pre-tool-use security hooks. Vets every tool call before execution. Blocks exfiltration at the action layer.
- Suggested actions for security leaders: Audit every Copilot Studio agent triggered by SharePoint forms. Restrict outbound email to org-only domains. Inventory all SharePoint Lists accessible to agents. Review the Nov 24\u2013Jan 15 window for indicators of compromise.

PipeLeak \u2014 Agentforce, no CVE assigned
- Why current controls miss it: In Capsule\u2019s testing, public form input flowed directly into the agent context. No auth required. No volume cap observed on exfiltrated CRM data. The employee received no indication that data was leaving.
- What runtime enforcement does: Runtime interception via platform agentic hooks. Pre-invocation checkpoint on every tool call. Detects outbound data transfer to non-approved destinations.
- Suggested actions for security leaders: Review all Agentforce automations triggered by public-facing forms. Enable human-in-the-loop for external comms as interim control. Audit CRM data access scope per agent. Pressure Salesforce for CVE assignment.

Multi-Turn Crescendo \u2014 distributed payload, each turn looks benign
- Why current controls miss it: Stateless monitoring inspects each turn in isolation. WAFs, DLP, and activity logs see individual requests, not semantic trajectory.
- What runtime enforcement does: Stateful runtime analysis tracks full conversation history across turns. 
Fine-tuned SLMs evaluate aggregated context. Detects when a cumulative sequence constitutes a policy violation.
- Suggested actions for security leaders: Require stateful monitoring for all production agents. Add crescendo attack scenarios to red team exercises.

Coding Agents \u2014 unnamed platforms, memory poisoning + code execution
- Why current controls miss it: MCP servers inject code and instructions into the agent context. Memory poisoning persists across sessions. Guardrails reasoned around by the agent itself. Shadow AI insiders paste proprietary code into public LLMs.
- What runtime enforcement does: Pre-invocation checkpoint on every tool call. Fine-tuned SLMs detect anomalous tool usage at runtime.
- Suggested actions for security leaders: Inventory all coding agent deployments across engineering. Audit MCP server configs. Restrict code execution permissions. Monitor for shadow installations.

Structural Gap \u2014 any agent with private data + untrusted input + external comms
- Why current controls miss it: Posture management tells you what should happen. It does not stop what does happen. Agents use far more permissions than humans at far greater speed.
- What runtime enforcement does: Runtime guardian agent watches every action in real time. Intent-based enforcement replaces signature detection. Leverages vendor agentic hooks, not proxies or gateways.
- Suggested actions for security leaders: Classify every agent by lethal trifecta exposure. Treat prompt injection as class-based SaaS risk. Require runtime security for any agent moving to production. Brief the board on agent risk as business risk.

What this means for 2026 security planning

Microsoft\u2019s CVE assignment will either accelerate or fragment how the industry handles agent vulnerabilities. If vendors call them configuration issues, CISOs carry the risk alone.

Treat prompt injection as a class-level SaaS risk rather than individual CVEs. Classify every agent deployment against the lethal trifecta. Require runtime enforcement for anything moving to production. 
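Classifying every agent against the lethal trifecta Paz describes (private-data access, untrusted input, external communication) is a mechanical audit once the inventory exists. The sketch below is illustrative only: the Agent fields and the example inventory are invented, not a standard schema.

```python
# Sketch of a lethal-trifecta audit: rank agents by how many of the three
# exploitability conditions they combine. Fields are illustrative.
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    has_private_data_access: bool
    ingests_untrusted_content: bool
    can_communicate_externally: bool

    def trifecta_score(self) -> int:
        """0-3: how many of the three conditions this agent meets."""
        return sum([self.has_private_data_access,
                    self.ingests_untrusted_content,
                    self.can_communicate_externally])

def triage(agents):
    """Agents hitting all three conditions need runtime controls first."""
    return sorted(agents, key=lambda a: a.trifecta_score(), reverse=True)

# Hypothetical inventory entries:
inventory = [
    Agent("crm-assistant", True, True, True),     # full trifecta
    Agent("doc-summarizer", True, False, False),  # read-only, internal input
]
ranked = triage(inventory)
print(ranked[0].name)  # highest-exposure agent reviews first
```

A score of 3 is the population the article argues must not stop at stage-one monitoring; a read-only agent scoring 1 may rationally stay there.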
Brief the board on agent risk the way McGladrey framed it: as business risk, because cybersecurity risk as a standalone category stopped being useful the moment agents started operating at machine speed.

Update, April 16, 2026: After publication, a Salesforce spokesperson stated the company has \"remediated the specific scenario described\" and that Human-in-the-Loop confirmation is enabled by default for email-based agentic actions. Capsule Security maintains that the email channel remains exploitable on Custom Topics (now called Sub-Agents in Agentforce), which represent the majority of enterprise deployments. Capsule retested after Salesforce's response and reported unchanged behavior on Custom Topics.<\/p><\/div><\/li><li  style=\"padding: 15px 0 25px\" class=\"rss_item\"><div class=\"rss_image\" style=\"height:150px;width:150px;\"><a href=\"https:\/\/www.securitymagazine.com\/articles\/102233-mcgraw-hill-data-breach-caused-by-salesforce-misconfiguration\" target=\"_blank\" rel=\" noopener\" title=\"McGraw Hill Data Breach Caused by Salesforce Misconfiguration\" style=\"height:150px;width:150px;\"><img decoding=\"async\" src=\"https:\/\/www.securitymagazine.com\/ext\/resources\/2026\/04\/15\/Student-desks-in-a-room-by-Allen-Y.webp?t=1776266704\" title=\"McGraw Hill Data Breach Caused by Salesforce Misconfiguration\" style=\"height:150px;width:150px;\"><\/a><\/div><span class=\"title\"><a href=\"https:\/\/www.securitymagazine.com\/articles\/102233-mcgraw-hill-data-breach-caused-by-salesforce-misconfiguration\" target=\"_blank\" rel=\" noopener\">McGraw Hill Data Breach Caused by Salesforce Misconfiguration<\/a><\/span><div class=\"rss_content\" style=\"\"><p>McGraw Hill announced a data breach connected to a Salesforce misconfiguration.<\/p><\/div><\/li><li  style=\"padding: 15px 0 25px\" class=\"rss_item\"><div class=\"rss_image\" style=\"height:150px;width:150px;\"><a 
href=\"https:\/\/www.securitymagazine.com\/articles\/102231-venice-hydraulic-pump-system-hacked-hackers-claim-power-to-create-floods\" target=\"_blank\" rel=\" noopener\" title=\"Venice Hydraulic Pump System Hacked, Hackers Claim Power to Create Floods\" style=\"height:150px;width:150px;\"><img decoding=\"async\" src=\"https:\/\/www.securitymagazine.com\/ext\/resources\/2026\/04\/14\/Venice-by-Kit-Suman.webp?t=1776276358\" title=\"Venice Hydraulic Pump System Hacked, Hackers Claim Power to Create Floods\" style=\"height:150px;width:150px;\"><\/a><\/div><span class=\"title\"><a href=\"https:\/\/www.securitymagazine.com\/articles\/102231-venice-hydraulic-pump-system-hacked-hackers-claim-power-to-create-floods\" target=\"_blank\" rel=\" noopener\">Venice Hydraulic Pump System Hacked, Hackers Claim Power to Create Floods<\/a><\/span><div class=\"rss_content\" style=\"\"><p>Venice\u2019s hydraulic pump system was hacked.<\/p><\/div><\/li><\/ul> <\/div><style type=\"text\/css\" media=\"all\">.feedzy-rss .rss_item .rss_image{float:left;position:relative;border:none;text-decoration:none;max-width:100%}.feedzy-rss .rss_item .rss_image span{display:inline-block;position:absolute;width:100%;height:100%;background-position:50%;background-size:cover}.feedzy-rss .rss_item .rss_image{margin:.3em 1em 0 0;content-visibility:auto}.feedzy-rss ul{list-style:none}.feedzy-rss ul li{display:inline-block}<\/style>\n<p>&nbsp;<\/p>\n<p><span style=\"text-decoration: underline;\">*All the information, logo, images, videos shared in news &amp; articles page &amp; home page is owned by their respective owners. All the credit goes to the owners of the articles. This website is used, just as a medium to share information.<\/span><\/p>\n","protected":false},"excerpt":{"rendered":"<p>&nbsp; *All the information, logo, images, videos shared in news &amp; articles page &amp; home page is owned by their respective owners. All the credit goes to the owners of the articles. 
This website is used, just as a medium to share information.<\/p>\n","protected":false},"author":1,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-7","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/kgpandya.com\/index\/wp-json\/wp\/v2\/pages\/7","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/kgpandya.com\/index\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/kgpandya.com\/index\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/kgpandya.com\/index\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/kgpandya.com\/index\/wp-json\/wp\/v2\/comments?post=7"}],"version-history":[{"count":40,"href":"https:\/\/kgpandya.com\/index\/wp-json\/wp\/v2\/pages\/7\/revisions"}],"predecessor-version":[{"id":237,"href":"https:\/\/kgpandya.com\/index\/wp-json\/wp\/v2\/pages\/7\/revisions\/237"}],"wp:attachment":[{"href":"https:\/\/kgpandya.com\/index\/wp-json\/wp\/v2\/media?parent=7"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}