RPDI
Back to Blog

OpenAI Just Bought the Company That Tests AI for Vulnerabilities — What the AI Security Consolidation Wave Means for Every Business Running AI Tools

TL;DR

Three AI security acquisitions in one week. OpenAI bought Promptfoo — the open-source standard for testing AI vulnerabilities. Fasoo and Konsilix merged into Symbologic for enterprise AI data governance. AppViewX acquired Eos for AI agent identity control. The pattern: the companies building AI are buying the companies that audit AI. If your governance plan is 'we trust the vendor,' that vendor just acquired the auditor.

The AI Security Stack Is Being Locked In — Right Now

Between 2016 and 2019, the cybersecurity market was fragmented. Dozens of startups. Point solutions everywhere. Then Palo Alto, CrowdStrike, and Microsoft went on acquisition sprees and consolidated the entire stack into three platform bets.

Companies that had already built multi-vendor security architectures kept their flexibility. Companies that waited got locked into whatever their primary vendor decided was 'good enough.'

The AI security market is in that exact window right now. This week, three deals accelerated the timeline.

OpenAI acquiring Promptfoo is the equivalent of CrowdStrike buying the pen-testing framework everyone uses to validate their defenses. The tool that was neutral — that anyone could use to evaluate any model — is now owned by the model provider. The neutral evaluator is no longer neutral.

Three Deals. Three Layers of the Trust Stack.

Each acquisition targets a different layer of AI governance. Here is what changed:

Analysis

OpenAI + Promptfoo: The Evaluation Layer

Promptfoo is the most widely-used open-source framework for LLM testing — adversarial prompts, hallucination benchmarks, prompt injection detection, model comparison. OpenAI will integrate it into their Frontier safety platform. They now control both the model AND the primary tool used to test the model. Independent AI evaluation just lost its most popular instrument.

Analysis

Fasoo + Konsilix = Symbologic: The Governance Layer

Fasoo (data security and classification) merged with Konsilix (AI governance automation) to create Symbologic. It classifies sensitive data before AI processes it, enforces access policies, monitors for unauthorized AI data usage, and generates audit trails. Think of it as AI DLP — preventing your AI tools from touching data they should not touch.

Analysis

AppViewX + Eos: The Identity Layer

AI agents are proliferating — coding assistants that commit code, chatbots that access CRM data, pipelines that process PII. New problem: who IS this agent? What credentials does it use? What happens when its API key gets compromised? Eos provides AI-native identity management. AppViewX acquired it to extend machine identity into the AI agent ecosystem.

Analysis

ISC2 Certification Update: The Talent Signal

ISC2 — the organization behind CISSP — updated its entire certification portfolio to formally include AI security concepts. Not a new cert. The existing certifications now require AI security competency. The talent market has declared: you cannot be a security professional in 2026 without understanding AI attack surfaces.

Why the Promptfoo Deal Matters Most

The AI industry has a structural trust problem. This deal made it worse.

Metric67KGITHUB STARS ON PROMPTFOO BEFORE THE ACQUISITION. THE MOST POPULAR INDEPENDENT AI TESTING TOOL IS NOW OWNED BY THE COMPANY WHOSE MODELS IT TESTS.

Before the acquisition, enterprises used Promptfoo to compare OpenAI against Anthropic, Google, and open-source alternatives. Security teams used it to red-team AI deployments regardless of vendor. Its credibility came from independence — no model provider paid for favorable benchmarks. Post-acquisition, that independence is gone. OpenAI has every incentive to optimize evaluations in ways that favor their models. The community will fork. But enterprise adoption follows resources, not principles. If you relied on Promptfoo for independent AI evaluation, you now need a framework that accounts for evaluator bias. The tool that told you which model was best is now owned by one of the contestants.

The Four AI Attack Surfaces CISOs Need to Inventory Now

The RH-ISAC report this week confirmed what every security team already felt: CISOs expect bigger budgets. Their top concern is AI-related risk. Four vectors are driving that concern:

Step 01

Shadow AI Usage

68% awareness rate. Still the #1 AI security concern. The attack surface is not the model — it is the employee copying customer data into an unauthorized browser tab. If your DLP does not monitor clipboard activity to AI domains, you have a blind spot the size of your entire AI attack surface.

Step 02

AI Agent Identity

Coding assistants committing code. Chatbots accessing CRM data. Automated pipelines processing PII. Each needs its own credentials, access scope, rotation schedule, and activity logs. If your AI agents share API keys with no rotation policy, you have a non-human identity gap.

Step 03

Wireless Infrastructure Expansion

AI workloads drive massive wireless investment. More access points = more entry vectors. More edge AI devices = more endpoints. More IoT sensors feeding AI = more data in transit. The ROI from AI is real. So is the security surface area.

Step 04

Regulatory Formalization

NAIC insurance AI oversight proposals. SEC enforcement on 'AI washing.' Regulatory scrutiny of AI governance is no longer hypothetical. If you market AI capabilities, your claims must be auditable. If you use third-party AI in regulated workflows, those models must be documented.

The 5-Step AI Governance Framework (Before the Auditors Ask)

You do not need a six-figure platform or a dedicated AI security team. You need a framework that answers five questions every auditor, client, and regulator will ask within 12 months:

Step 01

Inventory Every AI Tool

Simple spreadsheet. Tool name, vendor, what data it accesses, who approved it, whether the vendor's data handling policy has been reviewed. Most companies discover 3-5x more AI tools than expected. You cannot govern what you cannot see.

Step 02

Classify Data Into Three Tiers

Tier 1: any AI tool can process it (public info, marketing content). Tier 2: only approved enterprise AI tools (internal docs, non-sensitive business data). Tier 3: never touches external AI (customer PII, financial records, trade secrets). Put it on one page. Train your team.

Step 03

Lock Down AI Agent Credentials

Every AI agent gets: a dedicated API key (not shared with humans), a 90-day rotation schedule, access scoped to minimum required permissions, and logging of every action. Treat AI agents like contractor accounts — limited access, monitored activity, regular rotation.

Step 04

Run Quarterly Risk Reviews

A 2-hour meeting. Not a 6-month project. Agenda: new unauthorized tools? Vendor policy changes? Agent credentials current? AI-related security incidents? Vendor acquisitions affecting your stack? Check five items every 90 days.

Step 05

Document It for Clients

One-page AI governance summary: what AI tools you use, how customer data is protected from AI processing, what evaluation framework you use, who is accountable. When a client asks 'how do you govern your AI?' — your answer is a document, not a shrug.

Build Governance Before the Platform Lock-In

This week established the pattern. The evaluation layer, the governance layer, and the identity layer are being absorbed into the same vendor ecosystems that sell the AI.

Build vendor-independent AI governance now. Inventory your tools. Classify your data. Lock down agent credentials. Establish an evaluation framework that does not depend on any single vendor. The companies that built vendor-neutral security before cybersecurity consolidated in 2018 maintained flexibility. The companies that waited got locked in. The AI governance window is open. It will not be in 18 months.

🔧 Need an AI governance framework before your next client audit?

We inventory every AI tool in your organization, classify your data tiers, build your acceptable use policy, and deliver a client-ready governance document in a 2-week sprint. Fixed-price. No hourly billing. Book your free AI governance audit →