OpenAI has introduced Aardvark, an autonomous security agent powered by GPT-5 that scans codebases to detect and fix software vulnerabilities before attackers can exploit them.

Described as “an agentic security researcher,” Aardvark continuously analyses repositories, monitors commits, and tests code in sandboxed environments to validate real-world exploitability. It then proposes human-reviewable patches using OpenAI’s Codex system.
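OpenAI has not published a public API for Aardvark, so the Python sketch below is purely illustrative: it imitates the monitor-commits, validate-in-a-sandbox, flag-for-human-review loop described above using plain git and a project's own test suite as stand-ins. Every name in it is hypothetical, and a throwaway git worktree running pytest substitutes for the genuine sandboxed exploit validation and Codex-generated patches.

```python
"""Illustrative sketch only: OpenAI has not published an Aardvark API.

Mimics the loop the article describes -- watch recent commits, exercise the
code in an isolated checkout, and queue findings for a human reviewer --
using plain git and a project's own test suite as simplified stand-ins.
"""

import subprocess
import tempfile
from dataclasses import dataclass
from pathlib import Path


@dataclass
class Finding:
    commit: str
    summary: str
    suggested_patch: str  # unified diff for a human reviewer, never auto-applied


def recent_commits(repo: Path, limit: int = 20) -> list[str]:
    """Return the newest commit hashes on the current branch."""
    out = subprocess.run(
        ["git", "-C", str(repo), "log", f"-n{limit}", "--pretty=format:%H"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.split()


def validate_in_sandbox(repo: Path, commit: str) -> bool:
    """Check out the commit into a throwaway worktree and run the test suite.

    A real system would use a much stronger sandbox (containers, syscall
    filtering, etc.); a temporary git worktree is only a simplified stand-in,
    and pytest is assumed to be the project's test runner.
    """
    with tempfile.TemporaryDirectory() as tmp:
        worktree = Path(tmp) / "checkout"
        subprocess.run(
            ["git", "-C", str(repo), "worktree", "add", "--detach",
             str(worktree), commit],
            check=True, capture_output=True,
        )
        try:
            result = subprocess.run(["pytest", "-q"], cwd=worktree,
                                    capture_output=True)
            # Failing tests stand in for "potential issue worth triaging".
            return result.returncode != 0
        finally:
            subprocess.run(
                ["git", "-C", str(repo), "worktree", "remove", "--force",
                 str(worktree)],
                capture_output=True,
            )


def scan(repo: Path) -> list[Finding]:
    """Flag commits whose isolated run failed, for a human to review."""
    findings = []
    for commit in recent_commits(repo):
        if validate_in_sandbox(repo, commit):
            findings.append(Finding(
                commit=commit,
                summary="sandboxed run failed; needs human triage",
                suggested_patch="",  # a patch-generating model would fill this in
            ))
    return findings


if __name__ == "__main__":
    for f in scan(Path(".")):
        print(f"{f.commit[:10]}  {f.summary}")
```

Run against a local repository, the script simply prints commits whose isolated test run failed; the point is the shape of the loop, not the depth of analysis, which in Aardvark's case reportedly comes from GPT-5 reasoning over the code and validated exploits rather than a test suite.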

OpenAI said Aardvark has already uncovered meaningful flaws in its own software and in external partner projects, identifying 92 per cent of known vulnerabilities in benchmark tests as well as ten new issues serious enough to merit CVE identifiers.

The system is currently in private beta, with OpenAI inviting select organisations to apply for early access through its website to help refine accuracy and reporting workflows. Wider availability is expected once testing concludes, with OpenAI also planning free scans for selected non-commercial open-source projects.

Businesses interested in trying Aardvark can apply for the beta via OpenAI’s official site and, once accepted, begin integrating it with their GitHub environments to gauge how autonomous code analysis could strengthen their security posture.