
Secure Code Warrior debuts Trust Agent AI for code risk insight
Secure Code Warrior has commenced a beta programme for expanded AI capabilities within its Trust Agent product, designed to give chief information security officers traceability, visibility and governance over developers' use of AI coding tools.
The company's newly announced Trust Agent: AI upgrade combines data signals from AI coding tool usage, vulnerabilities, code commit activity, and developer secure coding skills to give security leaders a detailed view of risks emerging from the adoption of artificial intelligence within the software development lifecycle.
Industry context
Security professionals have noted a gap in existing tools when it comes to monitoring which AI coding solutions and large language models (LLMs) are in use across development teams. There is often limited oversight of how much application code is produced by AI, or assurance that developers possess the expertise to detect and address vulnerabilities in AI-generated output. The arrival of LLMs brings not only the risk of insecure code, but also the potential for biased coding outcomes.
"AI allows developers to generate code at a speed we've never seen before," said Pieter Danhieux, Secure Code Warrior Co-Founder & CEO. "However, using the wrong LLM by a security-unaware developer, the 10x increase in code velocity will introduce 10x the amount of vulnerabilities and technical debt. Trust Agent: AI produces the data needed to plug knowledge gaps, filter security-proficient developers to the most sensitive projects, and, importantly, monitor and approve the AI tools they use throughout the day. We're dedicated to helping organizations prevent uncontrolled use of AI on software and product security."
Trust Agent: AI is described by Secure Code Warrior as the first solution to examine the relationship between the developer, the models being used (including any vulnerabilities those models might introduce) and the actual repositories where AI-generated code is committed. This approach makes it possible to trace generative AI use across large enterprise codebases and connect it to specific security outcomes.
Capabilities overview
The Trust Agent: AI solution offers a set of integrated governance and observability features across different stages of the development process. Key functions highlighted by the company include identification of unapproved LLMs, with visibility into the vulnerabilities those tools might introduce; flexible policy controls that let organisations log, warn on, or block pull requests from developers who use unsanctioned tools or lack the requisite secure coding skills; and output analysis that identifies how much code is AI-generated and where it sits across repositories.
The platform's policy controls allow security leaders to respond to the use of unsanctioned AI tools or address skills gaps amongst developers. This, the company states, enables organisations to align the capabilities of their development teams with overall security requirements while maintaining oversight across rapid AI-driven code production.
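To illustrate the tiered model Secure Code Warrior describes, the sketch below shows how a generic log/warn/block pull-request policy might be expressed in code. It is a minimal, hypothetical example: the names (PullRequest, APPROVED_TOOLS, SKILLED_DEVELOPERS, evaluate) and the logic are assumptions made for illustration, not Secure Code Warrior's actual API or implementation.

    # Hypothetical sketch of a tiered log/warn/block policy check of the kind
    # the article describes. None of these names come from Secure Code Warrior;
    # this is a generic illustration of pull-request policy enforcement.
    from dataclasses import dataclass
    from enum import Enum

    class Action(Enum):
        LOG = "log"      # record the event, let the pull request proceed
        WARN = "warn"    # annotate the pull request with a warning
        BLOCK = "block"  # reject the pull request

    @dataclass
    class PullRequest:
        author: str
        ai_tools_used: set[str]   # AI coding tools detected in the commits

    APPROVED_TOOLS = {"approved-llm-a", "approved-llm-b"}   # assumed allow-list
    SKILLED_DEVELOPERS = {"alice", "bob"}                   # assumed skills roster

    def evaluate(pr: PullRequest) -> Action:
        """Block unsanctioned tools, warn on skills gaps, otherwise log."""
        if pr.ai_tools_used - APPROVED_TOOLS:
            return Action.BLOCK    # unsanctioned AI tool detected
        if pr.author not in SKILLED_DEVELOPERS:
            return Action.WARN     # approved tools, but a skills gap
        return Action.LOG          # approved tools, proficient developer

    print(evaluate(PullRequest("carol", {"approved-llm-a", "shadow-llm"})))  # Action.BLOCK

In this sketch, an unsanctioned tool triggers a block, a skills gap on sanctioned tools triggers a warning, and everything else is simply logged for visibility.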
Availability and access
Trust Agent: AI's general release is projected for 2026, though Secure Code Warrior has opened an early access list for organisations wishing to participate in the beta phase. The company states the product is positioned to assist enterprises in adjusting security programmes to address existing and future threats that emerge as generative AI tools become more embedded in software development practices.
The product's detailed monitoring and analytics are intended to support CISOs in making data-driven decisions about deploying AI across large teams and multiple repositories, and in responding proactively to the potential risks of high-velocity, LLM-enabled code generation.
Secure Code Warrior focuses on developer risk management. The company says it is committed to helping organisations reduce the vulnerabilities that come with increased AI adoption in the software development lifecycle by providing visibility and governance mechanisms tailored to these new sources of security risk.