Cisco Systems Inc. has introduced Project CodeGuard, an open-source framework designed to secure software developed with artificial intelligence coding agents. The system integrates security controls across the software lifecycle to harden code from its initial creation.
The framework is unified and model-agnostic, allowing it to work with a range of AI coding tools. It aims to deliver “secure by default” code by weaving guardrails into multiple stages of development. Cisco stressed that Project CodeGuard is not intended as a replacement for engineering judgment but as an “added defense-in-depth layer” that complements human oversight rather than supplanting it.
At launch, CodeGuard includes a core rule set derived from established industry guidance, including the Open Worldwide Application Security Project (OWASP) and the Common Weakness Enumeration (CWE). This initial set targets recurring software flaws: hardcoded secrets, missing or inadequate input validation, outdated cryptographic methods, and dependencies on software components that have reached end of life.
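To make these flaw classes concrete, the sketch below contrasts two of them, a hardcoded credential (CWE-798) and missing input validation (CWE-20), with the safer patterns a rule set of this kind would steer an agent toward. It is an illustration of the vulnerability categories, not code taken from CodeGuard’s rules.

```python
import os
import re

# Flagged pattern (CWE-798): credential embedded directly in source.
# API_KEY = "sk-live-example"   # a rule would block or flag this line

# Safer pattern: externalize the secret to the environment.
API_KEY = os.environ["API_KEY"]

# Safer pattern (CWE-20): validate input against an explicit allow-list
# instead of trusting it as received.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_-]{3,32}$")

def validate_username(raw: str) -> str:
    """Reject any username that does not match the allow-list pattern."""
    if not USERNAME_RE.fullmatch(raw):
        raise ValueError("invalid username")
    return raw
```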
These rules are applied at distinct phases of development to provide continuous security enforcement. During the planning and specification phase, they steer AI agents toward safer coding patterns. While code is actively being generated, the framework can block the creation of insecure snippets in real time. Following generation, the rules are used for comprehensive review and validation. Project CodeGuard also provides a community-driven ruleset, translators for popular AI coding agents, and validators to assist teams in automating these security measures.
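As a rough sketch of what the post-generation validation step can look like, the checker below scans generated source against a single rule, hardcoded secrets. The rule shape, names, and regular expression are hypothetical stand-ins, not CodeGuard’s actual rule format or API.

```python
import re
from dataclasses import dataclass

@dataclass
class Finding:
    rule_id: str
    line: int
    message: str

# Hypothetical rule: a simple pattern for credential-like assignments.
SECRET_RE = re.compile(r"""(?i)\b(password|secret|api_key|token)\s*=\s*["'][^"']+["']""")

def check_hardcoded_secrets(source: str) -> list[Finding]:
    """Post-generation pass: flag lines that appear to embed credentials."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if SECRET_RE.search(line):
            findings.append(Finding("hardcoded-secret", lineno,
                                    "credential appears to be hardcoded"))
    return findings

if __name__ == "__main__":
    sample = 'api_key = "abc123"\nname = input("user: ")\n'
    for f in check_hardcoded_secrets(sample):
        print(f"{f.rule_id}: line {f.line}: {f.message}")
```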
Cisco emphasized that this multi-stage methodology is critical because AI assistants are increasingly involved across the entire software lifecycle, from drafting initial designs and scaffolding services to proposing code fixes. A single rule, such as one governing input validation or secret management, is designed to influence each step. This includes suggesting safer alternatives during generation, flagging risky constructs as they appear, and finally verifying that the completed code properly externalizes secrets and sanitizes all inputs.
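One way to picture a single rule acting at each stage is a hook around the generation loop: it steers the prompt up front, scans output as it streams, and re-checks the finished code before it is accepted. The function names and rule pattern below are a hypothetical sketch of that flow, not CodeGuard’s interface.

```python
import re

# One rule, expressed once, reused at every stage (hypothetical pattern).
SECRET_RE = re.compile(r"""(?i)\b(password|secret|api_key|token)\s*=\s*["']""")
RULE_HINT = "Read credentials from the environment; never hardcode them."

def plan_prompt(task: str) -> str:
    """Planning stage: steer the agent toward the safe pattern up front."""
    return f"{task}\n\nSecurity requirement: {RULE_HINT}"

def on_chunk(buffer: str) -> str | None:
    """Generation stage: flag a risky construct the moment it appears."""
    match = SECRET_RE.search(buffer)
    if match:
        return f"blocked: possible hardcoded secret ({match.group(1)})"
    return None

def review(source: str) -> bool:
    """Review stage: pass only if no hardcoded-credential pattern remains."""
    return SECRET_RE.search(source) is None
```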
Company representatives stated that CodeGuard does not guarantee perfectly secure output and that human peer review and standard security controls remain necessary. The framework’s primary objective is to reduce the probability that “low-hanging” vulnerabilities are introduced into production environments as AI accelerates software delivery schedules. The company’s roadmap for the project includes broader language coverage, adapters for additional AI coding platforms, automated rule validation, and feedback loops to refine rules based on community usage.
To support this evolution, Cisco is inviting security engineers, developers, and AI researchers to contribute to the project. Through its public repository, the company is soliciting new rules, additional translators, and telemetry-informed improvements to the existing rule set.