CiscoDevNet / foundry-security-spec

An open specification for agentic AI security evaluation and testing, from Cisco.

AI Summary

An open specification from Cisco outlining how to build AI agent systems for thorough software security evaluations.

How It Works

1. 🔍 Discover Foundry

You hear about this helpful blueprint from Cisco for creating AI teams that check software for security issues.

2. 📖 Read the guiding principles

You go through the short list of unbreakable rules and the detailed plan to understand how it all works.

3. 💡 Tailor the blueprint to you

You answer simple questions about your setup, like what tools you use, so the plan fits your world perfectly.

4. 🛠️ Create your security team

You follow the customized plan to build AI helpers that scan code, investigate problems, and report findings.

5. 🚀 Put it into action

You launch your new system on software you want to check, watching it work step by step.

Secure software insights

Your AI team delivers trustworthy reports on vulnerabilities, helping you build safer software every time.
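The scan-investigate-report flow described in the steps above can be sketched as a minimal agent pipeline. Everything here is a hypothetical illustration, not code from the spec: the agent names and the `Finding` shape are assumptions made for this sketch.

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    """A hypothetical security finding produced by the agent team."""
    title: str
    severity: str
    evidence: list = field(default_factory=list)

class ScannerAgent:
    def scan(self, codebase):
        # Placeholder: a real agent would drive an LLM over the code.
        return [Finding("hardcoded secret", "high")]

class InvestigatorAgent:
    def investigate(self, finding, codebase):
        # Placeholder: a real agent would gather supporting evidence.
        finding.evidence.append(f"traced in {codebase}")
        return finding

class ReporterAgent:
    def report(self, findings):
        # Summarize confirmed findings for a human reader.
        return [f"[{f.severity.upper()}] {f.title}" for f in findings]

def run_pipeline(codebase):
    findings = ScannerAgent().scan(codebase)
    findings = [InvestigatorAgent().investigate(f, codebase) for f in findings]
    return ReporterAgent().report(findings)

print(run_pipeline("my-app/"))  # prints ['[HIGH] hardcoded secret']
```

The spec's value is in defining how such roles hand work to each other; the stub logic above only shows the overall shape of that hand-off.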

AI-Generated Review

What is foundry-security-spec?

Foundry-security-spec is an open specification from Cisco for building agentic AI systems that evaluate and test software security. It distills production lessons into a blueprint with eight core agent roles, a finding lifecycle, and guardrails, letting you plug in your frontier LLM and target codebase to produce trustworthy vulnerability reports. No code here—just a Markdown spec designed for specification-driven development via tools like GitHub's spec-kit, solving the "how do we make AI agents reliable for security evals" problem without infrastructure lock-in.
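The "finding lifecycle" the spec mentions can be modeled as a small state machine. The states and transitions below are illustrative assumptions, not the spec's actual taxonomy:

```python
from enum import Enum

class FindingState(Enum):
    DETECTED = "detected"
    TRIAGED = "triaged"
    CONFIRMED = "confirmed"
    REPORTED = "reported"
    DISMISSED = "dismissed"

# Hypothetical allowed transitions; the real spec defines its own lifecycle.
TRANSITIONS = {
    FindingState.DETECTED: {FindingState.TRIAGED, FindingState.DISMISSED},
    FindingState.TRIAGED: {FindingState.CONFIRMED, FindingState.DISMISSED},
    FindingState.CONFIRMED: {FindingState.REPORTED},
    FindingState.REPORTED: set(),
    FindingState.DISMISSED: set(),
}

def advance(state, target):
    """Move a finding to a new state, rejecting illegal transitions."""
    if target not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state.name} -> {target.name}")
    return target
```

Enforcing a lifecycle like this is what makes agentic findings auditable: a vulnerability cannot appear in a report without having passed through triage and confirmation first.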

Why is it gaining traction?

It stands out by offering a battle-tested architecture that's neutral to your stack—use any LLM provider, issue tracker, or datastore—while pairing with CodeGuard rules for a self-improving detection flywheel that turns findings into prevention guardrails. Developers hook into its clarify-plan-implement workflow with spec-kit, resolving open questions like severity taxonomies into custom specs fast. Early buzz comes from its focus on long-term scalability over quick scans, echoing patterns in OpenAPI or OpenTelemetry specs but for agentic security testing.
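The "self-improving detection flywheel" idea, turning confirmed findings into prevention rules, can be sketched like this. The rule dictionary shown is a made-up stand-in, not CodeGuard's actual schema:

```python
def finding_to_rule(finding):
    """Convert a confirmed finding into a hypothetical prevention rule."""
    return {
        "id": f"rule-{finding['title'].replace(' ', '-')}",
        "pattern": finding["pattern"],
        "action": "block",
        "rationale": f"Derived from confirmed finding: {finding['title']}",
    }

def matches(rule, code):
    # Naive substring check stands in for real static analysis.
    return rule["pattern"] in code

finding = {"title": "hardcoded secret", "pattern": "AWS_SECRET_KEY ="}
rule = finding_to_rule(finding)
print(matches(rule, 'AWS_SECRET_KEY = "abc123"'))  # prints True
```

The point of the flywheel is the feedback loop: each finding the agents confirm becomes a guardrail that stops the same class of issue from reaching the codebase again.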

Who should use this?

Security teams at enterprises with frontier LLMs auditing internal codebases, needing systematic agentic evaluation beyond basic scanners. Platform builders creating custom vuln-hunting pipelines who want a proven shape instead of starting from scratch. AI security researchers prototyping agentic workflows grounded in real ops, not benchmarks.

Verdict

Solid seed spec (v0.1.0) with an excellent rationale and getting-started guide, but at 18 stars and 1.0% credibility it's pre-mainstream: ideal if you're building for the long haul and have spec-kit-savvy devs. Skip it if you want a turnkey tool; grab it if an agentic security spec from Cisco fits your specification-driven development workflow.


