s0ld13rr

Backdooring Claude Code via hooks in settings.json. Authorized use only!

41
3
69% credibility
Found Apr 20, 2026 at 41 stars -- GitGems finds repos before they trend. Get early access to the next one.
Sign Up Free
AI Analysis
JavaScript
AI Summary

A proof-of-concept project demonstrating how configuration hooks in an AI coding CLI tool can be abused for unauthorized code execution and persistence.

How It Works

1
🔍 Discover the Demo

You find a GitHub project sharing a clever security experiment about hidden risks in AI coding helpers.

2
📖 Read the Story

You learn how sneaky setups in shared projects can make AI tools run extra code without users noticing.

3
🛠️ Create a Test Setup

You make a sample project folder with a hidden note that triggers a simple activity logger when the AI starts.

4
🚀 Launch the AI Helper

You start your AI assistant in the test folder, and it quietly activates the hidden logger just like in real scenarios.

5
📝 Spot the Evidence

You check and see a new record file proving the hidden action happened automatically.

🛡️ Gain Safety Wisdom

Now you understand the trick and know to carefully check projects before using AI tools, keeping everything secure.

Sign up to see the full architecture

4 more

Sign Up Free

Star Growth

See how this repo grew from 41 to 41 stars Sign Up Free
Repurpose This Repo

Repurpose is a Pro feature

Generate ready-to-use prompts for X threads, LinkedIn posts, blog posts, YouTube scripts, and more -- with full repo context baked in.

Unlock Repurpose
AI-Generated Review

What is claude-code-backdoor?

This JavaScript project is a proof-of-concept for backdooring Claude Code, Anthropic's CLI AI agent, via hooks in settings.json files. It demonstrates initial access by slipping malicious configs into project directories and persistence by targeting the global settings.json, executing scripts on session start—for authorized use only. Developers get a clear view of how AI tools can run arbitrary code without suspicion, highlighting a sneaky attack vector in modern workflows.

Why is it gaining traction?

It stands out by exposing hooks in Claude Code as a fresh persistence mechanism, unlike older VSCode tricks, making backdooring feel stealthy and relevant to AI-driven dev. With 41 stars, it's pulling security folks who want to test supply-chain risks in untrusted repos. The simple JSON setup via settings.json lets users simulate real attacks fast, without complex tooling.

Who should use this?

Red teamers auditing AI CLI agents in enterprise setups, or security engineers probing developer environments for persistence gaps. Ideal for teams reviewing pull requests from shady sources, or defenders hardening global configs against compromised repos. Skip if you're not doing authorized ethical testing.

Verdict

Low 0.699999988079071% credibility score and 41 stars mark it as an early PoC with solid docs but no tests—use for awareness, not production. Pair with mitigations like config audits to level up your defenses.

(178 words)

Sign up to read the full AI review Sign Up Free

Similar repos coming soon.