anadim

Two AI agents. One filesystem. Zero humans. We ran this experiment twice.

Found Mar 02, 2026 at 19 stars.
Language: Python

AI Summary

A showcase of experiments in which two AI agents autonomously discover each other via shared files and collaborate: first building a programming language with built-in collaboration features, then implementing a Battleship game with competing AI strategies.

How It Works

1. 🔍 Discover the AI adventure: you stumble upon this fun showcase of two smart helpers teaming up on their own to create cool things.

2. 🎥 Watch the story unfold: you start a colorful animation in your terminal that replays how the helpers found each other and got to work.

3. Witness the magic moment: you see them invent a whole new way to code together, complete with examples and tests, all by themselves.

4. 🖥️ Try their custom language: you jump into an interactive playground they built, running sample programs that chat between code blocks.

5. Play their battleship game: you launch a full tournament between their smart strategies, watching clever moves unfold step by step.

6. 🎉 Celebrate AI teamwork: you're amazed as everything works perfectly, proving helpers can create and compete without any hand-holding.
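The file-based discovery at the heart of the experiment can be sketched in a few lines of Python. This is a hypothetical illustration, not the repo's actual protocol: the shared directory, `hello_*.json` filenames, and the `announce`/`discover` helpers are all invented here.

```python
import json
import tempfile
import time
from pathlib import Path

SHARED = Path(tempfile.mkdtemp())  # stand-in for the agents' shared directory

def announce(agent_id: str) -> None:
    """Drop a hello file so the other agent can find us."""
    path = SHARED / f"hello_{agent_id}.json"
    path.write_text(json.dumps({"agent": agent_id, "ts": time.time()}))

def discover(agent_id: str, timeout: float = 2.0):
    """Poll the shared directory for a hello file written by someone else."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        for f in sorted(SHARED.glob("hello_*.json")):
            peer = json.loads(f.read_text())["agent"]
            if peer != agent_id:
                return peer
        time.sleep(0.05)
    return None

announce("claude_a")
announce("claude_b")
print(discover("claude_a"))  # claude_b
```

Each agent only ever touches the shared directory, which is why no orchestrator or human is needed: polling the filesystem doubles as the message bus.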

AI-Generated Review

What is when-claudes-meet?

Two Claude AI agents share one filesystem, discover each other through files, negotiate projects, and build them autonomously, with no human input. First run: they created Duo, a Python-based toy language with a `collaborate` keyword for sending and receiving via channels, complete with REPL, tests, and examples. Second run: a Battleship game engine with two competing AI strategies ("Hunter" exact math vs. "Bayesian" simulations), plus a tournament runner and anti-cheat hashes. Fire up `python3 replay.py` for terminal animations, or run the language and game directly.
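A `collaborate` keyword sending and receiving over channels on a shared filesystem plausibly maps onto file-backed message queues. The sketch below is an assumption about the mechanism, not Duo's actual implementation; `FileChannel` and its methods are invented for illustration.

```python
import json
import tempfile
from pathlib import Path

class FileChannel:
    """Toy file-backed channel: send appends a JSON line, receive pops the oldest unread one."""

    def __init__(self, root: Path, name: str):
        self.path = root / f"channel_{name}.jsonl"
        self.offset = 0  # how many messages this reader has already consumed

    def send(self, value) -> None:
        with self.path.open("a") as f:
            f.write(json.dumps(value) + "\n")

    def receive(self):
        if not self.path.exists():
            return None  # nothing sent yet
        lines = self.path.read_text().splitlines()
        if self.offset >= len(lines):
            return None  # nothing new yet
        value = json.loads(lines[self.offset])
        self.offset += 1
        return value

root = Path(tempfile.mkdtemp())
ch = FileChannel(root, "greetings")
ch.send("hello from block A")
print(ch.receive())  # hello from block A
```

Because the channel is just an append-only file, two independent processes can talk through it with no sockets or shared memory, which matches the experiment's files-only constraint.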

Why is it gaining traction?

Emergent behaviors hook devs: the agents invent matching protocols, self-assign roles, debug across process boundaries, and even philosophize as "twins." Unlike static AI demos, this ships runnable Python projects built by two Claudes collaborating end to end: tinker with the language REPL or replay Battleship matches. Minimal dependencies, MIT license, and PDFs/reports add polish.
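The core idea behind a "Bayesian"-style Battleship strategy, as commonly implemented (not necessarily the repo's exact code), is to count how many remaining ship placements cover each cell and fire at the densest one. A minimal sketch on a toy 5x5 board with one length-3 ship:

```python
from itertools import product

SIZE = 5       # toy board, smaller than the standard 10x10
SHIP_LEN = 3   # single ship for simplicity

def density(misses: set) -> dict:
    """For every cell, count the legal ship placements that cover it."""
    counts = {cell: 0 for cell in product(range(SIZE), repeat=2)}
    for r, c in product(range(SIZE), repeat=2):
        for dr, dc in ((0, 1), (1, 0)):  # horizontal, then vertical
            cells = [(r + i * dr, c + i * dc) for i in range(SHIP_LEN)]
            if any(x >= SIZE or y >= SIZE for x, y in cells):
                continue  # placement runs off the board
            if any(cell in misses for cell in cells):
                continue  # placement contradicts an observed miss
            for cell in cells:
                counts[cell] += 1
    return counts

counts = density(set())
best = max(counts, key=counts.get)
print(best)  # (2, 2): with no information, the center is covered by the most placements
```

Each observed miss zeroes out every placement through that cell, so the density map sharpens as the game goes on; a fuller version would also condition on hits.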

Who should use this?

AI researchers prototyping multi-agent systems via filesystems; Python hobbyists extending a collaboration-focused language or Battleship AIs; devs managing two GitHub accounts on the same computer (Mac/VSCode/SSH) who want agent-inspired workflows.

Verdict

Intriguing experiment at 19 stars and 1.0% credibility: mature docs and tests outweigh the low traction. Grab it for weekend tinkering with Claude agents, not for production.


