BerriAI

A simple, self-hosted infrastructure platform for running multiple agents in production.

Found May 10, 2026 at 18 stars.
AI Summary

LiteLLM Agent Platform is a self-hosted web dashboard for creating customizable AI agents that chat, reason, and work on code projects in isolated secure spaces with persistent sessions.

How It Works

1. 🔍 Discover the agent playground

You find LiteLLM Agent Platform on GitHub, a simple way to create and chat with smart AI helpers right on your computer.

2. 🚀 Start your personal setup

With a few easy steps, you launch the web dashboard and database on your machine, ready to play in minutes.

3. 🧠 Build your first AI helper

Pick a smart AI brain, give it instructions like 'review my code', and link a project folder so it understands your work.

4. ✨ Bring it to life

Click to start a private chat room where your helper wakes up, remembers everything, and works just for you.

5. 💬 Chat and collaborate

Ask questions, watch it think step-by-step, use tools, and see ideas flow in real-time with full conversation history.

🎉 Your AI team is ready

Now you have secure, always-available helpers that pick up right where you left off, perfect for daily work.


AI-Generated Review

What is litellm-agent-platform?

LiteLLM Agent Platform is a TypeScript-based, self-hosted infrastructure for running multiple AI agents in production, handling per-team sandboxes and session continuity across restarts. Developers get a Next.js web UI to create agents with custom models, prompts, and MCP tools, spawn sessions via warm pool or cold starts, and chat via streaming UI or curl-friendly API endpoints like POST /v1/managed_agents/agents/{id}/session. It integrates LiteLLM for multi-provider models and Kubernetes sandboxes for isolation, with docker-compose for local dev and one-click Render deploys.
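The curl-friendly session endpoint above can be exercised from any HTTP client. A minimal TypeScript sketch — only the `/v1/managed_agents/agents/{id}/session` path comes from the review; the base URL, empty payload, and response handling are assumptions:

```typescript
// Build the session-creation URL for a managed agent.
// The path is taken from the review; the base URL is an assumption
// (a local docker-compose deploy is imagined here).
function sessionUrl(baseUrl: string, agentId: string): string {
  return `${baseUrl}/v1/managed_agents/agents/${agentId}/session`;
}

// Hypothetical call: spawn a session for an agent.
// Payload fields and auth headers, if any, are repo-specific.
async function createSession(agentId: string): Promise<unknown> {
  const res = await fetch(sessionUrl("http://localhost:3000", agentId), {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({}),
  });
  if (!res.ok) throw new Error(`session create failed: ${res.status}`);
  return res.json();
}
```

For example, `sessionUrl("http://localhost:3000", "my-agent")` yields `http://localhost:3000/v1/managed_agents/agents/my-agent/session`, which you can also hit directly with curl.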

Why is it gaining traction?

It stands out with dead-simple local setup via kind clusters and docker-compose, plus prod deploys to EKS/Render without Kubernetes expertise. Fast session starts via warm pooling cut cold-boot waits from 90s to 5s, and built-in API proxying for MCP tools enables agentic workflows like repo cloning or git ops. Devs love the session restart that replays history, making it reliable for long-running self-hosted chat agents.
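The warm-pooling win (5s instead of 90s starts) comes down to keeping pre-booted sandboxes ready to hand out. An illustrative TypeScript sketch of the idea, not the platform's actual implementation — all names here are hypothetical:

```typescript
// Hypothetical session handle: `warm` marks a pre-booted sandbox.
type Session = { id: string; warm: boolean };

// Illustrative warm pool: prewarm() boots sandboxes ahead of demand,
// acquire() hands one out instantly or falls back to a cold start.
class WarmPool {
  private pool: Session[] = [];
  private nextId = 0;

  // Pre-boot n sandboxes so later acquire() calls skip the cold start.
  prewarm(n: number): void {
    for (let i = 0; i < n; i++) {
      this.pool.push({ id: `warm-${this.nextId++}`, warm: true });
    }
  }

  // Fast path: pop a pre-booted sandbox (~5s in the review's numbers).
  // Slow path: boot a fresh one on demand (~90s cold start).
  acquire(): Session {
    const warm = this.pool.pop();
    if (warm !== undefined) return warm;
    return { id: `cold-${this.nextId++}`, warm: false };
  }
}
```

The design trade-off is classic: the pool spends idle sandbox capacity to buy predictable latency, and once it drains, callers see cold-start times again until it is replenished.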

Who should use this?

AI engineers building production agent fleets for code review or data pipelines, backend teams exposing git servers or cloud-storage proxies through agents, and students prototyping GitHub projects with AI assistance. Ideal for devs evaluating self-hosted alternatives to vendor platforms for custom workflows.

Verdict

Try it if you're early: solid for agent prototyping with great docs and curl examples, but 18 stars and 1.0% credibility signal alpha maturity; expect rough edges in scaling. Pair with LiteLLM for quick wins.


