fim-ai

fim-ai / fim-agent

Public

LLM-powered Agent Runtime with Dynamic DAG Planning & Concurrent Execution

29
5
100% credibility
Found Feb 28, 2026 at 18 stars
AI Analysis
Python
AI Summary

FIM Agent is a framework for creating AI assistants that break down complex tasks into dynamic step-by-step plans with parallel execution, visualization, and knowledge integration via a user-friendly web portal.

How It Works

1
🔍 Discover FIM Agent

You find this helpful tool online that lets AI assistants plan and tackle big tasks step by step.

2
📥 Get it ready

Download the files and run a simple starter script to set everything up on your computer.

3
🔗 Connect smart thinking

Link a thinking service like Claude so your assistants can reason and decide what to do next.
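The runtime takes any OpenAI-compatible endpoint, so "linking a thinking service" mostly means pointing it at a base URL and model. A minimal sketch of that wiring, assuming hypothetical env var names (`LLM_BASE_URL`, `LLM_API_KEY`, `LLM_MODEL`) rather than fim-agent's actual config keys:

```python
import os
from dataclasses import dataclass

# Hypothetical sketch: env var names and defaults below are assumptions,
# not taken from the fim-agent docs.
@dataclass
class LLMProviderConfig:
    base_url: str   # any OpenAI-compatible endpoint
    api_key: str
    model: str

def config_from_env(env: dict) -> LLMProviderConfig:
    """Build a provider config; Claude works via an OpenAI-compatible proxy."""
    return LLMProviderConfig(
        base_url=env.get("LLM_BASE_URL", "https://api.openai.com/v1"),
        api_key=env.get("LLM_API_KEY", ""),
        model=env.get("LLM_MODEL", "gpt-4o-mini"),
    )

cfg = config_from_env({"LLM_BASE_URL": "https://example.invalid/v1",
                       "LLM_MODEL": "claude-sonnet"})
print(cfg.model)
```

The resulting config would be handed to whatever OpenAI-compatible client the runtime uses; swapping providers is then just a matter of changing the environment.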

4
🚀 Launch the playground

Open the web page and start chatting with ready-made examples to see agents in action.
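The review below mentions that the portal streams replies over SSE. A toy parser for that wire format (the `data:` field and blank-line event separator follow the SSE spec; the payloads are invented):

```python
# Toy SSE parser: collects the "data:" payload of each event.
# Event framing (blank line ends an event) follows the SSE spec.
def parse_sse(raw: str) -> list:
    events, buf = [], []
    for line in raw.splitlines():
        if line.startswith("data:"):
            buf.append(line[len("data:"):].strip())
        elif not line and buf:       # blank line ends the event
            events.append("\n".join(buf))
            buf = []
    return events

stream = "data: Hello\n\ndata: world\n\n"
chunks = parse_sse(stream)
print(chunks)
```

A browser's `EventSource` does this for you; the sketch just shows what arrives on the wire while an agent's reply streams in.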

5
Build your own

🤖 Make agents

Design personal helpers with special instructions and tools for your needs.

📚 Add knowledge

Upload files so agents can pull facts from your docs with proof and sources.
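"Facts with proof and sources" is the RAG-citation idea: every retrieved snippet carries the document it came from. A toy keyword retriever sketching that shape (the documents and matching rule are made up; the real knowledge base presumably uses embeddings):

```python
# Toy retriever: returns matching snippets tagged with their source document.
# Document contents here are invented for illustration.
docs = {
    "handbook.md": "Refunds are processed within 14 days.",
    "faq.md": "Support is available on weekdays only.",
}

def retrieve(query: str) -> list:
    """Return (source, snippet) pairs whose text shares a word with the query."""
    words = set(query.lower().split())
    hits = []
    for source, text in docs.items():
        if words & set(text.lower().rstrip(".").split()):
            hits.append((source, text))
    return hits

for source, snippet in retrieve("how long do refunds take"):
    print(f"{snippet} [source: {source}]")
```

The agent would quote the snippet and attach `[source: handbook.md]` as the citation, which is what grounds its answer in your docs.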

6
📊 Watch the magic

See each task break into a flow chart (a DAG) whose steps run in parallel, with the agent fixing issues on the fly.
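The "steps run together" behaviour is classic dependency-aware scheduling: any step whose dependencies are all finished runs concurrently with the other ready steps. A minimal asyncio sketch under that assumption (the plan and step bodies are invented; fim-agent's real scheduler is not shown here):

```python
import asyncio

# Dependency-aware concurrent execution sketch: steps whose dependencies are
# all done run together in the same wave. The plan below is made up.
plan = {                      # step -> steps it depends on
    "fetch_a": set(),
    "fetch_b": set(),
    "merge":   {"fetch_a", "fetch_b"},
    "report":  {"merge"},
}

async def run_step(name: str, done: set, order: list) -> None:
    await asyncio.sleep(0)    # stand-in for real work (LLM call, tool, etc.)
    done.add(name)
    order.append(name)

async def run_plan(plan: dict) -> list:
    done, order = set(), []
    while len(done) < len(plan):
        ready = [s for s, deps in plan.items() if s not in done and deps <= done]
        # every ready step executes concurrently in this wave
        await asyncio.gather(*(run_step(s, done, order) for s in ready))
    return order

order = asyncio.run(run_plan(plan))
print(order)
```

Here `fetch_a` and `fetch_b` run in the same wave, `merge` waits for both, and `report` comes last, which is exactly the flow-chart shape the portal visualizes.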

Tasks done right

Your complex jobs get solved automatically, with clear plans, cited sources, and finished results.

Star Growth

The repo grew from 18 to 29 stars.
AI-Generated Review

What is fim-agent?

FIM Agent delivers a Python runtime for LLM-powered agents that dynamically plans tasks as dependency-aware DAGs and executes them concurrently. It ships as a standalone web portal for chatting with ReAct agents, complete with real-time DAG visualization, RAG knowledge bases, and SSE streaming, or as an embeddable engine that hooks into legacy systems via DB reads, API calls, and notifications, with no host code changes needed. OpenAI-compatible LLMs plug right in for provider-agnostic reasoning, alongside tools like web search, Python exec, and file ops.
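The ReAct pattern the portal exposes is a loop of model reply, tool call, observation, repeated until the model produces a final answer. A minimal sketch with a stubbed model (the message format, `Action:`/`Final Answer:` markers, and the `search` tool are all assumptions for illustration, not fim-agent's actual protocol):

```python
# Minimal ReAct-style loop with a stubbed model; the markers and tool names
# below are invented, not fim-agent's real message format.
def stub_llm(history: list) -> str:
    # pretend model: first requests a tool, then answers once it has an observation
    if not any(m.startswith("Observation:") for m in history):
        return "Action: search[python release]"
    return "Final Answer: found it"

tools = {"search": lambda q: f"results for {q!r}"}

def react(question: str, llm=stub_llm, max_steps: int = 5) -> str:
    history = [f"Question: {question}"]
    for _ in range(max_steps):
        reply = llm(history)
        if reply.startswith("Final Answer:"):
            return reply.removeprefix("Final Answer:").strip()
        # parse "Action: tool[arg]" and feed the result back as an observation
        name, arg = reply.removeprefix("Action: ").rstrip("]").split("[", 1)
        history.append(f"Observation: {tools[name](arg)}")
    return "gave up"

answer = react("when was python released?")
print(answer)
```

Swapping `stub_llm` for a real OpenAI-compatible client call is the only change a real loop needs; the control flow stays the same.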

Why is it gaining traction?

Unlike the static workflows in Dify or n8n, it generates plans at runtime and re-plans on failure, and it goes beyond single-shot agents like AutoGPT with multi-tenant management, persistent conversations, and grounded RAG citations. The sidecar mode uniquely bridges untouchable ERPs like SAP without modifications, pushing results to Slack or email. Minimal dependencies (just openai, httpx, pydantic) and a `./start.sh` one-liner make it quick to spin up LLM-powered autonomous agents.
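"Re-plans on failure" means a failed step triggers a fresh plan rather than an abort. A sketch of that control flow under stated assumptions (the planner and both steps are stand-ins; a real planner would call the LLM with the error context):

```python
# Sketch of re-planning on failure: if a step raises, ask the planner for a
# new plan instead of aborting. Planner and steps here are invented stubs.
def flaky_fetch():
    raise ConnectionError("primary source down")

def cached_fetch():
    return "cached data"

def plan(attempt: int) -> list:
    # a real planner would call the LLM; this stub swaps in a fallback step
    return [flaky_fetch] if attempt == 0 else [cached_fetch]

def run_with_replanning(max_attempts: int = 3) -> list:
    for attempt in range(max_attempts):
        try:
            return [step() for step in plan(attempt)]
        except Exception:
            continue  # re-plan with the next attempt
    raise RuntimeError("all plans failed")

result = run_with_replanning()
print(result)
```

The interesting design choice is that the retry happens at the plan level, not the step level, so the second attempt can route around the failed step entirely.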

Who should use this?

DevOps engineers automating legacy CRM/ERP systems like Salesforce or Kingdee with LLM-powered agentic AI. AI builders prototyping LLM-powered agents for niche tasks like video-editing augmentation or navigating Venice's historical cadastre. Teams moving past agentless FIM or Elastic Agent FIM toward fully LLM-powered autonomous agent systems in industry.

Verdict

Early alpha at 29 stars and 100% credibility: docs shine, tests cover core flows, but scale is unproven. Worth a spin for legacy agent runtimes; skip it for polished production unless you're early.

