Mte90

OpenCode stops working when a model times out or errors occur; this plugin fixes those stalls automatically.

Found Apr 14, 2026 at 12 stars
AI Analysis
TypeScript
AI Summary

A silent add-on for OpenCode that automatically detects and fixes common AI session stalls like silent streams, unexecuted tools, loops, and stuck parent tasks.

How It Works

1
😩 Hit a snag with your AI helper

While chatting with your AI coding buddy in OpenCode, it suddenly stalls and stops responding, leaving you waiting.

2
🔍 Find the auto-resume helper

You discover this handy add-on that watches your AI chats and jumps in to fix stalls automatically.

3
📥 Add the helper to OpenCode

Simply download it and connect it to your OpenCode setup with a quick addition.
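The "quick addition" is a config entry. A minimal sketch, assuming OpenCode loads plugins from a `plugin` array in `opencode.jsonc` (the exact key name is an assumption; check the plugin's README for the authoritative snippet):

```jsonc
{
  // Hypothetical entry -- verify the key name against the plugin README
  "plugin": ["opencode-auto-resume"]
}
```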

4
✅ Everything is ready

Your AI sessions now have a silent guardian that keeps things moving without you lifting a finger.

5
💬 Chat away with your AI

You ask your AI to help with code, and if it pauses too long, the helper nudges it to continue seamlessly.

6
🔄 Stalls vanish automatically

No more blinking cursors or waiting—the add-on detects issues like stuck tools or loops and fixes them quietly.

🎉 Smooth sailing ahead

Your coding projects flow effortlessly with reliable AI help that never gets stuck.
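One way a plugin could spot the "loop" case from the steps above is to watch for the assistant repeating the same content back-to-back. This is an illustrative sketch, not the repo's actual implementation; the function name and threshold are assumptions:

```typescript
// Illustrative loop detector: flags a session when the last
// `repeatThreshold` assistant messages are identical.
// Name and default threshold are assumptions for illustration.
function isLooping(messages: string[], repeatThreshold = 3): boolean {
  if (messages.length < repeatThreshold) return false;
  const recent = messages.slice(-repeatThreshold);
  return recent.every((m) => m === recent[0]);
}
```

A real detector would likely normalize whitespace and compare tool-call payloads too, but exact-repeat matching is the simplest useful signal.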

AI-Generated Review

What is opencode-auto-resume?

This TypeScript plugin for OpenCode automatically detects and recovers from stalled LLM sessions: model timeouts, errors, tool calls printed as raw text instead of being executed, hallucination loops, and parent agents left orphaned after their subagents finish. It sends targeted "continue" prompts with exponential backoff, preserves your exact agent, model, and provider configuration (prometheus, claude-3, etc.), and runs silently without UI interruptions. Install it via npm, add it to opencode.jsonc, and it handles session discovery plus cleanup to avoid memory leaks.
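The exponential-backoff behavior described above can be sketched as a simple delay schedule. This is illustrative only; `baseMs`, `factor`, and `maxAttempts` are assumed knobs, not the plugin's documented option names:

```typescript
// Compute the wait before each resume attempt: baseMs * factor^attempt.
// Knob names are assumptions for illustration, not the plugin's real API.
function backoffDelays(baseMs: number, factor: number, maxAttempts: number): number[] {
  const delays: number[] = [];
  for (let i = 0; i < maxAttempts; i++) {
    delays.push(baseMs * Math.pow(factor, i));
  }
  return delays;
}

// backoffDelays(1000, 2, 4) → [1000, 2000, 4000, 8000]
```

Capping attempts keeps the plugin from hammering a model that is genuinely down rather than merely slow.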

Why is it gaining traction?

Unlike manual babysitting or generic retry tools, it tackles OpenCode-specific pains head-on: stream stalls after 48 s of idle, tool XML that never executes, sessions that stop after compaction, and subagent hangs, using per-session timers that pause during multi-busy states. Developers drop it in as a plugin or agent enhancer and get zero-config resilience, including for GitHub Copilot models in enterprise flows. Config options like chunkTimeoutMs let you dial the aggressiveness for your setup.
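The per-session timer logic above can be sketched as a stall predicate. `chunkTimeoutMs` and the 48 s default come from the review; the `SessionState` shape is an assumption for illustration:

```typescript
// Per-session stall check (illustrative, not the plugin's real API).
// A session counts as stalled when no stream chunk has arrived within
// chunkTimeoutMs and the timer is not paused by busy subagents.
interface SessionState {
  lastChunkAt: number;   // epoch ms of the last received stream chunk
  busySubagents: number; // timers pause during multi-busy states
}

function isStalled(s: SessionState, now: number, chunkTimeoutMs = 48_000): boolean {
  if (s.busySubagents > 0) return false; // paused while subagents work
  return now - s.lastChunkAt > chunkTimeoutMs;
}
```

Keeping the check per-session means one hung chat never triggers spurious "continue" prompts in a healthy one.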

Who should use this?

AI coding agent users on OpenCode, especially those running long builds with sisyphus or prometheus agents where OpenCode stops working mid-task due to model issues or errors. Backend devs wiring OpenCode into CI/CD pipelines via the GitHub app, or teams on GitHub Enterprise experimenting with MCP tools, who hate losing context to stalls. Skip it if your sessions are short and stable.

Verdict

Grab it if OpenCode flakiness kills your flow: solid docs, tunable options, and test coverage make it reliable, though the 12 stars and 1.0% credibility score signal early maturity. Worth the npm install for heavy users; watch for upstream OpenCode fixes that could make it obsolete.


