divyaran7an

Use any LLM with the Cursor Agent SDK

Found May 02, 2026 at 11 stars
AI Analysis
JavaScript
AI Summary

A library that enables the Cursor Agent SDK to route AI model requests to any OpenAI-compatible provider while keeping local tools intact.

How It Works

1. 🕵️ Discover a flexible AI helper

You hear about a way to make your coding assistant use any smart AI brain instead of just one service.

2. 📦 Add the simple connector

You easily add this small helper to your coding project to unlock more options.

3. 🔗 Connect your favorite AI

You link your chosen smart AI service so the assistant can think with it.

4. 🤖 Build your assistant

You create your helpful coding agent just like before, now powered by your AI.

5. 💬 Give it real tasks

You ask it to read files, run commands, or summarize your code.

Get useful results

Your assistant answers with your chosen model and works on your files locally, saving you time and money.
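The steps above hinge on one ordering rule the summary mentions: configure your provider before the SDK is imported. A minimal Node sketch under that assumption — the environment-variable names are common OpenAI-client conventions and the commented-out module name is a hypothetical placeholder, not cursor-sdk-gateway's documented API:

```javascript
// Hypothetical sketch: point the agent at an OpenAI-compatible provider
// *before* the SDK loads. OPENAI_BASE_URL / OPENAI_API_KEY are common
// OpenAI-client conventions; the gateway's real config call may differ.
process.env.OPENAI_BASE_URL = "http://localhost:11434/v1"; // local Ollama server
process.env.OPENAI_API_KEY = "ollama"; // Ollama ignores the key, but clients expect one

async function main() {
  // Only now load the agent SDK, so the config above is already in place.
  // const { Agent } = await import("@cursor/agent"); // illustrative module name
  console.log(`routing model calls to ${process.env.OPENAI_BASE_URL}`);
}

main();
```

The dynamic `import()` inside `main` (rather than a top-level `import`) is what guarantees the configuration runs first.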

AI-Generated Review

What is cursor-sdk-gateway?

This JavaScript library lets you run the Cursor Agent SDK locally with any LLM API, routing model calls through the Vercel AI SDK or OpenAI-compatible endpoints like OpenRouter, Ollama, or LiteLLM. Instead of relying on Cursor's backend and API key, you configure the gateway once before importing the SDK, keeping all local tools (file ops, shell, MCP) intact. It's a drop-in gateway for Cursor agents, suited to any project that needs a flexible LLM backend without rewriting agent code.
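The review leans on the phrase "OpenAI-compatible endpoints". A minimal sketch of what that means in practice: all three providers accept the same chat-completions request body, so a gateway only has to swap the base URL and model name. The base URLs below are the providers' publicly documented defaults; none of this is cursor-sdk-gateway's own API.

```javascript
// OpenAI-compatible base URLs (providers' documented defaults, not
// anything specific to cursor-sdk-gateway).
const providers = {
  openrouter: "https://openrouter.ai/api/v1",
  ollama: "http://localhost:11434/v1", // Ollama's OpenAI-compat endpoint
  litellm: "http://localhost:4000",    // LiteLLM proxy's default port
};

// Every provider above takes the same POST body at /chat/completions,
// which is why one gateway can front all of them.
function buildChatRequest(provider, model, prompt) {
  return {
    url: `${providers[provider]}/chat/completions`,
    body: {
      model,
      messages: [{ role: "user", content: prompt }],
    },
  };
}

const req = buildChatRequest("ollama", "llama3", "Summarize this repo");
console.log(req.url); // http://localhost:11434/v1/chat/completions
```

Swapping `"ollama"` for `"openrouter"` changes only the URL; the request shape — and therefore the agent code producing it — stays identical.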

Why is it gaining traction?

It stands out by preserving the full Cursor SDK API—agents, subagents, hooks, background shells—while swapping in cheap models like DeepSeek or local Ollama runs, dodging Cursor backend limits or outages. Developers love the one-line setup for existing apps and npm test parity checks, plus examples ported from Cursor's cookbook and Anthropic demos. Compared to raw LiteLLM or other LLM gateways, it handles Cursor's agent loop (streaming, tools) seamlessly.

Who should use this?

Local Cursor SDK users building AI agents for GitHub projects, like automating code edits or shell tasks without cloud dependency. Ideal for JS/TS devs running Ollama or vLLM servers, prototyping agents offline, or integrating any LLM gateway into Node scripts. Skip if you need Cursor's cloud VMs or PR features.

Verdict

Try it for local Cursor agents—solid docs, runnable examples, and MIT license make the 11 stars and 1.0% credibility score forgivable for an early project. Production? Wait for more adoption unless tests cover your tools.
