pingshian0131

OpenClaw plugin: send LLM traces to Arize Phoenix -- inspect prompts, responses, and token usage in the Phoenix UI

100% credibility
Found Mar 13, 2026 at 17 stars
AI Summary

This OpenClaw add-on captures complete details of every AI model interaction and sends them to Arize Phoenix for viewing in an intuitive dashboard.

How It Works

1
🔍 Discover the chat tracker

While using OpenClaw, your AI assistant builder, you learn about this helpful add-on that lets you watch every conversation with the AI in detail.

2
📊 Start the viewing dashboard

You launch a simple dashboard (Arize Phoenix) on your computer that collects and shows all the chat records in a friendly interface.
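The dashboard here is Arize Phoenix. A minimal way to start it locally is the standard `arizephoenix/phoenix` Docker image -- a sketch, so check the Phoenix docs for the current tag before relying on it:

```shell
# Run Phoenix in the background; its UI and OTLP/HTTP trace
# endpoint are both served on port 6006.
docker run -d --name phoenix -p 6006:6006 arizephoenix/phoenix:latest

# The dashboard is then available at http://localhost:6006
```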

3
📥 Place the add-on in your tools

You copy the tracker files into the special folder where your AI builder keeps its helpful extensions.
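Concretely, that copy step might look like the following. The `~/.openclaw/plugins` path is a placeholder assumption -- use whatever extensions folder your OpenClaw install actually reads from:

```shell
# Copy the plugin into OpenClaw's extensions folder.
# NOTE: ~/.openclaw/plugins is a hypothetical path; substitute your own.
cp -r openclaw-plugin-llm-trace-phoenix ~/.openclaw/plugins/
```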

4
🔗 Connect and activate tracking

In your AI builder's settings, you switch on the tracker and point it to your dashboard so it starts recording chats.
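A hedged sketch of what that configuration could look like in openclaw.json: the `phoenixUrl` and `projectName` keys are named elsewhere on this page, but the surrounding structure and the plugin's config key are assumptions -- check the repo's README for the real shape:

```json
{
  "plugins": {
    "llm-trace-phoenix": {
      "enabled": true,
      "phoenixUrl": "http://localhost:6006",
      "projectName": "my-openclaw-agents"
    }
  }
}
```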

5
🔄 Refresh your AI setup

You restart your AI tools, and they pick up the new tracker right away.

6
👀 Inspect your AI conversations

Now, chat with any assistant and instantly see full details like prompts, replies, token counts, and response times in the dashboard.

AI-Generated Review

What is openclaw-plugin-llm-trace-phoenix?

This TypeScript plugin for OpenClaw hooks into LLM calls from your agents and sends full traces to Arize Phoenix, letting you inspect prompts, responses, token usage, latency, and model details in Phoenix's UI. It solves the black-box problem of debugging AI agents by capturing every input and output without proxies or agent changes—just drop it into your OpenClaw plugins setup. Run Phoenix via Docker, configure the phoenixUrl and projectName in openclaw.json, and traces appear at localhost:6006.

Why is it gaining traction?

Among OpenClaw plugins, it stands out for native OTLP/HTTP support via OpenInference conventions, delivering clean spans like "anthropic/claude-opus" with agent IDs and cache metrics -- no manual logging needed. Developers grab it for zero-overhead tracing that scales with OpenClaw's llm_input/output events, plus easy verification in gateway logs. It's a quick win over generic tools, especially if you're already on Arize Phoenix.
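To make the OpenInference convention concrete, here is a hypothetical TypeScript sketch of how an llm_output event might be flattened into span attributes before OTLP/HTTP export. The event shape and function are illustrative assumptions, not the plugin's actual API; the attribute keys (`openinference.span.kind`, `llm.model_name`, `llm.token_count.*`, `input.value`, `output.value`) follow the published OpenInference semantic conventions, while `agent.id` is a made-up custom key:

```typescript
// Hypothetical event shape -- OpenClaw's real llm_output payload may differ.
interface LlmOutputEvent {
  agentId: string;
  model: string;          // e.g. "anthropic/claude-opus"
  prompt: string;
  completion: string;
  promptTokens: number;
  completionTokens: number;
}

// Flatten one event into OpenInference-style span attributes.
function toOpenInferenceAttributes(
  e: LlmOutputEvent
): Record<string, string | number> {
  return {
    "openinference.span.kind": "LLM",
    "llm.model_name": e.model,
    "input.value": e.prompt,
    "output.value": e.completion,
    "llm.token_count.prompt": e.promptTokens,
    "llm.token_count.completion": e.completionTokens,
    "llm.token_count.total": e.promptTokens + e.completionTokens,
    "agent.id": e.agentId, // made-up custom key, not part of OpenInference
  };
}

// Example: one traced call.
const attrs = toOpenInferenceAttributes({
  agentId: "agent-1",
  model: "anthropic/claude-opus",
  prompt: "Summarize this repo.",
  completion: "It traces LLM calls to Phoenix.",
  promptTokens: 12,
  completionTokens: 9,
});
console.log(attrs["llm.token_count.total"]); // 21
```

Attributes like these would then ride on an OTLP span POSTed to Phoenix's collector endpoint, which is what produces the clean per-model spans described above.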

Who should use this?

OpenClaw users building multi-agent LLM apps who need to debug erratic responses or track token costs across providers. AI engineers tuning prompts in production agents, or teams monitoring latency spikes in Copilot-style OpenClaw workflows. Skip it if you're not running Phoenix or are on pre-2025 OpenClaw.

Verdict

Worth adding for Phoenix users—solid docs and setup make it plug-and-play despite 16 stars and 1.0% credibility signaling early maturity. Test it on a side project first; no tests visible, but MIT license keeps risks low.
