mexchy1000

Interactive AI agentic framework for PET imaging that explores multimodal images, quantification, and AI-tool based end-to-end analyses

100% credibility
Found Mar 17, 2026 at 16 stars
Language: TypeScript
AI Summary

DICOMclaw is a chat-based AI assistant that analyzes medical scan images by loading patient studies, viewing them in a multi-panel display, and generating overlays, measurements, and reports through natural conversation.

How It Works

1
🔍 Find your scans

Put folders with patient scan images in the app's scan folder and refresh the list to see all available studies.

2
👁️ Open a study

Click a patient in the sidebar to load their images into the four-panel viewer, which shows CT, PET, fused, and MIP views of the study.

3
💬 Chat with the assistant

Type a request like 'find lesions' or 'measure liver activity' in the chat to start analysis.

4
Review and approve the plan

The assistant shows a step-by-step plan; approve it to let the analysis begin safely.

5
🎯 Watch results appear

Overlays highlight organs or lesions on your images, with numbers for activity levels and sizes.

6
📊 Explore measurements

Click results files for charts, tables, and summaries of what was found.

7
Get your full report

Ask for a report to receive a complete summary with all findings, ready to review or share.
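The chat-driven flow above (request, proposed plan, approval, then execution) can be sketched roughly as follows. All type and function names here are illustrative assumptions, not DICOMclaw's actual API:

```typescript
// Hypothetical sketch of the plan-approve-execute loop described in the steps.
// The key safety property: no analysis runs until the user approves the plan.

type AnalysisStep = { skill: string; description: string };
type Plan = { request: string; steps: AnalysisStep[] };

// The agent turns a chat request into an explicit, reviewable plan.
function proposePlan(request: string): Plan {
  return {
    request,
    steps: [
      { skill: "segment_organs", description: "Segment liver and lesions" },
      { skill: "compute_suv", description: "Compute SUV statistics per region" },
      { skill: "render_overlay", description: "Overlay results on the viewer" },
    ],
  };
}

// Execution is gated on explicit approval; a rejected plan does nothing.
function executePlan(plan: Plan, approved: boolean): string[] {
  if (!approved) return [];
  return plan.steps.map((s) => `${s.skill}: ${s.description}`);
}

const plan = proposePlan("measure liver activity");
console.log(executePlan(plan, true).length); // 3 steps run after approval
```

The explicit plan object is what makes the loop transparent: the user sees every skill the agent intends to invoke before anything touches the images.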

AI-Generated Review

What is dicomclaw?

DICOMclaw is an interactive agentic AI framework for PET imaging, built in TypeScript with Python analysis skills, that unifies multimodal image viewing, quantification, and end-to-end analyses in one workspace. Load DICOM studies to get a synced 2x2 viewer for CT, PET, fusion, and MIP; draw VOIs or chat with a ReAct agent to trigger skills like lesion detection, organ segmentation, SUV stats, radiomics, and report generation. It solves the pain of switching between viewers, segmenters, and spreadsheets by overlaying AI results in real time with plan approval.
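The SUV statistics mentioned above are conventionally body-weight normalized. A minimal sketch of that standard formula follows; the function name is my own, and DICOMclaw's actual implementation may differ (e.g. with decay correction applied upstream):

```typescript
// SUVbw = tissue activity (Bq/mL) / (injected dose (Bq) / body weight (g)).
// Assumes tissue density ≈ 1 g/mL, so grams and milliliters cancel,
// making SUV a dimensionless uptake ratio.
function suvBodyWeight(
  activityBqPerMl: number,
  injectedDoseBq: number,
  bodyWeightKg: number
): number {
  const bodyWeightG = bodyWeightKg * 1000;
  return activityBqPerMl / (injectedDoseBq / bodyWeightG);
}

// Example: 5 kBq/mL uptake, 300 MBq injected, 70 kg patient
console.log(suvBodyWeight(5000, 300e6, 70).toFixed(2)); // ≈ 1.17
```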

Why is it gaining traction?

Its transparent agentic loop (showing reasoning, plans for approval, and progress) builds trust over black-box tools, while markdown guides make customizing AI behavior dead simple without restarts. The interactive viewer supports VOI drawing, MIP clicking, and @mentions in chat, paired with battle-tested AI tools like AutoPET-3 and TotalSegmentator for reliable quantification. Devs dig the full-stack setup (React frontend, Node backend) that streams results via Socket.io, enabling interactive workflows for batch analyses.
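The progress-streaming pattern described here can be sketched with Node's stdlib EventEmitter standing in for Socket.io, which exposes the same publish-subscribe shape (`on`/`emit`). Event names and payload shape are assumptions for illustration:

```typescript
// Stand-in for the Socket.io channel between Node backend and React frontend:
// the backend emits a progress event as each analysis skill finishes,
// and the UI subscriber receives them incrementally.
import { EventEmitter } from "node:events";

type Progress = { step: string; percent: number };

const bus = new EventEmitter();
const received: Progress[] = [];

// Frontend side: subscribe to progress updates (socket.on in the real app).
bus.on("analysis:progress", (p: Progress) => received.push(p));

// Backend side: publish as each skill completes (socket.emit analog).
const skills = ["segment", "quantify", "report"];
for (const [i, step] of skills.entries()) {
  bus.emit("analysis:progress", {
    step,
    percent: Math.round(((i + 1) / skills.length) * 100),
  });
}

console.log(received.map((p) => `${p.step}@${p.percent}%`).join(" "));
// segment@33% quantify@67% report@100%
```

Streaming per-step events rather than one final result is what lets the viewer overlay results in real time as the agent works.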

Who should use this?

Nuclear medicine researchers quantifying PET/CT lesions for oncology trials, or clinical scientists tracking treatment response across studies. AI devs prototyping interactive agentic systems for medical imaging who want an end-to-end framework with vision interpretation. Radiologists exploring multimodal images via chat without clinical deployment hassles.

Verdict

Early alpha with 16 stars and a 100% credibility score. Docs are solid via the README quickstart, but test coverage is light and GPU setup needs work. Fork it for research PET analyses today; skip it for production until adoption grows.
