kangarooking

🦞 MobileClaw — a lobster walkie-talkie with eyes | Multimodal voice+vision walkie-talkie for OpenClaw AI agents. iOS & Android.

29 stars · 8 · 100% credibility
Found Apr 02, 2026 at 29 stars (via GitGems).
AI Summary (TypeScript)

MobileClaw is a smartphone app that connects to an AI system for voice chats, on-demand camera visuals, and conversation logging.

How It Works

1. 📱 Get the app

Download MobileClaw on your phone and open it to turn your device into a smart AI talkie.

2. ⚙️ Link your AI helper

Open settings and add your AI connection details and speech services so the app can listen and talk back.

3. 🚀 Pick a mode and start

Choose voice chat or add camera vision, select your helper, and tap to begin your session.

4. 🎤 Talk naturally

Speak your question out loud, and watch the app capture your words with a glowing waveform.

5. 📷 Add vision if needed

Point the camera at something interesting, and the app grabs key moments to share with your AI.

6. 💬 Hear smart replies

Your AI responds in voice and text, building a chat log you can review anytime.

AI companion ready

Enjoy hands-free conversations with sight and sound, wherever you go.
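The setup in step 2 boils down to pointing the app at a gateway and a pair of speech services. A minimal sketch of what such settings might look like, assuming hypothetical field names (the real app's schema may differ):

```typescript
// Hypothetical connection settings for a MobileClaw-style client.
// All field names are illustrative, not the app's actual schema.
interface ClawConfig {
  gatewayUrl: string;   // local OpenClaw Gateway WebSocket endpoint
  agentId: string;      // which agent to route the session to
  asrProvider: string;  // speech-to-text service
  ttsProvider: string;  // text-to-speech service
  wakeWord?: string;    // optional custom wake word
}

// Basic sanity check before starting a session.
function validateConfig(cfg: ClawConfig): string[] {
  const errors: string[] = [];
  if (!/^wss?:\/\//.test(cfg.gatewayUrl)) {
    errors.push("gatewayUrl must be a ws:// or wss:// URL");
  }
  if (!cfg.agentId) errors.push("agentId is required");
  if (!cfg.asrProvider) errors.push("asrProvider is required");
  if (!cfg.ttsProvider) errors.push("ttsProvider is required");
  return errors;
}
```

Validating up front like this is what makes the app's diagnostics screen possible: a bad gateway URL can be reported before the first tap-to-talk, rather than as a silent connection failure.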


AI-Generated Review

What is MobileClaw?

MobileClaw turns your iOS or Android phone into a multimodal voice+vision walkie-talkie for OpenClaw AI agents. Tap to talk for real-time voice chats, with optional camera preview that samples frames during speech for on-demand visual context—no constant video streaming. Built in TypeScript with React Native, it connects via WebSocket to a local OpenClaw Gateway, handling ASR, TTS, and session logging.
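The architecture described above — WebSocket to a local gateway, carrying ASR text and sampled frames — suggests a wire protocol along these lines. The message shapes here are guesses for illustration, not the project's actual format:

```typescript
// Hypothetical client-to-gateway messages for a voice+vision session.
type ClientMessage =
  | { type: "utterance"; sessionId: string; text: string; ts: number }
  | { type: "frame"; sessionId: string; jpegBase64: string; ts: number };

// Serialize a transcribed utterance for the gateway.
function utteranceMessage(sessionId: string, text: string, ts: number): string {
  const msg: ClientMessage = { type: "utterance", sessionId, text, ts };
  return JSON.stringify(msg);
}

// Serialize a sampled camera frame for the gateway.
function frameMessage(sessionId: string, jpegBase64: string, ts: number): string {
  const msg: ClientMessage = { type: "frame", sessionId, jpegBase64, ts };
  return JSON.stringify(msg);
}

// In the app these would travel over the gateway WebSocket, e.g.:
//   const ws = new WebSocket(config.gatewayUrl);
//   ws.send(utteranceMessage(id, transcript, Date.now()));
```

A discriminated union like `ClientMessage` keeps voice and vision payloads on one socket while letting the gateway dispatch on `type`.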

Why is it gaining traction?

It stands out by gating vision uploads behind speech intent detection, keeping bandwidth low while delivering rich agent interactions like scene description or object ID. Custom wake words, secure credential storage, and Feishu history pushes add polish for daily use. For OpenClaw users, it's a ready-made mobile frontend that skips boilerplate for agents.
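The bandwidth trick described above — sampling frames only while the user is actually speaking — can be sketched as a small gate. The class name and threshold are illustrative, not taken from the project:

```typescript
// Decide whether to capture a camera frame, gated on speech activity.
// Frames are sampled only while speech is detected, and at most once
// per minIntervalMs, so nothing is uploaded when the user is silent.
class SpeechGatedSampler {
  private lastCaptureTs = -Infinity;

  constructor(private minIntervalMs: number) {}

  shouldCapture(speechActive: boolean, nowMs: number): boolean {
    if (!speechActive) return false; // silent: never upload
    if (nowMs - this.lastCaptureTs < this.minIntervalMs) return false; // rate limit
    this.lastCaptureTs = nowMs;
    return true;
  }
}
```

Compared with constant video streaming, a gate like this caps uploads at a few frames per utterance while still giving the agent enough visual context for scene description or object identification.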

Who should use this?

OpenClaw developers deploying local AI agents for robotics, home automation, or edge hardware needing hands-free mobile control. iOS tinkerers experimenting with voice+vision workflows, or teams wanting a walkie-talkie UI for multimodal agents without building from scratch. Android users should wait for full validation.

Verdict

Grab it if you're in the OpenClaw ecosystem: a solid iOS experience with intuitive screens and diagnostics, though at 29 stars the project is still early-stage. Android and cross-version compatibility need work, but the docs and setup are clear enough for quick prototyping.
