pattern-ai-labs

AgentCall lets AI Agents join meetings with voice, video & screen-share to build together. Supports Google Meet, Teams, Zoom (Beta)

16 stars · 0 forks · 100% credibility
Found Apr 19, 2026 at 16 stars
AI Summary (Python)

A skill enabling AI assistants to join Google Meet, Zoom, and Teams calls with voice, animated avatars, screensharing, real-time transcription, and chat.

How It Works

1. 🧑‍💻 Discover the AI meeting helper

You find a handy tool that lets your smart AI assistant join video calls just like a person.

2. 📱 Sign up for free access

Visit the website, create a free account, and link it to your AI companion so it can join calls.

3. 🔌 Add the meeting skill

Place the tool's files into your AI project or install it easily to unlock meeting powers.

4. 🔗 Tell the AI to join

Simply say to your AI, 'Join this meeting link,' and it prepares to hop in.

5. Pick the AI's style

🔊 Voice chat: the AI talks and listens without a face, perfect for quick calls.

😊 Animated avatar: the AI appears with a moving face that shows whether it's listening or speaking.

📺 Share screens: the AI can show slides or dashboards while chatting.

6. 🗣️ AI participates fully

Your AI hears everyone speak, replies naturally, sees shared screens, and even sends chat messages.

🎉 Meetings made better

Your AI takes notes, answers questions, and shares info, making calls smoother and more fun.
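The style choices above can be pictured as a small configuration object. A minimal sketch follows; the names (`Presence`, `JoinRequest`) and the field layout are illustrative assumptions, not AgentCall's actual API.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Presence(Enum):
    """How the agent shows up in the call (illustrative names, not the real API)."""
    VOICE_ONLY = "voice"     # talks and listens with no video tile
    AVATAR = "avatar"        # animated face that signals listening vs. speaking
    SCREEN_SHARE = "screen"  # presents slides, a URL, or a local port

@dataclass
class JoinRequest:
    """Hypothetical payload an agent could build from 'Join this meeting link'."""
    meeting_url: str
    presence: Presence = Presence.VOICE_ONLY
    share_target: Optional[str] = None  # what to present when screen-sharing

# Example: join a meeting as an animated avatar.
req = JoinRequest("https://meet.google.com/abc-defg-hij", presence=Presence.AVATAR)
```

Defaulting to voice-only keeps the simplest mode zero-config, with avatar and screen-share as opt-ins.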

AI-Generated Review

What is AgentCall?

AgentCall lets AI agents join Google Meet, Teams, and Zoom (beta) meetings with voice, video avatars, and screen-sharing to collaborate in real time. Written in Python with Node.js support, it pipes meeting transcripts, chat, and screenshots to coding agents like Claude Code, Cursor, or Aider via simple stdin/stdout, while agents respond through low-latency TTS (54 voices, 9 languages). Developers get AI that participates naturally—listening, speaking, and sharing screens—without rebuilding their agent workflows.
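The stdin/stdout protocol described above can be modeled in a few lines. This is a toy sketch only: the one-JSON-object-per-line shape and the `"transcript"`/`"say"` event types are assumptions for illustration, not AgentCall's documented wire format.

```python
import io
import json

def bridge(stdin, stdout):
    """Toy model of a stdin/stdout meeting bridge.

    Assumes one JSON object per line with a 'type' field ('transcript',
    'chat', ...). A real agent would reason about the event here; this
    sketch just echoes text that a TTS layer would speak into the meeting.
    """
    for line in stdin:
        event = json.loads(line)
        if event.get("type") == "transcript":
            reply = {"type": "say", "text": f"Heard: {event['text']}"}
            stdout.write(json.dumps(reply) + "\n")
            stdout.flush()

# Demo with in-memory streams standing in for the real pipes.
events = io.StringIO(json.dumps({"type": "transcript", "text": "ship it?"}) + "\n")
out = io.StringIO()
bridge(events, out)
reply = json.loads(out.getvalue())
```

Because the transport is just line-delimited text on standard streams, any agent CLI that reads stdin and writes stdout can sit on the other end unchanged.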

Why is it gaining traction?

It stands out with smart features like VAD gap buffering for complete utterances, barge-in prevention, auto-interruption detection, and crash recovery, making conversations feel human in group settings. The stdin/stdout protocol integrates instantly with 30+ agent frameworks, and built-in avatars plus dynamic screensharing (URLs or local ports) enable visual collaboration. No extra LLM needed—the agent's session context handles everything.
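The "VAD gap buffering" idea mentioned above can be illustrated simply: merge speech segments separated by short silences into one utterance, so the agent replies to complete thoughts rather than fragments. The 700 ms threshold and the `(start_ms, end_ms, text)` tuple format are assumptions for this sketch, not AgentCall's actual parameters.

```python
def buffer_utterances(segments, gap_ms=700):
    """Merge VAD/ASR segments into utterances, splitting on silence gaps.

    'segments' is a list of (start_ms, end_ms, text) tuples in time order;
    consecutive segments less than gap_ms apart are joined into one utterance.
    """
    utterances, current, last_end = [], [], None
    for start, end, text in segments:
        if last_end is not None and start - last_end >= gap_ms:
            utterances.append(" ".join(current))  # gap found: close the utterance
            current = []
        current.append(text)
        last_end = end
    if current:
        utterances.append(" ".join(current))
    return utterances

# Two segments 50 ms apart merge; a 1200 ms pause starts a new utterance.
segs = [(0, 400, "can you"), (450, 900, "open the dashboard"), (2100, 2500, "thanks")]
result = buffer_utterances(segs)
```

Tuning `gap_ms` trades responsiveness against fragmentation: too low and mid-sentence pauses split a request in two, too high and the agent lags behind the speaker.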

Who should use this?

AI agent builders integrating tools like Claude Code or Cursor into team calls, remote coding pairs wanting AI co-pilots for live debugging, or support teams using Gemini CLI for customer meetings. Ideal for devs in Windsurf, OpenClaw, or JetBrains Junie who need agents to handle voice commands, share dashboards, or take screenshots during Google Meet or Teams sessions.

Verdict

Promising for AI agent-call experiments, but at 16 stars and 1.0% credibility it's early-stage: the Zoom beta and low adoption signal risk, though excellent docs, examples, and an MIT license lower the barrier. Try it if you're prototyping agent-meeting integrations; skip it for production until it's more battle-tested.
