ASLP-lab / HumDial-FDBench

The Full-Duplex Interaction Track of the ICASSP 2026 Human-like Spoken Dialogue Systems Challenge aims to advance the evaluation of full-duplex dialogue systems by introducing a dual-channel dialogue dataset of real human-recorded conversations.

AI Summary

HumDial-FDBench provides a dataset of real human conversations and a benchmark for evaluating how well AI systems handle full-duplex spoken dialogue, including interruptions and rejections.

How It Works

1. 🔍 Discover HumDial-FDBench

You find this project while looking for ways to test AI assistants that handle real conversations with interruptions and overlaps.

2. 📥 Get conversation examples

Download the freely available recordings of natural human conversations from the linked dataset to see realistic talking patterns (a loading sketch follows this list).

3. 📖 Explore test scenarios

Review examples of interruptions, backchannels, and pauses to understand what makes conversations feel human.

4. 🚀 Try the demo assistant

Start the ready-to-run demo and chat live to experience full-duplex interaction, where the AI listens and responds naturally even mid-sentence.

5. 🧪 Test your own AI

Use the benchmark tools to check how well your conversational AI handles overlaps and quick turn changes (a simple delay-metric sketch follows this list).

๐Ÿ† See your results on leaderboard

Compare your AI's scores with top systems worldwide and celebrate improvements in natural dialogue.
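To make step 2 concrete, here is a minimal sketch of loading the data with the `datasets` library, assuming the dataset is hosted on the Hugging Face Hub as the review below states. The dataset ID `ASLP-lab/HumDial-FDBench` and split names are assumptions; check the repo's README for the actual identifiers.

```python
# Minimal sketch: load the HumDial-FDBench dataset from the Hugging Face Hub.
# NOTE: the dataset ID and split names are assumptions for illustration;
# see the repo's README for the real identifiers.
from datasets import load_dataset

ds = load_dataset("ASLP-lab/HumDial-FDBench")  # assumed dataset ID

print(ds)  # expect train/dev/test splits per the review below

# Inspect one example; the dual-channel data presumably carries one
# audio channel per speaker plus scenario annotations.
example = ds["train"][0]
print(example.keys())
```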
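And for step 5, a self-contained sketch of the kind of delay metric the leaderboard reportedly tracks: how long the system takes to react after a user interruption begins. The segment layout below is hypothetical, not the benchmark's actual schema.

```python
# Illustrative response-delay computation for one full-duplex exchange.
# Timestamps are in seconds; the segment layout is hypothetical.

def response_delay(user_interrupt_start: float, system_reply_start: float) -> float:
    """Delay between the user starting to interrupt and the system reacting."""
    return system_reply_start - user_interrupt_start

# Hypothetical dual-channel segments: (speaker, start, end)
segments = [
    ("system", 0.0, 3.2),   # system is speaking
    ("user",   1.5, 2.1),   # user interrupts mid-utterance
    ("system", 2.4, 4.0),   # system yields and responds
]

user_start = next(s[1] for s in segments if s[0] == "user")
reply_start = next(s[1] for s in segments if s[0] == "system" and s[1] > user_start)
print(f"response delay: {response_delay(user_start, reply_start):.2f}s")  # 0.90s
```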

AI-Generated Review

What is HumDial-FDBench?

HumDial-FDBench is a Python-based benchmark and dual-channel dataset from the ICASSP 2026 HumDial Challenge, designed to evaluate full-duplex dialogue systems on real human-recorded conversations. It tests handling of interruptions, overlaps, and turn-taking across nine scenarios such as follow-up questions, negations, and backchannels, advancing full-duplex speech interaction beyond turn-based setups. Developers get a Hugging Face dataset with train/dev/test splits, a unified evaluation protocol, and a public leaderboard tracking interruption/rejection scores plus delay metrics.
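Purely as an illustration of how interruption/rejection scores and delay metrics might roll up into a single leaderboard-style number, here is a hypothetical aggregate; the weights and delay normalization are invented, and the challenge's actual protocol is defined in the repo.

```python
# Purely illustrative aggregate score: NOT the challenge's official protocol.
# The weights and the delay normalization are invented for this sketch.

def aggregate_score(interruption_acc: float, rejection_acc: float,
                    mean_delay_s: float, max_delay_s: float = 2.0) -> float:
    """Blend accuracy terms with a delay penalty, all mapped to [0, 1]."""
    delay_term = max(0.0, 1.0 - mean_delay_s / max_delay_s)
    return 0.4 * interruption_acc + 0.4 * rejection_acc + 0.2 * delay_term

print(aggregate_score(interruption_acc=0.85, rejection_acc=0.78, mean_delay_s=0.9))
```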

Why is it gaining traction?

This full-duplex benchmark repo stands out for its realistic dual-channel data capturing human-like dialogue phenomena, plus a baseline Flask demo for quickly testing full-duplex interaction via web chat. The leaderboard pits open-source models against proprietary ones like Gemini, exposing practical gaps in conversational continuity. It appeals to speech AI developers who need a standardized way to benchmark responsive, overlap-aware systems without custom data collection.

Who should use this?

Speech researchers benchmarking full-duplex TTS/ASR pipelines for interruptions or rejections in human-like dialogue. Voice assistant builders at companies like Google or ElevenLabs evaluating latency and natural flow in concurrent listen-generate setups. Challenge participants prepping for ICASSP 2026 HumDial submissions.

Verdict

Grab it if you're in full-duplex speech eval: the dataset and leaderboard deliver immediate value despite nascent docs and a low star count (19 at the time of review). The project is early-stage, favoring baselines over polished APIs, but it's a solid starting point for advancing evaluation of overlapping conversations.
