jeremiah-masters

High-performance, lock-free concurrent hash table in Go, based on DLHT, with cooperative resizing and cache-efficient buckets.

Found May 01, 2026 at 29 stars.
AI Summary

DLHT is a high-performance concurrent hash table library for Go that provides lock-free get, insert, delete, and put operations with automatic resizing for high-concurrency workloads.

How It Works

1
🔍 Discover fast storage

You learn about DLHT, a super speedy way to store and find information, perfect for apps with lots of users accessing data at the same time.

2
📦 Add to your project

You easily include DLHT in your app so it can handle storing pairs of info like names and numbers.

3
Create your collection

You make a new storage space and set its starting size, ready to hold your data.

4
Store information

You add key-value pairs, like saving 'apple' with the number 5, and it handles many additions smoothly.

5
🔍 Find data instantly

You look up any item by its key and get the value right away, even with heavy use.

6
🔄 Update or remove

You change values for existing keys or delete them entirely without interrupting others.

7
📈 Check performance

You view simple stats like how full it is and how well it's scaling.

🚀 App runs lightning fast

Your application now manages high-traffic lookups and changes effortlessly, staying quick and reliable.

AI-Generated Review

What is dlht?

DLHT delivers a high-performance, lock-free concurrent hash table in Go, implementing the DLHT paper on high-performance dynamic lock-free hash tables and list-based sets. It provides linearizable Get, Insert, Delete, and Put operations that scale across cores, avoiding the contention stalls sync.Map hits in high-throughput backends. Users get generic support for string/uint64 keys plus an inline mode for uint64 integers, with auto-resizing and stats exposed through simple API calls like `m.New[string, int](opts)`.

Why is it gaining traction?

It laps sync.Map, cornelk, haxmap, and xsync in benchmarks: higher throughput and lower latency on read-heavy, churn, and growth workloads at up to 48 cores. No locks mean true non-blocking scaling, cache-friendly buckets keep performance predictable, and cooperative resizing avoids pauses. Devs grab it for easy wins in performance-sensitive Go projects, like swapping out sync.Map for 2-5x speedups without code changes.

Who should use this?

Go backend engineers building high-concurrency services: think API gateways, caches, or real-time analytics with 100+ goroutines hitting shared maps. It also fits compute-heavy apps or games that need fast, contested in-memory lookups a database round-trip can't serve. Skip it if you need range scans or complex queries.

Verdict

Grab it for perf hotspots; the allocator mode suits most workloads, while the inline mode crushes uint64-key paths. 29 stars signal early days, but solid benchmarks, examples, and an Apache 2.0 license make it worth prototyping against sync.Map today.
