GAMMA-v2: An end-to-end co-design simulation framework integrating gem5 and MLIR, enabling LLM and operator-level workload modeling, configurable accelerator generation, and system-level evaluation for mapping and architecture exploration.
GAMMA-v2 is a simulation framework for evaluating AI chip architectures on large language model workloads through compiler-driven planning and detailed performance reporting.
How It Works
GAMMA-v2 lets you test whether a custom AI chip can handle large language models such as chat assistants.
Describe the chip's memory capacity, bandwidth, and compute throughput through simple configuration options.
Choose a popular model family such as Llama or Qwen, and a task to run, such as text generation.
The framework's compiler automatically plans how to map the model onto the chip, step by step.
Launch the run and the simulator models how the chip executes the AI workload.
Finally, it produces reports on speed and efficiency, along with suggestions for improving the design.
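The steps above can be sketched as a small back-of-the-envelope model. Everything below is illustrative only: the class names, fields, and the roofline-style estimate are assumptions for this sketch, not GAMMA-v2's actual API or reporting logic.

```python
# Hypothetical sketch of the kind of inputs a framework like GAMMA-v2
# takes (chip description + workload choice) and the kind of throughput
# figure its reports contain. All names here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ChipConfig:
    dram_bandwidth_gbs: float  # off-chip memory bandwidth, GB/s
    peak_tflops: float         # peak compute throughput, TFLOP/s
    sram_mb: float             # on-chip buffer capacity, MB

@dataclass
class Workload:
    model: str       # e.g. "Llama-3-8B" (model family choice)
    params_b: float  # parameter count, in billions
    batch: int       # concurrent sequences

def estimate_decode_tokens_per_s(chip: ChipConfig, wl: Workload) -> float:
    """Crude roofline estimate for autoregressive decode: each generated
    token streams all weights (fp16, 2 bytes/param), so decode is usually
    memory-bandwidth bound rather than compute bound."""
    bytes_per_token = wl.params_b * 1e9 * 2  # fp16 weight traffic per token
    mem_bound = chip.dram_bandwidth_gbs * 1e9 / bytes_per_token
    # ~2 FLOPs per parameter per token for the matrix multiplies
    compute_bound = chip.peak_tflops * 1e12 / (2 * wl.params_b * 1e9)
    return min(mem_bound, compute_bound) * wl.batch

chip = ChipConfig(dram_bandwidth_gbs=1000, peak_tflops=300, sram_mb=40)
wl = Workload(model="Llama-3-8B", params_b=8, batch=1)
print(f"{estimate_decode_tokens_per_s(chip, wl):.1f} tokens/s")  # -> 62.5 tokens/s
```

A cycle-accurate simulator like gem5 captures far more detail (caches, interconnect, scheduling) than this one-line roofline bound, but the estimate shows which chip parameters dominate decode throughput and why memory bandwidth is usually the limiter.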