Inference scaling benchmark of Qwen3.5-2B on AMD Instinct MI300X using ROCm and Hugging Face Transformers.
Updated May 6, 2026 - Python
White paper & reproducible benchmark suite for LLM inference optimization on AMD MI300X using ROCm 6.1.
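The benchmark projects above report inference throughput on MI300X, but no harness code is included in these descriptions. The helper below is a generic sketch (the function name and its callable interface are assumptions, not code from any listed repository): it times a zero-argument generation callable and reports decode tokens per second, which is the usual metric in such benchmarks.

```python
import time


def tokens_per_second(generate_fn, new_tokens, runs=3, warmup=1):
    """Time a generation callable and report decode throughput.

    generate_fn: zero-argument callable that generates `new_tokens` tokens
                 (e.g. a closure around model.generate on a ROCm device).
    Returns the mean tokens/second over `runs` timed calls.
    """
    # Warmup calls absorb one-time costs (kernel compilation, cache fills)
    for _ in range(warmup):
        generate_fn()

    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        generate_fn()
        timings.append(time.perf_counter() - start)

    mean_time = sum(timings) / len(timings)
    return new_tokens / mean_time
```

In practice `generate_fn` would wrap something like `lambda: model.generate(**inputs, max_new_tokens=N)`; note that on a PyTorch ROCm build, AMD GPUs are addressed through the usual `torch.cuda` device API.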
Meridian — An AI-first project management platform featuring a serverless architecture powered by React and an AMD MI300X Inference Endpoint.
Span-cited English investor memos from Japanese annual securities reports (有価証券報告書), produced by a 14B nekomata-qfin fine-tune on a single AMD Instinct MI300X.
Local-first purple-team CLI for Terraform IaC: LocalStack sandboxes, MI300X/vLLM red-blue reasoning, deterministic remediation, and evidence reports.
Domain-specific fine-tuned code model for AMD ROCm GPU kernel optimization. SFT + GRPO on MI300X. 14% vs CUDA hand-tuned. 🤗 HF Space: https://huggingface.co/spaces/XMRTDAO/rocm-kernel-tuner
Zero-Knowledge multi-agent DAO governance on AMD MI300X + ROCm 6.2. AI agents propose, humans vote privately, treasury executes via ZK proofs. 🤗 HF Space: https://huggingface.co/spaces/XMRTDAO/zero-claw