Add CUDA graph capture probe for iris collectives #527
Draft
Conversation
Probe script that tests which iris operations can be captured in a CUDA graph. Uses `hipStreamBeginCapture` detection (authoritative from the HIP runtime) plus fresh-data replay validation to catch stale results.

Results on MI355X (2 GPUs):

- `device_barrier`: CAPTURABLE
- `host_barrier`: NOT CAPTURABLE (NCCL)
- All CCL ops (`all_reduce`, `all_gather`, `all_to_all`, `reduce_scatter`): NOT CAPTURABLE — `refresh_peer_access` does a CPU↔CUDA tensor copy during capture
- `ops.matmul_all_reduce`: NOT CAPTURABLE (same root cause)

Root cause: `SymmetricHeap.allocate()` calls `refresh_peer_access()`, which does `self.heap_bases[rank] = int(all_bases_arr[rank])` — a CPU↔CUDA copy that is illegal during graph capture.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
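The failure mode can be illustrated without a GPU. The sketch below is a toy model (not the HIP runtime — `ToyStream` and its methods are invented for illustration): enqueuing device-side work during capture is fine, but any device→host read raises, mirroring how `int(all_bases_arr[rank])` forces a sync point that stream capture forbids.

```python
from contextlib import contextmanager

class ToyStream:
    """Toy stand-in for a GPU stream: enqueuing work is legal during
    capture, but any host<->device synchronization is rejected."""

    def __init__(self):
        self.capturing = False

    @contextmanager
    def capture(self):
        # Stands in for hipStreamBeginCapture / hipStreamEndCapture.
        self.capturing = True
        try:
            yield
        finally:
            self.capturing = False

    def enqueue_kernel(self, name):
        # Device-side work only: always legal, even during capture.
        return f"enqueued {name}"

    def read_to_host(self, value):
        # Mirrors int(tensor_element): a device->host sync point.
        if self.capturing:
            raise RuntimeError("operation not permitted during stream capture")
        return value

stream = ToyStream()
stream.enqueue_kernel("all_reduce")        # fine outside capture

with stream.capture():
    stream.enqueue_kernel("all_reduce")    # fine: only enqueues device work
    try:
        stream.read_to_host(42)            # like refresh_peer_access(): fails
    except RuntimeError as e:
        print("capture aborted:", e)
```

In this model, any code path that calls `read_to_host` inside the capture region — however indirectly — poisons the capture, which is exactly how a buried allocation-time `refresh_peer_access()` breaks the CCL ops.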
Pull request overview
Note: Copilot was unable to run its full agentic suite in this review.
Adds a standalone probe script to detect which iris collectives/ops are CUDA-graph capturable by attempting `torch.cuda.graph()` capture and validating correctness on replay.

Changes:

- Introduces a `try_capture` harness that warms up, captures, replays, and validates operations to catch stale-result "false positives".
- Adds probe cases for iris barriers, CCL collectives (`all_reduce` variants, `all_gather`, `all_to_all`, `reduce_scatter`), and `ops.matmul_all_reduce`.
- Prints a rank-0 summary table of capturability outcomes and truncates error details for readability.
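The warmup → capture → replay → validate flow described above can be sketched backend-free. This is a simplified model, not the PR's implementation: `ToyGraph` stands in for `torch.cuda.CUDAGraph`, and the real harness additionally takes `reset_fn`, `ctx`, and `rank` and does HIP-level capture detection.

```python
from contextlib import contextmanager

class ToyGraph:
    """Toy stand-in for a CUDA graph: records a callable at capture time
    and re-runs it on replay()."""
    def __init__(self):
        self._fn = None

    @contextmanager
    def capture(self, fn):
        self._fn = fn
        yield
        fn()                    # run once during capture, like real capture

    def replay(self):
        self._fn()

def try_capture(name, warmup_fn, capture_fn, replay_setup_fn, validate_fn):
    """Probe whether an op survives graph capture. A successful capture
    alone is not enough: replay on fresh inputs and validate the output,
    since a graph that baked in stale data would otherwise look fine."""
    warmup_fn()                              # allocations happen here
    graph = ToyGraph()
    try:
        with graph.capture(capture_fn):      # torch.cuda.graph(g) on real HW
            pass
    except RuntimeError as e:
        return name, False, f"capture failed: {e}"
    replay_setup_fn()                        # write fresh input data
    graph.replay()                           # g.replay() on real HW
    if not validate_fn():
        return name, False, "stale/invalid results on replay"
    return name, True, "ok"

# Toy op: accumulate into buf; validation checks the result reflects
# the fresh inputs written by replay_setup_fn, not the warmup state.
buf = {"x": 0.0}
result = try_capture(
    "toy_op",
    warmup_fn=lambda: buf.update(x=buf["x"] + 1),
    capture_fn=lambda: buf.update(x=buf["x"] + 1),
    replay_setup_fn=lambda: buf.update(x=0.0),
    validate_fn=lambda: buf["x"] == 1.0,
)
print(result)   # ('toy_op', True, 'ok')
```

The key design choice this mirrors is ordering: warmup runs before capture so one-time allocations (the very thing that triggers `refresh_peer_access`) happen outside the capture region, isolating the op's own capture behavior.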
Comment on lines +413 to +420

```python
def replay_setup():
    # Keep same A, B (matmul is deterministic for same inputs)
    A.copy_(A_ref)
    B.copy_(B_ref)


def validate():
    # Check output is non-zero and finite
    return C.abs().max().item() > 0 and torch.isfinite(C).all().item()


# ---------------------------------------------------------------------------


def try_capture(name, warmup_fn, capture_fn, reset_fn, replay_setup_fn, validate_fn, ctx, rank):
```
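The fresh-data replay validation above guards against a specific false positive: a capture can "succeed" while the graph silently replays results computed from capture-time data. A GPU-free illustration (a toy memoizing "graph", invented for this sketch, not CUDA semantics):

```python
class StaleGraph:
    """Toy 'graph' that snapshots its inputs at capture time instead of
    reading them at replay time -- the failure the probe must detect."""
    def __init__(self, inputs):
        self._frozen = list(inputs)        # baked-in copy of the inputs
    def replay(self):
        return sum(self._frozen)           # ignores later input updates

inputs = [1.0, 2.0]
graph = StaleGraph(inputs)
print(graph.replay())                      # 3.0 -- looks correct

inputs[0] = 10.0                           # write fresh data for replay...
print(graph.replay())                      # still 3.0: a stale result
assert graph.replay() != sum(inputs)       # fresh-data validation catches it
```

This is why `replay_setup()` writes new values (or restores `A_ref`/`B_ref`) before replay, and why `validate()` checks the output against what those fresh inputs should produce.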
Comment on lines +148 to +155

```python
result_buf = ctx.zeros((64,), dtype=torch.float32)


def warmup():
    buf.fill_(float(rank + 1))
    ctx.device_barrier()
    # Read neighbor to prove barrier works
    neighbor = (rank + 1) % world_size
    heap_bases = ctx.get_heap_bases()
```
Comment on lines +34 to +39

```python
def setup():
    """Initialize distributed + iris."""
    local_rank = int(os.environ.get("LOCAL_RANK", 0))
    torch.cuda.set_device(local_rank)
    dist.init_process_group(backend="gloo")
    ctx = iris.iris(heap_size=1 << 30)
```
Summary
- `tests/graph_capture_probe.py` — a probe script that tests which iris operations can be captured in a CUDA graph
- Uses `hipStreamBeginCapture` detection (authoritative from the HIP runtime) plus fresh-data replay validation to catch stale results
- Run with: `torchrun --nproc_per_node=2 --standalone tests/graph_capture_probe.py`

Results (MI355X, 2 GPUs)

| Operation | Capturable | Notes |
| --- | --- | --- |
| `device_barrier` | CAPTURABLE | |
| `host_barrier` | NOT CAPTURABLE | NCCL |
| `ccl.all_reduce` (atomic) | NOT CAPTURABLE | `refresh_peer_access` CPU↔CUDA copy |
| `ccl.all_reduce` (two_shot) | NOT CAPTURABLE | |
| `ccl.all_reduce` (one_shot) | NOT CAPTURABLE | |
| `ccl.all_gather` | NOT CAPTURABLE | |
| `ccl.all_to_all` | NOT CAPTURABLE | |
| `ccl.reduce_scatter` | NOT CAPTURABLE | |
| `ops.matmul_all_reduce` | NOT CAPTURABLE | same root cause |

Root cause
`SymmetricHeap.allocate()` calls `refresh_peer_access()` every time, which does `self.heap_bases[rank] = int(all_bases_arr[rank])`. This is a CPU↔CUDA tensor copy, which is illegal during `hipStreamBeginCapture`. It gets triggered when any `ctx.zeros()` allocation happens inside the CCL launch path (workspace creation in the preamble).

Fix direction
To make CCL ops graph-capturable, we need to:

- Have `get_heap_bases()` return a pre-built CUDA tensor, without any CPU interaction during the kernel launch path
- Use `async_op=True` to skip the trailing `ctx.barrier()` (already supported)

🤖 Generated with Claude Code
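The first bullet amounts to a caching pattern: do all host work once at refresh time, so the launch-path getter only hands back an already-built value and never touches the host. A minimal sketch of that shape (`HeapBases` and its method names are hypothetical, not iris API; the "upload" here is a plain Python copy standing in for a host→device transfer):

```python
class HeapBases:
    """Hypothetical cache of peer heap base addresses.

    refresh(): all host work (reading addresses, uploading to the device)
    happens here, once, outside any capture region.
    get(): returns the prebuilt value and does no allocation and no
    host<->device copy, so it is capture-safe by construction.
    """
    def __init__(self):
        self._device_bases = None

    def refresh(self, all_bases):
        # Stand-in for building a CUDA tensor from host data; legal only
        # outside stream capture, e.g. right after allocation/peer exchange.
        self._device_bases = tuple(int(b) for b in all_bases)

    def get(self):
        # Safe to call inside the kernel launch path during graph capture.
        assert self._device_bases is not None, "refresh() before first get()"
        return self._device_bases

bases = HeapBases()
bases.refresh([0x7F00, 0x7F40])     # host work, done ahead of time
print(bases.get())                  # (32512, 32576) -- no host work on get
```

The invariant to enforce is simply that `refresh` is never reachable from code that runs under capture; allocation-time refresh (the current `SymmetricHeap.allocate()` behavior) violates it.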