openvmm: support multiple numa memory ranges for test only purposes #3361

chris-oo wants to merge 5 commits into microsoft:main
Conversation
Pull request overview
Adds support for describing guest RAM as multiple vNUMA node ranges (primarily for test scenarios), enabling OpenVMM/Petri/vmm_tests to exercise NUMA-aware behaviors without requiring real host NUMA pinning.
Changes:
- Introduces `MemoryLayout::new_with_numa()` to split RAM ranges across vNUMA nodes while respecting MMIO/PCI gaps.
- Plumbs per-node RAM sizes through Petri and OpenVMM configs, including a new `--numa-memory` CLI flag.
- Adds unit tests validating NUMA range splitting behavior in `vm_topology`.
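The core idea behind the new constructor can be sketched as follows. This is an illustrative standalone example, not the PR's actual code: the `Range` type, the `split_ram` function, and its signature are hypothetical, but the splitting logic mirrors what the summary describes (assign each vNUMA node its requested bytes in address order, skipping a reserved MMIO/PCI gap).

```rust
/// Hypothetical stand-in for a RAM range tagged with its vNUMA node.
#[derive(Debug, PartialEq)]
struct Range {
    start: u64,
    end: u64, // exclusive
    node: u32,
}

/// Lay out `node_sizes[i]` bytes for vNUMA node `i`, starting at address 0
/// and skipping the half-open `gap` range (e.g. the PCI MMIO hole).
fn split_ram(node_sizes: &[u64], gap: (u64, u64)) -> Vec<Range> {
    let (gap_start, gap_end) = gap;
    let mut ranges = Vec::new();
    let mut addr = 0u64;
    for (node, &size) in node_sizes.iter().enumerate() {
        let mut remaining = size;
        while remaining > 0 {
            // Jump over the gap once we reach it.
            if addr >= gap_start && addr < gap_end {
                addr = gap_end;
            }
            // How many bytes fit before the gap (unbounded past it)?
            let limit = if addr < gap_start {
                gap_start - addr
            } else {
                remaining
            };
            let len = remaining.min(limit);
            ranges.push(Range {
                start: addr,
                end: addr + len,
                node: node as u32,
            });
            addr += len;
            remaining -= len;
        }
    }
    ranges
}
```

Note that a node whose allocation straddles the gap ends up with two discontiguous ranges, which is exactly the multi-range-per-node behavior the tests need to exercise.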
Reviewed changes
Copilot reviewed 8 out of 8 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| vm/vmcore/vm_topology/src/memory.rs | Adds NUMA-aware memory layout constructor + tests for multi-node splitting behavior. |
| petri/src/vm/openvmm/construct.rs | Computes mem_size from per-node sizes when provided and forwards NUMA sizes into OpenVMM config. |
| petri/src/vm/mod.rs | Extends Petri MemoryConfig with optional numa_mem_sizes. |
| openvmm/openvmm_entry/src/ttrpc/mod.rs | Initializes new OpenVMM memory config field (numa_mem_sizes: None) for ttrpc path. |
| openvmm/openvmm_entry/src/lib.rs | CLI-to-config plumbing for NUMA sizes; sets mem_size to sum when --numa-memory is used. |
| openvmm/openvmm_entry/src/cli_args.rs | Adds --numa-memory parsing and mutual exclusion with --memory. |
| openvmm/openvmm_defs/src/config.rs | Extends shared MemoryConfig with numa_mem_sizes. |
| openvmm/openvmm_core/src/worker/dispatch.rs | Uses MemoryLayout::new_with_numa() when NUMA sizes are provided and validates totals. |
```diff
 /// guest RAM size
 #[clap(
     short = 'm',
     long,
     value_name = "SIZE",
     default_value = "1GB",
-    value_parser = parse_memory
+    value_parser = parse_memory,
+    conflicts_with = "numa_memory"
 )]
 pub memory: u64,

+/// per-NUMA-node guest RAM sizes (comma-separated, e.g. "2G,2G").
+/// Distributes memory across vNUMA nodes reported to the guest. Mutually
+/// exclusive with --memory.
+///
+/// TODO: This is informational topology only. Backing pages are not pinned
+/// to any host topology, nor coordinated with CPUs. This should change once
+/// we implement real numa support.
+#[clap(long, value_name = "SIZES", value_parser = parse_numa_memory, conflicts_with = "memory")]
+pub numa_memory: Option<Vec<u64>>,
```
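A value parser for sizes like `"2G,2G"` might look like the sketch below. The actual `parse_numa_memory` and `parse_size` implementations in `cli_args.rs` are not shown in this PR excerpt, so the names, accepted suffixes, and error handling here are assumptions for illustration.

```rust
/// Hypothetical sketch: parse a comma-separated list of sizes
/// (e.g. "2G,2G") into per-node byte counts.
fn parse_numa_memory(s: &str) -> Result<Vec<u64>, String> {
    s.split(',').map(parse_size).collect()
}

/// Parse a single size with an optional binary K/M/G suffix.
fn parse_size(s: &str) -> Result<u64, String> {
    let s = s.trim();
    // Split the string at the first non-digit character, if any.
    let (num, mult) = match s.char_indices().find(|(_, c)| !c.is_ascii_digit()) {
        Some((i, _)) => {
            let mult = match &s[i..] {
                "K" | "KB" => 1u64 << 10,
                "M" | "MB" => 1 << 20,
                "G" | "GB" => 1 << 30,
                suffix => return Err(format!("unknown size suffix: {suffix}")),
            };
            (&s[..i], mult)
        }
        None => (s, 1),
    };
    let n: u64 = num.parse().map_err(|e| format!("bad size {s}: {e}"))?;
    Ok(n * mult)
}
```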
--memory has a default_value, so it is always considered present by clap. With conflicts_with = "numa_memory"/conflicts_with = "memory", passing --numa-memory will likely always error due to a conflict with the defaulted --memory. To make --numa-memory usable, consider making memory an Option<u64> (and applying the default in code), or using clap's conditional defaults (e.g., default_value_if*) / an ArgGroup so the default for --memory is suppressed when --numa-memory is provided.
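One way the suggested fix could look, assuming the clap derive API: drop `default_value` from `--memory`, make the field an `Option<u64>`, and apply the 1GB default in code, so clap no longer treats `--memory` as always present when `--numa-memory` is given. This is a sketch of the reviewer's suggestion, not code from the PR; `parse_memory` and `parse_numa_memory` are assumed to exist as in the diff above.

```rust
#[derive(clap::Parser)]
struct Options {
    /// guest RAM size (defaults to 1GB when neither flag is given)
    #[clap(short = 'm', long, value_name = "SIZE",
           value_parser = parse_memory,
           conflicts_with = "numa_memory")]
    pub memory: Option<u64>,

    /// per-NUMA-node guest RAM sizes
    #[clap(long, value_name = "SIZES",
           value_parser = parse_numa_memory,
           conflicts_with = "memory")]
    pub numa_memory: Option<Vec<u64>>,
}

// When building the VM config, resolve the default in code:
// let mem_size: u64 = opts
//     .numa_memory
//     .as_ref()
//     .map(|sizes| sizes.iter().sum())
//     .or(opts.memory)
//     .unwrap_or(1 << 30); // 1GB default
```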
Not being able to specify NUMA nodes for memory makes it impossible to test certain behavior in vmm_tests. Add the ability to report this topology, even if the backing pages are not pinned to any specific hardware NUMA node. This behavior should change once we support real NUMA pinning for CPUs and memory in openvmm.

This also refactors `MemoryLayout` to use `subtract_ranges` when finding free sections of the address space to place RAM.

Required for #3312 to have openvmm tests work.