Centralize 'trtexec' subprocess runs in ONNX into a single function#1268

Open
gcunhase wants to merge 1 commit into NVIDIA:main from gcunhase:dev/gcunhasergio/add_run_trtexec_func

Conversation

@gcunhase
Contributor

@gcunhase gcunhase commented Apr 15, 2026

What does this PR do?

Type of change: code improvement

A subprocess call is needed to run trtexec commands, but each such call requires a nosec approval. This PR centralizes those calls into a single function for the ONNX workflow to avoid triggering future approvals. The torch workflow already centralizes this in run_command.

Original request: #1259 (comment)

Usage

results = _run_trtexec(cmd)

Testing

N/A

Before your PR is "Ready for review"

  • Is this change backward compatible?: ✅
  • If you copied code from any other sources or added a new PIP dependency, did you follow the guidance in CONTRIBUTING.md?: N/A
  • Did you write any new necessary tests?: N/A
  • Did you update Changelog?: N/A

Summary by CodeRabbit

  • Refactor
    • Centralized internal benchmark execution logic to improve code organization and maintainability. No user-facing functionality changes.

Signed-off-by: gcunhase <4861122+gcunhase@users.noreply.github.com>
@gcunhase gcunhase requested a review from a team as a code owner April 15, 2026 21:06
@gcunhase gcunhase requested a review from cjluo-nv April 15, 2026 21:06
@coderabbitai
Contributor

coderabbitai bot commented Apr 15, 2026

📝 Walkthrough

Walkthrough

Refactored subprocess execution logic by creating a new _run_trtexec helper function in ort_utils.py and updating existing code to use it, centralizing trtexec command execution and eliminating duplicate subprocess invocations.

Changes

Cohort / File(s) Summary
Subprocess Execution Utility
modelopt/onnx/quantization/ort_utils.py
Added _run_trtexec(cmd, timeout=None) helper function that wraps subprocess.run with trtexec command validation. Updated _check_for_trtexec to use this new utility instead of direct subprocess invocation.
Benchmark Refactoring
modelopt/onnx/quantization/autotune/benchmark.py
Removed direct subprocess.run call and import; now delegates trtexec execution to _run_trtexec from ort_utils. Downstream control flow (result handling, returncode checks, latency parsing) remains unchanged.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes


Important

Pre-merge checks failed

Please resolve all errors before merging. Addressing warnings is optional.

❌ Failed checks (1 error, 1 warning)

  • Security Anti-Patterns — ❌ Error. Explanation: the pull request contains a # nosec B603 comment on line 53 of ort_utils.py, violating the repository's explicit policy forbidding nosec comments to bypass Bandit security checks. Resolution: remove the # nosec B603 comment and replace the assert statement with explicit runtime validation using raise ValueError instead.
  • Docstring Coverage — ⚠️ Warning. Explanation: docstring coverage is 75.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (2 passed)
  • Description Check — ✅ Passed (check skipped; CodeRabbit's high-level summary is enabled).
  • Title Check — ✅ Passed. The title accurately summarizes the main change: centralizing trtexec subprocess execution into a single function across the ONNX codebase.

def _run_trtexec(cmd, timeout=None):
"""Run a 'trtexec' command via subprocess."""
# Ensure that this command is a trtexec run
assert any("trtexec" in c for c in cmd), "Subprocess can only execute 'trtexec' commands"
Contributor Author


@kevalmorabia97 please let me know if this is enough to ensure only trtexec runs or if this is even necessary. Thanks.

Collaborator


Can we not pass trtexec in cmd and only pass arguments to trtexec so this function prepends trtexec to the cmd?

Please add type annotation also

@gcunhase gcunhase changed the title Centralize subprocess runs in ONNX into a single function Centralize 'trtexec' subprocess runs in ONNX into a single function Apr 15, 2026
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@modelopt/onnx/quantization/ort_utils.py`:
- Around line 47-53: Replace the fragile assert and the banned "# nosec" in
_run_trtexec: validate that cmd is a non-empty sequence and that cmd[0] contains
"trtexec" (or equals "trtexec") using an explicit runtime check (e.g., if not
cmd or "trtexec" not in str(cmd[0]): raise ValueError("...")), then call
subprocess.run(cmd, capture_output=True, text=True, timeout=timeout) without any
"# nosec" comment; keep the function name _run_trtexec and parameters (cmd,
timeout) and raise a clear exception (ValueError/RuntimeError) when validation
fails so Bandit checks are not bypassed.


📥 Commits

Reviewing files that changed from the base of the PR and between 361f7e3 and 6f25809.

📒 Files selected for processing (2)
  • modelopt/onnx/quantization/autotune/benchmark.py
  • modelopt/onnx/quantization/ort_utils.py

Comment on lines +47 to +53
def _run_trtexec(cmd, timeout=None):
"""Run a 'trtexec' command via subprocess."""
# Ensure that this command is a trtexec run
assert any("trtexec" in c for c in cmd), "Subprocess can only execute 'trtexec' commands"

# Run trtexec command
return subprocess.run(cmd, capture_output=True, text=True, timeout=timeout) # nosec B603
Contributor


⚠️ Potential issue | 🔴 Critical


Replace the assert gate and remove the # nosec bypass in _run_trtexec.

Line 50 uses assert for command validation, which can be disabled with Python optimization flags (-O, -OO), making it an unreliable security control. Line 53 adds # nosec B603, which violates repo policy: "Bandit security checks are enforced via pre-commit hooks. # nosec comments are not allowed as bypasses." Replace the assert with an explicit runtime check on cmd[0] and remove the nosec suppression. Request formal codeowner approval only if the subprocess pattern genuinely cannot be restructured to satisfy Bandit without exceptions.

Proposed fix
-def _run_trtexec(cmd, timeout=None):
+def _run_trtexec(cmd: Sequence[str], timeout: float | None = None):
     """Run a 'trtexec' command via subprocess."""
-    # Ensure that this command is a trtexec run
-    assert any("trtexec" in c for c in cmd), "Subprocess can only execute 'trtexec' commands"
+    if not cmd:
+        raise ValueError("Empty command passed to _run_trtexec")
+    exe = os.path.basename(str(cmd[0])).lower()
+    if exe not in {"trtexec", "trtexec.exe"}:
+        raise ValueError("Subprocess can only execute 'trtexec' commands")
 
     # Run trtexec command
-    return subprocess.run(cmd, capture_output=True, text=True, timeout=timeout)  # nosec B603
+    return subprocess.run(list(cmd), capture_output=True, text=True, timeout=timeout, check=False)

@codecov

codecov bot commented Apr 15, 2026

Codecov Report

❌ Patch coverage is 83.33333% with 1 line in your changes missing coverage. Please review.
✅ Project coverage is 76.52%. Comparing base (361f7e3) to head (6f25809).

Files with missing lines Patch % Lines
modelopt/onnx/quantization/ort_utils.py 75.00% 1 Missing ⚠️
Additional details and impacted files
@@             Coverage Diff             @@
##             main    #1268       +/-   ##
===========================================
+ Coverage   55.67%   76.52%   +20.84%     
===========================================
  Files         458      459        +1     
  Lines       48464    48530       +66     
===========================================
+ Hits        26982    37136    +10154     
+ Misses      21482    11394    -10088     
Flag Coverage Δ
examples 41.38% <16.66%> (+17.27%) ⬆️
gpu 60.00% <83.33%> (+39.61%) ⬆️
unit 52.03% <33.33%> (-0.01%) ⬇️

Flags with carried forward coverage won't be shown.


Collaborator


Can you also unify with this call:

p = subprocess.Popen(cmd, stdout=log, stderr=log, cwd=str(cwd) if cwd else None) # nosec

Contributor Author


I was trying to avoid coupling modelopt/torch with modelopt/onnx. What do you think is best?

Collaborator


modelopt.torch._deploy already has some dependencies on modelopt.onnx, so it's fine. We can keep this in modelopt.onnx.trt_utils or whatever name makes sense.

For example this: https://github.com/NVIDIA/Model-Optimizer/blob/07ae8e71281a1c17898991b0cd76db5788f490bb/modelopt/torch/_deploy/_runtime/trt_client.py

Contributor Author


Good point, will look into this today.
