Removed version fixes for torch transformers in windows ptq example requirements #1275

hthadicherla wants to merge 2 commits into main
Conversation
Signed-off-by: Hrishith Thadicherla <hthadicherla@nvidia.com>
No actionable comments were generated in the recent review. 🎉

📝 Walkthrough

Updated the example requirements file to relax version constraints.

Estimated code review effort: 🎯 1 (Trivial) | ⏱️ ~2 minutes

🚥 Pre-merge checks: ✅ 4 passed
🧹 Nitpick comments (1)

examples/windows/onnx_ptq/genai_llm/requirements.txt (1)

Lines 2-3: Consider bounded ranges instead of fully unpinned dependencies.

Leaving lines 2-3 fully unpinned can make the example non-reproducible and vulnerable to future major-version breakage. Other examples in the repo use tested versions with upper bounds (e.g., torch>=2.6.0, transformers<5.0). If flexibility is desired, prefer tested lower bounds with a major-version upper cap.

Suggested change:

```diff
- torch
- transformers
+ torch>=2.7,<3
+ transformers>=4.57,<5
```
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In examples/windows/onnx_ptq/genai_llm/requirements.txt, around lines 2-3: the requirements file lists unpinned dependencies (torch, transformers). Replace these with tested bounded ranges to avoid breakage and ensure reproducibility: for example, set a tested minimum and a major-version upper bound for torch and transformers (such as torch>=2.6.0,<3.0 and transformers>=4.0,<5.0), or use the project's chosen lower bounds and upper caps, updating the "torch" and "transformers" entries in requirements.txt accordingly.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro Plus
Run ID: 67c6a315-53a9-4e12-a24b-68cc510586b1
📒 Files selected for processing (1)
examples/windows/onnx_ptq/genai_llm/requirements.txt
Codecov Report

✅ All modified and coverable lines are covered by tests.

```
@@ Coverage Diff @@
##             main    #1275      +/-   ##
==========================================
- Coverage   75.61%   74.73%   -0.88%
==========================================
  Files         459      459
  Lines       48597    48621      +24
==========================================
- Hits        36747    36338     -409
- Misses      11850    12283     +433
```

Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
```diff
- torch==2.9.0
- transformers==4.57.3
+ torch
+ transformers
```
does this work for transformers v5 also?
agreed, do we want to specify a range instead?
At least for the quantization example, I saw that it worked with transformers v5 for the models I tested (llama, qwen, gemma). I could, however, lock it to install the highest version below 5 if you think it could cause an issue.
Do you support MoE models? That's the main one affected by transformers v5. If you only care about dense LLMs, then you are fine using the latest transformers.
I don't know whether it is used for MoE models. At least in our internal automation, we only test dense models.
@vishalpandya1990 do you know if our quantization example supports MoE models?
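As an aside on the v4/v5 distinction being discussed, here is a minimal sketch of how one might gate behavior on the installed transformers major version (a hypothetical helper, not part of this PR; the fallback version string is illustrative):

```python
from importlib.metadata import PackageNotFoundError, version


def major_version(package: str, fallback: str = "0") -> int:
    """Return the major component of an installed package's version.

    Uses `fallback` when the package is not installed, so the check
    degrades gracefully in minimal environments.
    """
    try:
        ver = version(package)
    except PackageNotFoundError:
        ver = fallback
    return int(ver.split(".")[0])


# Example: gate MoE-specific behavior on the transformers major version.
if major_version("transformers", fallback="4.57.3") >= 5:
    print("transformers v5+: check MoE model support")
else:
    print("transformers v4 line (or not installed)")
```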
Adding an upper bound of <5 anyway.
If there is a need for transformers v5 in this particular example in the future, I can add it then.
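With that cap, the relevant entries in this example's requirements file would read roughly as follows (a sketch of the direction agreed above, not the exact committed file):

```
# examples/windows/onnx_ptq/genai_llm/requirements.txt (relevant lines, sketch)
torch            # unpinned; previously torch==2.9.0
transformers<5   # previously transformers==4.57.3; capped below v5 per the review
```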
Signed-off-by: Hrishith Thadicherla <99313418+hthadicherla@users.noreply.github.com>
What does this PR do?
Type of change: Bug fix
Removed version fixes for torch and transformers
Testing
Tested quantization with a couple of models. Working as expected.