Open
Conversation
Codecov Report ✅ All modified and coverable lines are covered by tests.
def test_tokenizer_w_generation_prompt(self):
  verify_chat_template_generation_prompt_logic(self.qwen3_tokenizer)

def test_tokenizer_wo_generation_promt(self):
Collaborator
nit: test_tokenizer_wo_generation_prompt
vlad-karp reviewed Mar 10, 2026
| "gsutil", | ||
| "cp", | ||
| "-r", | ||
| "gs://maxtext-dataset/hf/llama2-chat-tokenizer", |
Collaborator
Is it okay to use a private gs: location?
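For context, a minimal sketch of how a test fixture might fetch the tokenizer assets quoted in this diff. The destination path, function name, and use of `subprocess.run` are assumptions for illustration; only the `gsutil` arguments come from the diff above.

```python
import subprocess


def download_chat_tokenizer(dest_dir: str = "/tmp/llama2-chat-tokenizer") -> str:
  """Copies the chat tokenizer assets from the GCS bucket referenced in the diff."""
  # Requires gsutil on PATH and read access to the bucket; check=True raises on failure.
  subprocess.run(
      ["gsutil", "cp", "-r", "gs://maxtext-dataset/hf/llama2-chat-tokenizer", dest_dir],
      check=True,
  )
  return dest_dir
```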
  ValueError: If the `add_generation_prompt` tokens do not exactly
    match the beginning of an assistant message in the template.
  """
  dummy_msgs = [{"role": "user", "content": "Test message"}]
Collaborator
Can you also include a system prompt in the verification?
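A small sketch of what the extended message list could look like, assuming the verification keeps the same structure as the quoted diff; the system prompt text is made up for illustration.

```python
# Assumed extension of dummy_msgs: prepend a system turn so the chat template's
# handling of system messages is exercised by the same prefix check.
dummy_msgs = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Test message"},
]
```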
  prompt_wo_gen = tokenizer_model.apply_chat_template(dummy_msgs, add_generation_prompt=False, tokenize=True)
  prompt_with_gen = tokenizer_model.apply_chat_template(dummy_msgs, add_generation_prompt=True, tokenize=True)
  # Extract the tokenized generation prompt (the expected assistant prefix)
  assistant_prefix_tokens = prompt_with_gen[len(prompt_wo_gen) :]
Collaborator
Add a check before this:
if prompt_with_gen[:len(prompt_wo_gen)] != prompt_wo_gen:
  raise ValueError("Unable to extract generation prompt tokens.")
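Putting the quoted diff and this suggestion together, a minimal sketch of how the helper might look with the guard in place. The docstring wording and the trailing empty-prefix check are assumptions added for illustration; only the prefix extraction and the suggested guard come from this thread.

```python
def verify_chat_template_generation_prompt_logic(tokenizer_model):
  """Checks that add_generation_prompt=True appends exactly the assistant prefix."""
  dummy_msgs = [{"role": "user", "content": "Test message"}]
  prompt_wo_gen = tokenizer_model.apply_chat_template(dummy_msgs, add_generation_prompt=False, tokenize=True)
  prompt_with_gen = tokenizer_model.apply_chat_template(dummy_msgs, add_generation_prompt=True, tokenize=True)
  # The tokens without the generation prompt must be a strict prefix of the tokens
  # with it; otherwise the chat template rewrites earlier turns unexpectedly.
  if prompt_with_gen[: len(prompt_wo_gen)] != prompt_wo_gen:
    raise ValueError("Unable to extract generation prompt tokens.")
  # Whatever remains is the assistant-turn prefix the template adds.
  assistant_prefix_tokens = prompt_with_gen[len(prompt_wo_gen):]
  # Assumed extra sanity check: the template should add at least one token.
  if not assistant_prefix_tokens:
    raise ValueError("add_generation_prompt=True did not add any tokens.")
  return assistant_prefix_tokens
```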
Description
This PR adds a test as a follow-up to PR #3284, which added the generation_prompt to the prompt tokens to improve SFT masking.
It adds an additional check to ensure the generation prompt behaves as expected for SFT masking: if the tokenizer's chat template is misconfigured, the pipeline now fails fast with an error.
The check compares the tokens produced by `tokenizer.apply_chat_template` when `add_generation_prompt` is set to `True` versus `False` (ref). Thanks to @vlad-karp for raising the concern that led to adding this test!
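To illustrate the behavior being verified, a short string-level example; the checkpoint name and the exact prefix string are assumptions about a typical ChatML-style template, not taken from this PR.

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B")  # assumed checkpoint for illustration
msgs = [{"role": "user", "content": "Test message"}]
without_gen = tok.apply_chat_template(msgs, add_generation_prompt=False, tokenize=False)
with_gen = tok.apply_chat_template(msgs, add_generation_prompt=True, tokenize=False)
# For ChatML-style templates the difference is typically just the assistant header
# (e.g. "<|im_start|>assistant\n"); the test asserts this prefix relationship on token IDs.
assert with_gen.startswith(without_gen)
print(with_gen[len(without_gen):])
```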
Tests
Checklist
Before submitting this PR, please make sure (put X in square brackets):
gemini-review label.