
Add chat template check for sft #3350

Open
ChingTsai wants to merge 1 commit into main from jimmytsai/add-template-check-for-sft

Conversation

ChingTsai (Collaborator) commented Mar 9, 2026

Description

  • This PR adds a test as a follow-up to PR #3284, which added the generation_prompt to the prompt tokens to improve SFT masking.

  • Adds an additional check to ensure the generation prompt behaves as expected for SFT masking. If the tokenizer's chat template is misconfigured, the pipeline now fails fast with an error.

    • It verifies that the generation_prompt (assistant prefix) remains identical regardless of whether add_generation_prompt is set to True or False when running tokenizer.apply_chat_template (see the sketch below). ref
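
A minimal sketch of that check, assuming a Hugging Face tokenizer loaded via transformers (the model name and variable names below are illustrative, not the PR's actual helper):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B")  # illustrative choice

msgs = [{"role": "user", "content": "Test message"}]
ids_wo_gen = tokenizer.apply_chat_template(msgs, add_generation_prompt=False, tokenize=True)
ids_with_gen = tokenizer.apply_chat_template(msgs, add_generation_prompt=True, tokenize=True)

# With a well-formed chat template, add_generation_prompt=True only appends the
# assistant prefix; all earlier tokens must remain identical.
if ids_with_gen[: len(ids_wo_gen)] != ids_wo_gen:
  raise ValueError("Chat template alters earlier tokens when add_generation_prompt=True.")
assistant_prefix_tokens = ids_with_gen[len(ids_wo_gen):]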

Thanks to @vlad-karp for raising the concern that led to adding this test!

Tests

python3 -m pytest -vv tests/unit/sft_data_processing_test.py  -m "external_training" -s

Checklist

Before submitting this PR, please make sure (put X in square brackets):

  • I have performed a self-review of my code. For an optional AI review, add the gemini-review label.
  • I have necessary comments in my code, particularly in hard-to-understand areas.
  • I have run end-to-end tests and provided workload links above if applicable.
  • I have made or will make corresponding changes to the doc if needed, including adding new documentation pages to the relevant Table of Contents (toctree directive) as explained in our documentation.

ChingTsai changed the title add chat template check for sft → Add chat template check for sft on Mar 9, 2026
codecov bot commented Mar 9, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.


def test_tokenizer_w_generation_prompt(self):
  verify_chat_template_generation_prompt_logic(self.qwen3_tokenizer)

def test_tokenizer_wo_generation_promt(self):
nit: test_tokenizer_wo_generation_prompt

"gsutil",
"cp",
"-r",
"gs://maxtext-dataset/hf/llama2-chat-tokenizer",

Is it okay to use a private gs: location?
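
For context, those arguments presumably feed a subprocess call along these lines (a sketch; the local destination path is an assumption, not taken from the PR):

import subprocess

# Fetch the tokenizer files from GCS into a local directory before the test runs.
# The destination directory below is hypothetical.
subprocess.run(
    ["gsutil", "cp", "-r", "gs://maxtext-dataset/hf/llama2-chat-tokenizer", "/tmp/llama2-chat-tokenizer"],
    check=True,
)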

ValueError: If the `add_generation_prompt` tokens do not exactly
match the beginning of an assistant message in the template.
"""
dummy_msgs = [{"role": "user", "content": "Test message"}]

Can you also include system prompt in verification?
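
If that suggestion is adopted, the dummy conversation might look roughly like this (a sketch, not the final change; the system prompt text is illustrative):

# Include a system turn so the check also covers templates that render a system block.
dummy_msgs = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Test message"},
]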

prompt_wo_gen = tokenizer_model.apply_chat_template(dummy_msgs, add_generation_prompt=False, tokenize=True)
prompt_with_gen = tokenizer_model.apply_chat_template(dummy_msgs, add_generation_prompt=True, tokenize=True)
# Extract the tokenized generation prompt (the expected assistant prefix)
assistant_prefix_tokens = prompt_with_gen[len(prompt_wo_gen) :]

Add a check before this:

if prompt_with_gen[:len(prompt_wo_gen)] != prompt_wo_gen:
    raise ValueError("Unable to extract generation prompt tokens.")

