-
Notifications
You must be signed in to change notification settings - Fork 8
Query Insights: Update model & refine prompt components #468
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
Conversation
…into dev/xingfan/updatemodel
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Pull request overview
This pull request refactors the LLM prompt architecture to improve security against prompt injection attacks and updates the preferred model from 'gpt-5' to 'gpt-4o'.
Changes:
- Updated the preferred LLM model from 'gpt-5' (non-existent) to 'gpt-4o', with 'gpt-4o-mini' as the fallback
- Introduced a new
FilledPromptResultinterface to separate prompt components into craftedPrompt (instructions), userQuery (user input), and contextData (system-retrieved data) - Refactored message structure to use Assistant role for instructions and User role for data, implementing a prompt injection defense pattern
- Added security-focused prompt components including PRIORITY DECLARATION, SECURITY INSTRUCTIONS, and CRITICAL_JSON_REMINDER
- Updated all prompt templates with new structure including data placeholder documentation sections
Reviewed changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 7 comments.
| File | Description |
|---|---|
| src/commands/llmEnhancedCommands/promptTemplates.ts | Major refactoring: changed model configuration, added new utility functions for security-focused prompts, created FilledPromptResult interface, and restructured all prompt templates with security instructions and data placeholder documentation |
| src/commands/llmEnhancedCommands/queryGenerationCommands.ts | Updated fillPromptTemplate to return FilledPromptResult components and refactored message construction to use Assistant/User role pattern for security |
| src/commands/llmEnhancedCommands/indexAdvisorCommands.ts | Updated fillPromptTemplate to return FilledPromptResult components, refactored message construction to match new pattern, and updated telemetry calculation |
tnaum-ms
left a comment
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
@xingfan-git please resolve copilot's concerns, and add a description in the PR that explains why the applied solution is an effective approach against 'prompt injections'.
|
@copilot summarize the PR and generate description |
|
@xingfan-git I've opened a new pull request, #475, to work on those changes. Once the pull request is ready, I'll request review from you. |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Pull request overview
Copilot reviewed 3 out of 3 changed files in this pull request and generated 2 comments.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Pull request overview
Copilot reviewed 3 out of 3 changed files in this pull request and generated 4 comments.
Updated the prompt structure to mitigate prompt injection attacks. The changes include:
Set the default model as gpt-4o to provide faster response