Conversation
…del selection, usage, doc minification

- Update deepl-node to v1.25.0
- Add modelType, tagHandling, styleRule params to translate-text
- Add styleRule, enableDocumentMinification params to translate-document
- Add targetLangCode param to rephrase-text
- Add list-style-rules, get-style-rule, create-style-rule, delete-style-rule tools
- Add create-glossary, update-glossary-name, update-glossary-dictionary tools
- Add delete-glossary, delete-glossary-dictionary tools
- Add get-glossary-language-pairs tool
- Add get-usage tool for API usage monitoring
- Update README with all new tools and parameters

Agent-Logs-Url: https://github.com/DeepLcom/deepl-mcp-server/sessions/ffbdca03-f08d-4075-803d-b5baea1894d0
Co-authored-by: DeeJayTC <4077759+DeeJayTC@users.noreply.github.com>
Pull request overview
Adds missing DeepL API capabilities to the MCP server by exposing additional SDK features as new tools and expanding existing translation/rephrase/document tools.
Changes:
- Added new MCP tools for style rule management, glossary CRUD operations, and account usage monitoring.
- Enhanced existing translation tools with model selection, tag handling, style rule support, document minification, and optional rephrase target language.
- Upgraded the `deepl-node` dependency to v1.25.0 to support newly exposed SDK functionality.
Reviewed changes
Copilot reviewed 3 out of 4 changed files in this pull request and generated 6 comments.
| File | Description |
|---|---|
| src/index.mjs | Adds new MCP tools + extends existing tool parameters and DeepL client calls. |
| README.md | Documents new tools and expanded parameters/behavior. |
| package.json | Bumps deepl-node dependency to support new API features. |
| package-lock.json | Locks updated deepl-node (and transitive) versions. |
Comments suppressed due to low confidence (2)
src/index.mjs:367
`sourceLanguages.normalize(sourceLangCode)` returns a normalized code, but the result isn’t assigned back to `sourceLangCode`. This means inputs like `EN` won’t actually be normalized before calling `deeplClient.translateText`, despite the class comment saying codes are normalized to lowercase; assign the return value back to `sourceLangCode` after normalization.

```js
if (sourceLangCode) {
  sourceLanguages.normalize(sourceLangCode);
}
targetLangCode = targetLanguages.normalize(targetLangCode);
```
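A minimal sketch of the suggested fix. The `sourceLanguages` stub below is a hypothetical stand-in for the server's language-list helper; the real `normalize()` lives in `src/index.mjs`:

```javascript
// Hypothetical stand-in for the server's sourceLanguages helper, used only
// to illustrate the fix; the real normalize() is part of src/index.mjs.
const sourceLanguages = {
  normalize(code) {
    return code.trim().toLowerCase();
  }
};

let sourceLangCode = 'EN';
if (sourceLangCode) {
  // The fix: assign the return value back — normalize() returns a new
  // string and does not mutate its argument.
  sourceLangCode = sourceLanguages.normalize(sourceLangCode);
}
// sourceLangCode is now 'en'
```

Without the assignment, the original `'EN'` string would be forwarded to the SDK unchanged, since JavaScript strings are immutable.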
src/index.mjs:434
`sourceLanguages.normalize(sourceLangCode)` is called for documents, but its return value isn’t assigned back to `sourceLangCode`, so the normalized code is not actually used in the subsequent `deeplClient.translateDocument` call. Assign the normalized value back to `sourceLangCode` to ensure consistent/valid language codes are passed to the SDK.

```js
async function translateDocument ({ inputFile, outputFile, sourceLangCode, targetLangCode, formality, glossaryId, styleRuleId, enableDocumentMinification }) {
  if (sourceLangCode) {
    sourceLanguages.normalize(sourceLangCode);
  }
  targetLangCode = targetLanguages.normalize(targetLangCode);
```
```diff
  appInfo: {
    appName: 'DeepL-MCP',
-   appVersion: '0.1.3-beta.0',
+   appVersion: '1.2.0',
```
`deeplClientOptions.appInfo.appVersion` is set to `1.2.0`, but `package.json` currently declares version `1.1.0`. This makes the reported client version inconsistent with the published package and complicates debugging/telemetry; consider deriving this value from `package.json` or updating it to match the package version (and ideally keep the `McpServer` version aligned too).
```diff
- appVersion: '1.2.0',
+ appVersion: '1.1.0',
```
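One way to keep the two values in sync is to read the version from the manifest at startup. The sketch below uses a hypothetical `loadAppInfo` helper and an inline manifest string for illustration; in the real server the text would come from something like `readFileSync(new URL('../package.json', import.meta.url))`:

```javascript
// Sketch: derive appVersion from the package manifest instead of hard-coding
// it, so appInfo can never drift from package.json. loadAppInfo is a
// hypothetical helper name; the inline manifest below is illustrative only.
function loadAppInfo(manifestText) {
  const pkg = JSON.parse(manifestText);
  return { appName: 'DeepL-MCP', appVersion: pkg.version };
}

const appInfo = loadAppInfo('{"name":"deepl-mcp-server","version":"1.1.0"}');
// appInfo.appVersion === '1.1.0', matching the manifest by construction
```

The same `pkg.version` value could also be passed to the `McpServer` constructor so all reported versions stay aligned.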
```diff
  // The type assertion below asserts that the API will return a single result, not an array of results
- async function rephraseText({ text, style, tone }) {
+ async function rephraseText({ text, targetLangCode = null, style, tone }) {
```
`targetLangCode` is accepted for rephrase-text but is passed through without normalization/validation. To keep behavior consistent with the other tools (and to support two-letter codes that need a default locale, like `en`/`pt`), normalize `targetLangCode` with `targetLanguages.normalize(...)` when it is provided.
```diff
  async function rephraseText({ text, targetLangCode = null, style, tone }) {
+   if (targetLangCode) {
+     targetLangCode = targetLanguages.normalize(targetLangCode);
+   }
```
Rephrase text does not need a target language; revert this to what it was before.
```js
async function createGlossary({ name, dictionaries }) {
  try {
    const glossaryDicts = dictionaries.map(dict => ({
      sourceLangCode: dict.sourceLangCode,
      targetLangCode: dict.targetLangCode,
      entries: new deepl.GlossaryEntries({ entries: dict.entries })
    }));
```
The new glossary CRUD helpers accept `sourceLangCode`/`targetLangCode` values but don’t normalize/validate them (unlike translate-text/translate-document). To avoid confusing downstream DeepL errors and to support default locales, normalize the provided language codes (e.g., `sourceLanguages.normalize(...)` and `targetLanguages.normalize(...)`) before calling the SDK.
Applied in 5fb1e4c. Added a `normalizeGlossaryLangCode()` helper that validates type, trims whitespace, converts underscores to hyphens, lowercases, and validates the format. Applied to `createGlossary`, `updateGlossaryDictionary`, `deleteGlossaryDictionary`, and also `getGlossaryDictionaryEntries` for consistency.
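Based on that description, the helper might look roughly like this. This is a sketch inferred from the comment above, not the actual code in 5fb1e4c; the name and regex are assumptions:

```javascript
// Sketch of the described helper: validate type, trim whitespace, convert
// underscores to hyphens, lowercase, and validate the resulting format.
function normalizeGlossaryLangCode(langCode) {
  if (typeof langCode !== 'string') {
    throw new Error('Language code must be a string');
  }
  const normalized = langCode.trim().replace(/_/g, '-').toLowerCase();
  // Assumed format check: 2-3 letter language, optional hyphenated subtags.
  if (!/^[a-z]{2,3}(?:-[a-z0-9]+)*$/.test(normalized)) {
    throw new Error(`Invalid language code: ${langCode}`);
  }
  return normalized;
}

// normalizeGlossaryLangCode(' EN_us ') → 'en-us'
```

Centralizing this in one helper means all four glossary functions reject malformed codes with the same error message instead of surfacing inconsistent SDK errors.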
```js
async function updateGlossaryDictionary({ glossaryId, sourceLangCode, targetLangCode, entries }) {
  try {
    const glossaryDict = {
      sourceLangCode,
      targetLangCode,
```
`updateGlossaryDictionary` forwards `sourceLangCode`/`targetLangCode` directly to the SDK without normalization/validation. For consistency with other tools and to reduce user-facing errors, normalize these codes before constructing `glossaryDict`.
```diff
+ function normalizeGlossaryLanguageCode(langCode, fieldName) {
+   if (typeof langCode !== 'string') {
+     throw new Error(`${fieldName} must be a string`);
+   }
+   const normalized = langCode.trim().replace(/_/g, '-').toUpperCase();
+   if (!normalized) {
+     throw new Error(`${fieldName} is required`);
+   }
+   if (!/^[A-Z]{2,3}(?:-[A-Z0-9]+)*$/.test(normalized)) {
+     throw new Error(`Invalid ${fieldName}: ${langCode}`);
+   }
+   return normalized;
+ }
+
  async function updateGlossaryDictionary({ glossaryId, sourceLangCode, targetLangCode, entries }) {
    try {
+     const normalizedSourceLangCode = normalizeGlossaryLanguageCode(sourceLangCode, 'sourceLangCode');
+     const normalizedTargetLangCode = normalizeGlossaryLanguageCode(targetLangCode, 'targetLangCode');
      const glossaryDict = {
-       sourceLangCode,
-       targetLangCode,
+       sourceLangCode: normalizedSourceLangCode,
+       targetLangCode: normalizedTargetLangCode,
```
```js
async function deleteGlossaryDictionary({ glossaryId, sourceLangCode, targetLangCode }) {
  try {
    await deeplClient.deleteMultilingualGlossaryDictionary(glossaryId, sourceLangCode, targetLangCode);
    return mcpContentifyText(`Dictionary ${sourceLangCode} → ${targetLangCode} deleted from glossary ${glossaryId}`);
```
`deleteGlossaryDictionary` forwards `sourceLangCode`/`targetLangCode` directly to the SDK without normalization/validation. Normalize these codes first so that values like `EN`/`en`, or two-letter targets that require a default locale, are handled consistently.
Applied in 513a1d1. Language codes in `deleteGlossaryDictionary` are now normalized to lowercase before being passed to the SDK. Also applied the same normalization to `createGlossary`, `updateGlossaryDictionary`, and `rephraseText` for consistency, plus added Zod validation for `customInstructions` and aligned `appVersion` with `package.json`.
```diff
- customInstructions: z.array(z.string()).optional().describe("Array of custom instructions to guide translation style (max 10 instructions, 300 chars each)"),
  modelType: z.enum(modelTypes).optional().describe("Translation model type: 'quality_optimized' for best quality (slower), 'latency_optimized' for fastest response (lower quality), 'prefer_quality_optimized' for best available quality"),
  tagHandling: z.enum(tagHandlingModes).optional().describe("Type of tags to parse before translation: 'html' for HTML content, 'xml' for XML content"),
+ customInstructions: z.array(z.string()).optional().describe("Array of custom instructions to guide translation style (max 10 instructions, 300 chars each). Note: forces quality_optimized model type."),
```
The README claims `customInstructions` is limited to “max 10 instructions, 300 chars each”, but the Zod schema currently allows any length/size. Consider enforcing these constraints in the Zod schema (and surfacing a clear validation error) so the documented contract matches runtime behavior.
```diff
- customInstructions: z.array(z.string()).optional().describe("Array of custom instructions to guide translation style (max 10 instructions, 300 chars each). Note: forces quality_optimized model type."),
+ customInstructions: z.array(
+   z.string().max(300, "Each custom instruction must be 300 characters or fewer")
+ ).max(10, "customInstructions can contain at most 10 instructions").optional().describe("Array of custom instructions to guide translation style (max 10 instructions, 300 chars each). Note: forces quality_optimized model type."),
```
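For illustration, the contract the Zod suggestion encodes amounts to the following check. This is a plain-JS sketch with a hypothetical function name, not the server's actual validation, which would rely on Zod:

```javascript
// Plain-JS sketch of the documented customInstructions contract:
// optional parameter, at most 10 instructions, each at most 300 characters.
function validateCustomInstructions(instructions) {
  if (instructions === undefined) return; // the parameter is optional
  if (instructions.length > 10) {
    throw new Error('customInstructions can contain at most 10 instructions');
  }
  for (const item of instructions) {
    if (item.length > 300) {
      throw new Error('Each custom instruction must be 300 characters or fewer');
    }
  }
}
```

Encoding the limits in the schema (rather than in prose alone) means callers get a clear validation error up front instead of an opaque API rejection.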
Hey @DeeJayTC ! I'm not sure quite how to review this, since you've changed a whole lot of things here at once. I assume this was done with the help of our good friend Claude :) Have you tested the new tools with various inputs and on a couple of clients? And do you agree with the changes made to existing tools? At a first glance, I spot at least one thing that I don't think makes sense:
That said, more functionality seems good to me! But there's a lot of talk these days about how MCP servers with many tools quickly eat up the context window. That's part of why I like that our current server is limited to the tools and features most people will use. What would you think about making a separate server that contains all your new tools?
…tomInstructions, align appVersion

- Fix appVersion to match package.json (1.1.0)
- Add Zod validation for customInstructions (max 10 items, 300 chars each)
- Normalize targetLangCode in rephraseText
- Normalize language codes in createGlossary, updateGlossaryDictionary, deleteGlossaryDictionary

Agent-Logs-Url: https://github.com/DeepLcom/deepl-mcp-server/sessions/9f8f63cb-fc98-4bc4-8c14-9430dffb23d3
Co-authored-by: DeeJayTC <4077759+DeeJayTC@users.noreply.github.com>
Replace simple `.toLowerCase()` with `normalizeGlossaryLangCode()`, which validates type, trims whitespace, converts underscores to hyphens, lowercases, and validates format. Applied to `createGlossary`, `updateGlossaryDictionary`, `deleteGlossaryDictionary`, and `getGlossaryDictionaryEntries`.

Agent-Logs-Url: https://github.com/DeepLcom/deepl-mcp-server/sessions/cff4211a-4781-4346-9b7d-aac53008527e
Co-authored-by: DeeJayTC <4077759+DeeJayTC@users.noreply.github.com>
As per personal communication, let's not merge this unless we've tested the new code and features. We don't want to introduce bugs and unwanted behavior. After testing happens, I'd advise that we spin this out into a different MCP server, so we don't unnecessarily pollute AI clients' context windows.