fix(memory): ensure auto context memory tool message compression works correctly #1279
Open

benym wants to merge 2 commits into agentscope-ai:main
Codecov Report: ✅ All modified and coverable lines are covered by tests.
AgentScope-Java Version

1.0.11

Description

Closes #1260
Background
AutoContextMemory.compressIfNeeded() implements a multi-strategy compression pipeline to keep the working-memory context within token limits. Two logic bugs were found that reduce compression effectiveness.

Bug 1: Strategy 1 cursor/index mismatch

In the Strategy 1 while loop, currentMsgs was re-created from workingMemoryStorage on every iteration, but cursorStartIndex was carried over from the previous iteration's index calculation, which was based on the old list structure.

After each compression, summaryToolsMessages() replaces multiple tool messages with a single summary via MsgUtils.replaceMsg(), shrinking the list. The next iteration then creates a fresh list from the updated workingMemoryStorage but applies the stale cursorStartIndex, computed against the pre-compression list, to this new list, causing an index mismatch.

Impact: Only the first group of consecutive tool messages gets compressed per compressIfNeeded() call; subsequent groups are silently missed.

Fix: Create currentMsgs once before the loop. Since summaryToolsMessages() modifies the list in place, cursorStartIndex stays consistent with the actual list structure across iterations. replaceWorkingMessage() is called once after the loop completes, which also provides better atomicity.
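The fixed pattern can be illustrated with a minimal, self-contained sketch. The names currentMsgs and cursorStartIndex mirror the PR description, but the Msg record and the summarizeRunInPlace helper are hypothetical stand-ins, not actual AgentScope-Java APIs:

```java
import java.util.ArrayList;
import java.util.List;

public class Strategy1Sketch {

    record Msg(String role, String text) {}

    // Hypothetical stand-in for summaryToolsMessages(): replaces the run of
    // consecutive tool messages starting at 'start' with one summary, in place.
    static int summarizeRunInPlace(List<Msg> msgs, int start) {
        int end = start;
        while (end < msgs.size() && msgs.get(end).role().equals("tool")) {
            end++;
        }
        int runLength = end - start;
        msgs.subList(start, end).clear();
        msgs.add(start, new Msg("assistant", "summary of " + runLength + " tool msgs"));
        return start + 1; // cursor moves past the inserted summary
    }

    // The fix: build the list ONCE, then mutate it in place so the cursor
    // stays consistent with the actual list structure across iterations.
    public static List<Msg> compressToolRuns(List<Msg> currentMsgs) {
        int cursorStartIndex = 0;
        while (cursorStartIndex < currentMsgs.size()) {
            if (currentMsgs.get(cursorStartIndex).role().equals("tool")) {
                cursorStartIndex = summarizeRunInPlace(currentMsgs, cursorStartIndex);
            } else {
                cursorStartIndex++;
            }
        }
        return currentMsgs;
    }

    public static void main(String[] args) {
        List<Msg> msgs = new ArrayList<>(List.of(
                new Msg("user", "q1"),
                new Msg("tool", "t1"), new Msg("tool", "t2"),
                new Msg("assistant", "a1"),
                new Msg("tool", "t3"), new Msg("tool", "t4")));
        compressToolRuns(msgs);
        // Both tool groups are compressed, not just the first one.
        msgs.forEach(m -> System.out.println(m.role() + ": " + m.text()));
    }
}
```

Rebuilding the list each iteration while reusing the old cursor, as the pre-fix code did, would make the stale index point past (or into the middle of) later tool groups, which is why only the first group was ever compressed.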
Bug 2: offloadingLargePayload() lastKeep guard blocks Strategy 3

The early-return guard at the top of offloadingLargePayload() applies unconditionally to both Strategy 2 (lastKeep=true) and Strategy 3 (lastKeep=false). By design, Strategy 3 should not be subject to lastKeep protection, which only guards messages after the latest assistant response.

Impact: When the total message count is less than lastKeep, Strategy 3 is incorrectly skipped, leaving large historical payloads uncompressed in the working memory.

Fix: Add a lastKeep && condition so the guard only applies when lastKeep protection is enabled.
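The guard change can be reduced to a minimal sketch. The parameter names follow the PR description, but the signature and threshold are illustrative assumptions, not the actual offloadingLargePayload() code:

```java
public class LastKeepGuardSketch {

    // Hypothetical reduction of the early-return guard in
    // offloadingLargePayload(). Before the fix, it returned early whenever
    // messageCount < lastKeepCount, even for Strategy 3 (lastKeep == false),
    // which by design ignores lastKeep protection.
    static boolean shouldSkipOffloading(boolean lastKeep, int messageCount, int lastKeepCount) {
        // The fix: the guard only applies when lastKeep protection is enabled.
        return lastKeep && messageCount < lastKeepCount;
    }

    public static void main(String[] args) {
        // Strategy 2 (lastKeep = true): short histories remain protected.
        System.out.println(shouldSkipOffloading(true, 3, 5));  // true: skip offloading
        // Strategy 3 (lastKeep = false): no longer blocked by the guard.
        System.out.println(shouldSkipOffloading(false, 3, 5)); // false: offloading proceeds
    }
}
```

With the short-circuiting lastKeep && prefix, Strategy 3 calls bypass the message-count check entirely, so historical large payloads are offloaded regardless of how short the history is.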
How to Test

Strategy 1 fix: io.agentscope.core.memory.autocontext.AutoContextMemoryTest#testMultipleToolGroupsCompressedInSingleCall

Strategy 3 fix: io.agentscope.core.memory.autocontext.AutoContextMemoryTest#testStrategy3NotBlockedByLastKeepGuard

Checklist

Please check the following items before code is ready to be reviewed.

- Code has been formatted (mvn spotless:apply)
- All tests pass (mvn test)