Series B — Adaptive per-resolver duplication #1
Open
Isusami wants to merge 3 commits into
Conversation
Adds a public LossScore(resolverID) uint64 wrapping the existing lossScoreLocked permille helper. Returns 200 as the initial-probation value when sent < 5 (matching the locked helper). No behavior change — this is a refactor prerequisite for adaptive per-resolver duplication. Tests assert exact ratios at fixed (sent, lost) pairs rather than monotonicity, since the loss rate can fall even while absolute losses accumulate, whenever sent grows faster.
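A minimal sketch of what this wrapper shape looks like. The balancer's real internals are not shown in the PR; the struct, its fields, and the resolverStats type below are illustrative stand-ins. Only the wrapper-over-locked-helper pattern, the permille math, and the fixed 200 probation value for sent < 5 come from the description.

```go
package main

import (
	"fmt"
	"sync"
)

// Balancer is a stand-in for the real balancer; only the locking
// pattern and the permille computation mirror the PR's description.
type Balancer struct {
	mu    sync.Mutex
	stats map[string]*resolverStats // keyed by resolverID
}

type resolverStats struct {
	sent, lost uint64
}

// lossScoreLocked returns the loss rate in permille (0..1000).
// Callers must hold b.mu. Resolvers with fewer than 5 sends are in
// initial probation and report a fixed 200 permille.
func (b *Balancer) lossScoreLocked(resolverID string) uint64 {
	s, ok := b.stats[resolverID]
	if !ok || s.sent < 5 {
		return 200 // initial-probation value
	}
	return s.lost * 1000 / s.sent
}

// LossScore is the public wrapper: it takes the lock and delegates,
// so there is no behavior change relative to the locked helper.
func (b *Balancer) LossScore(resolverID string) uint64 {
	b.mu.Lock()
	defer b.mu.Unlock()
	return b.lossScoreLocked(resolverID)
}

func main() {
	b := &Balancer{stats: map[string]*resolverStats{
		"r1": {sent: 100, lost: 10},
		"r2": {sent: 3, lost: 0},
	}}
	fmt.Println(b.LossScore("r1")) // 100: 10 lost of 100 sent
	fmt.Println(b.LossScore("r2")) // 200: probation, sent < 5
}
```

This also shows why exact-ratio tests make sense: each (sent, lost) pair maps to one deterministic permille value, while the sequence of values over time need not be monotone.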
Introduces MIN_DUP and MAX_DUP client config keys (range 1..10) that bound the adaptive per-resolver duplication count. Defaults preserve existing behavior: both equal the post-finalize PacketDuplicationCount, so adaptive duplication is opt-in. Validation reuses the existing clampInt idiom; inverted ranges fall back to the effective PacketDuplicationCount.
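The validation rules can be sketched as follows. The helper name effectiveDupBounds is hypothetical; the rules themselves (unset means exactly zero, clamp into 1..10 via a clampInt idiom, inverted range falls back to the effective PacketDuplicationCount) come from the PR text, and dupCount is assumed to be the already-finalized PacketDuplicationCount.

```go
package main

import "fmt"

// clampInt bounds v to [lo, hi].
func clampInt(v, lo, hi int) int {
	if v < lo {
		return lo
	}
	if v > hi {
		return hi
	}
	return v
}

// effectiveDupBounds applies the documented validation rules:
//   - unset (exactly zero, the TOML default for a missing key) falls
//     back to dupCount
//   - values are clamped into 1..10
//   - an inverted range falls back to dupCount for both bounds
func effectiveDupBounds(minDup, maxDup, dupCount int) (int, int) {
	if minDup == 0 {
		minDup = dupCount
	}
	if maxDup == 0 {
		maxDup = dupCount
	}
	minDup = clampInt(minDup, 1, 10)
	maxDup = clampInt(maxDup, 1, 10)
	if minDup > maxDup {
		return dupCount, dupCount // inverted range: ignore both
	}
	return minDup, maxDup
}

func main() {
	fmt.Println(effectiveDupBounds(0, 0, 2)) // unset: both default to 2
	fmt.Println(effectiveDupBounds(1, 4, 2)) // valid range kept as 1, 4
	fmt.Println(effectiveDupBounds(5, 3, 2)) // inverted: falls back to 2, 2
}
```

With both keys unset the bounds collapse to the existing duplication count, which is why the feature is opt-in by construction.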
Wires LossScore (from B1) and MIN_DUP/MAX_DUP (from B2) into the planner's runtimePacketDuplicationCount path. The dup count handed to SelectTargets is now adaptive:

    clamp(round(MIN_DUP + (MAX_DUP - MIN_DUP) * lossRate), MIN_DUP, MAX_DUP)

where lossRate = LossScore(resolver) / 1000.

Conservatism: dup counts are only allowed to DECREASE after a sustained healthy window of low-loss samples — increases apply immediately. This preserves the duplication-for-survivability posture under bursty loss.

Coordination with stream failover hysteresis is documented inline. Setup-class packets remain governed by SetupPacketDuplicationCount and are unaffected by the adaptive clamp. Defaults (MIN_DUP == MAX_DUP == current PacketDuplicationCount) preserve existing behavior.
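The clamp formula and the decrease-only conservatism can be sketched together. The function names are illustrative; the arithmetic is exactly the formula above, and the asymmetric hysteresis follows the stated rule (increases immediate, decreases gated on a sustained healthy window).

```go
package main

import (
	"fmt"
	"math"
)

// adaptiveDupCount implements
//   clamp(round(MIN_DUP + (MAX_DUP - MIN_DUP) * lossRate), MIN_DUP, MAX_DUP)
// where lossScore is the permille value from LossScore(resolver).
func adaptiveDupCount(lossScore uint64, minDup, maxDup int) int {
	lossRate := float64(lossScore) / 1000.0
	n := int(math.Round(float64(minDup) + float64(maxDup-minDup)*lossRate))
	if n < minDup {
		return minDup
	}
	if n > maxDup {
		return maxDup
	}
	return n
}

// applyConservatism sketches the asymmetric hysteresis: increases apply
// immediately; decreases are held until the caller reports a sustained
// healthy window of low-loss samples.
func applyConservatism(current, proposed int, healthyWindowElapsed bool) int {
	if proposed >= current {
		return proposed // increase: apply immediately
	}
	if healthyWindowElapsed {
		return proposed // sustained low loss: safe to drop
	}
	return current // bursty loss: hold the higher dup count
}

func main() {
	fmt.Println(adaptiveDupCount(0, 1, 4))    // no observed loss -> 1
	fmt.Println(adaptiveDupCount(200, 1, 4))  // 20% loss (probation) -> 2
	fmt.Println(adaptiveDupCount(1000, 1, 4)) // total loss -> 4
}
```

Note that with MIN_DUP == MAX_DUP the formula degenerates to a constant, which is how the defaults reproduce the fixed-count behavior.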
Summary
Replaces the fixed PACKET_DUPLICATION_COUNT with a per-resolver adaptive duplication count derived from observed loss. Defaults preserve existing behavior (adaptive is opt-in).

What changed

- LossScore getter (refactor) — exposes the existing lossScoreLocked permille helper as LossScore(resolverID) uint64 for use outside balancer.go. Returns 200 as the initial-probation value when sent < 5 (matches the locked helper). No behavior change.
- MIN_DUP / MAX_DUP config keys — bounds for adaptive duplication. Validation: 1 ≤ MIN_DUP ≤ MAX_DUP ≤ 10. "Unset" is exactly zero (the TOML default for missing keys). Negative or out-of-range values are clamped via clampInt. Inverted ranges fall back to the effective PacketDuplicationCount. Defaults: both equal the effective PacketDuplicationCount after clamp, so there is no behavior change unless the operator opts in.
- Adaptive count — the dup count handed to SelectTargets is now clamp(round(MIN_DUP + (MAX_DUP - MIN_DUP) * lossRate), MIN_DUP, MAX_DUP), where lossRate = LossScore(r) / 1000.

How it works
- The adaptive computation lives in the planner (internal/client/async_runtime.go, just before SelectTargets), not the dispatcher. The dispatcher continues to forward only dupCount.
- Setup-class packets (PACKET_STREAM_SYN, PACKET_PACKED_CONTROL_BLOCKS, PACKET_SOCKS5_SYN, PACKET_STREAM_CLOSE_READ, PACKET_STREAM_CLOSE_WRITE) stay governed by SetupPacketDuplicationCount — adaptive duplication does NOT apply to setup traffic. This boundary is enforced by AdaptiveDuplicationCount's early return on !isBalancerStreamDataLike(packetType) and documented in a code comment on the method.
- Conservatism: dup counts may only decrease after lossPermille stays at or below adaptiveDupHealthyPermille (50 ‰ ≈ 5% loss) for at least adaptiveDupHealthyWindow (3 seconds). Increases apply immediately. This preserves duplication-for-survivability under bursty loss.
- Coordination with stream failover: the adaptive healthy window (3 s) is deliberately longer than streamFailoverCooldown (default 1 s). A stream failover may already have rotated the preferred resolver before the adaptive guard decides it is safe to drop the dup count back down, so the two hysteresis mechanisms can't double-damp the same loss signal. Documented inline next to the constant block in balancer.go.

Example usage
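A hedged sketch of an opt-in configuration, assuming a TOML client config file: only the MIN_DUP/MAX_DUP keys, their 1..10 range, and the zero-means-unset default come from this PR; the comments and surrounding layout are illustrative.

```toml
# Opt in to adaptive duplication by widening the bounds.
# Leaving both keys unset (zero) keeps the fixed
# PacketDuplicationCount behavior.
MIN_DUP = 1   # floor: single-send when observed loss is low
MAX_DUP = 4   # ceiling: up to 4x duplication under heavy loss
```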
Test plan
- go vet ./...
- go build ./...
- go test ./...