
Series B — Adaptive per-resolver duplication #1

Open
Isusami wants to merge 3 commits into main from series-b-adaptive-duplication

Conversation


@Isusami Isusami commented May 12, 2026

Summary

Replaces the fixed PACKET_DUPLICATION_COUNT with a per-resolver adaptive duplication count derived from observed loss. Defaults preserve existing behavior (adaptive is opt-in).

What changed

  1. B1 — LossScore getter (refactor) — exposes the existing lossScoreLocked permille helper as LossScore(resolverID) uint64 for use outside balancer.go. Returns 200 as the initial-probation value when sent < 5 (matches the locked helper). No behavior change.
  2. B2 — MIN_DUP / MAX_DUP config keys — bounds for adaptive duplication. Validation: 1 ≤ MIN_DUP ≤ MAX_DUP ≤ 10. "Unset" is exactly zero (TOML default for missing keys). Negative or out-of-range values are clamped via clampInt. Inverted ranges fall back to the effective PacketDuplicationCount. Defaults: both equal the effective PacketDuplicationCount after clamp → no behavior change unless operator opts in.
  3. B3 — Planner wiring + sustained-healthy window — the dup count handed to SelectTargets is now clamp(round(MIN_DUP + (MAX_DUP - MIN_DUP) * lossRate), MIN_DUP, MAX_DUP) where lossRate = LossScore(r) / 1000. Increases apply immediately; decreases are gated by the sustained-healthy window described under "How it works".

How it works

  • Wire site: the planner (internal/client/async_runtime.go, just before SelectTargets), not the dispatcher. The dispatcher continues to only forward dupCount.
  • Setup-class packets (PACKET_STREAM_SYN, PACKET_PACKED_CONTROL_BLOCKS, PACKET_SOCKS5_SYN, PACKET_STREAM_CLOSE_READ, PACKET_STREAM_CLOSE_WRITE) stay governed by SetupPacketDuplicationCount — adaptive duplication does NOT apply to setup traffic. This boundary is enforced by AdaptiveDuplicationCount's early-return on !isBalancerStreamDataLike(packetType) and documented in a code comment on the method.
  • Conservatism: dup counts only DECREASE after the preferred resolver's lossPermille stays at or below adaptiveDupHealthyPermille (50 ‰ ≈ 5% loss) for at least adaptiveDupHealthyWindow (3 seconds). Increases apply immediately. This preserves duplication-for-survivability under bursty loss.
  • Coordination with failover hysteresis: the adaptive sustained-healthy window is constructed to be longer than streamFailoverCooldown (default 1s; adaptive uses 3s). A stream failover may already have rotated the preferred resolver before the adaptive guard decides it is safe to drop the dup count back down — so the two hysteresis mechanisms can't double-damp the same loss signal. Documented inline next to the constant block in balancer.go.

Example usage

# Default (no opt-in): adaptive disabled, behaves like before
PACKET_DUPLICATION_COUNT = 3

# Opt-in: let the planner pick between 1× and 4× based on loss
PACKET_DUPLICATION_COUNT = 3
MIN_DUP = 1
MAX_DUP = 4

Test plan

  • go vet ./...
  • go build ./...
  • go test ./...
  • LossScore: initial probation returns 200; clean traffic → 0; exact ratios at fixed (sent, lost) pairs (no monotonicity assertion).
  • MIN_DUP/MAX_DUP: unset → defaults to effective dup; valid range preserved; invalid range falls back; bounds clamped to [1, 10]; TOML key round-trip.
  • Adaptive clamp: MIN at lossRate=0, MAX at lossRate=1, mid-range matches the formula.
  • Sustained-healthy guard: single low-loss sample does NOT drop dup; sample after the 3-second window does; subsequent loss resets the window.
  • Setup-class packets ignore the adaptive clamp.
  • Stream failover-streak guard still fires when streams are unhealthy.

Isusami added 3 commits May 12, 2026 19:16
Adds public LossScore(resolverID) uint64 wrapping the existing
lossScoreLocked permille helper. Returns 200 as the initial-probation
value when sent < 5 (matches the locked helper).

No behavior change — this is a refactor prerequisite for adaptive
per-resolver duplication. Tests assert exact ratios at fixed
(sent, lost) pairs rather than monotonicity, since the rate can
drop while losses accumulate as sent grows.

Introduces MIN_DUP and MAX_DUP client config keys (range 1..10)
that bound the adaptive per-resolver duplication count. Defaults
preserve existing behavior: both equal the post-finalize
PacketDuplicationCount, so adaptive duplication is opt-in.

Validation reuses the existing clampInt idiom; invalid ranges
fall back to the effective PacketDuplicationCount.

Wires LossScore (from B1) and MIN_DUP/MAX_DUP (from B2) into the
planner's runtimePacketDuplicationCount path. The dup count handed
to SelectTargets is now adaptive:

    clamp(round(MIN_DUP + (MAX_DUP - MIN_DUP) * lossRate),
          MIN_DUP, MAX_DUP)

where lossRate = LossScore(resolver) / 1000.

Conservatism: dup counts are only allowed to DECREASE after a
sustained healthy window of low-loss samples — increases apply
immediately. This preserves the duplication-for-survivability
posture under bursty loss.

Coordination with stream failover hysteresis is documented inline.
Setup-class packets remain governed by SetupPacketDuplicationCount
and are unaffected by the adaptive clamp.

Defaults (MIN_DUP == MAX_DUP == current PacketDuplicationCount)
preserve existing behavior.