
[RFC] Semi-xmux: Connection Multiplexing Layer for Quota Optimization and Fingerprint Randomization #794

@w0l4i

Description


Hi there,
I would like to propose an architectural enhancement for MHRV called Semi-xmux.

The primary goals are to significantly reduce Google Apps Script (GAS) quota burn, speed up downloads and uploads, cut the waiting time for EETB (Estimated End-to-End Time to Byte), and heavily randomize the traffic fingerprint against active DPI systems.

While the recently added coalesce feature is a great step forward, it still leaves room for optimization. Currently, if a website initiates 5 batches inside 1 connection, MHRV still processes them as 5 separate upstream requests. I propose adding a dedicated, internal multiplexing layer handled directly by MHRV.

Proposed Architecture & Milestones
I suggest breaking this fundamental feature into two progressive milestones:

Phase 1: Fundamental Multiplexing (Mux Layer)

Instead of executing a TLS handshake and generating overhead for every single micro-request, MHRV should pack 4 to 8 active connections into a single main HTTP stream.

Mechanism: MHRV acts as an aggregator, carrying multiple connections to multiple distinct target websites together in one payload.

Reassembly: The node side (whether GAS via UrlFetchApp.fetchAll optimizations or the tunnel-node service) natively disassembles the muxed stream, fetches the external resources, and returns a multiplexed/multipart response back to the client.

Impact: This immediately reduces the sheer number of isolated requests hitting the GAS endpoint, optimizing the 20,000 daily quota and reducing latency caused by repetitive connection warm-ups.
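To make the chunking and reassembly concrete, here is a minimal sketch of a possible wire format for the mux layer: length-prefixed frames tagged with a connection ID, so the node side can split one HTTP body back into per-connection payloads. The frame layout (`[u32 conn_id][u32 len][payload]`) and the function names are illustrative assumptions, not MHRV's actual protocol.

```rust
// Illustrative mux framing: [u32 conn_id][u32 len][len bytes payload], repeated.
// A sketch only; MHRV's real wire format may differ.

/// Pack several (conn_id, payload) pairs into one byte stream.
fn encode_frames(frames: &[(u32, Vec<u8>)]) -> Vec<u8> {
    let mut out = Vec::new();
    for (conn_id, payload) in frames {
        out.extend_from_slice(&conn_id.to_be_bytes());
        out.extend_from_slice(&(payload.len() as u32).to_be_bytes());
        out.extend_from_slice(payload);
    }
    out
}

/// Split a muxed stream back into (conn_id, payload) pairs.
/// Returns None on a truncated or malformed buffer.
fn decode_frames(mut buf: &[u8]) -> Option<Vec<(u32, Vec<u8>)>> {
    let mut frames = Vec::new();
    while !buf.is_empty() {
        if buf.len() < 8 {
            return None; // truncated frame header
        }
        let conn_id = u32::from_be_bytes(buf[0..4].try_into().ok()?);
        let len = u32::from_be_bytes(buf[4..8].try_into().ok()?) as usize;
        if buf.len() < 8 + len {
            return None; // truncated payload
        }
        frames.push((conn_id, buf[8..8 + len].to_vec()));
        buf = &buf[8 + len..];
    }
    Some(frames)
}
```

The same decode logic would run on the node side (GAS or tunnel-node) before fanning requests out, e.g. via UrlFetchApp.fetchAll, and again on the client when reassembling the multiplexed response.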

Phase 2: "Semi-xmux" Distribution Layer
Once the core mux feature is stable, we can build the actual Semi-xmux logic on top of it.

Mechanism: The client takes these heavily multiplexed streams and distributes them across several different GAS deployments (a multi-edge routing approach).

Impact: Sending stream-muxed connections across diverse deployment URLs adds a further performance layer: it effectively load-balances the processing wait time across multiple Google edges.

DPI Resistance: By distributing packed streams across varying endpoints, the TLS fingerprint and packet timing (temporal jitter) become highly randomized, making it substantially harder for active throttling systems to detect and disrupt the flow via RST injection.
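The distribution step could be as simple as a randomized endpoint picker with a small send delay for temporal jitter. The sketch below is an assumption about how such a router might look; the `EdgeRouter` type, the example URLs, the xorshift PRNG, and the 0-250 ms jitter range are all illustrative (a real client would use a proper CSPRNG and configurable bounds).

```rust
// Illustrative multi-edge router: picks a random deployment URL and a
// send-delay jitter for each muxed payload. Not MHRV's actual code.

struct EdgeRouter {
    endpoints: Vec<String>,
    rng_state: u64, // xorshift64 state; a real client should use a CSPRNG
}

impl EdgeRouter {
    fn new(endpoints: Vec<String>, seed: u64) -> Self {
        Self { endpoints, rng_state: seed.max(1) } // xorshift needs a nonzero seed
    }

    /// Cheap pseudo-random number via xorshift64.
    fn next_u64(&mut self) -> u64 {
        let mut x = self.rng_state;
        x ^= x << 13;
        x ^= x >> 7;
        x ^= x << 17;
        self.rng_state = x;
        x
    }

    /// Choose a deployment URL and a 0-250 ms send delay (temporal jitter).
    fn route(&mut self) -> (&str, u64) {
        let idx = (self.next_u64() % self.endpoints.len() as u64) as usize;
        let jitter_ms = self.next_u64() % 250;
        (&self.endpoints[idx], jitter_ms)
    }
}
```

Each muxed payload would then be posted to the chosen URL after sleeping `jitter_ms`, so neither the endpoint sequence nor the inter-packet timing is predictable to an observer.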

Expected Benefits:
Quota Efficiency: Massive reduction in quota burn per user, allowing heavier daily usage on the free tier.
Performance: Faster page loading times and lower EETB due to reduced handshake overhead.
Censorship Resilience: Highly randomized traffic fingerprints and better resistance against active DPI probing.
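As a back-of-envelope illustration of the quota claim: if each GAS call carries 4 to 8 micro-requests (the mux factor proposed above), the number of calls needed for a given workload drops by that factor. The helper below just does that ceiling division; the 100,000-micro-request workload in the test is a made-up figure for illustration.

```rust
// Back-of-envelope quota math for the mux layer.
// `mux_factor` = micro-requests packed per GAS call (4-8 in this proposal).

/// How many GAS calls a day's worth of micro-requests would consume,
/// rounding up because a partially filled payload still costs one call.
fn calls_needed(micro_requests: u64, mux_factor: u64) -> u64 {
    (micro_requests + mux_factor - 1) / mux_factor
}
```

With the 20,000-call daily quota, a mux factor of 8 would let roughly 160,000 micro-requests through per day instead of 20,000, which is the core of the "heavier daily usage on the free tier" claim.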

I would love to hear your thoughts on this approach and discuss how we can realistically implement the chunking and reassembly logic on the Rust client and the GAS/tunnel-node sides.
