Conversation

run benchmark in_list

🤖: Benchmark completed
run benchmarks

run benchmark tpch tpchds

🤖: Hi @Dandandan, thanks for the request (#19390 (comment)). Please choose one or more of these with …
🤖: Benchmark completed

run benchmark tpch tpcds

🤖: Benchmark completed

🤖: Benchmark completed
@Dandandan how do … I think once this optim is done, there could be a lot to reuse for broadcast joins...

For plain (non-dynamic) filters, I think based on a threshold (<= 3) it either gets planned as a chain of OR expressions or using …
force-pushed 7ba1c85 → 276a37f
run benchmark in_list

force-pushed 276a37f → d18b346

🤖: Benchmark completed
force-pushed 2fc00e5 → 3db393a

run benchmark in_list

🤖: Benchmark completed

run benchmarks in_list_strategy in_list
Benchmark job started for this request (job …)

Benchmark job started for this request (job …)

🤖: Criterion benchmark running (GKE)

🤖: Criterion benchmark running (GKE)

🤖: Criterion benchmark completed (GKE) (resource usage reported for base (merge-base) and branch)

🤖: Criterion benchmark completed (GKE): new benchmark, branch-only results (no baseline comparison) (resource usage reported for branch)
Adds a new `in_list_strategy` benchmark file with targeted coverage of each optimization strategy, without replacing the existing `in_list` benchmarks, which are kept intact for historical comparison.
Introduces the StaticFilter trait to decouple membership testing from InListExpr. Migrates existing HashSet optimizations into primitive_filter.rs to maintain performance parity while enabling future specialized implementations. Triggers for all constant IN lists.
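To make the decoupling concrete, here is a minimal sketch of what such a trait could look like (method name and signature are illustrative assumptions, not necessarily the PR's exact API):

```rust
use arrow::array::{Array, BooleanArray};
use datafusion_common::Result;

/// Sketch of a membership-testing abstraction: each specialized filter
/// (bitmap, branchless, hash probe, ...) implements this, and InListExpr
/// only holds a boxed trait object.
trait StaticFilter: Send + Sync {
    /// Evaluates `v IN (list)` (or `NOT IN` when `negated`) for every row
    /// of `v`, honoring SQL null semantics.
    fn contains(&self, v: &dyn Array, negated: bool) -> Result<BooleanArray>;
}
```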
Replaces HashSet<u8> with a 32-byte stack-allocated bitmap. Provides O(1) membership testing via bit-shifting, significantly reducing memory overhead and improving cache locality. Triggers for UInt8 arrays.
Implements an 8 KB heap-allocated bitmap for UInt16. Maintains O(1) performance while handling the larger value space. Triggers for UInt16 arrays.
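A minimal sketch of both bitmap variants, assuming plain integer inputs (illustrative, not the PR's exact code): 256 bits fit in four `u64` words on the stack for `UInt8`, while `UInt16` needs 65,536 bits (8 KiB) and is boxed so the filter struct itself stays small.

```rust
/// UInt8: 256 bits = 32 bytes, entirely on the stack.
struct U8BitmapFilter {
    bits: [u64; 4],
}

impl U8BitmapFilter {
    fn new(list: impl IntoIterator<Item = u8>) -> Self {
        let mut bits = [0u64; 4];
        for v in list {
            bits[(v >> 6) as usize] |= 1u64 << (v & 63);
        }
        Self { bits }
    }

    /// O(1): one shift, one mask, one load from cache-hot memory.
    #[inline]
    fn contains(&self, v: u8) -> bool {
        self.bits[(v >> 6) as usize] & (1u64 << (v & 63)) != 0
    }
}

/// UInt16: the same idea scaled to 65,536 bits = 8 KiB on the heap.
struct U16BitmapFilter {
    bits: Box<[u64; 1024]>,
}

impl U16BitmapFilter {
    fn new(list: impl IntoIterator<Item = u16>) -> Self {
        let mut bits = Box::new([0u64; 1024]);
        for v in list {
            bits[(v >> 6) as usize] |= 1u64 << (v & 63);
        }
        Self { bits }
    }

    #[inline]
    fn contains(&self, v: u16) -> bool {
        self.bits[(v >> 6) as usize] & (1u64 << (v & 63)) != 0
    }
}
```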
Introduces zero-copy buffer reinterpretation to allow signed integers and other 1 or 2-byte primitive types (e.g. Float16) to use the high-performance bitmap filters. Triggers for all types with 1-byte or 2-byte width.
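The reinterpretation is a bit-preserving view change; a sketch assuming the source values are available as a plain slice (Float16 equality needs extra care around NaN and signed zero, which the real implementation would have to handle):

```rust
/// Views an `&[i16]` as `&[u16]` without copying.
/// SAFETY: i16 and u16 have identical size and alignment, and every bit
/// pattern is valid for both, so the reinterpreted view is sound.
fn reinterpret_i16(values: &[i16]) -> &[u16] {
    unsafe { std::slice::from_raw_parts(values.as_ptr().cast::<u16>(), values.len()) }
}

/// The needle gets the same bit-preserving treatment, so the UInt16
/// bitmap filter can be reused unchanged for Int16 data: equality on the
/// raw bits matches equality on the original signed values.
#[inline]
fn i16_bits(v: i16) -> u16 {
    v as u16
}
```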
force-pushed f2d1e00 → 459bd09
Adds a const-generic unrolled comparison chain that avoids CPU branching. Outperforms hash lookups for very small lists. Triggers for primitives when list size <= 32 (4-byte), 16 (8-byte), or 4 (16-byte).
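A sketch of the branchless idea (illustrative): bitwise `|` instead of short-circuiting `||` forces every comparison to execute, so the compiler can fully unroll the fixed-size loop into straight-line, SIMD-friendly code with no data-dependent branches.

```rust
struct BranchlessFilter<T: Copy + PartialEq, const N: usize> {
    values: [T; N],
}

impl<T: Copy + PartialEq, const N: usize> BranchlessFilter<T, N> {
    #[inline]
    fn contains(&self, needle: T) -> bool {
        // `|=` accumulates all N comparisons unconditionally; `||` would
        // introduce a short-circuit branch per element instead.
        let mut acc = false;
        for &v in &self.values {
            acc |= v == needle;
        }
        acc
    }
}
```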
Implements a fast hash table using open addressing with linear probing and a 25% load factor. Replaces the legacy HashSet for primitives, reducing indirection. Triggers for primitives when list size exceeds branchless thresholds.
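A sketch of the direct-probe layout for `u64` keys (illustrative; the hash shown is a placeholder multiplicative hash, not necessarily what the PR uses). At a 25% load factor the capacity is four times the list length rounded up to a power of two, so probe chains stay very short.

```rust
struct DirectProbeFilter {
    slots: Vec<Option<u64>>,
    mask: usize,
}

impl DirectProbeFilter {
    fn new(list: &[u64]) -> Self {
        // 25% load factor: capacity = 4x list length, power of two.
        let cap = (list.len() * 4).next_power_of_two().max(8);
        let mask = cap - 1;
        let mut slots = vec![None; cap];
        for &v in list {
            let mut i = hash(v) & mask;
            // Linear probing: walk to the first free (or matching) slot.
            while let Some(existing) = slots[i] {
                if existing == v {
                    break;
                }
                i = (i + 1) & mask;
            }
            slots[i] = Some(v);
        }
        Self { slots, mask }
    }

    #[inline]
    fn contains(&self, v: u64) -> bool {
        let mut i = hash(v) & self.mask;
        loop {
            match self.slots[i] {
                Some(existing) if existing == v => return true,
                Some(_) => i = (i + 1) & self.mask,
                None => return false, // an empty slot ends the probe chain
            }
        }
    }
}

#[inline]
fn hash(v: u64) -> usize {
    // Placeholder Fibonacci (multiplicative) hash.
    (v.wrapping_mul(0x9E37_79B9_7F4A_7C15) >> 32) as usize
}
```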
Introduces a two-stage filter for ByteView types. Stage 1 uses a fast DirectProbeFilter on masked views (len + prefix) for quick rejection; Stage 2 performs full verification only for potential long-string matches. Triggers for Utf8View and BinaryView.
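A sketch of the masking step on Arrow's 16-byte view layout (interpreting the view as a little-endian `u128`, where the low 4 bytes are the length and the next 4 the prefix): strings of at most 12 bytes are stored inline, so the view is the value, while longer strings carry a buffer id and offset in the high 8 bytes that must be masked off so equal strings stored in different buffers produce the same stage-1 key.

```rust
/// Stage-1 key for one view. A hit on a short string is definitive; a
/// hit on a long string only means "same length and 4-byte prefix" and
/// must go to the stage-2 full comparison.
#[inline]
fn masked_view(view: u128) -> u128 {
    let len = view as u32;
    if len <= 12 {
        view // fully inlined value
    } else {
        view & 0xFFFF_FFFF_FFFF_FFFF // keep length + prefix only
    }
}
```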
Port of the two-stage View optimization to standard Utf8 and LargeUtf8 types. Encodes strings as i128 (len + prefix) for fast O(1) pre-filtering before falling back to full string comparison. Triggers for Utf8 and LargeUtf8.
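A sketch of the encoding (the PR describes an i128 layout `[len:u32][data:12 bytes]`; `u128` is used here for convenience, the bits are the same):

```rust
/// Packs a string's length and first 12 bytes into one 128-bit key,
/// mirroring the view layout above. For strings of <= 12 bytes the key
/// captures the whole value, so a stage-1 hit needs no verification;
/// longer strings fall back to a full comparison on a hit.
#[inline]
fn utf8_key(s: &[u8]) -> u128 {
    let mut buf = [0u8; 16];
    buf[..4].copy_from_slice(&(s.len() as u32).to_le_bytes());
    let n = s.len().min(12);
    buf[4..4 + n].copy_from_slice(&s[..n]);
    u128::from_le_bytes(buf)
}
```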
force-pushed 459bd09 → 85ee830
FixedSizeBinary(N) arrays share the same contiguous buffer layout as primitive arrays, so for power-of-2 widths (1, 2, 4, 8, 16) we can zero-copy reinterpret them and use the optimized primitive filters (bitmap, branchless, hash) instead of falling through to the NestedTypeFilter fallback.
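A sketch of the zero-copy view for the 8-byte case (illustrative; real code needs a copying fallback when the buffer is not suitably aligned):

```rust
/// Tries to view a FixedSizeBinary(8) value buffer as `&[u64]` without
/// copying. `align_to` is sound here because every byte pattern is a
/// valid u64; the view is usable only when head and tail are empty,
/// i.e. the buffer is 8-byte aligned and a whole number of chunks.
fn fsb8_view(values: &[u8]) -> Option<&[u64]> {
    let (head, body, tail) = unsafe { values.align_to::<u64>() };
    (head.is_empty() && tail.is_empty()).then_some(body)
}
```

The other power-of-2 widths map the same way: 1, 2, 4, and 16 bytes to `u8`, `u16`, `u32`, and `u128` respectively.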
force-pushed 85ee830 → 361e281
@alamb @geoffreyclaude I wonder if you have any thoughts on how we can move forward with this. Maybe at a high level the tradeoff I see is: …

Practically speaking I want to get this across the line, but I find this hard to review, both because of the size of the diff and because of the nuance of the implementation. Quite honestly, without AI explaining some of the more niche branches to me, I'd struggle to grok this or find holes in the logic.
@adriangb Thanks for looking into this a bit more! If we agree most of these optimizations are worth pursuing (my biased Datadog opinion being that the Int8/Int16 and Utf8 optims are the most critical), we should first get the first two commits merged. Those set the foundations for the rest. Each subsequent optimization commit is then much easier to reason about independently, and they can be easily split into dedicated PRs. The first commit is "just" AI-generated benchmarks, so I don't think we need to care too much about that one. The second one, though, is the most important...
Adding new benchmarks in a separate PR sounds like a good way to start. Also a related PR here: …
Yeah agreed. @geoffreyclaude if you make a PR with just the benchmarks we can merge that immediately. Then the second commit would be its own PR. For the rest, I feel like we really need to look at the tradeoff of performance vs. complexity one by one and e.g. bake them off against #21547.
@adriangb I (well, Codex...) cherry-picked the first two commits to dedicated PRs: |
## Which issue does this PR close?

- Part of #19241.
- This PR was originally proposed as the first commit in the broader `IN LIST` optimization series in #19390.

## Rationale for this change

`IN LIST` has become the target of several specialized execution strategies, but the existing benchmark coverage in `datafusion/physical-expr/benches/in_list.rs` is mostly end-to-end and historical in nature. That broad coverage is useful for regression tracking, but it is not ideal for answering more focused questions such as:

- how a specific strategy behaves as the list size crosses a threshold
- whether the fast path helps both hit-heavy and miss-heavy workloads
- how null handling affects the strategy-specific implementations
- how two-stage string filters behave in their worst-case collision patterns

This PR adds a dedicated strategy benchmark harness for `IN LIST` so future performance work can be evaluated against a stable, repeatable, strategy-focused corpus. This PR does not change `InList` execution behavior. It only adds benchmark coverage.

## What changes are included in this PR?

- Adds `datafusion/physical-expr/benches/in_list_strategy.rs`
- Registers the new benchmark target in `datafusion/physical-expr/Cargo.toml`
- Keeps the existing `benches/in_list.rs` benchmark suite intact for broader historical comparison
- Adds targeted Criterion coverage for the main `IN LIST` strategy families, including:
  - bitmap-style paths for narrow integer cases
  - branchless and hash/probe-style paths for primitive values at different list-size thresholds
  - reinterpretation-heavy cases such as floats and timestamps
  - string and string-view cases, including stage-2 / prefix-collision stress inputs
  - null-heavy and `NOT IN` scenarios
  - dictionary and fixed-size-binary coverage used by the broader `IN LIST` implementation

## Are these changes tested?

Yes. I validated this PR with:

- `cargo fmt --all`
- `cargo clippy -p datafusion-physical-expr --all-targets --all-features -- -D warnings`
- `cargo test -p datafusion-physical-expr in_list --lib`

## Are there any user-facing changes?

No user-facing changes. This PR only adds benchmark coverage for development and performance evaluation.
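For a sense of the shape of this strategy-focused coverage, here is a hypothetical Criterion microbenchmark (not taken from the PR; the bench id mimics the corpus naming, and the body inlines the branchless fold rather than calling DataFusion internals):

```rust
use criterion::{criterion_group, criterion_main, Criterion};
use std::hint::black_box;

/// Hypothetical microbenchmark: one strategy (branchless) at one point of
/// the (list size, match rate) grid.
fn bench_branchless_i32(c: &mut Criterion) {
    let list: Vec<i32> = (0..4).collect();
    // Half the needles hit the list, half miss.
    let needles: Vec<i32> =
        (0..8192).map(|i| if i % 2 == 0 { i % 4 } else { -1 }).collect();
    c.bench_function("primitive/i32_branchless_list=4_match=50%", |b| {
        b.iter(|| {
            let hits = needles
                .iter()
                .filter(|&&n| list.iter().fold(false, |acc, &v| acc | (v == n)))
                .count();
            black_box(hits)
        })
    });
}

criterion_group!(benches, bench_branchless_i32);
criterion_main!(benches);
```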
## Which issue does this PR close?

## Rationale for this change

The current `InList` expression implementation uses a generic `ArrayStaticFilter` that relies on `make_comparator` for all types, which adds significant overhead for primitive types. This PR introduces type-specialized filters that exploit the properties of different data types to achieve substantial performance improvements.

## What changes are included in this PR?
This PR refactors the `InList` expression to use specialized filter strategies based on data type and list size. The implementation is split into 10 incremental commits:

### Commit 1: Strategy-Focused InList Benchmarks
- Adds `benches/in_list_strategy.rs` to establish focused microbenchmarks for the filter strategies and threshold boundaries explored in the follow-up commits

### Commit 2: Modular StaticFilter Architecture
Refactors `InListExpr` from a single monolithic file into a module (`in_list/`) with submodules: `static_filter.rs` (trait), `primitive_filter.rs` (primitive optimizations), `nested_filter.rs` (complex type fallback), `result.rs` (result construction), `strategy.rs` (filter selection), and `transform.rs` (type transformations). Introduces the `StaticFilter` trait to decouple membership testing from `InListExpr`, enabling pluggable filter implementations without changing the public API.

### Commit 3: Bitmap Filter for UInt8 (stack-based)
- `bitmap/u8_list=4_match=50%` is 8.1× faster than baseline

### Commit 4: Bitmap Filter for UInt16 (heap-based)
### Commit 5: Zero-Copy Reinterpretation for Int8/Int16/Float16
- `in_list_Int16_list=28_nulls=0%` is 13.6× faster than baseline

### Commit 6: Branchless Filter for Small Primitive Lists
- `values.iter().fold(false, |acc, &v| acc | (v == needle))`
- `primitive/i32_branchless_list=4_match=50%` is 13.4× faster; `reinterpret/timestamp_ns_branchless_list=4_match=50%` is 22.5× faster

### Commit 7: Direct Probe Hash Filter for Large Primitive Lists
- Replaces `std::HashSet` for primitives when list exceeds branchless thresholds

### Commit 8: ByteView Two-Stage Filter (Utf8View/BinaryView)
- `in_list_Utf8View_list=28_nulls=0%_str=100` is 11.1× faster than baseline
- `Utf8View` benches are 3.70× geomean faster overall

### Commit 9: Utf8/LargeUtf8 Two-Stage Filter
- Encodes strings as `[len:u32][data:12 bytes]` for quick rejection
- `Utf8` benches are 1.20× geomean faster overall, with a few small-list regressions still present

### Commit 10: FixedSizeBinary Zero-Copy Reinterpretation
- `FixedSizeBinary(N)` for `N ∈ {1, 2, 4, 8, 16}` now uses the primitive fast paths instead of the generic nested fallback
- `fixed_size_binary/fsb16_list=10000_match=50%` is 7.6× faster than baseline

## Performance Summary
Benchmarks were compared by scanning `target/criterion`, pairing `before/sample.json` with the latest `new/sample.json`, and using the best observed sample (`min(time / iters)`) on each side to reduce system noise; a sketch of this comparison is shown below.
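A sketch of that comparison (this assumes Criterion's on-disk `sample.json` layout with parallel `iters`/`times` arrays, which is an internal format rather than a stable API, and uses `serde_json` and `anyhow`):

```rust
use serde::Deserialize;
use std::{fs::File, path::Path};

/// Parallel sample arrays as written by Criterion into `sample.json`
/// (field names are an assumption based on the observed format).
#[derive(Deserialize)]
struct Samples {
    iters: Vec<f64>,
    times: Vec<f64>, // total nanoseconds per sample batch
}

/// Best (fastest) observed per-iteration time; the minimum is less
/// sensitive to system noise than the mean.
fn best_sample(path: &Path) -> anyhow::Result<f64> {
    let s: Samples = serde_json::from_reader(File::open(path)?)?;
    Ok(s.iters
        .iter()
        .zip(&s.times)
        .map(|(it, t)| t / it)
        .fold(f64::INFINITY, f64::min))
}

/// Speedup of `new` over `before` for one benchmark directory.
fn speedup(bench_dir: &Path) -> anyhow::Result<f64> {
    let before = best_sample(&bench_dir.join("before/sample.json"))?;
    let new = best_sample(&bench_dir.join("new/sample.json"))?;
    Ok(before / new)
}
```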
Overall on the current benchmark corpus:

Representative wins from the latest run:
- `reinterpret/timestamp_ns_branchless_list=4_match=50%`: 22.5× faster
- `in_list_TimestampNs_list=3_nulls=20%`: 17.9× faster
- `in_list_Int16_list=28_nulls=0%`: 13.6× faster
- `primitive/i32_branchless_list=4_match=50%`: 13.4× faster
- `in_list_Utf8View_list=28_nulls=0%_str=100`: 11.1× faster
- `fixed_size_binary/fsb16_list=10000_match=50%`: 7.6× faster

Known regressions from the latest run are limited to a few legacy `Utf8` cases.

## Filter Selection Strategy
## Are these changes tested?
Yes, the optimizations are covered by the existing `in_list` test suite, which exercises:

Benchmark coverage lives in both:
- `benches/in_list.rs` for broad end-to-end legacy coverage
- `benches/in_list_strategy.rs` for strategy-focused microbenchmarks at the threshold boundaries

## Are there any user-facing changes?
No user-facing API changes. This is a pure performance optimization that maintains identical behavior.