feat: Let Binaryen encode our memory init more efficiently#2975
geraintluff wants to merge 3 commits into AssemblyScript:main
Conversation
Oh, and obviously it fails a bunch of release-build fixture tests, because it's packing the data differently. 😅
Won't this essentially compress alignment away, say for example when we want 4-byte alignment for reasonably performant 32-bit memory accesses?
@dcodeIO This doesn't change the memory layout at all, only how it's serialised into WASM/WAT. The title is probably misleading, sorry! Here's an example of the issue - it will take less space to serialise those as a single block of
HerrCai0907 left a comment
My biggest concern is that this feature may OOM the build machine when the wasm is large. I remember there was a PR (maybe not merged) to avoid creating a real JS buffer during compilation when the data is all zeros, to reduce the memory cost.
e.g. `memory.data(large_number)`
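To make the concern concrete, here is a hypothetical illustration (the snippet and the 64 MiB figure are not from this PR): a large static reservation is cheap in the emitted wasm, but could become an equally large in-memory buffer if every segment were merged into one.

```ts
// Hypothetical example of the reviewer's scenario, not code from the PR.
// A static reservation like this is cheap in the final binary, because the
// region can stay as untouched zero memory rather than literal bytes.
const SCRATCH: usize = memory.data(64 << 20); // reserve 64 MiB of zeroed static memory

// If the compiler merged every MemorySegment into one contiguous buffer,
// a reservation like SCRATCH could force a ~64 MiB zero-filled buffer to
// exist in the compiler's own heap while building.
```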
@HerrCai0907 Excellent point. I'll convert this back to draft, and then re-open when I have something incremental which avoids this problem.
AssemblyScript produces a bunch of small data chunks. Binaryen can split up large chunks to represent them in a compact way, but not combine them.
This change checks whether Binaryen will be doing any optimisation at all, and if so combines all the MemorySegments into one big segment.
Here's a comparison of the bootstrapped AssemblyScript compiler itself, with and without the change:
Since this patch adds functionality, it increases the code size of the debug build, but the release build is still smaller due to the repacking.
It's a small improvement, and on my other projects I'm only seeing a ~1% change, but 🤷.
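As a rough sketch of the combining step described above (placeholder types and names, not the compiler's actual MemorySegment API), the idea is roughly:

```ts
// Minimal sketch, not the actual patch: merge many small data segments into
// one contiguous segment so Binaryen's memory packing can re-split it optimally.
interface Segment {
  offset: number;      // absolute offset in linear memory
  buffer: Uint8Array;  // bytes to place at that offset
}

function combineSegments(segments: Segment[]): Segment | null {
  if (segments.length === 0) return null;
  const sorted = segments.slice().sort((a, b) => a.offset - b.offset);
  const start = sorted[0].offset;
  let end = start;
  for (const seg of sorted) end = Math.max(end, seg.offset + seg.buffer.length);
  const combined = new Uint8Array(end - start);
  for (const seg of sorted) {
    // Gaps between segments stay zero; Binaryen's memory-packing pass can
    // strip long zero runs back out when it optimizes the module.
    combined.set(seg.buffer, seg.offset - start);
  }
  return { offset: start, buffer: combined };
}
```

This would only run when Binaryen optimisation is enabled, and (per the review discussion above) would still need a guard so that a huge all-zero reservation does not materialise an equally huge buffer at compile time.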