feat: support module_version for add-to-app releases #121

eseidel wants to merge 58 commits into shorebird/dev from
Conversation
* fix: stop using globals for patch data
* chore: run et format
* chore: add missing files
* test: add unittest
* chore: run et format
* chore: move elf_cache down into runtime
* chore: rename elf* to patch*
* chore: clean up logs
* chore: clean up comments
* chore: use Shorebird dart
* chore: small cleanup
…ASE_URL (#97)

* fix: make dart/flutter work without FLUTTER_STORAGE_BASE_URL
* feat: shorebird flutter should work without setting FLUTTER_STORAGE_BASE_URL
* fix: flutter_tools test fixes
* fix: enable running flutter_tools tests
* chore: remove unnecessary workflow
* chore: move build_engine scripts into this repo
* chore: fix path of content_aware_hash.sh
* chore: roll dart to 6a78a2deaee05bc74775fcfa2ff27aa53c96efca
* wip
* chore: run et format
* chore: attempt to clean up shorebird.cc
* chore: fix build
* chore: remove FLUTTER_STORAGE_BASE_URL override
* feat: allow patch_verification_mode
* test: update tests
* chore: rename to patch_verification
…clude patch_verification option
* es/report_start_fix
* fix: second callsite
* chore: add a C++ interface onto the updater
* chore: centralize SHOREBIRD_PLATFORM_SUPPORTED
* test: fix tests
Previously we stopped reporting start on Android by accident. This fixes that. I also removed the once-per-process guard since it's not necessary: we now correctly report once per shell and let the Rust code handle only the first of the calls. Fixes shorebirdtech/shorebird#3488
As part of our previous fix for FlutterEngineGroup, we introduced a new bug whereby report_launch_start could be called more than once in a multi-engine scenario. That caused confusion about what the current boot patch is, since the current patch is updated as part of report_launch_start. report_launch_start should only be called once per process, which this change fixes. We still need more end-to-end testing at this layer to prevent bugs like this from sneaking in.
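The updater source itself isn't shown in this thread, but the once-per-process behavior described above can be sketched with an atomic flag; the name `report_launch_start` is taken from the discussion, everything else here is illustrative:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Process-wide flag: only the first caller actually reports launch start.
static LAUNCH_REPORTED: AtomicBool = AtomicBool::new(false);

/// Returns true only for the first call in this process. Later callers
/// (e.g. additional engines created via FlutterEngineGroup) lose the
/// atomic swap and are ignored, so the boot patch is recorded exactly once.
fn report_launch_start() -> bool {
    // swap() atomically sets the flag and returns the previous value.
    !LAUNCH_REPORTED.swap(true, Ordering::SeqCst)
}
```

The swap-based guard is race-free even when multiple engines start concurrently, which a plain load-then-store check would not be.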
…#108)

- Create a template Flutter project once in setUpAll and copy it per test, avoiding repeated `flutter create` calls
- Run a warm-up `flutter build apk` in setUpAll (outside the per-test timeout) to prime the Gradle cache
- Add actions/cache for ~/.gradle so subsequent CI runs start warm
- Add a VERBOSE env var and failure output logging from #107
* chore: split CI into parallel jobs

Split the single CI job into three parallel jobs:
1. flutter-tools-tests: runs on Ubuntu + macOS (unchanged)
2. shorebird-android-tests: runs on Ubuntu only (faster runners)
3. shorebird-ios-tests: runs on macOS only (requires Xcode)

This improves CI performance by:
- Running all jobs in parallel instead of sequentially
- Moving Android tests off macOS to faster Ubuntu runners
- Removing Windows from the matrix (nothing was running there anyway)

Expected speedup on macOS: ~5 minutes (no longer runs Android tests)

* Add Android smoke test on macOS

Run a single "can build an apk" test on macOS to catch any platform-specific issues with Android builds on macOS.

* Add comment explaining why Windows is excluded
* feat: add sharded CI build runner

Adds a Dart-based shard runner for parallel engine builds:
- JSON configs for Linux, macOS, Windows shards
- run_shard.dart: executes gn/ninja/rust builds
- compose.dart: assembles iOS/macOS frameworks
- GCS upload/download for artifact staging

* refactor: parse compose.json into typed objects

Move from reading compose.json directly as Map<String, dynamic> to using proper ComposeConfig and ComposeDef model classes. This follows a more idiomatic Dart pattern.

* feat: add finalize.dart for manifest generation and uploads

Implements the finalize job logic that:
- Downloads artifacts from GCS staging
- Generates artifacts_manifest.yaml
- Uploads to the production GCS bucket (download.shorebird.dev)

Ports upload logic from linux_upload.sh, mac_upload.sh, and generate_manifest.sh into a single Dart script.

* fix: correct cargo-ndk invocation for Android Rust builds

- Use the --target flag (not -t) with Rust target triples
- Set ANDROID_NDK_HOME to the engine's hermetic NDK
- Build all Android targets in a single cargo ndk command
- Remove the incorrect _androidArch helper function

This matches the behavior of linux_build.sh.

* test: add unit tests for config parsing

Tests cover:
- PlatformConfig: single-step shards, multi-step shards, compose_input
- BuildStep: gn_ninja and rust step types
- ComposeConfig: compose definitions, requires, script, args
- Error handling for unknown shards/compose names

* feat: add artifacts field to shard configs

Define artifacts declaratively in JSON configs instead of hardcoding upload paths in Dart. Each artifact specifies:
- src: source path relative to out/
- dst: destination path with $engine placeholder
- zip: whether to zip before upload (for directories like dart-sdk)
- content_hash: whether to also upload to a content-hash path (for the Dart SDK)

This makes the config self-describing and aligns with Flutter's ci/builders/*.json pattern of explicit artifact declarations.
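A shard artifact entry with those four fields might look like the fragment below; the field names come from the commit message above, while the shard name and paths are illustrative:

```json
{
  "artifacts": [
    {
      "src": "host_release/dart-sdk",
      "dst": "flutter_infra_release/flutter/$engine/dart-sdk-linux-x64.zip",
      "zip": true,
      "content_hash": true
    }
  ]
}
```

Because `zip` and `content_hash` are per-artifact flags, the Dart upload code stays generic and only the JSON configs need to change when a new artifact is added.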
* refactor: read shard names from JSON configs instead of hardcoding

Load PlatformConfig for each platform to get shard names dynamically, rather than maintaining a duplicate list in finalize.dart.

* feat(ci): add manifest generation and bucket configuration

- Extract generateManifest to lib/manifest.dart with tests
- Refactor finalize.dart to use artifacts from JSON configs
- Add a --bucket flag for test uploads to alternate buckets
- Add compare_buckets.dart for validating uploads against production

* chore: add pubspec.lock for shard_runner
* chore: allow shard_runner pubspec.lock in gitignore
* chore: use local .gitignore for shard_runner pubspec.lock

* refactor: load manifest from template file

Move manifest content to artifacts_manifest.template.yaml and update generateManifest to load from the template file with sync IO.

* fix: fail finalize on download errors instead of continuing

A missing shard download means incomplete artifacts. Better to fail loudly than silently upload an incomplete build.

* fix: fail on gsutil/zip errors instead of warning

Upload and zip failures should halt the build, not silently continue with missing artifacts.

* refactor: clean up shard_runner CLI and config parsing

- Add a shared runChecked() helper to eliminate the duplicated Process.run + exit-code-check pattern across config.dart, gcs.dart, finalize.dart, and compose.dart
- Add @immutable annotations to all data classes (via package:meta)
- Remove the implicit single-step shard shorthand; all shards now use explicit steps arrays in JSON configs
- Convert all async file IO to sync equivalents (existsSync, etc.)
- Make --engine-src and --run-id mandatory CLI args, removing hidden defaults and the GITHUB_RUN_ID env var fallback
- Restructure compose.json to use explicit flags/path_args instead of a single args list that guessed flag vs path semantics
- Collect outDirs from the config upfront rather than accumulating during execution

* ci: add shard_runner tests to shorebird_ci workflow

- Add analysis_options.yaml (package:lints/recommended with strict mode)
- Add a shard-runner-tests job with format, analyze, and test steps
- Fix a stale await on the sync PlatformConfig.load in compare_buckets.dart
- Reformat all files to the Dart standard (80-char width)
Each shard runs on a separate machine, so it needs its own Rust build step for the updater library. Previously only the host/android shards had Rust steps, but all shards that build libflutter need libupdater.a for their specific target triple.
On Windows, gcloud SDK tools like gsutil are installed as .cmd files. When Dart's Process.run is called without runInShell, it doesn't resolve these .cmd extensions. This adds a helper that explicitly checks for .cmd versions in PATH on Windows.
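The actual helper in this PR lives in the Dart shard runner, but the lookup logic it describes can be sketched in Rust; the function name and the `windows` parameter (passed explicitly so the sketch is testable on any OS) are illustrative:

```rust
use std::path::{Path, PathBuf};

/// Resolve a tool name against a list of PATH directories, preferring the
/// bare name but falling back to a `.cmd` wrapper, since gcloud installs
/// tools like gsutil as .cmd files on Windows.
fn resolve_tool(name: &str, dirs: &[&Path], windows: bool) -> Option<PathBuf> {
    for dir in dirs {
        let plain = dir.join(name);
        if plain.is_file() {
            return Some(plain);
        }
        if windows {
            // Only on Windows: also accept a .cmd shim with the same name.
            let cmd = dir.join(format!("{name}.cmd"));
            if cmd.is_file() {
                return Some(cmd);
            }
        }
    }
    None
}
```

Resolving the full path up front sidesteps `runInShell`, which would otherwise be needed just to get the shell to expand the `.cmd` extension.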
The manifest template still referenced Flutter.dSYM.zip but mac_upload.sh uploads Flutter.framework.dSYM.zip (the new name as of Flutter 3.27.0). Update the template, generate script, and test to match what's actually uploaded. Relates to shorebirdtech/shorebird#3035
Adds a --trace option that produces a Chrome Trace Event Format JSON file showing where time is spent across all build layers (flutter tool, Gradle, flutter assemble targets). The output can be viewed in Perfetto at https://ui.perfetto.dev.
The intermediate trace file path is shared across all Gradle variants. This is safe today since flutter build apk only runs one variant per invocation, but would need per-variant paths if that changes.
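The trace output format itself is the standard Chrome Trace Event JSON array; a minimal Rust sketch of emitting complete-duration events ("ph": "X", timestamps in microseconds) is below. The span names are illustrative, not the ones the tool emits:

```rust
/// One complete-duration event ("ph": "X") in Chrome Trace Event Format.
/// ts and dur are in microseconds, per the format.
struct TraceEvent {
    name: String,
    ts_us: u64,
    dur_us: u64,
}

/// Serialize events into the JSON array form that Perfetto accepts.
fn to_trace_json(events: &[TraceEvent]) -> String {
    let items: Vec<String> = events
        .iter()
        .map(|e| {
            format!(
                r#"{{"name":"{}","ph":"X","ts":{},"dur":{},"pid":1,"tid":1}}"#,
                e.name, e.ts_us, e.dur_us
            )
        })
        .collect();
    format!("[{}]", items.join(","))
}
```

Merging the flutter assemble events from the intermediate file then amounts to concatenating event objects into this one array before writing the final file.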
- Add a --trace option to flutter build ios / flutter build ipa
- Pass TRACE_FILE through Xcode build settings to xcode_backend.dart
- Instrument buildXcodeProject() in mac.dart with pre-xcode, xcode, and post-xcode spans
- Merge flutter assemble trace events from the intermediate file
- Remove TRACE_FILE from toEnvironmentConfig() since both the Android and iOS orchestrators compute intermediate paths directly
When shorebird.yaml contains a release_version field, use it as the release version instead of the host app's version+versionCode. This enables add-to-app (AAR/iOS framework) releases where the module's identity needs to be independent of the host app's version. The CLI injects this field during `shorebird release aar` builds. Falls back to existing behavior when release_version is absent, so this is fully backwards-compatible. Part of shorebirdtech/shorebird#793
When the SHOREBIRD_RELEASE_VERSION environment variable is set, write it as release_version into the compiled shorebird.yaml. This follows the same pattern as SHOREBIRD_PUBLIC_KEY. The Shorebird CLI sets this env var during `shorebird release aar` builds. The engine reads release_version from the compiled yaml and uses it instead of the host app's version for patch checking. Part of shorebirdtech/shorebird#793
The concept applies to both AAR and iOS framework releases — module_version is the platform-agnostic name for the version of an embeddable Flutter module.
Design discussion: module_version vs release_version in the protocol

The current approach in this PR works by having the engine substitute the yaml's release_version for the host app's version when checking for patches.

However, this means the host app's actual version is thrown away. The server has no way to distinguish a module version (a git hash) from an app version, and can't report which host app versions are running a given module.

Alternative: plumb module_version as a separate field

Instead of overriding release_version, pass module_version through the stack as its own optional field. The updater would then send both in the patch check request. This requires touching more code, but avoids the future problems above.

Recommendation

The current PR is a valid MVP, but we're leaning toward plumbing module_version as a separate field. Leaving this here for discussion before we decide which path to take.
Instead of overriding release_version in the engine, pass module_version as a separate optional field through the full stack:

- YamlConfig: parse module_version from shorebird.yaml
- UpdateConfig: propagate module_version
- PatchCheckRequest: send module_version alongside release_version
- Engine C++: revert the release_version override; pass the host app version through unchanged

The server uses module_version for patch lookup when present, while release_version always contains the host app's version for analytics. This avoids polluting release_version data with git hashes from module releases.

Part of shorebirdtech/shorebird#793
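The lookup rule this describes is small enough to sketch directly; the struct mirrors the PatchCheckRequest named above, though the exact Rust definitions in the updater may differ:

```rust
/// Patch-check parameters as described in this PR: release_version is
/// always the host app's version, while module_version is optional and,
/// when present, identifies an add-to-app module independently of the
/// host app.
struct PatchCheckRequest {
    release_version: String,
    module_version: Option<String>,
}

/// The version the server keys patch lookup on under this design:
/// module_version when present, otherwise release_version.
fn lookup_version(req: &PatchCheckRequest) -> &str {
    req.module_version.as_deref().unwrap_or(&req.release_version)
}
```

Keeping both fields means analytics can still group by host app version even for module releases, which the override approach loses.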
c92ab93 to b348f63
Summary
- Read the `SHOREBIRD_MODULE_VERSION` env var in `compileShorebirdYaml()` and write `module_version` into the compiled `shorebird.yaml` (same pattern as `SHOREBIRD_PUBLIC_KEY`)
- Parse `module_version` from shorebird.yaml and add it to `UpdateConfig` and `PatchCheckRequest` as a separate optional field sent alongside `release_version`

Context
Part of shorebirdtech/shorebird#793
`release_version` always stays the host app's version. `module_version` is a separate field used for patch lookup in add-to-app releases, where the module's identity is independent of the host app.

Companion PRs:
Test plan
- `module_version` appears in the compiled yaml when the env var is set
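For reference, a compiled shorebird.yaml for an add-to-app release would then carry the new field; the field name comes from this PR, while the values below are illustrative:

```yaml
# Compiled shorebird.yaml shipped inside the module artifact.
app_id: <your-app-id>
# Written from SHOREBIRD_MODULE_VERSION at build time, same pattern as
# SHOREBIRD_PUBLIC_KEY; absent for ordinary (non add-to-app) releases.
module_version: abc123def
```

When the field is absent, the updater falls back to the host app's version, so existing apps are unaffected.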