Update product names: Workflows→Jobs, Delta Live Tables→Spark Declarative Pipelines #4967
lennartkats-db wants to merge 7 commits into main from
Conversation
Update all non-generated references to retired product names:
- "Databricks Workflows" / "Workflows" → "Databricks Jobs" / "Jobs"
- "Delta Live Tables" → "Spark Declarative Pipelines"
- "DLT" → "SDP" (in comments/internal code)
- Template parameter `include_dlt` → `include_sdp`
- Template file `dlt_pipeline.ipynb` → `sdp_pipeline.ipynb`

Generated files (schema JSON, docsgen, acceptance test outputs, Python models) are not updated here — regenerate with `make schema`, `make docs`, `make test-update`, `make test-update-templates`, `make -C python codegen` after the upstream proto changes land.

Co-authored-by: Isaac
With include_pipeline properly wired (was silently ignored as include_dlt), PIPELINE=no now excludes the pipeline resource. With only a job resource, dynamic_version causes 1 change and 0 unchanged, which is correct behavior. Co-authored-by: Isaac
The template renamed include_dlt to include_pipeline in a prior PR, but the combinations test intentionally still passes include_dlt (which gets silently ignored, defaulting to yes). Renaming it to include_pipeline makes PIPELINE=no actually exclude pipelines, causing divergent output across variants, which the combinations framework doesn't support. Co-authored-by: Isaac
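For context, a skip-style template parameter like the one discussed above might be declared as follows in a bundle template's `databricks_template_schema.json`. This is a minimal illustrative sketch, not the template's actual schema; only the parameter name include_pipeline comes from the thread, the field values are assumptions.

```json
{
  "properties": {
    "include_pipeline": {
      "type": "string",
      "enum": ["yes", "no"],
      "default": "yes",
      "description": "Include a sample pipeline resource in the generated bundle"
    }
  }
}
```

With this shape, a value of "no" is what lets the template logic exclude the pipeline resource instead of silently ignoring the flag.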
The output was corrupted when running tests locally without terraform, replacing the successful deployment output with terraform init errors. Restores the correct output from main and applies the DLT→SDP string change. Co-authored-by: Isaac
simonfaltum left a comment
Review swarm: Isaac + Cursor (1 round, Cursor timed out)
0 Critical | 1 Major (Gap) | 1 Nit | 1 Suggestion
The rename changes look correct across the board. A few things worth addressing before merge, the main one being that jsonschema.json appears to be manually edited rather than regenerated from the source annotations. There are also a couple of downstream generated files (jsonschema_for_docs.json, docsgen/output/reference.md, docsgen/output/resources.md) that still contain the old product names.
See inline comments for specifics.
  },
  "additionalProperties": false,
- "markdownDescription": "The pipeline resource allows you to create Delta Live Tables [pipelines](https://docs.databricks.com/api/workspace/pipelines/create). For information about pipelines, see [link](https://docs.databricks.com/dlt/index.html). For a tutorial that uses the Declarative Automation Bundles template to create a pipeline, see [link](https://docs.databricks.com/dev-tools/bundles/pipelines-tutorial.html)."
+ "markdownDescription": "The pipeline resource allows you to create Spark Declarative [Pipelines](https://docs.databricks.com/api/workspace/pipelines/create). For information about pipelines, see [link](https://docs.databricks.com/dlt/index.html). For a tutorial that uses the Declarative Automation Bundles template to create a pipeline, see [link](https://docs.databricks.com/dev-tools/bundles/pipelines-tutorial.html)."
[Gap (Major)] This file looks like it was manually edited rather than regenerated from the source annotations. That's fragile and can drift.
I think you should only edit the source files (annotations.yml, annotations_openapi_overrides.yml) and then run make schema && make schema-for-docs && make docs to regenerate everything consistently. The other generated files (jsonschema_for_docs.json, docsgen/output/reference.md, docsgen/output/resources.md) still contain the old product names and would get picked up by regeneration.
| "_": | ||
| "markdown_description": |- | ||
| The pipeline resource allows you to create Delta Live Tables [pipelines](/api/workspace/pipelines/create). For information about pipelines, see [_](/dlt/index.md). For a tutorial that uses the Declarative Automation Bundles template to create a pipeline, see [_](/dev-tools/bundles/pipelines-tutorial.md). | ||
| The pipeline resource allows you to create Spark Declarative [Pipelines](/api/workspace/pipelines/create). For information about pipelines, see [_](/dlt/index.md). For a tutorial that uses the Declarative Automation Bundles template to create a pipeline, see [_](/dev-tools/bundles/pipelines-tutorial.md). |
[Nit] The new text reads "create Spark Declarative [Pipelines](...)" which splits the product name across a link boundary. "Spark Declarative" on its own doesn't mean anything. The original worked because "Delta Live Tables" was a self-contained product name and "pipelines" was a generic noun.
Consider "create [Spark Declarative Pipelines](/api/workspace/pipelines/create)" to keep the full product name inside the link.
- // One or more DLT pipelines is being recreated.
+ // One or more SDP pipelines is being recreated.
  if len(dltActions) != 0 {
[Suggestion] Nice rename from dltActions to pipelineActions. The comment here could match: since the variable already says "pipeline", you could simplify to // One or more pipelines is being recreated. and drop the "SDP" abbreviation.
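The suggested comment wording, applied to a minimal sketch. The function name, message text, and the string-slice representation of plan actions here are hypothetical illustrations, not the CLI's actual code; only the pipelineActions name and the comment come from the review.

```go
package main

import "fmt"

// recreateWarning is a hypothetical sketch of the check discussed above.
// pipelineActions stands in for the plan actions that recreate pipelines.
func recreateWarning(pipelineActions []string) string {
	// One or more pipelines is being recreated.
	if len(pipelineActions) != 0 {
		return fmt.Sprintf("This action will recreate %d pipeline(s)", len(pipelineActions))
	}
	return ""
}

func main() {
	fmt.Println(recreateWarning([]string{"recreate: my_pipeline"}))
}
```

With the variable already named pipelineActions, the plain "pipelines" comment reads naturally and avoids introducing the SDP abbreviation.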
Changes
include_dlt → include_sdp and file dlt_pipeline.ipynb → sdp_pipeline.ipynb in experimental-jobs-as-code template