docs/website/docs/hub/features/transformations/index.md

import { DltHubFeatureAdmonition } from '@theme/DltHubFeatureAdmonition';

`dlt transformations` let you build new tables or full datasets from datasets that have _already_ been ingested with `dlt`. They are written and run in much the same way as dlt sources and resources. `dlt transformations` require data that has already been loaded to a location on which the transformations can be executed, for example a local duckdb database, a bucket, or a warehouse. `dlt transformations` are fully supported for all of our SQL destinations, including all filesystem and bucket formats.

You create them with the `@dlt.hub.transformation` decorator, which has the same signature as the `@dlt.resource` decorator but yields a SQL query, including the resulting column schema, rather than data items. dlt transformations support the same `write_disposition`s per destination as dlt resources do.
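
A minimal sketch of what such a transformation can look like, assuming a dataset that already contains a hypothetical `customers` table and that a raw select query can be turned into a `Relation` by calling the dataset (the quick-start below shows how to load the data and run the transformation):

```py
import dlt

@dlt.hub.transformation(write_disposition="append")
def copied_customers(dataset: dlt.Dataset):
    # yield a select query (or an ibis-based Relation) instead of data items;
    # dlt derives the resulting column schema from this query
    yield dataset("SELECT id, name FROM customers")
```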
## Motivations

A few real-world scenarios where dlt transformations can be useful:
## Quick-start in three simple steps

For the example below, you can copy–paste everything into one script and run it.

:::note
It is useful to know how to use dlt [Datasets and Relations](../../../general-usage/dataset-access/dataset.md), since these are heavily used in transformations.
:::
### 1. Load some example data

The snippets below assume that we have a simple fruitshop dataset as produced by the dlt fruitshop template:
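
A minimal stand-in for that template: the sketch below loads a small `customers` table into a local duckdb database. The table and values are illustrative only; the pipeline calls are standard dlt.

```py
import dlt

# stand-in for the fruitshop template data: a small customers table
customers = [
    {"id": 1, "name": "simon", "city": "berlin"},
    {"id": 2, "name": "violet", "city": "london"},
    {"id": 3, "name": "tammo", "city": "new york"},
]

fruitshop_pipeline = dlt.pipeline(
    pipeline_name="fruitshop",
    destination="duckdb",
    dataset_name="fruitshop_data",
)
info = fruitshop_pipeline.run(customers, table_name="customers")
print(info)
```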
* **Decorator arguments** mirror those accepted by `@dlt.resource`.
* The transformation function signature must contain at least one `dlt.Dataset`, which is used inside the function to create the transformation SQL statements and calculate the resulting schema update.
* A transformation yields a `Relation` created with ibis expressions or a select query, which will be materialized into the destination table. If the first item yielded is a valid SQL query or relation object, the data will be interpreted as a transformation. In all other cases, the transformation decorator will work like any other resource (see the sketch after this list).
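
A sketch of such a transformation, assuming the stand-in `customers` table from step 1, that relations obtained from the dataset support ibis expressions as described above, and that the transformation is run through a pipeline like any other resource, with the source dataset passed in:

```py
import dlt

@dlt.hub.transformation
def berlin_customers(dataset: dlt.Dataset):
    customers = dataset["customers"]
    # filter and project with ibis expressions; dlt computes the result schema from this relation
    yield customers.filter(customers.city == "berlin").select("id", "name", "city")

transform_pipeline = dlt.pipeline(
    pipeline_name="fruitshop_transform",
    destination="duckdb",
    dataset_name="transformed_data",
)
transform_pipeline.run(berlin_customers(fruitshop_pipeline.dataset()))
```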
## Loading to other datasets

Below we load the data from our local DuckDB instance to a Postgres instance.
### Yielding multiple transformations from one transformation resource

`dlt transformations` may also yield more than one transformation instruction. If no further table name hints are supplied, the result will be a union of the yielded transformation instructions. `dlt` will take care of the necessary schema migrations; you will just need to ensure that no columns are marked as non-nullable that are missing from one of the transformation instructions:

You may supply column and table hints the same way you do for regular resources. `dlt` will derive schema hints from your query, but in some cases you may need to modify or extend them. For example, you might make columns nullable as in the example above, or adjust the precision or type of a column to ensure compatibility with a specific target destination (if it differs from the source).
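
A sketch of such a resource with hypothetical order tables: it yields two queries that are unioned into one destination table, and it marks the column missing from the second query as nullable via an explicit hint.

```py
import dlt

@dlt.hub.transformation(columns={"discount": {"nullable": True}})
def all_orders(dataset: dlt.Dataset):
    # both selects load into the same destination table as a union;
    # the second query has no discount column, so the hint above marks it nullable
    yield dataset("SELECT id, amount, discount FROM online_orders")
    yield dataset("SELECT id, amount FROM store_orders")
```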
## Using pandas dataframes or arrow tables

You can also write transformations directly using pandas or arrow. Note that in this case your transformation resource behaves like a regular resource: column-level hints will not be propagated, and `dlt` will simply treat the yielded dataframes or arrow tables like data from any other resource. This behavior may change in the future.
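
A small sketch, assuming the `customers` table from step 1 and that a relation can be fetched into pandas with its `df()` method:

```py
import dlt

@dlt.hub.transformation
def customers_with_name_length(dataset: dlt.Dataset):
    # fetch the table into a pandas dataframe and yield it like regular resource data
    df = dataset["customers"].df()
    df["name_length"] = df["name"].str.len()
    yield df
```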

When executing transformations, `dlt` computes the resulting schema before the transformation is executed. This allows `dlt` to:
1. Migrate the destination schema accordingly, creating new columns or tables as needed
2. Fail early if there are schema mismatches that cannot be resolved
3. Preserve column-level hints from source to destination
### Schema evolution

For example, if your transformation joins two tables and creates new columns, `dlt` will automatically update the destination schema to accommodate these changes. If your transformation would result in incompatible schema changes (like changing a column's data type in a way that could lose data), `dlt` will fail before executing the transformation, protecting your data and saving execution and debug time.
You can inspect the computed result schema during development by looking at the result of `compute_columns_schema` on your `Relation`:
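
For instance, a rough sketch of this inspection, assuming a relation built from the dataset loaded in step 1 and that `compute_columns_schema` is invoked as a method on the relation:

```py
dataset = fruitshop_pipeline.dataset()
relation = dataset("SELECT id, name FROM customers")
# inspect the column schema dlt computes for the transformation result
print(relation.compute_columns_schema())
```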

When creating or updating tables with transformation resources, `dlt` will also forward certain column hints to the new tables. In our fruitshop source, we have applied a custom hint named `x-annotation-pii` set to True for the `name` column, which indicates that this column contains PII (personally identifiable information). Downstream of the transformation layer, we may want to know which columns originate from columns that contain private data:

* `dlt` will only forward certain types of hints to the resulting tables: custom hints starting with `x-annotation...` and type hints such as `nullable`, `data_type`, `precision`, `scale`, and `timezone`. Other hints, such as `primary_key` or `merge_keys`, will need to be set via the `columns` argument on the transformation decorator (see the sketch below), since `dlt` does not know how the transformed tables will be used.
* `dlt` cannot forward hints for columns that result from combining multiple origin columns, such as when they are concatenated or produced through other SQL operations.
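
For example, a primary key on a transformed table has to be declared explicitly; a minimal sketch with illustrative names:

```py
import dlt

@dlt.hub.transformation(columns={"id": {"primary_key": True}})
def keyed_customers(dataset: dlt.Dataset):
    yield dataset("SELECT id, name, city FROM customers")
```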

## Lifecycle of a SQL transformation

Just like regular dlt resources, dlt transformations go through the three stages of extract, normalize, and load when a pipeline is run.

### Extract
In the extract stage, a `Relation` yielded by a transformation is converted into a SQL string and saved as a `.model` file along with its source SQL dialect.
At this stage, the SQL string is just the user's original query — either the string that was explicitly provided or the one generated by `Relation.to_sql()`. No dlt-specific columns like `_dlt_id` or `_dlt_load_id` are added yet.
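
During development you can look at this raw SQL yourself; a small sketch, assuming a relation built from the dataset as in the examples above:

```py
relation = dataset["customers"].select("id", "name")
# print the SQL string that would be saved in the .model file for this relation
print(relation.to_sql())
```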
### Normalize
In the normalize stage, `.model` files are read and processed. This is where the main transformation logic happens.
#### `dlt` columns
During normalization, `dlt` will add internal dlt columns to your SQL queries depending on the configuration:

- `_dlt_load_id`, which tracks which load operation created or modified each row, is **added by default**. Even if present in your query, the `_dlt_load_id` column will be **replaced with a constant value** corresponding to the current load ID. To disable this behavior, set:

```toml
# ...
```
Additionally, column names are normalized according to the naming schema selected and the identifier capabilities of the destinations. This ensures compatibility and consistent naming conventions across different data sources and destination systems.

This allows `dlt` to maintain data lineage and enables features like incremental loading and merging, even when working with raw SQL queries.
:::info
The normalization described here, including automatic injection or replacement of dlt columns, applies only to SQL-based transformations. Python-based transformations, such as those using dataframes or arrow tables, follow the [regular normalization process](../../../reference/explainers/how-dlt-works.md#normalize).
:::

#### Query Processing
Additionally, the normalization process in `dlt` takes care of several important steps to ensure your queries are executed smoothly and correctly on the input dataset:
1. Adds special dlt columns (see above for details).
2. Fully qualifies all identifiers by adding database and dataset prefixes, so tables are always referenced unambiguously during query execution.
3. Properly quotes and, if necessary, adjusts the case of your identifiers to match the destination’s requirements.
4. Handles differences in naming conventions by aliasing columns and tables as needed, so names always match those in the destination.
5. Reorders columns to match the expected order in the destination table.
6. Fills in default `NULL` values for any columns that exist in the destination table but are not selected in your query.
### Load

In the load stage, the normalized SELECT queries from `.model` files are wrapped in INSERT statements and executed on the destination.

For example, given this query from the extract stage:
```sql
SELECT id, value FROM table
```

After the normalize stage processes it (adding dlt columns, wrapping it in a subquery, etc.), the load stage executes:
```sql
INSERT INTO
  ...
FROM (
  ...
) AS _dlt_subquery
```

The SELECT portion is what was produced during the normalize stage. In the load stage, this query is executed via the destination's SQL client, materializing the transformation result directly in the database.
0 commit comments