10. Transform data using file upload

Transform data with dbt models using file upload

Overview

Let’s finalize our data pipeline by creating an orders model that combines stg_orders and stg_payments to show the breakdown of payments received for an order.
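
The orders.sql file you will download below follows the familiar dbt pattern of pivoting payments per order. The sketch here is illustrative rather than the exact file contents: it assumes jaffle-shop-style staging columns (order_id, customer_id, order_date, status in stg_orders; order_id, payment_method, amount in stg_payments) and four payment methods.

  with orders as (
      select * from {{ ref('stg_orders') }}
  ),

  payments as (
      select * from {{ ref('stg_payments') }}
  ),

  order_payments as (
      -- Pivot each payment method into its own amount column per order
      select
          order_id,
          sum(case when payment_method = 'credit_card' then amount else 0 end) as credit_card_amount,
          sum(case when payment_method = 'coupon' then amount else 0 end) as coupon_amount,
          sum(case when payment_method = 'bank_transfer' then amount else 0 end) as bank_transfer_amount,
          sum(case when payment_method = 'gift_card' then amount else 0 end) as gift_card_amount,
          sum(amount) as amount
      from payments
      group by order_id
  )

  -- 4 order columns + 4 payment-method columns + 1 total = the 9 columns
  -- you will verify in the Preview step
  select
      orders.order_id,
      orders.customer_id,
      orders.order_date,
      orders.status,
      order_payments.credit_card_amount,
      order_payments.coupon_amount,
      order_payments.bank_transfer_amount,
      order_payments.gift_card_amount,
      order_payments.amount
  from orders
  left join order_payments on orders.order_id = order_payments.order_id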

Steps

Adding a model using file upload

  1. Switch to Code mode by clicking on the Lineage dropdown in the top-left corner
  2. Download these two files: orders.sql and orders.yml (a sketch of the schema file follows these steps)
  3. Drag and drop the files into the models folder
  4. Preview the transformed data by clicking on Preview in the bottom drawer and verify that the table has 9 columns
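
For reference, orders.yml is a dbt schema file that documents and tests the new model. The sketch below is an assumption about its shape, not the exact file: the description and tests are illustrative, and the column names come from the SQL sketch in the Overview.

  version: 2

  models:
    - name: orders
      description: One row per order, with payment amounts broken down by method.
      columns:
        - name: order_id
          tests:
            - unique
            - not_null
        - name: amount
          tests:
            - not_null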

Commit the uploaded files

  1. Click on the Commit & Push button at the top; you should see two tracked files:
    • orders.sql
    • orders.yml
  2. Add a commit name: adding orders table
  3. Click on Commit
  4. Wait for the pre-configured checks to complete

Build the entire pipeline that generates the final orders table

  1. Open the bottom drawer
  2. Navigate to Build
  3. Enter y42 build -s +orders (the selection syntax is sketched after this list)
  4. Click on Build now
  5. Observe the build job by clicking on the newly created job row
    • In total, 5 jobs will be created:
      1. First, 2 jobs for the two source tables (raw_orders, raw_payments)
      2. Then, 2 jobs for the two staging tables (stg_orders, stg_payments)
      3. Finally, 1 job for the orders table (orders)
    • Click on the DAG tab to view the dependency graph and see how each successful job triggers the next downstream jobs.
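
The -s +orders selector uses dbt-style node selection, which the y42 build command mirrors here: the + prefix selects a node plus all of its upstream dependencies, which is why all five jobs above run. Only the +orders form is taken from this tutorial; the other variants below are assumptions based on dbt's selection syntax.

  y42 build -s +orders    # orders plus everything upstream (the 5 jobs above)
  y42 build -s orders     # only the orders model
  y42 build -s orders+    # orders plus everything downstream of it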

Preview the orders data

After the job status is Ready, view the transformed data in the Data tab of the bottom drawer.

Up next

You've successfully built an end-to-end data pipeline using dbt models, integrating stg_orders and stg_payments into a unified orders model.

Next, we will move the pipeline into production by scheduling automatic updates with the orchestrations feature.