Living with Snowflake: No More Late Nights and Weekend ETLs

This article is the first in a planned series on what it’s like to have Snowflake as your data warehouse/data lake. In teaching “Zero 2 Snowflake” workshops, I have found that people have difficulty wrapping their heads around what it means to work with a database that is truly designed for the cloud.

It’s easy to get involved in the technical details of the architecture and miss the proverbial forest for the trees. This post describes how Snowflake’s architecture spells an end to running ETL at night and on weekends, and how it drastically reduces the time ETL processes take to run.

The Snowflake features that make this possible are:

  • Immutable data storage

  • Pointers to the data

  • On-demand compute clusters that you pay for only while they run

  • Separation of storage from compute

  • Performance that scales near-linearly with cluster size

Why have companies been running ETL processes at night and on weekends for all these years? Because the ETL process ran on the same computers that served the reports, and in traditional databases, writes and reads contend with each other. Nobody wants the ETL process to bog down the system.



Snowflake stores data immutably: there are only writes. An update writes new data and repoints the table’s metadata to it; nothing is modified in place. (This also enables another great Snowflake feature, time travel, which won’t be addressed in this blog.) From an ETL perspective, the lack of contention between writes and reads means ETL can run at any time.
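To make that concrete, here is a minimal sketch using the snowflake-connector-python package. It queries a table as it existed a day ago, which is only possible because the old data was never overwritten. The connection parameters and the ORDERS table are placeholders, not anything from a real account.

    import snowflake.connector

    # Placeholder credentials -- substitute your own account details.
    conn = snowflake.connector.connect(
        user="YOUR_USER",
        password="YOUR_PASSWORD",
        account="YOUR_ACCOUNT",
        warehouse="REPORTING_WH",
    )
    cur = conn.cursor()

    # Immutable storage means yesterday's version of the table still exists;
    # ORDERS is a hypothetical table. AT(OFFSET => ...) counts seconds in the past.
    cur.execute("SELECT COUNT(*) FROM ORDERS AT(OFFSET => -60*60*24)")
    print(cur.fetchone()[0])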

The ability to spin up or resume a compute cluster (Snowflake calls them virtual warehouses) at any time, combined with the fact that compute scales independently of storage, means that “regular business use” of the data can be handled by different compute clusters than ETL. When it’s time to run ETL, all you have to do is spin up a cluster just for the ETL. This takes only seconds, and Snowflake doesn’t charge for a compute cluster while it is suspended.
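Here is a hedged sketch of what that looks like in practice, again via snowflake-connector-python. ETL_WH and the size and suspend settings are illustrative choices, not recommendations.

    import snowflake.connector

    cur = snowflake.connector.connect(
        user="YOUR_USER", password="YOUR_PASSWORD", account="YOUR_ACCOUNT"
    ).cursor()

    # A warehouse dedicated to ETL, separate from the ones reports use.
    # AUTO_SUSPEND (in seconds) and AUTO_RESUME mean you pay only while it runs.
    cur.execute("""
        CREATE WAREHOUSE IF NOT EXISTS ETL_WH
          WITH WAREHOUSE_SIZE = 'LARGE'
               AUTO_SUSPEND = 60
               AUTO_RESUME = TRUE
               INITIALLY_SUSPENDED = TRUE
    """)
    cur.execute("USE WAREHOUSE ETL_WH")
    # ... run the ETL statements here; reporting warehouses are untouched.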



If an ETL process takes ten hours on a one-node “cluster,” it should take about five hours on a 2-node cluster, two and a half hours on a 4-node cluster, and so on. The hourly cost of the cluster doubles each time as well, but you only pay while the cluster is running. So running a ten-hour process on a one-node “cluster” costs roughly the same as running a five-minute process on a 128-node cluster. During development, you can test to find the point of no further advantage, the size beyond which a bigger cluster no longer helps. Either way, you can scale up without paying 24x365 for that capacity.
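The arithmetic is worth spelling out. Snowflake bills credits for each hour a warehouse runs, with the rate roughly doubling at each size step; the rates below are illustrative assumptions, not published pricing.

    # Cost = hours the job runs x credits consumed per hour at that size.
    def job_cost(hours: float, credits_per_hour: float) -> float:
        return hours * credits_per_hour

    # Ten hours on a 1-node warehouse vs. ~5 minutes on a 128-node warehouse.
    print(job_cost(10.0, 1.0))          # 10.0 credits
    print(job_cost(10.0 / 128, 128.0))  # 10.0 credits -- same cost, ~4.7 minutes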

Who in ETL hasn’t come to work in the morning only to find that a six-hour ETL process failed during the night? Fixing the bug and then rerunning the load without impacting the business might be impossible: there isn’t time to rerun before the users come in, expecting to run their reports. With Snowflake, you can shorten the rerun by resizing the warehouse, and rerun the process without impacting the users running reports, as the sketch below shows.
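Resizing is a one-statement operation, so a morning recovery can look like this sketch; ETL_WH and the RERUN_NIGHTLY_LOAD procedure are hypothetical names, not a real pipeline.

    import snowflake.connector

    cur = snowflake.connector.connect(
        user="YOUR_USER", password="YOUR_PASSWORD", account="YOUR_ACCOUNT"
    ).cursor()

    # Temporarily upsize so the rerun finishes before business hours...
    cur.execute("ALTER WAREHOUSE ETL_WH SET WAREHOUSE_SIZE = 'XXLARGE'")
    cur.execute("CALL RERUN_NIGHTLY_LOAD()")  # hypothetical repair/rerun procedure
    # ...then scale back so you stop paying the higher hourly rate.
    cur.execute("ALTER WAREHOUSE ETL_WH SET WAREHOUSE_SIZE = 'LARGE'")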

This radical ability to scale performance and pay only for what you use has made the Snowflake data warehouse the place to do all the processing that can be done there. In essence, ETL has become ELT: Extract, Load, and then Transform.
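As a small taste of ELT, the sketch below lands raw JSON in Snowflake first and then transforms it with plain SQL inside the warehouse; the stage, tables, and column names are all hypothetical.

    import snowflake.connector

    cur = snowflake.connector.connect(
        user="YOUR_USER", password="YOUR_PASSWORD", account="YOUR_ACCOUNT",
        warehouse="ETL_WH",
    ).cursor()

    # Load first: copy raw JSON files from a stage into a VARIANT column (v).
    cur.execute("COPY INTO RAW_EVENTS FROM @RAW_STAGE FILE_FORMAT = (TYPE = 'JSON')")

    # Transform second, inside Snowflake, at warehouse scale.
    cur.execute("""
        INSERT INTO CLEAN_EVENTS
        SELECT v:id::NUMBER, v:ts::TIMESTAMP_NTZ, v:payload::STRING
        FROM RAW_EVENTS
    """)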

You can move your data into Snowflake, then transform it with the massive scale and cost advantages of the Snowflake cloud database. But that’s a story for another blog. For now, living with Snowflake means sparing the ETL developer from working so many nights and weekends.
