This article is the first in a planned series on what it’s like to have Snowflake as your data warehouse/data lake. In teaching “Zero 2 Snowflake” workshops, I have found that people have difficulty wrapping their heads around what it means to work with a groundbreaking database that is truly designed for the cloud.
It’s easy to get lost in the technical details of the architecture and miss the proverbial forest for the trees. This post describes how Snowflake’s architecture spells an end to running ETL at night and on weekends, and how it drastically reduces the time ETL processes take to run.
The Snowflake features that make this possible are:
Immutable data storage
Pointers to the data
On-demand compute clusters that you pay for only while they run
Separation of storage from compute
Near-linear performance scaling as you increase cluster size
Why have companies been running ETL processes at night and on the weekend all these years? Because the ETL process ran on the same computers that served the reports, and because in traditional databases writes and reads contend with each other. Nobody wants the ETL process to bog down the system.
Snowflake stores its data immutably: there are only writes. An “update” writes new data and then moves pointers to point at it. This also enables another great Snowflake feature, time travel, which won’t be addressed in this blog. From an ETL perspective, the lack of contention between writes and reads allows ETL to be run at any time.
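To make the idea concrete, here is a tiny, purely illustrative Python sketch of the copy-on-write pattern: immutable versions plus a pointer that gets swapped. It is an analogy only, not Snowflake’s actual storage implementation.

```python
# Illustrative only: a toy copy-on-write "table" that mimics the idea of
# immutable storage plus pointer swaps. This is NOT Snowflake's internals,
# just an analogy for why writes never block reads.

class ImmutableTable:
    def __init__(self, rows):
        self._versions = [tuple(rows)]   # every version is frozen, never edited
        self._current = 0                # "pointer" to the live version

    def read(self):
        # Readers follow the current pointer; an in-flight write never
        # touches the version they are reading.
        return self._versions[self._current]

    def update(self, new_rows):
        # An "update" writes brand-new data, then swings the pointer.
        self._versions.append(tuple(new_rows))
        self._current = len(self._versions) - 1

    def time_travel(self, version):
        # Old versions are still there, which is what makes time travel possible.
        return self._versions[version]

t = ImmutableTable(["row-a", "row-b"])
t.update(["row-a", "row-b", "row-c"])
print(t.read())           # latest data
print(t.time_travel(0))   # the table as it looked before the update
```

Because readers only ever follow the pointer to a frozen version, a load running at 2 p.m. never blocks the reports running at 2 p.m.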
The ability to spin up or resume a compute cluster (Snowflake calls them virtual warehouses) at any time, combined with the fact that compute scales independently of storage, means that “regular business use” of the data is handled by different compute clusters from the ones running ETL. When it’s time to run ETL, all you have to do is spin up a cluster just for the ETL. This takes only seconds, and Snowflake doesn’t charge for a cluster while it sits suspended.
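As a rough sketch of what that looks like in practice, the snippet below uses the snowflake-connector-python package to create (or resume) a warehouse dedicated to ETL, run the job, and suspend the warehouse again. The account credentials, the etl_wh name, and the LARGE sizing are placeholders, not recommendations.

```python
# Sketch: give the ETL job its own warehouse, separate from the ones
# reporting users are on, and pay for it only while the job runs.
import snowflake.connector

conn = snowflake.connector.connect(
    account="<your_account>",   # placeholder credentials
    user="<etl_user>",
    password="<password>",
)
cur = conn.cursor()

# Created suspended; AUTO_SUSPEND stops billing 60 seconds after it goes idle.
cur.execute("""
    CREATE WAREHOUSE IF NOT EXISTS etl_wh
      WITH WAREHOUSE_SIZE = 'LARGE'
           AUTO_SUSPEND = 60
           AUTO_RESUME = TRUE
           INITIALLY_SUSPENDED = TRUE
""")
cur.execute("USE WAREHOUSE etl_wh")   # AUTO_RESUME brings it up on the first query

# ... run the ETL statements here ...

cur.execute("ALTER WAREHOUSE etl_wh SUSPEND")  # stop accruing credits right away
cur.close()
conn.close()
```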
If an ETL process takes ten hours on a single-node “cluster,” it should take about five hours on a 2-node cluster, two-and-a-half hours on a 4-node cluster, and so on. The hourly cost of the cluster doubles each time as well, but you only pay for the cluster while you are using it. So running a ten-hour process on a single-node “cluster” costs about the same as running a roughly five-minute process on a 128-node cluster. During development, you can test to find the “point of no further advantage,” the size at which the cluster is optimal. Either way, you can scale up without paying 24x365 for that capacity.
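The arithmetic is easy to play with. The sketch below assumes perfectly linear scaling and a flat per-node-hour price, which is an idealization; real workloads eventually stop scaling, which is exactly the “point of no further advantage” you test for.

```python
# Sketch: runtime vs. cost under the idealized assumption of perfectly
# linear scaling and a flat per-node-hour price.
def runtime_and_cost(base_hours_on_one_node, nodes, price_per_node_hour=1.0):
    hours = base_hours_on_one_node / nodes
    cost = hours * nodes * price_per_node_hour   # node-hours stay constant
    return hours, cost

for nodes in (1, 2, 4, 128):
    hours, cost = runtime_and_cost(10, nodes)
    print(f"{nodes:>3} nodes: {hours * 60:7.1f} minutes, cost = {cost:.1f}")

# 1 node   -> 600.0 minutes at a cost of 10.0
# 128 nodes ->  ~4.7 minutes at the same cost of 10.0: same spend, done ~128x sooner
```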
Who in ETL hasn’t come to work in the morning only to find that a six-hour ETL process failed during the night? Fixing the bug and then rerunning the load without impacting the business might be impossible; there isn’t time to rerun before the users come in expecting to run their reports. With Snowflake, you can shorten the rerun by running it on a larger warehouse, and do so without impacting the users running reports.
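Continuing the earlier sketch (same connection and cursor), a morning rerun might look like this; the size bump and the run_nightly_load() procedure are hypothetical.

```python
# Sketch: rerun the fixed job on a bigger warehouse so it finishes before
# the business day starts, while report users stay on their own warehouses.
# (Continues from the connection/cursor opened in the earlier sketch.)
cur.execute("ALTER WAREHOUSE etl_wh SET WAREHOUSE_SIZE = 'XXLARGE'")
cur.execute("CALL run_nightly_load()")   # hypothetical stored procedure for the rerun
cur.execute("ALTER WAREHOUSE etl_wh SET WAREHOUSE_SIZE = 'LARGE'")  # back to normal
```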
This radical ability to scale performance while paying only for what you use has made the Snowflake data warehouse “the” place to do as much of the processing as possible. In essence, ETL has become ELT: Extract, Load, and then Transform.
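A minimal sketch of the ELT pattern, again reusing the connection from above: land the raw files first, then do the transformation inside Snowflake. The stage, table, and column names are made up for illustration.

```python
# Sketch of ELT: load raw data as-is, then transform it in Snowflake.
# (Continues from the connection/cursor opened in the earlier sketch.)
cur.execute("USE WAREHOUSE etl_wh")

# Landing table for the raw extract (illustrative schema).
cur.execute("""
    CREATE TABLE IF NOT EXISTS raw_orders
      (order_id STRING, order_date STRING, customer_name STRING, amount STRING)
""")

# Load: copy raw files from a named stage straight into the landing table.
cur.execute("COPY INTO raw_orders FROM @my_stage/orders/ FILE_FORMAT = (TYPE = CSV)")

# Transform: do the heavy lifting inside Snowflake, after the load.
cur.execute("""
    CREATE OR REPLACE TABLE orders_clean AS
    SELECT order_id,
           TRY_TO_DATE(order_date)           AS order_date,
           UPPER(TRIM(customer_name))        AS customer_name,
           TRY_TO_NUMBER(amount, 12, 2)      AS amount
    FROM raw_orders
    WHERE order_id IS NOT NULL
""")
```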
You can move your data into Snowflake, then transform it with the massive scale and cost advantages of the Snowflake cloud database. But that’s a story for another blog. For now, living with Snowflake means sparing the ETL developer from working so many nights and weekends.