Today’s DBA typically manages tens, hundreds, or even thousands of databases -- RDBMS, NoSQL DBMS, and/or Hadoop clusters -- often from multiple vendors, both on-premises and in the cloud.
While management automation has made substantial strides in enabling DBAs to handle larger workloads, the care and feeding of these databases all too often remains a burdensome task.
Data warehouses, data marts, and data lakes usually require the most attention. Let’s discuss how using Snowflake RDBMS can dramatically reduce the DBA’s workload!
Snowflake’s architecture consists of three distinct layers.
- “Cloud Services” manages the cloud vendor services as well as the metadata and SQL execution-plan generation.
- “Virtual Data Warehouse(s) (VDW)” provides the compute engine(s). VDWs are completely separate and independent of each other, even when operating on the same data. A VDW supporting BI is not affected by another VDW performing ETL/ELT. They are very simple to create and manage, via either a simple SQL statement or the web UI. Note that a VDW is pure compute; don’t confuse this with the traditional use of “data warehouse” to mean both compute and storage.
- “Storage” aka “database” provides a “shared disk” architecture, separate from the VDWs that are using it. Storage is AWS S3 or Azure Blob Storage.
A VDW can operate against any database; it is not allocated to a specific database.
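A minimal sketch of this compute/storage independence, with illustrative warehouse and database names (`bi_wh`, `sales_db`, `finance_db` are assumptions, not from the original):

```sql
-- One warehouse (pure compute) can serve any database (pure storage).
USE WAREHOUSE bi_wh;

USE DATABASE sales_db;
SELECT COUNT(*) FROM public.orders;

USE DATABASE finance_db;   -- same compute, different storage
SELECT COUNT(*) FROM public.invoices;
```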
Let’s look at the common tasks and requirements for creation and management.
The initial creation of a database typically requires the following steps. This is neither an exhaustive list nor applicable to every DBMS:
- Sizing estimation. This is critical to ensuring smooth operation at both normal and peak loads. It can be a daunting task and usually results in over-provisioning.
- Infrastructure procurement and installation. Typically based on the peak load estimate, often leaving unused capacity for significant periods of time.
- Software installation.
- Creation of the database object itself (CREATE DATABASE …). Common steps include specifying the file system, file naming conventions, storage management, system parameters, etc.
- Database schema and object creation. We will assume that a suitable logical data model has been created and basic DDL generated. However, if the platform supports indexing, the DBA will typically be called upon to perform initial review and implementation of bit-mapped or b-tree indexes. Additional indexing is to be expected!
- Further, for many RDBMS, additional indexes and possibly other support features may be required for semi-structured data performance.
- Identification of ETL loads. This further requires:
  - Determining the size and performance impact of each load.
  - Determining the appropriate time of day/day of the week for each ETL load.
  - Scheduling of jobs to perform ETL and ELT.
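For contrast, on a traditional on-premises RDBMS such as Oracle, even the storage layout must be spelled out by hand. A brief, illustrative sketch -- the paths, sizes, and object names below are hypothetical:

```sql
-- Illustrative Oracle-style storage setup; file paths and sizes are hypothetical.
CREATE TABLESPACE dw_data
  DATAFILE '/u01/oradata/dw/dw_data01.dbf' SIZE 50G
  AUTOEXTEND ON NEXT 1G MAXSIZE 200G;

-- Indexes must be chosen, created, and maintained by the DBA.
CREATE INDEX idx_sales_date ON sales_fact (sale_date);
```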
With Snowflake, setup reduces to the following:
- Create an account. Select the appropriate account type; no other steps are necessary.
- Issue CREATE DATABASE SQL (or use the web UI). The primary parameter is “Time Travel”, i.e., backup data retention in days (1–90) for recovery from user error, disaster recovery, and “point in time” querying. No file system, storage management, etc. is required. Snowflake databases are inherently elastic, with automatic purging of obsolete data.
- Note that all data is automatically replicated across availability zones. Coupled with “Time Travel,” this typically eliminates the need for other forms of backup. However, creating a manual backup is also a single, nearly instantaneous operation of creating a new database by “cloning”.
- Create database objects, i.e., tables, sequences, and views. No indexes. No additional support objects are required for semi-structured data performance.
- Create initial VDW(s). Again, simple SQL with simple, easily changeable parameters:
- Size – the number of servers per cluster, ranging from 1 to 128.
- Single or multiple clusters, possibly with auto-scaling.
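The Snowflake steps above reduce to a handful of statements. A sketch with illustrative names and parameter values (the database and warehouse names are assumptions):

```sql
-- 90 days of Time Travel; no file system or storage parameters needed.
CREATE DATABASE analytics_db DATA_RETENTION_TIME_IN_DAYS = 90;

-- A "manual backup" is just a near-instantaneous, zero-copy clone.
CREATE DATABASE analytics_db_backup CLONE analytics_db;

-- An initial VDW: pure compute, independently sized and suspendable.
CREATE WAREHOUSE etl_wh WITH
  WAREHOUSE_SIZE = 'MEDIUM'
  AUTO_SUSPEND   = 300      -- seconds idle before auto-suspending
  AUTO_RESUME    = TRUE;
```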
Snowflake’s separation of compute and storage means that ETL/ELT does not run on the same compute as operational DML queries. ETL/ELT processing is completely isolated and self-contained, leaving time for the DBA to work on legacy problems!
Tuning a legacy database varies tremendously by platform. Common tuning work includes:
- Partitioning tables. All too often, this is required after the data warehouse has been operating for some time. This may involve downtime, online repartitioning with attendant compute and disk load, creating new tables, etc.
- Gathering statistics. This may be required after each data load. It may be automated by a job scheduler, but often must run off-hours or on weekends. Planning and analysis are often required to determine which tables need regular statistics gathered and which do not.
- Adjusting System Parameters. This often requires restarting the database.
- Indexing. “Row store” databases typically require indexes for performance. Indexes are also typically needed for the performance of semi-structured data such as JSON.
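As an illustration of these ongoing chores, here is what routine statistics gathering and indexing look like on a legacy row-store platform such as PostgreSQL (table, column, and index names are hypothetical):

```sql
-- Statistics must be refreshed after large loads, typically off-hours.
ANALYZE sales_fact;

-- Row-store performance usually depends on hand-picked indexes.
CREATE INDEX idx_sales_customer ON sales_fact (customer_id);

-- Semi-structured (JSONB) data often needs its own index as well.
CREATE INDEX idx_events_payload ON events USING GIN (payload);
```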
None of the performance tuning discussed previously is required for Snowflake.
- Statistics are collected when data is loaded and become part of the meta-data. They are always current.
- All tables in Snowflake are automatically “micro-partitioned”.
- Snowflake is a column store, so no indexing is needed.
- There are no “system parameters” to tune. In rare cases, a cluster key may be needed to improve performance on multi-TB (compressed) tables. Snowflake performs clustering when loading data; adding a cluster key to an existing large table triggers reclustering as a background process.
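In the rare multi-terabyte case where a cluster key does help, adding one is a single statement (the table and column names are illustrative):

```sql
-- Reclustering then proceeds automatically as a background process.
ALTER TABLE big_events CLUSTER BY (event_date, customer_id);
```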
Snowflake’s approach to performance tuning is based on the elasticity of its VDW.
For large, complex queries, the solution is to scale up the VDW to a larger size, i.e. more servers in a cluster. Again, this is a simple SQL or web UI action. A VDW can be scaled up (or down) dynamically; this won’t affect running queries, only queries submitted after the size change.
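Resizing is a one-line operation: running queries finish at the old size, and only queries submitted afterward use the new size. With an assumed warehouse name:

```sql
ALTER WAREHOUSE bi_wh SET WAREHOUSE_SIZE = 'XLARGE';
-- ...and back down when the heavy workload is finished:
ALTER WAREHOUSE bi_wh SET WAREHOUSE_SIZE = 'MEDIUM';
```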
To support high-concurrency workloads, a multi-cluster VDW can be created with a simple SQL or web UI action. This type of VDW may also be “auto-scaled”, dynamically activating and suspending clusters as concurrency rises and falls.
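A multi-cluster, auto-scaling VDW is equally terse. A sketch, with an assumed name and cluster bounds (multi-cluster warehouses require the appropriate Snowflake edition):

```sql
CREATE WAREHOUSE dashboard_wh WITH
  WAREHOUSE_SIZE    = 'LARGE'
  MIN_CLUSTER_COUNT = 1      -- clusters running at low concurrency
  MAX_CLUSTER_COUNT = 4      -- clusters activated as concurrency rises
  SCALING_POLICY    = 'STANDARD';
```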
Another key feature for Snowflake performance is the ability to create multiple VDWs, each supporting a different type of workload or business area. The VDWs are completely independent of each other.
Finally, Snowflake’s “micro-partition” architecture eliminates the need for traditional performance tuning. This approach takes partition pruning to a new level, using an incredibly rich metadata store to enable both horizontal pruning of micro-partitions and vertical access to only the relevant columns. Once again, this leaves the DBA time to address legacy problems!
Director of Analytics
Jeff Jacobs is a senior data technology professional in the Data and Analytics Practice at Trianz. The Data and Analytics Practice works with enterprises to achieve significant competitive advantage via modern cloud technologies, with a particular focus on the Snowflake Computing ecosystem.