What Is a Data Lakehouse?

The Best of Both Worlds

A data lakehouse is a new, open data management architecture designed to combine the analytic benefits of a data warehouse with the flexibility of a data lake. By pairing a data lake's support for machine learning on raw data with a data warehouse's support for BI, the lakehouse approach can address data staleness, reliability, scalability, data lock-in, and limited use-case support.

While the lakehouse approach is a new concept, AWS and other managed cloud service providers have made it clear that the ability to derive intelligence from unstructured data, without having to manage multiple systems, addresses the current limitations in data management.

In this article, we discuss data warehouses, data lakes, and why combining the two into a single unified platform can enable faster and more powerful analytics.


Data Warehouse – Pros and Cons


Data warehouses were created to consolidate massive amounts of fragmented data that previously resided in silos. Data arrives through an extract, transform, and load (ETL) pipeline and passes through three key layers: staging, integration, and access. The staging layer stores the raw data taken from multiple data sources. The integration layer translates and merges that data, transferring it into an operational data store database.

The architecture showing how a data warehouse operates.

This data is then moved to the data warehouse database, where it is organized into hierarchical groups known as dimensions. Finally, the access layer lets users retrieve the translated and organized data, which now serves as a single source of truth (SSOT). As the organization's SSOT, the data can be analyzed quickly and accurately to obtain actionable business insights.
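To make the staging, integration, and access layers concrete, here is a minimal Python sketch that uses SQLite as a stand-in for the warehouse database; the table names (staging_sales, dim_product, fact_sales) are hypothetical and only illustrate the flow described above.

import sqlite3

# SQLite plays the role of the warehouse database in this sketch.
conn = sqlite3.connect(":memory:")

# Staging layer: land raw records exactly as extracted from a source system.
conn.execute("CREATE TABLE staging_sales (sold_at TEXT, product TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO staging_sales VALUES (?, ?, ?)",
    [("2024-01-05", "Widget", 19.99), ("2024-01-06", "Gadget", 24.50)],
)

# Integration layer: translate the raw rows into a dimension and a fact table.
conn.execute("CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT UNIQUE)")
conn.execute("INSERT INTO dim_product (name) SELECT DISTINCT product FROM staging_sales")
conn.execute("""
    CREATE TABLE fact_sales AS
    SELECT d.product_id, s.sold_at, s.amount
    FROM staging_sales s JOIN dim_product d ON d.name = s.product
""")

# Access layer: analysts query the organized data as a single source of truth.
for name, total in conn.execute("""
    SELECT d.name, SUM(f.amount) FROM fact_sales f
    JOIN dim_product d USING (product_id) GROUP BY d.name
"""):
    print(name, total)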

However, many data warehouses are beginning to show their age: managing and storing exabytes of data has become increasingly complex, making it nearly impossible to derive actionable insights from diverse data sets. Data lakes have therefore emerged as a practical way to scale big data without the complexity of a data warehouse.

Data Warehouse Pros:


  • Business Intelligence

  • Improves data quality

  • Historical insights

  • High integration with OLAP tools

  • Improves business decision making

Data Warehouse Cons:


  • Expensive to build and maintain

  • Compatibility issues

  • Requires data cleaning

  • No support for data science & ML


Data Lakes – Pros and Cons


A data lake is a storage repository that holds a vast amount of raw, free-flowing data in its native format until it is ready to be analyzed. The difference between a data lake and a data warehouse is that while a hierarchical data warehouse stores data in files or folders, a data lake uses a flat architecture to store data.

What makes data lakes unique is that each piece of data in the lake is assigned an identifier and tagged with a set of extended metadata tags. When a question arises, the lake can be queried for just the relevant data, and that smaller set can then be analyzed rather than processing all the data in the lake.
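As a rough illustration of that tagging-and-querying pattern, the Python sketch below uses boto3 against a hypothetical S3 bucket (my-data-lake); the object keys and tag names are made up for the example.

import boto3

s3 = boto3.client("s3")
BUCKET = "my-data-lake"  # hypothetical data lake bucket

# Tag a newly landed object with descriptive metadata so it can be found later.
s3.put_object_tagging(
    Bucket=BUCKET,
    Key="raw/clickstream/2024/01/05/events.json",
    Tagging={"TagSet": [{"Key": "domain", "Value": "web"},
                        {"Key": "sensitivity", "Value": "low"}]},
)

# When a question arises, pull only the objects whose tags are relevant
# instead of processing every piece of data in the lake.
relevant = []
for page in s3.get_paginator("list_objects_v2").paginate(Bucket=BUCKET, Prefix="raw/"):
    for obj in page.get("Contents", []):
        tags = s3.get_object_tagging(Bucket=BUCKET, Key=obj["Key"])["TagSet"]
        if {"Key": "domain", "Value": "web"} in tags:
            relevant.append(obj["Key"])

print(relevant)  # the smaller set of data to analyze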

However, without proper forethought and setup, data lakes can lack governance and the tools and skills to handle large volumes of disparate data — and as a result, disintegrate into massive repositories of data that are inaccessible to end-users.

A data lake and its applications.

Data Lake Pros:


  • Flexibility

  • Unlimited scalability

  • Diverse data sources are stored in raw format

  • Support of advanced algorithms

  • Excellent for integration with ML, AI, and IoT technologies

  • Lower storage costs

Data Lake Cons:


  • Chances of data integrity loss

  • May take months to implement

  • Lack of support for ACID transactions

  • Poor organization will lead to a “data swamp”


How Do Data Lakehouses Combine the Best of Both Worlds?


A data lakehouse is not about merging a data lake and a data warehouse into a single product, but rather about connecting the data lake to the data warehouse and other purpose-built services to form a consolidated system with a more holistic architecture.

This architecture places a data lake at the center, where users land all of their structured and unstructured data sets. Around the lake sit various purpose-built data services connected through SQL query tools. This enables applications such as the following (a brief query sketch appears after the list):

  • Log analytics

  • Data warehousing

  • Machine learning

  • Non-relational databases

  • Relational databases

  • Big Data processing
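As the sketch promised above, a SQL query tool such as Amazon Athena can sit directly on top of the central lake. The Python snippet below submits a query against a hypothetical Glue database (lake_db) and table (weblogs) cataloged over files in S3; the bucket and result locations are illustrative.

import boto3

athena = boto3.client("athena")

# Query files already sitting in the S3 data lake through a hypothetical catalog table.
response = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) AS hits FROM weblogs GROUP BY status",
    QueryExecutionContext={"Database": "lake_db"},
    ResultConfiguration={"OutputLocation": "s3://my-data-lake/athena-results/"},
)
print("Submitted query:", response["QueryExecutionId"])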

AWS lakehouse architecture: a data lake sitting in the middle of its purpose-built services.

In the AWS lakehouse architecture, Amazon S3 sits at the center as the data lake, AWS Glue enables seamless data movement between services, and AWS Lake Formation allows the data to be centralized, curated, and secured as a data lakehouse. AWS provides connectors for purpose-built services such as the following (a brief setup sketch appears after the list):

  • Aurora (relational database service)

  • DynamoDB (NoSQL service)

  • SageMaker (ML service)

  • Redshift (data warehousing)

  • Elasticsearch Service (log analytics)

  • EMR (Big Data service)
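The setup sketch referenced above shows one way this wiring might look with boto3: AWS Glue crawls and catalogs files already in the S3 lake, and Lake Formation grants governed access to the resulting table. The role ARNs, bucket paths, and names are hypothetical.

import boto3

glue = boto3.client("glue")
lakeformation = boto3.client("lakeformation")

# Catalog raw files already sitting in the S3 data lake.
glue.create_crawler(
    Name="lake-crawler",
    Role="arn:aws:iam::123456789012:role/GlueCrawlerRole",  # hypothetical IAM role
    DatabaseName="lake_db",
    Targets={"S3Targets": [{"Path": "s3://my-data-lake/raw/"}]},
)
glue.start_crawler(Name="lake-crawler")

# Centralize governance: grant an analyst role SELECT on the cataloged table.
lakeformation.grant_permissions(
    Principal={"DataLakePrincipalIdentifier": "arn:aws:iam::123456789012:role/AnalystRole"},
    Resource={"Table": {"DatabaseName": "lake_db", "Name": "weblogs"}},
    Permissions=["SELECT"],
)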

Data Lakehouse Pros:


  • Less time and money spent on administration

  • Simplified schema

  • Better compliance

  • Performant SQL querying

  • Reduced data redundancy

  • Direct access to data for analysis tools

  • Cost-effective data storage

Data Lakehouse Cons:


  • Technology still needs to mature before it can replace highly optimized DBMSs

  • Still in early stages of adoption


How Athena Rapid Analytics Can Help You Move Into a Lakehouse


To extend beyond the AWS ecosystem, Trianz has recently partnered with the AWS product team to develop Athena Rapid Analytics. With our growing library of Athena Federated Query (AFQ) extensions, users can scan data in S3 and invoke Lambda-based connectors to read data from on-premises Teradata, Cloudera, Hortonworks, Azure, Snowflake, Google BigQuery, SAP HANA, and many other data sources, simplifying BI and enabling cross-data-source analytics.

The out-of-the-box connectors require no infrastructure to manage, which makes implementation straightforward and federated queries fast to run. With no training necessary and no data models to prepare, Athena users can get started with familiar SQL constructs to combine data across multiple sources for quick analysis.
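To give a sense of what such a federated query might look like, the sketch below assumes a Lambda-based connector has already been deployed and registered as an Athena data catalog named teradata_catalog; the schema and table names are illustrative, not part of the product.

import time
import boto3

athena = boto3.client("athena")

# Join a table in the S3 lake with a table reached through the hypothetical
# federated catalog "teradata_catalog", using plain SQL.
sql = """
SELECT o.customer_id, SUM(o.amount) AS lifetime_value
FROM lake_db.orders AS o
JOIN teradata_catalog.crm.customers AS c ON c.customer_id = o.customer_id
GROUP BY o.customer_id
"""

query = athena.start_query_execution(
    QueryString=sql,
    ResultConfiguration={"OutputLocation": "s3://my-data-lake/athena-results/"},
)

# Poll until the query finishes, then fetch the combined result set.
while True:
    status = athena.get_query_execution(QueryExecutionId=query["QueryExecutionId"])
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    results = athena.get_query_results(QueryExecutionId=query["QueryExecutionId"])
    print(results["ResultSet"]["Rows"][:5])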

Our lakehouse solution provides the freedom of using preexisting vendors or picking the best fit — all of which can be connected seamlessly with Trianz AFQ connectors.

Trianz AFQ connectors unifying multiple data sources

Experience the Trianz Difference

Trianz enables digital transformations through effective strategies and excellence in execution. Collaborating with business and technology leaders, we help formulate and execute operational strategies to achieve intended business outcomes by bringing the best of consulting, technology experiences and execution models.

Powered by knowledge, research, and perspectives, we enable clients to transform their business ecosystems and achieve superior performance by leveraging infrastructure, cloud, analytics, digital and security paradigms. Reach out to get in touch or learn more.

Try It Out and See

If you would like a demonstration of the speed of deployment, accuracy, and performance of our lakehouse solution, we offer a free 7-day proof of value that is jointly executed by Trianz and AWS.
