A data lakehouse is a new, open data management architecture designed to combine the analytic benefits of a data warehouse with those of a data lake. By pairing the flexible, machine-learning-friendly storage of a data lake with a data warehouse's strength in BI insights, the lakehouse approach can address data staleness, reliability, scalability, data lock-in, and limited use-case support.
While the lakehouse approach is a new concept, AWS and other managed cloud service providers have made it clear that the ability to derive intelligence from unstructured data — without having to manage multiple systems — will address the current limitations in data management.
In the following article, we take a closer look at data lakes and data warehouses, and at why combining the two into a single unified platform can enable faster and more powerful analytics.
Data warehouses were created to store massive amounts of fragmented data that resided in silos. Processing data through an extract, transform, and load (ETL) pipeline, a data warehouse employs staging, integration, and access layers as its key functions. The staging layer stores the raw data taken from multiple data sources. The integration layer merges the data by translating it and transferring it to an operational data store database.
This data is then moved to the data warehouse database, where it is organized into hierarchical groups known as dimensions. Finally, the access layer allows users to retrieve the translated and organized data, which becomes a single source of truth (SSOT). As an organization's SSOT, the data can then be analyzed in a timely and accurate manner to obtain actionable business insights.
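To make the staging → integration → access flow concrete, here is a minimal sketch of an ETL pipeline in Python, using the standard library's sqlite3 as a stand-in warehouse. All table names, fields, and sample records are illustrative assumptions, not part of any specific product.

```python
import sqlite3

# Staging layer: raw, inconsistent records landing from source systems
staging = [
    {"order_id": 1, "customer": " Alice ", "amount": "19.99", "region": "west"},
    {"order_id": 2, "customer": "Bob", "amount": "5.00", "region": "EAST"},
]

# Integration layer: clean and standardize the records (the "T" in ETL)
integrated = [
    {"order_id": r["order_id"],
     "customer": r["customer"].strip().title(),
     "amount": float(r["amount"]),
     "region": r["region"].strip().lower()}
    for r in staging
]

# Warehouse database: load into a region dimension and an orders fact table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dim_region (region_id INTEGER PRIMARY KEY, name TEXT UNIQUE)")
conn.execute("CREATE TABLE fact_orders (order_id INTEGER, customer TEXT, amount REAL, region_id INTEGER)")
for row in integrated:
    conn.execute("INSERT OR IGNORE INTO dim_region (name) VALUES (?)", (row["region"],))
    region_id = conn.execute("SELECT region_id FROM dim_region WHERE name = ?",
                             (row["region"],)).fetchone()[0]
    conn.execute("INSERT INTO fact_orders VALUES (?, ?, ?, ?)",
                 (row["order_id"], row["customer"], row["amount"], region_id))

# Access layer: query the warehouse as a single source of truth
total_by_region = dict(conn.execute(
    "SELECT d.name, SUM(f.amount) FROM fact_orders f "
    "JOIN dim_region d USING (region_id) GROUP BY d.name"))
print(total_by_region)  # {'east': 5.0, 'west': 19.99} (key order may vary)
```

The dimension/fact split mirrors how a warehouse organizes translated data into hierarchical groups before exposing it through the access layer.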
However, many data warehouses are beginning to show their age as the need to manage and store several exabytes of data has become increasingly complex, making it nearly impossible to derive actionable insights from diverse data sets. Therefore, data lakes have emerged as a practical solution to scale big data without the complexity of a data warehouse.
Data warehouse pros:
- Improves data quality
- High integration with OLAP tools
- Improves business decision making

Data warehouse cons:
- Expensive to build and maintain
- Requires data cleaning
- No support for data science and ML
A data lake is a storage repository that holds a vast amount of raw, free-flowing data in its native format until ready to be analyzed. The difference between a data lake and a data warehouse is that while a hierarchical data warehouse stores data in files or folders, a data lake uses a flat architecture to store data.
What makes data lakes unique is that each piece of data in a lake is assigned an identifier and tagged with a set of extended metadata tags. When a question arises, the data lake can be queried for just the relevant data, and that smaller subset can then be analyzed rather than having to process all the data in the lake.
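The identifier-plus-tags idea can be sketched in a few lines. This is a toy in-memory model, not a real object store; the object IDs, tag names, and the `query_lake` helper are all illustrative assumptions.

```python
# A toy data lake: each object keeps its raw bytes plus an identifier
# and a set of metadata tags
lake = [
    {"id": "obj-001", "tags": {"source": "crm", "format": "csv", "year": 2023}, "data": b"..."},
    {"id": "obj-002", "tags": {"source": "web", "format": "json", "year": 2024}, "data": b"..."},
    {"id": "obj-003", "tags": {"source": "crm", "format": "json", "year": 2024}, "data": b"..."},
]

def query_lake(lake, **wanted):
    """Return only the objects whose tags match every requested value,
    so analysis can run on a small relevant subset instead of the whole lake."""
    return [obj for obj in lake
            if all(obj["tags"].get(k) == v for k, v in wanted.items())]

subset = query_lake(lake, source="crm", year=2024)
print([obj["id"] for obj in subset])  # ['obj-003']
```

In a real lake the same role is played by object metadata and catalog services; the point is that tags let a query narrow the lake to a small candidate set before any heavy processing.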
However, without proper forethought and setup, data lakes can lack governance and the tools and skills to handle large volumes of disparate data — and as a result, disintegrate into massive repositories of data that are inaccessible to end-users.
Data lake pros:
- Diverse data sources are stored in raw format
- Support for advanced algorithms
- Excellent for integration with ML, AI, and IoT technologies
- Lower storage costs

Data lake cons:
- Chances of data integrity loss
- May take months to implement
- Lack of support for ACID transactions
- Poor organization will lead to a "data swamp"
A data lakehouse is not simply a merger of a data lake and a data warehouse; rather, it connects the data lake to the data warehouse and other purpose-built services to form a consolidated system with a more holistic architecture.
This architecture places a data lake at the center, where users input all their structured and unstructured data sets. Around the data lake sit various purpose-built data services connected by SQL query tools. This enables applications for:
Big Data processing
In the AWS lakehouse architecture, Amazon S3 sits at the center as the data lake, AWS Glue enables seamless data movement between services, and AWS Lake Formation centralizes, curates, and secures the data as a data lakehouse. Amazon provides connectors for AWS purpose-built services such as:
- Aurora (relational database service)
- DynamoDB (NoSQL service)
- SageMaker (ML service)
- Redshift (data warehousing)
- Elasticsearch Service (log analytics)
- EMR (Big Data service)
Data lakehouse pros:
- Less time and money spent on administration
- Performant SQL querying
- Reduced data redundancy
- Direct access to data for analysis tools
- Cost-effective data storage

Data lakehouse cons:
- Technology needs to advance before it can replace highly optimized DBMSs
- Still in early stages of adoption
To extend beyond the AWS ecosystem, Trianz has recently partnered with the AWS product team to develop Athena Rapid Analytics. With our growing library of AFQ extensions, users can scan data in S3 and run Lambda-based connectors to read data from on-prem Teradata, Cloudera, Hortonworks, Azure, Snowflake, Google BigQuery, SAP HANA, and many other data sources, simplifying BI and enabling cross-data-source analytics.
The out-of-the-box connectors require zero infrastructure, making implementation straightforward and federated queries faster to stand up. With no training required and no data models to prepare, Athena users can get started with familiar SQL constructs to combine data across multiple sources for quick analysis.
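A federated query looks like ordinary SQL: each non-S3 source appears as an extra catalog backed by a Lambda connector. The sketch below assumes a deployed Teradata connector registered as a catalog; every catalog, schema, table, and bucket name here is an illustrative assumption, not a real deployment. The `boto3` submission is wrapped in a function and not executed.

```python
# Federated query joining S3 data (via the default AwsDataCatalog) with an
# on-prem Teradata source exposed through a Lambda-based connector.
# "teradata_catalog", "sales_db", "finance", and the table names are
# placeholders for whatever your deployment registers.
query = """
SELECT s.customer_id, s.total_spend, t.credit_rating
FROM   "AwsDataCatalog"."sales_db"."s3_orders"      AS s
JOIN   "teradata_catalog"."finance"."credit_scores" AS t
       ON s.customer_id = t.customer_id
WHERE  s.order_year = 2024
"""

def run_federated_query(sql, output_s3="s3://my-athena-results/"):
    """Submit the query to Athena. Requires AWS credentials and a deployed
    connector, so it is shown here but not executed."""
    import boto3
    athena = boto3.client("athena")
    resp = athena.start_query_execution(
        QueryString=sql,
        ResultConfiguration={"OutputLocation": output_s3},
    )
    return resp["QueryExecutionId"]
```

The point of the example is that the S3 table and the Teradata table are joined with the same SQL constructs; Athena routes the Teradata portion of the scan through the connector transparently.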
Our lakehouse solution provides the freedom of using preexisting vendors or picking the best fit — all of which can be connected seamlessly with Trianz AFQ connectors.
If you would like a demonstration of the speed of deployment, accuracy, and performance of our lakehouse solution, we offer a free 7-day proof of value that is jointly executed by Trianz and AWS.