Data professionals planning to build lakehouses atop the Apache Iceberg table format have two new Iceberg services to choose from: one from Tabular, the company founded by Iceberg’s co-creator, and another from Dremio, the query engine developer holding its Subsurface 2023 conference this week.
Apache Iceberg has emerged as one of the core technologies upon which to build a data lakehouse, in which the scalability and flexibility of data lakes are merged with the data governance, predictability, and proper SQL behavior associated with traditional data warehouses.
Originally created by engineers at Netflix and Apple to deal with data consistency issues in Hadoop clusters, among other problems, Iceberg is emerging as a de facto data storage standard for open data lakehouses that work with a wide range of analytics engines, including open source offerings like Trino, Presto, Dremio, Spark, and Flink, as well as commercial offerings from Snowflake, Starburst, Google Cloud, and AWS.
Ryan Blue, who co-created Iceberg while at Netflix, founded Tabular in 2021 to build a cloud storage service around the Iceberg core. Tabular has been in private beta for a while, but today the company announced that its Iceberg service is open for business.
According to Blue, the new Tabular service basically works as a universal table store running in AWS. “It manages Iceberg tables in a customer’s S3 bucket and allows you to connect up any of the compute engines that you want to use with that data,” he says. “It comes with the catalog you need to track what tables and metadata are there, and it comes with integrated RBAC security and access controls.”
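Blue’s description maps onto Iceberg’s standard catalog plumbing. As a minimal sketch of how a compute engine attaches to such a service, here is a Spark session pointed at a REST-style Iceberg catalog; the catalog name, endpoint URI, and credential below are hypothetical placeholders, not Tabular’s documented settings.

```python
# Minimal sketch: attaching Spark to a REST-style Iceberg catalog.
# The catalog name ("tabular"), URI, and credential are hypothetical
# placeholders for illustration only.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("iceberg-catalog-sketch")
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    .config("spark.sql.catalog.tabular", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.tabular.catalog-impl",
            "org.apache.iceberg.rest.RESTCatalog")
    .config("spark.sql.catalog.tabular.uri", "https://catalog.example.com/ws")
    .config("spark.sql.catalog.tabular.credential", "<client-credential>")
    .getOrCreate()
)

# With the catalog registered, its tables read like any other namespace.
spark.sql("SELECT * FROM tabular.analytics.events LIMIT 10").show()
```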
In addition to bulk and streaming data load options, Tabular provides automated management tasks for maintaining the lakehouse going forward, including compaction. According to Blue, Tabular’s compaction routines can shrink the size of customers’ Parquet files by up to 50%.
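Tabular automates this housekeeping, but the operation resembles Iceberg’s built-in rewrite_data_files maintenance procedure, which data engineers can also invoke by hand from Spark. A sketch, reusing the session configured above, with hypothetical catalog and table names:

```python
# Manual compaction via Iceberg's rewrite_data_files procedure: rewrites
# many small Parquet files into fewer, larger ones. The catalog ("tabular")
# and table names are hypothetical.
spark.sql("""
    CALL tabular.system.rewrite_data_files(
        table   => 'analytics.events',
        options => map('target-file-size-bytes', '536870912')
    )
""")
```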
“Iceberg was the foundation for all of this and now we’re just building on top of that foundation,” says Blue, a Datanami 2022 Person to Watch. “It’s a matter of being able to detect that someone wrote 1,000 small files and clean them up for them if they’re using our compaction service, rather than relying on people, data engineers in particular, who are expected to not write a thousand small files into a table, or not write pipelines that are wasteful.”
Tabular built its own metastore, sometimes called a catalog, which is necessary for tracking the metadata used by the various underlying compute engines. Tabular’s metastore is based on a distributed database engine, and scales better than the Apache Hive metastore, Blue says. “We’re also targeting a lot better features than what’s provided by the Hive metastore or wire-compatible Hive metastores like Glue,” he says.
Tabular’s service will also protect against the ramifications of accidentally dropping a table from the lakehouse. “It’s really easy to be in the wrong database, to drop a table, and then realize, uh oh, I’m going to break a production pipeline with what I just did!” Blue says. “How do I quickly go and restore that? Well, there is no way in Hive metastore to quickly restore a table that you’ve dropped. What we’ve done is we’ve built a way to just keep track of dropped tables and clean them up… That way, you can go and undrop a table.”
Blue, who spoke today during Dremio’s Subsurface event and timed the launch of Tabular to the event, describes Tabular as the bottom half of a data warehouse. Users get to decide for themselves what analytical engine or engines they use to populate the upper half of the warehouse, or lakehouse.
“We’re purposefully going after the storage side of the data warehouse rather than the compute side, because there’s a lot of great compute engines out there. There’s Trino, Snowflake, Spark, Dremio, Cloudera’s suite of tools. There’s a lot of things that are good at various pieces of this. We want all of those to be able to interoperate with one central repository of tables that make up your analytical data sets. We don’t want to provide any one of those. And we actually think it’s important that we separate the compute from the storage at the vendor level.”
Users can get started with the Tabular service for free and can keep using it at no cost until they hit the 1TB limit. Blue says that should give testers enough time to familiarize themselves with the service, see how it works with their data, and “fall in love” with the product. “Up to 1TB we’re managing for free,” he says. “Once you get there we have base, professional, and enterprise plans.”
Tabular is available only on AWS today. For more information see www.tabular.io and Blue’s blog post from today.
Dremio Discusses Arctic
Meanwhile, Dremio is also embracing Iceberg as a core component of its data stack, and today during the first day of its Subsurface 2023 conference, it discussed a new Iceberg-based offering dubbed Dremio Arctic.
Arctic is a data storage offering from Dremio that’s built atop Iceberg and available on AWS. The offering brings its own metadata catalog that works with an array of analytic engines, including Dremio, Spark, and Presto, along with automated routines for cleaning up, or “vacuuming,” Iceberg tables.
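Arctic automates that vacuuming, but it corresponds to maintenance Iceberg itself exposes, such as the expire_snapshots Spark procedure, which removes old snapshots and the data files they leave orphaned. A sketch, assuming a Spark session already wired to an Iceberg catalog named arctic (a hypothetical name):

```python
# Sketch of "vacuuming" an Iceberg table by expiring old snapshots.
# Assumes a SparkSession ("spark") configured with an Iceberg catalog
# registered as "arctic"; all names here are hypothetical.
spark.sql("""
    CALL arctic.system.expire_snapshots(
        table       => 'analytics.events',
        older_than  => TIMESTAMP '2023-02-01 00:00:00',
        retain_last => 10
    )
""")
```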
Arctic also provides fine-grained access control and data governance, according to Tomer Shiran, Dremio’s founder and chief product officer.
“You can see exactly who changed what, in what table and when, down to the level of what SQL command changed this table in the last week,” Shiran says, “or whether it was a Spark job and what ID changed the data. And you can see all the history of every single table in the system.”
Arctic also enables another feature that Dremio calls “data as code.” Just as Git lets developers version source code and easily roll back to previous versions, Iceberg (via Arctic) lets data professionals branch, merge, and roll back their data in much the same way.
Shiran says he’s very excited about the potential for data as code within Arctic. He says there are a variety of obvious use cases for treating data as code, including ensuring the quality of ETL pipelines by using “branching;” enabling experimentation by data scientists and analysts; delivering reproducibility for data science models; recovering from mistakes; and troubleshooting.
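Arctic’s catalog is built on Project Nessie, the open source transactional catalog Dremio launched in 2020, and Nessie’s Spark SQL extensions give a feel for the branching workflow Shiran describes. A sketch, assuming a session configured with those extensions and a catalog named arctic (hypothetical names throughout):

```python
# "Data as code" sketch using Project Nessie's Spark SQL extensions.
# Assumes a SparkSession ("spark") configured with Nessie's SQL extensions
# and an Iceberg catalog registered as "arctic"; names are hypothetical.

# Cut an isolated branch from main for an ETL run.
spark.sql("CREATE BRANCH etl_test IN arctic FROM main")
spark.sql("USE REFERENCE etl_test IN arctic")

# Writes land on the branch; production readers on main are unaffected.
spark.sql("INSERT INTO arctic.analytics.events SELECT * FROM staging_events")

# Once the data is validated, merge the branch back into main atomically;
# if validation fails, the branch can simply be dropped instead.
spark.sql("MERGE BRANCH etl_test INTO main IN arctic")
```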
“At Dremio, in terms of our product and technology, we’ve worked very hard to make Apache Iceberg easy,” Shiran says. “You don’t really need to understand any of the technology.”
Subsurface 2023 continues on Thursday, March 2. Registration is free at www.dremio.com/subsurface/live/winter2023.
Related Items:
Open Table Formats Square Off in Lakehouse Data Smackdown
Snowflake, AWS Warm Up to Apache Iceberg
Apache Iceberg: The Hub of an Emerging Data Service Ecosystem?