
This post will introduce how the partitioning feature is implemented in Jethro. In a nutshell, we added a simple, automatic range partitioning mechanism that is very easy to work with.

Why Use Partitioning

Generally, there are two reasons to partition a large table:

  • Ease of maintenance – Partitioning allows implementing a data retention policy and enables efficient purging of old data when it is no longer needed (rolling window). Also, it allows removing part of the data if invalid data was accidentally loaded (for example, remove a specific day).
  • Performance and Scalability – Regular parallel databases (like Impala or Hive on Tez in the Hadoop space) execute queries by doing a full scan of the local data in all nodes, in parallel. Partitioning allows each node to scan less data (partition pruning), improving performance. Also, it improves scalability – if a report accesses only one month of data, it will have the same performance even if we add many more months of data to the table.
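To illustrate partition pruning, a query that filters on the partition key only needs to touch the partitions covering that range. The table and dates below are hypothetical:

```sql
-- Assuming a fact table partitioned by month on event_ts:
SELECT COUNT(*)
FROM   events
WHERE  event_ts >= '2003-01-01 00:00:00'
AND    event_ts <  '2003-02-01 00:00:00';
-- Only the partition(s) covering January 2003 need to be scanned,
-- regardless of how many other months of data the table holds.
```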

Partitioning in Jethro

JethroData uses indexes to read only the relevant data from HDFS, with or without partitioning. So, we don't see partitioning as a major performance feature (though it can still help performance and scalability in some cases).
Instead, partitioning in Jethro mostly helps ease of maintenance. This is especially important as the JethroData format is append-only – so partitioning enables users to delete old data from fact tables or fix operational data problems (like loading the same file twice by mistake) without having to drop and reload large tables.
One design goal for our partitioning feature was ease of use. Partitioning a table should not require complicated CREATE TABLE syntax, manually adding partitions or having to explicitly name them.

Creating a Partitioned Table

To create a partitioned table, you just need to provide two additional bits of information when you create the table – a partitioning key (one of the table's columns) and a partitioning interval. For example:

CREATE TABLE events (id BIGINT, event_ts TIMESTAMP, value INT) 
PARTITION BY RANGE (event_ts) EVERY (INTERVAL '1' month); 
CREATE TABLE events2 (id BIGINT, event_day_id INT, value INT) 
PARTITION BY RANGE (event_day_id) EVERY (1);
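The interval sets the width of each automatically created partition: the first example creates monthly partitions on a timestamp, the second creates one partition per integer day ID. The variations below are hypothetical – they assume the same syntax accepts other interval units and widths, which this post does not confirm:

```sql
-- Hypothetical: daily partitions on a timestamp key
CREATE TABLE events_daily (id BIGINT, event_ts TIMESTAMP, value INT)
PARTITION BY RANGE (event_ts) EVERY (INTERVAL '1' day);

-- Hypothetical: one partition per 7 consecutive day IDs (a weekly window)
CREATE TABLE events_weekly (id BIGINT, event_day_id INT, value INT)
PARTITION BY RANGE (event_day_id) EVERY (7);
```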

Adding a New Partition

There is no command to add new partitions to a table (like ALTER TABLE… ADD PARTITION).
Instead, JethroLoader automatically creates new partitions as it loads data when needed. So, the administrator never needs to worry about it or create empty partitions in advance.

Listing Table Partitions

While partitioning is mostly transparent, the administrator may occasionally want to see all existing partitions. The output of the partition-listing command looks like this:


ID | Start Value         | End Value           | Rows     | Column (MB) | Index (MB)
p1 | 1998-01-01 00:00:00 | 1999-01-01 00:00:00 | 19791072 | 809.944     | 556.415
p2 | 1999-01-01 00:00:00 | 2000-01-01 00:00:00 | 20024903 | 819.405     | 562.247
p3 | 2000-01-01 00:00:00 | 2001-01-01 00:00:00 | 20086331 | 822.694     | 564.032
p4 | 2001-01-01 00:00:00 | 2002-01-01 00:00:00 | 19891053 | 815.420     | 558.946
p5 | 2002-01-01 00:00:00 | 2003-01-01 00:00:00 | 19989221 | 821.732     | 561.340
p6 | 2003-01-01 00:00:00 | 2004-01-01 00:00:00 | 217420   | 9.483       | 11.599

Dropping a Partition

When dropping a partition, we identify it by value – specifying any value from its range. For example:

ALTER TABLE my_table DROP PARTITION FOR ('1998-01-01 00:00:00');
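Because any value from the partition's range identifies it, a mid-range value works just as well as the start value. The examples below reuse the events and events2 tables from earlier; the specific values, and the unquoted form for an integer key, are illustrative assumptions:

```sql
-- A yearly partition covering all of 1998 can be dropped using
-- any timestamp inside that year, not just its start value:
ALTER TABLE events DROP PARTITION FOR ('1998-06-15 00:00:00');

-- Integer partition key (events2 above); the day ID is hypothetical:
ALTER TABLE events2 DROP PARTITION FOR (20150);
```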

How to Pick a Partition Key

The partitioning key should be the main timestamp column that is used both for maintenance (keeping data for n days) and in queries. In large fact tables, it is usually the timestamp of the event that is recorded in that table.

How Big Should Each Partition Be

Generally, you should align the partition range to the retention policy. For example, if you plan to keep data for 12 months, purging once a month, start with monthly partitions. If you plan to keep data for 60 days, purging once a day, start with daily partitions.
However, it is usually recommended to limit each partition to a few billion rows. Many very small partitions are inefficient and may stress the HDFS NameNode with a lot of small files. A few super-large partitions of many billions of rows each are harder to maintain – for example, it is harder to correct the data after loading one bad file.
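As a rough worked example (the data volumes are hypothetical): at about 50 million rows per day, a monthly partition holds roughly 1.5 billion rows – within the few-billion guideline – and a 12-month rolling window needs only 12–13 partitions:

```sql
-- Hypothetical sizing: ~50M rows/day * ~30 days ≈ 1.5B rows per partition.
-- 12-month retention with a monthly purge → monthly partitions fit well:
CREATE TABLE clicks (click_ts TIMESTAMP, user_id BIGINT, page_id BIGINT)
PARTITION BY RANGE (click_ts) EVERY (INTERVAL '1' month);

-- Each month, purge the oldest partition by any value in its range:
ALTER TABLE clicks DROP PARTITION FOR ('2014-01-01 00:00:00');
```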
Want to know more? Download Jethro and immediately access the documentation, installer, driver, and tutorial so you can try it out yourself!
