Oct. 6, 2021
Introduction
I'd like to google and find more content about the following ideas:
- block cache vs. key/value cache
- HBase and Bigtable compression algorithms
- HBase cannot map storage files into memory, something that is available in Bigtable (related: the memtable)
- locality groups and shared compression
- Column families in Bigtable are used for accounting and access control
- Commit log - learn more about this topic
- Study two statements:
- Bigtable: Bigtable can memory-map entire storage files and use them to perform lookups without a single disk seek.
- HBase: HBase has an in-memory option per column family and uses its LRU cache to retain blocks for subsequent use.
Overall, HBase implements close to all of the features described in Chapter 1. Where it differs, that may be because the Bigtable paper was not very clear to begin with, or because HBase relies on other open source projects to provide various services, and those simply work differently.
HBase stores timestamps in milliseconds—as opposed to Bigtable, which uses microseconds. This is not much of an issue and can possibly be attributed to C and Java having different preferred timer resolutions.
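As an illustration, the HBase client accepts an explicit timestamp with every mutation, interpreted as milliseconds since the epoch. A minimal sketch using the Java client API (table, family, and column names are made up):

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    Connection connection = ConnectionFactory.createConnection(HBaseConfiguration.create());
    Table table = connection.getTable(TableName.valueOf("testtable"));
    Put put = new Put(Bytes.toBytes("row1"));
    // HBase interprets this long as milliseconds; Bigtable would store microseconds here.
    put.addColumn(Bytes.toBytes("colfam1"), Bytes.toBytes("qual1"),
        System.currentTimeMillis(), Bytes.toBytes("value1"));
    table.put(put);
    table.close();
    connection.close();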
While we have not yet addressed the specific details, it should be pointed out that both also use different compression algorithms. HBase uses those supplied in Java, but can also use LZO (with a bit of work; we will look into this later). Bigtable has a two-phase compression using BMDiff and Zippy.
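To make the HBase side concrete, a compression algorithm is assigned per column family at table creation time. A sketch using the book-era administrative API (names are made up; GZ is used here because LZO requires installing native libraries first):

    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.io.compress.Compression;

    HTableDescriptor desc = new HTableDescriptor(TableName.valueOf("testtable"));
    HColumnDescriptor colfam = new HColumnDescriptor("colfam1");
    // One of the algorithms shipped with HBase; LZO would need extra setup.
    colfam.setCompressionType(Compression.Algorithm.GZ);
    desc.addFamily(colfam);
    admin.createTable(desc); // admin is an org.apache.hadoop.hbase.client.Admin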
HBase has coprocessors that differ from what Sawzall (the scripting language used in Bigtable to filter or aggregate data) or the Bigtable Coprocessor framework provides. The details of Google's coprocessor implementation are rather sketchy, so if there are more differences, they are unknown. On the other hand, HBase has support for server-side filters that help reduce the amount of data being moved from the server to the client.
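For example, a value filter is evaluated on the region servers, so only matching cells travel back to the client. A sketch (the substring to match is arbitrary; table is an open org.apache.hadoop.hbase.client.Table):

    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.filter.CompareFilter;
    import org.apache.hadoop.hbase.filter.SubstringComparator;
    import org.apache.hadoop.hbase.filter.ValueFilter;

    Scan scan = new Scan();
    // Only cells whose value contains "abc" are shipped over the network.
    scan.setFilter(new ValueFilter(CompareFilter.CompareOp.EQUAL,
        new SubstringComparator("abc")));
    ResultScanner scanner = table.getScanner(scan);
    for (Result result : scanner) {
      System.out.println(result);
    }
    scanner.close();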
HBase primarily works with the Hadoop Distributed File System (HDFS), while Bigtable uses GFS. But HBase can also work on other filesystems, thanks to the pluggable FileSystem class provided by Hadoop. There are implementations for Amazon S3 (raw or emulated HDFS), as well as EBS.
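The filesystem is chosen through the URI scheme of HBase's storage root. A sketch (the property name hbase.rootdir is real; the bucket is hypothetical):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    Configuration conf = HBaseConfiguration.create();
    // An hdfs:// URI selects HDFS; an s3n:// URI selects the Hadoop
    // native S3 FileSystem implementation instead (bucket name is made up).
    conf.set("hbase.rootdir", "s3n://my-hypothetical-bucket/hbase");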
HBase cannot map storage files into memory, something that is available in Bigtable. There is ongoing work in HBase to optimize I/O performance, and with the more widespread use of Java's New I/O (NIO), this is something that could be enhanced.
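To illustrate what memory-mapping buys Bigtable, here is plain Java NIO mapping a file and reading a byte at an arbitrary offset without an explicit seek or read call. This is not HBase code, just a sketch of the technique (the filename is made up):

    import java.io.RandomAccessFile;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;

    RandomAccessFile file = new RandomAccessFile("storagefile.dat", "r");
    FileChannel channel = file.getChannel();
    // The file is mapped into the process address space; subsequent reads
    // are served from the OS page cache instead of explicit disk seeks.
    MappedByteBuffer buffer = channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
    byte b = buffer.get(1024); // random access by byte offset
    channel.close();
    file.close();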
Bigtable has a concept called locality groups, which allow the client to group specific column families together and apply shared features, such as compression. This is also useful when the contained columns are accessed together, as all the data is stored in the same storage files. Column families in Bigtable are used for accounting and access control. In HBase, on the other hand, there is only the concept of column families, combining the features that Bigtable splits across two distinct concepts.
Apart from the block cache that both systems have, Bigtable also implements a key/value cache, probably for cells that are accessed a lot.
The handling and implementation of the commit log also differs slightly. Bigtable has two commit logs to handle slow writes and is able to switch between them to compensate for that. This could be implemented in HBase, but it does not seem to be a topic for discussion, and therefore is omitted for the time being.
In contrast, HBase has an option to skip the commit log completely on writes for performance reasons and when the possibility of not being able to replay those logs after a server crash is acceptable.
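A sketch of that option in the client API; in older versions the equivalent call was put.setWriteToWAL(false):

    import org.apache.hadoop.hbase.client.Durability;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.util.Bytes;

    Put put = new Put(Bytes.toBytes("row1"));
    put.addColumn(Bytes.toBytes("colfam1"), Bytes.toBytes("qual1"), Bytes.toBytes("value1"));
    // Faster writes, but this edit cannot be replayed if the region
    // server crashes before the memstore is flushed.
    put.setDurability(Durability.SKIP_WAL);
    table.put(put);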
The METADATA table in Bigtable is also used to store secondary information such as log events related to each tablet. This historical data can be used to analyze tablet transitions, splits, and/or merges. HBase had the notion of a historian in earlier versions that implemented the same concept, but its performance was not good enough and it has been removed.
While splitting regions/tablets is the same for both, merging is handled differently. HBase has a tool that helps you merge regions manually, while in Bigtable this is handled automatically by the master. Merging in HBase is a delicate operation and is currently left to the operator to decide what is best.
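The classic route is the offline org.apache.hadoop.hbase.util.Merge tool; later client versions also expose merging through the administrative API. A sketch with made-up encoded region names (admin is an open org.apache.hadoop.hbase.client.Admin):

    import org.apache.hadoop.hbase.util.Bytes;

    // The operator picks the two regions; both names here are placeholders.
    // The final flag forces merging non-adjacent regions and is normally false.
    admin.mergeRegions(Bytes.toBytes("encodedRegionNameA"),
        Bytes.toBytes("encodedRegionNameB"), false);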
Another very minor difference is that the master in Bigtable performs the garbage collection of obsolete storage files. One reason for this could be the fact that, in Bigtable, the storage files are tracked in the METADATA table. In HBase, the cleanup is done by the region server that performed the split, and no file location is recorded explicitly.
Bigtable can memory-map entire storage files and use them to perform lookups without a single disk seek. HBase has an in-memory option per column family and uses its LRU cache to retain blocks for subsequent use.
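The HBase side of that looks like the following sketch; the flag is a per-family hint, and the retained blocks still live on the Java heap rather than in a file mapping (the family name is made up):

    import org.apache.hadoop.hbase.HColumnDescriptor;

    HColumnDescriptor colfam = new HColumnDescriptor("hotdata");
    // A hint, not a guarantee: blocks of this family are given
    // preference in the LRU block cache.
    colfam.setInMemory(true);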
There are also some differences in the compaction algorithms. For example, Bigtable's merging compaction also includes a memtable flush. Mostly, though, they are the same and simply use different names.
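One visible consequence: in HBase, the flush and the compaction are separate administrative operations, while Bigtable's merging compaction folds the memtable flush in. A sketch (admin as above):

    import org.apache.hadoop.hbase.TableName;

    TableName tn = TableName.valueOf("testtable");
    admin.flush(tn);        // write the current memstore out to a new storage file
    admin.majorCompact(tn); // rewrite all storage files of each region into one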
Region names, as stored in the meta table in HBase, are a combination of the table name, the start row key, and an ID. In Bigtable, the corresponding tablet names consist of the table identifier and the end row. This has a few implications when it comes to locating data in the storage files (see the book's "Read Path" section).
Finally, it can be noted that HBase has two separate catalog tables, -ROOT- and .META., while in Bigtable the root table (which in both systems only ever consists of one single region/tablet) is stored as part of the meta table: the first tablet in the METADATA table is the root tablet, and all subsequent ones are meta tablets. This is just an implementation detail.