In the previous posts in this series, we explained why a new storage subsystem is needed to effectively implement the butterfly architecture. Ampool has built exactly such an “Active Data Store”.

Ampool Active Data Store (ADS) is a memory-centric, distributed, data-aware object store optimized for unifying streaming, transactional, and analytical workloads. We describe each of these characteristics of Ampool ADS next.

  • Memory-Centric: Although DRAM costs have declined rapidly over the years, DRAM remains far more expensive than other storage media such as SSDs and hard disk drives. Fortunately, not all enterprise data that needs to be analyzed requires DRAM-level performance, and as data ages it is accessed less and less frequently. Cold data can therefore be stored on hard disk drives, warm data on SSDs, and only the most frequently accessed, latency-sensitive data in DRAM. Manually moving data across these storage tiers is cumbersome and error-prone. Ampool ADS implements smart tiering that monitors data usage and automatically demotes data to cheaper tiers as it is accessed less often.
  • Distributed: Although DRAM and storage densities have increased dramatically over the years, constantly adding more DRAM and storage to a single system (i.e., scaling up) does not improve overall performance in proportion to the cost. Thus, even memory-centric storage systems need to be clustered and distributed to achieve linear scalability, fault tolerance, and disaster recovery. Ampool ADS is designed as a distributed system from the ground up. Data is replicated across the address spaces of multiple machines in the cluster for high availability, and changes to the data are propagated over a scalable message bus across the wide-area network for disaster recovery.
  • Object Store: Historically, storage systems were categorized as block stores or file systems, each with its own advantages. A block store can be shared across different operating systems and has a much lower overhead for accessing a random piece of data. However, network round-trips to fetch individual blocks are often inefficient for today’s large-scale data workloads. In addition, since the basic unit of read and write is a 4KB block, small updates and small reads result in a lot of unnecessary data traffic over the network or on the local disks. File systems are the most commonly used storage abstraction, are available in various flavors across multiple operating systems, and several scalable distributed file systems are available from multiple vendors. However, implementing file system semantics (maintaining and navigating a hierarchical namespace, preserving consistency and atomicity across file system operations, and supporting random in-place reads and writes in files) imposes significant overhead on both file system servers and clients. Typical file system read/write latency is 50-100 microseconds. When the file system was implemented on top of slow rotating disks, which had roughly 10 milliseconds of latency of their own, the file system latency was negligible compared to the underlying storage media. However, with fast random-access storage such as SSDs and NVRAM, which have latencies of only a few microseconds, the file system abstraction carries overwhelmingly high overheads. In the last decade, with the emergence of public clouds and their hosted storage solutions, a third kind of storage abstraction, the object store, has become popular. Object stores organize data without a deep hierarchy: to access an object, one only needs a bucket ID and an object ID, rather than navigating a hierarchical namespace.
In addition, object stores associate rich metadata with each object and bucket, so operations such as filters and content searches can be pushed down to the storage layer, reducing both network bandwidth requirements and CPU load. The low CPU overhead, simpler semantics, and scalability of object stores make them ideal for the new classes of storage media, especially when large amounts of data are stored as objects.
  • Data-Aware: Most existing object stores do not natively interpret the contents of objects, which limits their utility. Indeed, the most common use of object stores is as BLOB (Binary Large Object) stores for multimedia such as images or video. Implementing an analytical workload on data in such a store requires fetching each entire object (which may be megabytes or gigabytes in size) to the CPU, imposing a schema on it, deserializing it, and then performing the necessary analytical computations. Ampool ADS instead stores extensive metadata about each object, such as its schema, versions, partitioning key, and various statistics about its contents. This enables common operations such as projections, filters, and aggregates to be pushed down to the object store, which speeds up most analytical computations and avoids the network bottlenecks prevalent in other distributed storage systems.
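As a rough illustration of the smart tiering described above, the sketch below assigns data to a tier based on how recently it was accessed. The tier names and age thresholds are assumptions chosen for demonstration; Ampool's actual tiering policy is not described in this post.

```python
# Illustrative tiering policy: tiers ordered hottest-first, each with the
# maximum access age (in seconds) it accepts. Thresholds are assumptions.
TIERS = [
    ("DRAM", 60),            # accessed within the last minute stays in DRAM
    ("SSD", 3600),           # accessed within the last hour moves to SSD
    ("HDD", float("inf")),   # everything colder lands on spinning disk
]

def pick_tier(last_access_ts: float, now: float) -> str:
    """Choose a storage tier from the age of the most recent access."""
    age = now - last_access_ts
    for tier, max_age in TIERS:
        if age <= max_age:
            return tier
    return TIERS[-1][0]
```

A real implementation would run such a policy continuously in the background and migrate the data itself, not just compute the target tier.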
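The pushdown idea in the Data-Aware bullet can be sketched in a few lines: the filter and projection run where the data lives, so only a small result crosses the network instead of the whole object. The row layout and function name here are illustrative assumptions, not Ampool's actual API.

```python
# Sketch of server-side pushdown over a row-oriented "object".
def scan_with_pushdown(rows, predicate, projection):
    """Apply filter and projection at the storage node, so only the
    (usually much smaller) result is shipped back to the compute engine."""
    return [{col: row[col] for col in projection}
            for row in rows if predicate(row)]

# A storage node holding one object's worth of sensor readings.
readings = [
    {"sensor": "a", "temp": 21.5, "ok": True},
    {"sensor": "b", "temp": 98.4, "ok": False},
    {"sensor": "c", "temp": 19.9, "ok": True},
]

hot = scan_with_pushdown(readings,
                         predicate=lambda r: r["temp"] > 90,
                         projection=["sensor", "temp"])
# Only one narrow row leaves the storage node instead of the full object.
```

The metadata Ampool keeps (schema, statistics, partitioning key) is what makes it possible to evaluate such predicates at the store without first shipping and deserializing whole objects.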

Ampool Active Data Store, with integrations with compute frameworks and other storage engines

In addition to the core memory-centric object store, the Ampool product includes several optimized connectors that let existing compute engines efficiently store and retrieve data in the Ampool object store. While the number of connectors grows with every version of Ampool, those currently provided out of the box include Apache Spark, Apache Hive, Apache Trafodion (in collaboration with Esgyn, Inc.), Apache Apex (in collaboration with Datatorrent, Inc.), and CDAP (Cask Data Application Platform, in collaboration with Cask Data, Inc.). Although Ampool is itself a fully distributed storage system capable of maintaining large volumes of operational persistent data, it also provides connectors to load data from, and persist data to, external stores, including the Hadoop Distributed File System (HDFS), Apache Hive (ORC), and Apache HBase. Ampool can be deployed as a separate system alongside Hadoop components, or into an existing running Hadoop cluster, using either Apache Ambari or Cloudera Manager. It can be monitored and managed with the provided tools, or by connecting the JMX metrics that Ampool produces to any JMX-compatible monitoring system. By providing fast analytical storage for both immutable and mutable dataframes and datasets, along with extensions to support event streams, Ampool supplies the missing piece for implementing the butterfly architecture and allows unification of various transactional and analytical workloads.
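At heart, each of the connectors above is an adapter that maps a framework's read and write calls onto the underlying store. The interface and class names in this sketch are hypothetical; the real connector APIs are defined by each framework's own data-source SPI (e.g. Spark's or Hive's).

```python
# Hypothetical sketch of the shape of a storage connector: a thin adapter
# exposing read/write to a compute engine. Names are illustrative only.
from abc import ABC, abstractmethod

class StorageConnector(ABC):
    @abstractmethod
    def read(self, table: str) -> list:
        """Return all rows of a table from the backing store."""

    @abstractmethod
    def write(self, table: str, rows: list) -> None:
        """Append rows to a table in the backing store."""

class InMemoryConnector(StorageConnector):
    """Stand-in for an Ampool (or HDFS/HBase) connector in this sketch."""
    def __init__(self):
        self._tables = {}

    def read(self, table):
        return list(self._tables.get(table, []))

    def write(self, table, rows):
        self._tables.setdefault(table, []).extend(rows)
```

Because every backend implements the same interface, a compute engine can load from one connector (say, HDFS) and persist results through another (Ampool) without changing its own logic.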

In the next post in this series, we will look at the clean separation between compute frameworks and storage systems that the butterfly architecture requires for unified data processing, and at the roles and responsibilities of each, to describe how Ampool ADS will evolve.
