Almost all applications are powered by one or more persistent data stores. Relational OLTP DBMSs have traditionally served as the data serving backends. In recent years, however, large scale-out applications have increasingly adopted non-relational (NoSQL) data stores, which sacrifice consistency for availability. On the front end, most modern mobile and web applications power their user experience by making multiple API calls to backend services, which are in turn backed by multiple data serving platforms.

Previous State

Typical application architectures are composed of multiple tiers (sometimes with load balancers and proxies between them). The application makes API calls to the application servers, which translate each call into multiple requests to the appropriate data serving systems in the backend. The application servers then collect the responses from the data serving backends, combine them using business logic, and send the result back to the application, which renders it on the front end. When there is temporal locality (data that was fetched recently will likely be fetched again soon), the data serving backends are augmented with a lookaside in-memory caching layer, such as Redis or Memcached, in order to reduce the load on the persistent data serving systems.
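To make the lookaside pattern concrete, here is a minimal sketch in Java. The CacheClient and UserStore interfaces are hypothetical stand-ins for a cache such as Redis or Memcached and for a persistent data store; they are not the API of any specific product.

    import java.util.Optional;

    // Hypothetical stand-ins for a lookaside cache and a persistent store.
    interface CacheClient {
        Optional<String> get(String key);
        void put(String key, String value, int ttlSeconds);
    }

    interface UserStore {
        String fetchUserJson(String userId); // e.g. a SELECT against an OLTP DBMS
    }

    class UserService {
        private final CacheClient cache;
        private final UserStore store;

        UserService(CacheClient cache, UserStore store) {
            this.cache = cache;
            this.store = store;
        }

        // Lookaside (cache-aside) read: check the cache first; on a miss, read
        // from the persistent store and populate the cache for later requests.
        String getUser(String userId) {
            String key = "user:" + userId;
            return cache.get(key).orElseGet(() -> {
                String json = store.fetchUserJson(userId);
                cache.put(key, json, 300); // expire after 5 minutes
                return json;
            });
        }
    }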

Challenges & Deficiencies

As the number of applications, and the number of concurrent users of each application, increases, the data serving systems become overwhelmed by the volume of concurrent requests and tend to spend much of their time (query latency) waiting for individual requests to complete. Because a single application request may fan out into multiple requests to the underlying data stores, the application response is blocked until the last of those responses arrives. These data fetch requests can be arbitrarily complex, and may result in a sequential scan of all records in a data store, exhibiting long-tail latency behavior.
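The blocking behavior is easy to see in code: even when the backend calls run in parallel, the combined response is gated on the slowest one. A minimal sketch, where the fetchFrom* methods are hypothetical placeholders for calls to individual data stores:

    import java.util.concurrent.CompletableFuture;

    class AggregatingHandler {
        // Hypothetical calls to three independent data serving backends.
        CompletableFuture<String> fetchFromOrders()  { return CompletableFuture.supplyAsync(() -> "orders");  }
        CompletableFuture<String> fetchFromUsers()   { return CompletableFuture.supplyAsync(() -> "users");   }
        CompletableFuture<String> fetchFromRatings() { return CompletableFuture.supplyAsync(() -> "ratings"); }

        String handleRequest() {
            CompletableFuture<String> orders  = fetchFromOrders();
            CompletableFuture<String> users   = fetchFromUsers();
            CompletableFuture<String> ratings = fetchFromRatings();

            // The response can only be assembled once *all* backends have
            // replied, so end-to-end latency is the max of the individual
            // backend latencies, not their average.
            CompletableFuture.allOf(orders, users, ratings).join();
            return orders.join() + "|" + users.join() + "|" + ratings.join();
        }
    }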

We identify the core problem in such systems as the inability to efficiently construct a logical view over multiple data stores, serve these views concurrently to multiple applications, allow these views to be updated, and propagate those view updates back to the underlying data stores.

Adding an in-memory caching layer to each underlying data store takes advantage of temporal locality and speeds up data access from that store. However, the application server still has to execute business logic and combine responses (or issue additional requests) before data is returned to the application. In addition, since a single-node lookaside cache is not fault-tolerant, applications see severely degraded performance whenever the cache goes down. A number of distributed, fault-tolerant caching systems, such as Apache Geode, Apache Ignite, and Hazelcast, aim to solve the cache availability issue. These distributed caching systems are essentially distributed in-memory hash tables, mapping a primary key to an arbitrary value, and provide fast key-based lookups on that table. However, they do not solve the fundamental issue of serving updatable, materialized views that may require fast, short, scan-oriented queries.
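The distinction matters in practice: a key-value cache answers point lookups quickly, but a short scan-oriented query has no efficient path through a hash table. A minimal sketch, using a hypothetical DistributedMap interface (not the API of any of the products named above):

    import java.util.List;
    import java.util.function.Predicate;

    // Hypothetical distributed hash-table abstraction (key -> arbitrary value).
    interface DistributedMap<K, V> {
        V get(K key);          // fast: hash the key to its owning node
        Iterable<V> values();  // slow: pulls and filters entries cluster-wide
    }

    class OrderQueries {
        record Order(String orderId, String customerId, long timestampMillis) {}

        // Point lookup: what distributed caches are built for.
        Order lookup(DistributedMap<String, Order> orders, String orderId) {
            return orders.get(orderId);
        }

        // Short scan (e.g. "orders for one customer in the last hour"): with
        // only a hash table available, this degenerates to filtering every
        // entry, which is exactly the access pattern a materialized view
        // should serve efficiently.
        List<Order> matching(DistributedMap<String, Order> orders,
                             Predicate<Order> filter) {
            List<Order> result = new java.util.ArrayList<>();
            for (Order o : orders.values()) {
                if (filter.test(o)) result.add(o);
            }
            return result;
        }
    }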

With Ampool

The block diagram above illustrates the data flow when Ampool is used as application acceleration middleware. (A code sketch of the request path follows the numbered steps below.)

  1. Ampool pre-loads the initial datasets
  2. Ampool constructs in-memory materialized views, with business logic embedded as stored procedures
  3. The application requests data from the view access layer
  4. The view access layer parses the request, adds metadata, and forwards it to Ampool
  5. Ampool checks authorization at the view level, and serves a projection/subset of the view
  6. The view access layer performs additional transformations
  7. The response is sent back to the application
  8. For updates, Ampool asynchronously propagates the changes to the underlying databases
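Here is a minimal sketch of steps 3-7, assuming a hypothetical view access layer in front of an Ampool client. The AmpoolViewClient interface and its method names are illustrative only, not Ampool's actual API:

    // Hypothetical Ampool client interface; names are illustrative only.
    interface AmpoolViewClient {
        // Serves a projection/subset of a materialized view, subject to
        // view-level authorization checks inside Ampool (step 5).
        java.util.List<String[]> queryView(String viewName,
                                           String[] projection,
                                           String filter,
                                           String authToken);
    }

    class ViewAccessLayer {
        private final AmpoolViewClient ampool;

        ViewAccessLayer(AmpoolViewClient ampool) {
            this.ampool = ampool;
        }

        // Steps 3-7: parse the application request, attach metadata, forward
        // to Ampool, post-process the result, and return it to the caller.
        java.util.List<String[]> handle(String viewName, String filter, String user) {
            String authToken = "token-for:" + user;  // step 4: add metadata
            java.util.List<String[]> rows =
                ampool.queryView(viewName, new String[] {"id", "name"}, filter, authToken);
            // Step 6: additional transformations would go here (e.g. renaming,
            // masking, or reshaping columns) before returning the response.
            return rows;                             // step 7
        }
    }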

Why Ampool?

Ampool has many features that enable it to be used as a multi-tenant application acceleration platform:

  • Transparent, customizable bulk-loader functionality that can use JDBC, NoSQL (e.g. MongoDB), or file system APIs
  • Low-latency (~20µs) lookups and in-place updates, allowing fast construction of materialized views and synchronous updates
  • Pre-processors (similar to database triggers) that allow customization of fine-grained authorization
  • Row versioning that enables fast, lock-free inserts and updates
  • High-throughput parallel queries to rapidly serve materialized views, with projection and filter pushdown
  • A Change Data Capture (CDC) mechanism to asynchronously update the underlying tables
  • High query concurrency (hundreds of concurrent clients) at low, sub-millisecond latency
  • JDBC connectivity with Apache Calcite integration
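For example, given the JDBC connectivity in the last bullet, an application or BI tool could query an Ampool-served view with plain java.sql code along the following lines. The JDBC URL below follows Apache Calcite's model-file convention and is an assumption for illustration; the actual Ampool connection string may differ.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    class ViewQueryExample {
        public static void main(String[] args) throws Exception {
            // Assumed Calcite-style JDBC URL; consult the Ampool documentation
            // for the actual connection string.
            String url = "jdbc:calcite:model=ampool-model.json";
            try (Connection conn = DriverManager.getConnection(url);
                 PreparedStatement ps = conn.prepareStatement(
                     // Projection and filter can be pushed down to the view.
                     "SELECT id, name FROM customer_view WHERE region = ?")) {
                ps.setString(1, "EMEA");
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getLong("id") + " " + rs.getString("name"));
                    }
                }
            }
        }
    }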

Please note that Ampool does not enforce distributed transactional updates across multiple non-collocated tables by default. However, by integrating Ampool clients with an external transaction manager (such as Apache Omid or Apache Tephra), one can implement transactional behavior if needed.

When to consider Ampool for Application Acceleration?

You should consider Ampool as middleware for application acceleration if:

  • Your applications need to serve data concurrently from multiple RDBMS or NoSQL data stores (including file systems)
  • Your application servers are getting overwhelmed by business logic whose queries trigger sequential scans on the underlying data stores and exhibit long-tail latency
  • Your business logic is written in Java (a current limitation of Ampool that will go away in the future)
  • Your data model consists of both structured & semi-structured data
  • The working set of your multi-tenant applications does not fit within a single machine’s memory

If you are interested in exploring Ampool, write to us to schedule a demo.
