DynamoDB Accelerator (DAX) Overview

Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for DynamoDB that delivers fast in-memory performance for demanding applications. DAX improves the performance of read-intensive DynamoDB workloads, so repeat reads of cached data are served immediately with extremely low latency.

DAX automatically retrieves data from DynamoDB tables upon a cache miss. The read operations that are cached are GetItem, BatchGetItem, Query, and Scan (including Query and Scan against GSIs and LSIs). Writes are write-through: data is written to DynamoDB first and then updated in the DAX cache. The write operations are PutItem, UpdateItem, DeleteItem, and BatchWriteItem.
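
To make the read-through/write-through behavior concrete, here is a minimal Python sketch assuming the amazon-dax-client package (AmazonDaxClient.resource) and a hypothetical cluster endpoint, Orders table, and OrderId key:

    from amazondax import AmazonDaxClient

    # Hypothetical DAX cluster endpoint and table name.
    dax = AmazonDaxClient.resource(
        endpoint_url='dax://my-dax-cluster.abc123.dax-clusters.us-east-1.amazonaws.com')
    table = dax.Table('Orders')

    # Write-through: the item is written to DynamoDB first, then reflected in the DAX cache.
    table.put_item(Item={'OrderId': '1001', 'Status': 'SHIPPED'})

    # Read-through: a cache miss fetches the item from DynamoDB and populates the item cache.
    response = table.get_item(Key={'OrderId': '1001'})
    print(response.get('Item'))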

You can simply use the DynamoDB API to write and retrieve data, and DAX handles all of the caching logic behind the scenes to deliver improved performance. DAX publishes detailed monitoring metrics to CloudWatch, on which you can set up alarms, and it also surfaces cache hit, miss, query, and cluster performance information in the Management Console.
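
As a hedged example of wiring monitoring around those metrics, the following boto3 sketch sums item-cache misses for a hypothetical cluster over the last hour; the ItemCacheMisses metric and ClusterId dimension in the AWS/DAX namespace are the assumed names here:

    import boto3
    from datetime import datetime, timedelta

    cloudwatch = boto3.client('cloudwatch', region_name='us-east-1')

    # Sum of item-cache misses over the last hour, in 5-minute buckets.
    stats = cloudwatch.get_metric_statistics(
        Namespace='AWS/DAX',
        MetricName='ItemCacheMisses',
        Dimensions=[{'Name': 'ClusterId', 'Value': 'my-dax-cluster'}],
        StartTime=datetime.utcnow() - timedelta(hours=1),
        EndTime=datetime.utcnow(),
        Period=300,
        Statistics=['Sum'],
    )
    for point in sorted(stats['Datapoints'], key=lambda p: p['Timestamp']):
        print(point['Timestamp'], point['Sum'])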

 

Use Cases

  1. Applications that require the fastest possible response times for reads. Examples include real-time bidding, social gaming, and trading applications.

  2. Applications that read a small number of items more frequently than others. Example: consider an e-commerce system that has a one-day sale on a popular product.

  3. Applications that are read-intensive, but are also cost-sensitive. You can offload the read-intensive activity from your application to a DAX cluster, and reduce the amount of read capacity units you'd need to purchase otherwise.

  4. Applications that require repeated reads against a large set of data. For example, a long-running analysis of regional weather data could otherwise consume read capacity needed by other applications that access the same data; running that analysis against a DAX cluster avoids the contention.

 

Consistency Model for DAX

When reading from DAX, you can specify whether the read is eventually consistent or strongly consistent, as illustrated in the sketch after this list:

  1. Eventually Consistent Reads (Default)

    1. On a cache hit, the DAX client will return the result from the cache. On a cache miss, DAX will query DynamoDB, update the cache, and return the result set.

  2. Strongly Consistent Reads

    1. A strongly consistent read is passed through to DynamoDB, is not cached in DAX, and returns a result that reflects all writes that received a successful response in DynamoDB prior to the read.
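
A small sketch of the two modes through the DAX client (same assumptions as the earlier example: amazon-dax-client, hypothetical endpoint, Orders table, and OrderId key):

    from amazondax import AmazonDaxClient

    dax = AmazonDaxClient.resource(
        endpoint_url='dax://my-dax-cluster.abc123.dax-clusters.us-east-1.amazonaws.com')
    table = dax.Table('Orders')

    # Eventually consistent (default): served from the DAX item cache when possible.
    cached = table.get_item(Key={'OrderId': '1001'})

    # Strongly consistent: passed through to DynamoDB and not cached by DAX.
    fresh = table.get_item(Key={'OrderId': '1001'}, ConsistentRead=True)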

 

Data Eviction Policy

DAX handles cache eviction in three different ways.

  1. It uses a Time-to-Live (TTL) value that denotes the absolute period of time that an item is available in the cache.

  2. When the cache is full, a DAX cluster uses a Least Recently Used (LRU) algorithm to decide which items to evict.

  3. With the write-through functionality, DAX evicts older values as new values are written through DAX.

DAX cannot detect changes made to data that is written directly to DynamoDB, bypassing DAX; such items remain unchanged in the cache until their TTL expires or they are evicted.

 

Item Cache vs. Query Cache

Within a DAX cluster, there are two different caches: 1) the item cache and 2) the query cache. The item cache manages GetItem, PutItem, and DeleteItem requests for individual key-value pairs. The query cache manages the result sets from Scan and Query requests, with the text of the Scan/Query serving as the cache key.

While both the item cache and the query cache are managed in the same cluster, they do not overlap: an update to the item cache neither updates nor invalidates result sets in the query cache. The best way to mitigate inconsistencies between the two caches is to set an appropriate TTL.

TTL specifies how long an item in the cache remains valid. You can specify different TTL values for the item cache and the query cache.
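
As a sketch, both TTLs can be adjusted on the cluster's parameter group with boto3; the parameter group name is hypothetical, and record-ttl-millis / query-ttl-millis are assumed to be the item-cache and query-cache TTL parameter names:

    import boto3

    dax = boto3.client('dax', region_name='us-east-1')

    dax.update_parameter_group(
        ParameterGroupName='my-dax-params',
        ParameterNameValues=[
            {'ParameterName': 'record-ttl-millis', 'ParameterValue': '300000'},  # item cache: 5 minutes
            {'ParameterName': 'query-ttl-millis', 'ParameterValue': '60000'},    # query cache: 1 minute
        ],
    )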

 

DAX Clusters

You can provision multiple DAX clusters for the same DynamoDB table; each cluster provides its own endpoint. You can create a DAX cluster using the AWS Management Console, the AWS SDK, or the DAX CLI.
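
A minimal cluster-creation sketch using the boto3 DAX API; the cluster name, IAM role, subnet group, and security group below are hypothetical placeholders:

    import boto3

    dax = boto3.client('dax', region_name='us-east-1')

    dax.create_cluster(
        ClusterName='my-dax-cluster',
        NodeType='dax.r3.large',
        ReplicationFactor=3,  # one primary plus two read replicas, spread across AZs
        IamRoleArn='arn:aws:iam::123456789012:role/DAXServiceRole',
        SubnetGroupName='my-dax-subnet-group',
        SecurityGroupIds=['sg-0123456789abcdef0'],
        Description='Cache for the Orders table',
    )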

A single DAX cluster supports up to 10 nodes, offering millions of requests per second. The only way to connect to your DAX cluster from outside of your VPC is through a VPN connection.

 

High Availability, Failover, Scaling

HA: DAX provides built-in Multi-AZ support, letting you choose the preferred AZs for the nodes in your DAX cluster. DAX uses asynchronous replication to provide consistency between the nodes. AWS recommends that you deploy at least three nodes in three separate Availability Zones.

Failover: A DAX cluster has a primary node and zero or more read-replica nodes. If the primary node fails, DAX automatically fails over and elects a new primary from the read replicas. DAX then provisions a new node in the same AZ as the failed primary to replace the newly promoted read replica. If the primary fails due to a temporary AZ disruption, the new replica is launched as soon as the AZ has recovered. If a single-node cluster fails, DAX launches a new node in the same Availability Zone.

Scaling: You can scale a cluster either by adding more nodes (scaling out) or by using larger nodes (scaling up). When creating a DAX cluster, all nodes in the cluster must be of the same instance type. If you want to change the instance type for your DAX cluster (for example, scale up from dax.r3.large to dax.r3.2xlarge), you must create a new DAX cluster with the desired instance type; DAX does not currently support online scale-up or scale-down operations.

Increasing Write Throughput: Within a DAX cluster, only the primary node handles write operations, so adding more nodes to the cluster increases read throughput but not write throughput. To increase write throughput, you need to either scale up to a larger instance size or provision multiple DAX clusters and shard your key space in the application layer.
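
For scaling reads out, a short boto3 sketch that adds read replicas to an existing (hypothetical) cluster:

    import boto3

    dax = boto3.client('dax', region_name='us-east-1')

    # Grow the cluster from 3 nodes to 5; the extra nodes serve reads, writes still go to the primary.
    dax.increase_replication_factor(
        ClusterName='my-dax-cluster',
        NewReplicationFactor=5,
    )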

 

Maintenance Window

Maintenance events are performed during a weekly maintenance window that you specify; if you do not specify one, a default window is assigned (which you can modify later). Required patching is automatically scheduled only for patches that are security and reliability related.
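
A hedged sketch of setting the window with the boto3 DAX API (hypothetical cluster name; the window is expressed in UTC as ddd:hh24:mi-ddd:hh24:mi):

    import boto3

    dax = boto3.client('dax', region_name='us-east-1')

    # Move the weekly maintenance window to early Sunday morning UTC.
    dax.update_cluster(
        ClusterName='my-dax-cluster',
        PreferredMaintenanceWindow='sun:03:00-sun:04:00',
    )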

For multi-node clusters, updates are performed serially, one node at a time. For a single-node cluster, AWS provisions a replica (at no charge to you), syncs the replica, and then performs a failover to make it the primary.

 

DAX Pricing

DAX is priced per node-hour consumed, from the time a node is launched until it is terminated. Pricing applies to every individual node in the DAX cluster.

Each partial node-hour consumed will be billed as a full hour.
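
For example (with a hypothetical rate), a three-node cluster that runs for 10 hours consumes 3 × 10 = 30 node-hours, so the charge is 30 times the per-node-hour rate for that node type in that Region.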

 

DAX - Usage Expectations and Limits

  1. Currently, DAX supports only DynamoDB tables in the same AWS Region as the DAX cluster.

  2. If DAX is either reading from or writing to a table and receives a throttling exception, DAX returns the exception to the DAX client; the DAX service does not attempt server-side retries.

  3. While DAX does not support cache pre-warming as a feature, the DAX cache can be pre-warmed for an application by running an external script/application that reads the desired data (see the sketch after this list).

  4. DAX supports only on-demand instances; Reserved Instances and the AWS Free Usage Tier are not currently available. Valid DAX node types are as follows: dax.r3.large (13 GiB), dax.r3.xlarge (26 GiB), dax.r3.2xlarge (54 GiB), dax.r3.4xlarge (108 GiB), and dax.r3.8xlarge (216 GiB).
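
A minimal pre-warming sketch along those lines, again assuming the amazon-dax-client package and hypothetical endpoint, table, and key names; reading the hot keys once through DAX populates the item cache so the application's subsequent reads become cache hits:

    from amazondax import AmazonDaxClient

    dax = AmazonDaxClient.resource(
        endpoint_url='dax://my-dax-cluster.abc123.dax-clusters.us-east-1.amazonaws.com')
    table = dax.Table('Orders')

    # Keys the application reads most often (hypothetical).
    hot_keys = ['1001', '1002', '1003']

    for order_id in hot_keys:
        # A cache miss here pulls the item from DynamoDB and stores it in the DAX item cache.
        table.get_item(Key={'OrderId': order_id})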
