App Engine supports two classes of the memcache service: shared and dedicated.
Shared versus dedicated memcache
Shared memcache is the free default for App Engine applications. It provides cache capacity on a best-effort basis and is subject to the overall demand of all applications served by App Engine.
Dedicated memcache provides a fixed cache capacity assigned exclusively to your application. It's billed by the GB-hour of cache size. Having control over cache size means your app can perform more predictably and with fewer accesses to more costly durable storage. (Note that dedicated memcache is currently only available to HRD apps in the US.)
Whether shared or dedicated, memcache is not durable storage. Keys may be evicted when the cache fills up, according to the cache's LRU policy. Changes in the cache configuration or datacenter maintenance events may also flush some or all of the cache.
Both memcache classes use the same API. Class selection and configuration are made through the Admin Console.
The following table summarizes the differences between the two classes of memcache service:
|Feature|Dedicated Memcache|Shared Memcache|
|---|---|---|
|Price|$0.12 per GB per hour|Free|
|Capacity|1 to 20 GB|No guaranteed capacity|
|Performance|Up to 10,000 operations per second per GB (items < 1 KB)|Not guaranteed|
Dedicated memcache billing is charged in 15-minute increments.
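As an illustration of how the hourly rate and 15-minute billing increments might combine, here is a sketch in Python. The rate and increment come from the text above; the assumption that partial increments round up is mine, not documented here.

```python
import math

RATE_PER_GB_HOUR = 0.12   # dedicated memcache price from the table above
INCREMENT_HOURS = 0.25    # billed in 15-minute increments

def estimated_cost(cache_gb, hours_used):
    """Estimate the charge for a dedicated cache, assuming partial
    15-minute increments are rounded up (an assumption)."""
    billed_hours = math.ceil(hours_used / INCREMENT_HOURS) * INCREMENT_HOURS
    return cache_gb * billed_hours * RATE_PER_GB_HOUR

# A 10 GB cache kept for 1 hour 40 minutes would be billed as 1.75 hours:
print(round(estimated_cost(10, 5 / 3), 2))  # 2.1
```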
If your app needs more than 20 GB of cache, contact us at email@example.com.
Dedicated memcache is rated in operations per second per GB, where an operation is defined as an individual cache item access. The operation rate varies by item size approximately according to the following table. Exceeding these ratings may result in increased API latency or errors.
|Item size (KB)|Rated ops/s per GB of cache|
|---|---|
An app configured for multiple GB of cache can in theory achieve an aggregate operation rate computed as the number of GB times the per-GB rate. For example, an app configured for 5GB of cache could reach 50,000 small-item memcache operations/sec. Achieving this level requires a good distribution of load across the memcache keyspace (see Best practices below).
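The aggregate-rate arithmetic above can be sketched as follows; the per-GB rating is the small-item figure from the table, and the result is a theoretical peak, not a guarantee.

```python
# Theoretical aggregate operation rate for dedicated memcache (items < 1 KB).
RATED_OPS_PER_SEC_PER_GB = 10_000  # small-item rating from the table above

def aggregate_ops_per_sec(cache_size_gb):
    """Peak small-item ops/sec = cache size in GB times the per-GB rating."""
    return cache_size_gb * RATED_OPS_PER_SEC_PER_GB

print(aggregate_ops_per_sec(5))  # 50000, matching the 5 GB example above
```

Reaching this peak still depends on load being well distributed across the keyspace, as noted in Best practices.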
Memcache compute units
A memcache compute unit (MCU) is an alternative way to measure cache traffic capacity, rather than using operations per second. Dedicated memcache is rated at 10,000 MCU per second per GB. Each cache operation has its own corresponding MCU cost. For a get that returns a value, the MCU depends on the size of the returned value; it is calculated as follows:
|Get returned value size (KB)|MCU|
|---|---|
The MCU for a set depends on the value size. It is 2 times the cost of a successful get-hit operation. Other operations are assigned MCU as follows:
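As a sketch of how MCU costs might be tallied: the per-size MCU values below are placeholders (the rating tables are not reproduced here), and only the relation "a set costs 2 times the matching successful get-hit" comes from the text above.

```python
# Hypothetical get-hit MCU table: returned-value size bucket (KB) -> MCU.
# These numbers are placeholders, not the actual App Engine ratings.
GET_HIT_MCU = {1: 1.0, 10: 2.0, 100: 5.0}

def get_mcu(value_kb):
    """MCU of a get-hit: smallest size bucket that fits the returned value."""
    for size in sorted(GET_HIT_MCU):
        if value_kb <= size:
            return GET_HIT_MCU[size]
    raise ValueError("value too large for this sketch")

def set_mcu(value_kb):
    """A set costs twice the corresponding successful get-hit (per the text)."""
    return 2 * get_mcu(value_kb)

print(set_mcu(8))  # 4.0 with the placeholder table: 2 x 2.0
```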
All billing-enabled applications can edit the memcache configuration via the App Engine Admin Console. When you select dedicated memcache, you'll also be prompted for a cache size.
The UI will alert you, with a notice in red, to any flush consequences of your configuration change. The changes are applied when you click Save Settings and take effect immediately.
The following flush properties apply when making configuration changes. It's prudent to schedule full-flush changes for a time when the application is not under heavy load.
- Changing the memcache class will flush all data in the cache.
- Decreasing the dedicated memcache size will flush all data.
- Increasing the dedicated memcache size will only flush an amount of data proportional to the size change. For example, increasing from 9 to 10GB will be a 10% flush.
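The proportional-flush rule for size increases can be expressed as a fraction of the new cache size, consistent with the 9 to 10 GB example above (the exact formula is my reading of that example, not stated explicitly):

```python
def flush_fraction_on_increase(old_gb, new_gb):
    """Fraction of cached data flushed when growing a dedicated cache,
    assuming the flush is proportional to the size change relative to
    the new size (consistent with the 9 -> 10 GB = 10% example)."""
    if new_gb <= old_gb:
        raise ValueError("only applies to size increases")
    return (new_gb - old_gb) / new_gb

print(flush_fraction_on_increase(9, 10))  # 0.1, i.e. a 10% flush
```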
The Admin Console's main dashboard charts include the following reports related to memcache usage:
- Memcache Operations/Second.
- Memcache Traffic (Bytes/Second).
- Memcache Total Cache Size (MB) - only displays data when there is no filtering by app module or version.
- Memcache Compute Units/Second.
There is also a memcache viewer page in the Admin Console data section. This page shows information about your app's memcache usage, including total cache size, number of items, cache hit rate, and age of the oldest item.
To help identify problematic "hot keys," the memcache viewer lets you see the top 20 keys by either MCU or operation rate over the past hour. The list is created by sampling API calls; only the most frequently accessed keys are tracked. Although the viewer displays 20 keys, more may have been tracked. The list gives each key's relative operation count as a percentage of all memcache traffic. If the app is a heavy user of memcache and some keys are particularly hot, the display may include warning indicators.
Following are some best practices for using memcache:
- Handle memcache API failures gracefully. Memcache operations can fail for various reasons. Applications should be designed to catch failed operations without exposing these errors to end users. This applies especially to Set operations.
- Use the batching capability of the API when possible, especially for small items. This will increase the performance of your app.
- Distribute load across your memcache keyspace. If a single memcache item, or a small set of items, represents a disproportionate amount of traffic, your app will be hindered from scaling. This applies to both operations/sec and bandwidth. The problem can often be alleviated by explicit sharding of your data. For example, a frequently updated counter can be split among several keys, reading them back and summing only when a total is needed. Likewise, a 500 KB piece of data that must be read on every HTTP request can be split across multiple keys and read back using a single batch API call. (Even better would be to cache the value in instance memory.) For dedicated memcache, the peak access rate on a single key should be 1-2 orders of magnitude less than the per-GB rating.
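The first practice, handling API failures gracefully, can be sketched as a small read-through wrapper. `cache` and `load_from_datastore` here are hypothetical stand-ins for the real memcache client and a durable-storage read; this is a pattern sketch, not the App Engine API.

```python
def cached_get(cache, key, load_from_datastore):
    """Read through the cache, treating any memcache failure as a miss
    so errors never reach the end user."""
    try:
        value = cache.get(key)
    except Exception:        # memcache API error: treat it as a cache miss
        value = None
    if value is None:        # miss (or failure): fall back to durable storage
        value = load_from_datastore(key)
        try:
            cache.set(key, value)
        except Exception:    # a failed set is non-fatal; ignore it
            pass
    return value

class _FailingCache:         # stub that always raises, for illustration only
    def get(self, key):
        raise RuntimeError("memcache unavailable")
    def set(self, key, value):
        raise RuntimeError("memcache unavailable")

print(cached_get(_FailingCache(), "k", lambda key: "from-datastore"))
```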
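The sharded-counter idea above, combined with a batch-style read, might look like the following. The dict-backed `cache` and the helper names are illustrative stand-ins, not the real memcache client; the shard count is an arbitrary choice.

```python
import random

NUM_SHARDS = 10  # arbitrary; more shards spread write traffic further

def increment(cache, name):
    """Spread writes across shards so no single key takes all the traffic."""
    shard = "%s-shard-%d" % (name, random.randrange(NUM_SHARDS))
    cache[shard] = cache.get(shard, 0) + 1

def total(cache, name):
    """Read all shard keys in one pass and sum only when a total is needed."""
    keys = ["%s-shard-%d" % (name, i) for i in range(NUM_SHARDS)]
    return sum(cache.get(k, 0) for k in keys)

cache = {}
for _ in range(1000):
    increment(cache, "hits")
print(total(cache, "hits"))  # 1000: the count survives regardless of shard
```

With the real API, the read side would be a single batch call over the shard keys rather than one call per key.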