Friday 3 February 2012

Micro Caching in ASP.NET

I was recently involved in rewriting the homepage for a major airline company. Performance was critical, as the site receives many hundreds of hits every second on the homepage alone. The page made heavy use of the database due to a high level of personalized content and targeted advertising, and it had to meet very strict performance criteria. It was a difficult challenge on many fronts, but the performance aspect was especially daunting: many hundreds of hits (page views) translate into several thousand requests, as each page view also pulls in associated images, JavaScript and CSS. Well, it’s not often you get to work on one of the top 10 busiest ecommerce websites in the UK, so enjoy my micro caching experience!
I knew that caching would be key to the success of the project; however, there were several interesting aspects to the caching which I thought were worth sharing. The implementation of the caching solution should be of interest to anyone addressing caching on similar high-traffic websites.

Database Caching

The concept of caching isn’t that hard, so it can’t be hard to implement, right? Well, yes and no. When you are caching the results of database queries on a high-volume website, new problems emerge, problems which just don’t exist at low volumes. Take the case where a database operation takes T milliseconds and, on average, you get a request every R milliseconds. If R is less than T, something interesting happens: when your cache is empty and a request arrives, you need to fetch the data from the database, but before that data has been returned a new request occurs, which also triggers a fetch for the same data because the cache is still empty. For example, if a query takes 200 ms and requests arrive every 10 ms, around twenty identical queries can be in flight before the first one completes. If your data takes a long time to fetch compared to the interval between requests, your database server may get saturated. This was certainly one of the edge cases I had to deal with, and in fact the empty cache (or cold start) scenario is a very important caching consideration.

Cache Hit Ratio

For caching to be useful, there must be far more read activity than write activity; if there isn’t, forget it! Don’t bother! This ratio of reads to writes determines your cache hit ratio, and it should be a lot higher than 1!

Caching Strategies

ASP.NET provides two main ways to cache out of the box: output caching, which caches the HTML output of whole pages or fragments of a page (i.e. user control caching), and the HTTP runtime cache object, in which you can cache arbitrary data. Due to the high level of personalization, output caching couldn't be used, and the HTTP runtime cache didn't perform nearly as well as the micro cache implementation I wrote, nor did it provide the necessary fetch locking, which I'll explain below.
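For reference, the two built-in options look roughly like this (Data and QueryDatabase are placeholders for the real types and data access):

// Output caching, per page or user control (Web Forms directive):
//    <%@ OutputCache Duration="120" VaryByParam="None" %>

// HTTP runtime cache (System.Web.Caching), storing a query result
// with a two-minute absolute expiry:
HttpRuntime.Cache.Insert(
   "data",                          // cache key
   QueryDatabase(),                 // value to cache
   null,                            // no cache dependency
   DateTime.UtcNow.AddMinutes(2),   // absolute expiration
   Cache.NoSlidingExpiration);      // no sliding expiration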

Fetch Locking

The main problem with the built-in cache strategies is that they don’t cope well with high-volume situations: you can end up with multiple fetches for the same data because requests keep arriving for the same page before its data has been fully retrieved from the database and cached. The trick to prevent database saturation, and to avoid wasted trips to the database, is to wait for the original fetch to finish rather than initiate a new fetch for the same data. This is what I call fetch locking: it ensures only one trip to the database occurs per cache key, and makes all other requests for that key wait for the original fetch to finish. It can be achieved relatively inexpensively, ends up being very light and fast in R < T situations, and works especially well with micro caching, acting as a transient buffer between the database and the web server, as the sketch below shows.
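To make the race concrete, here is roughly what the unlocked check-then-fetch looks like against the runtime cache; with R < T, every request arriving during the first fetch takes the same branch (Data and QueryDatabase are placeholders):

// Naive check-then-fetch: nothing stops several threads seeing a miss at once.
Data[] data = (Data[])HttpRuntime.Cache["data"];

if (data == null)                // many concurrent requests can all get here...
{
   data = QueryDatabase();       // ...and all run the same expensive query

   HttpRuntime.Cache["data"] = data;
}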

Micro Caching

Micro caching is a strategy for caching data for relatively short amounts of time (in human terms). Most businesses can tolerate a large subset of their data being cached for a few minutes, and when there are many hundreds of requests happening per second, a few minutes is an eternity: even a two-minute cache window means a single database fetch can serve tens of thousands of requests.

Micro Caching and Fetch Locking

Enter a micro cache solution with fetch locking. I wanted to design something really lightweight, which meant any locked sections of code had to run very efficiently to minimise the time locks are held. So my micro cache ensures that reads are cheap (locks are only taken during writes) and that writes are done as quickly as possible.

Cache Lookup

First off, for the cache lookup I chose to use a binary search tree. Microsoft’s SortedDictionary is an implementation of a red-black binary search tree, which has very fast and stable lookup times. This suited me well, as I wanted to lock the dictionary myself during reads and writes rather than use one of the ready-made concurrent collections, since none of them guarantee that GetOrAdd will result in only one call to the value factory, due to the way they are implemented.
I use a slim reader-writer lock (ReaderWriterLockSlim), which doesn’t block read operations unless a write is in progress, so it’s very efficient. In my case I knew I was going to get many more reads than writes to the dictionary.
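As a quick illustration of that caveat (my sketch, not part of the original solution): ConcurrentDictionary.GetOrAdd stores only one value per key, but the value factory itself can still be invoked on several threads at once, which is exactly what we want to avoid for an expensive database fetch.

var cache = new ConcurrentDictionary<string, Data[]>(); // System.Collections.Concurrent

// Only one result is ever stored against the key, but under contention the
// factory may run concurrently on several threads, so the same query could
// still hit the database more than once.
Data[] data = cache.GetOrAdd("data", key => QueryDatabase());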

Cache Writing

I chose to implement a GetOrAdd style of cache writing so that I could manage the two levels of locking. The first level locks the SortedDictionary while its internal state is being updated, keeping it thread-safe. The second level ensures that, for a given cache key, only one fetch (the first) actually occurs and all other requests for that cache item wait for it to return; a different lock is used for each cache key.

The Cache Solution

using System;
using System.Threading;
using System.Collections.Generic;

public sealed class MicroCache<TKey>
{
   // Cache entries keyed by TKey; all access is guarded by _lock.
   private readonly IDictionary<TKey, LazyLock> _dictionary;
   private readonly ReaderWriterLockSlim _lock;
   private readonly Timer _timer;

   public MicroCache(TimeSpan period)
      : this(period, Comparer<TKey>.Default)
   {
   }

   public MicroCache(TimeSpan period, IComparer<TKey> comparer)
   {
      _lock = new ReaderWriterLockSlim(LockRecursionPolicy.NoRecursion);
      _dictionary = new SortedDictionary<TKey, LazyLock>(comparer);
      // Flush the whole cache on a fixed period (the "micro" in micro caching).
      _timer = new Timer(state => Clear(), null, period, period);
   }

   public void Clear()
   {
      _lock.EnterWriteLock();

      try
      {
         _dictionary.Clear();
      }
      finally
      {
         _lock.ExitWriteLock();
      }
   }

   // Returns the cached value for the key; on a miss, the activator runs
   // exactly once and concurrent callers for the same key wait for it.
   public TValue GetOrAdd<TValue>(TKey key, Func<TValue> activator)
   {
      LazyLock lazy;
      bool success;

      _lock.EnterReadLock();

      try
      {
         success = _dictionary.TryGetValue(key, out lazy);
      }
      finally
      {
         _lock.ExitReadLock();
      }

      if (!success)
      {
         // Miss: take the write lock and double-check before adding, in case
         // another thread created the entry while we were waiting.
         _lock.EnterWriteLock();

         try
         {
            if (!_dictionary.TryGetValue(key, out lazy))
            {
               _dictionary[key] = lazy = new LazyLock();
            }
         }
         finally
         {
            _lock.ExitWriteLock();
         }
      }

      return lazy.Get(activator);
   }

   // Lightweight stand-in for Lazy<T>: cheap to construct, locks only while
   // the value is being populated.
   sealed class LazyLock
   {
      private volatile bool _got;
      private object _value;

      public TValue Get<TValue>(Func<TValue> activator)
      {
         // Double-checked locking: only the first caller runs the activator;
         // everyone else either sees _got already set or waits on the lock.
         if (!_got)
         {
            lock (this)
            {
               if (!_got)
               {
                  _value = activator();

                  _got = true;
               }
            }
         }

         return (TValue)_value;
      }
   }
}
I chose to implement a lighter version of the Lazy&lt;T&gt; class already provided by .NET. I wanted the constructor to be very efficient, as I didn't want unnecessary code running within the write lock, and the built-in version has logic in its constructor that I didn’t want or need. The LazyLock class implements the lazy lock (double-checked locking) pattern and ensures that a lock is only acquired while populating the value field, so reads are non-blocking. The micro cache effectively stores a lazy lock for each cache key, which is responsible for the fetching based on an activator (or value factory). By using the GetOrAdd approach, the cache check and data fetch are guaranteed to result in only one fetch per key, no matter how many cache lookups occur for the same key.
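For comparison only (this isn't how the production cache was built), the built-in Lazy&lt;T&gt; gives the same one-fetch guarantee when constructed with LazyThreadSafetyMode.ExecutionAndPublication, at the cost of a heavier constructor running inside the write lock:

// Sketch: Lazy<T> in place of LazyLock.
var lazy = new Lazy<Data[]>(
   () => QueryDatabase(),                          // value factory
   LazyThreadSafetyMode.ExecutionAndPublication);  // factory runs at most once

Data[] data = lazy.Value;   // other callers block until the first fetch returns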
The HTTP runtime cache is a lot slower than the SortedDictionary and would still have needed locking to avoid multiple fetches for an item with the same cache key.
I use a timer to clear all the items after a given time span, which is a clean and efficient way to flush the micro cache periodically.

Using the Micro Cache

// class to hold micro cache singleton
static class CacheManager
{
   public static readonly MicroCache<string> Cache = new MicroCache<string>(TimeSpan.FromMinutes(2));
}


// fetch data from the cache
Data[] data = CacheManager.Cache.GetOrAdd("data", () => QueryDatabase());
The GetOrAdd method takes a value factory as its second parameter, which is called only when the cache has no entry for the given key (the first parameter).
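Because the fetch lock is per key, you can key entries by whatever makes the data distinct. The names below (market, QueryAdverts, Advert) are hypothetical, purely to show the shape:

// Hypothetical: one cache entry (and therefore one fetch lock) per market.
Advert[] adverts = CacheManager.Cache.GetOrAdd(
   "adverts:" + market,            // composite cache key
   () => QueryAdverts(market));    // fetched once per key per two-minute window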

Conclusion

We did two months of performance testing in parallel with development of the solution, of which the micro caching was only a small part. The solution performed amazingly well. It was very fast for lookups once the cache was populated and, importantly, very fast at populating the cache. In fact, without the locks the database saturated and the website collapsed whenever the cache was empty, yet with fetch locking in place the database servers hardly went above 10% CPU. The locking itself added no noticeable CPU overhead when benchmarked against the version without it.
I’m also happy to say that we deployed the solution over 16 web servers and 4 database servers across two data centres (four web servers to a database), the solution had four times redundant capacity at peak load, and everyone was very pleased!
In essence, the micro cache will be useful to you if you need to reduce database load on a high-traffic site where there are many queries for the same data. As well as reducing database load, you will also see an increase in web server throughput.
