cache-strategy-from-http-to-db

Series: blog

found some more articles on caching:

reducing network load for HTTP by caching without ever shipping stale data

The Cache Headers Could Probably be More Aggressive article points out why, for many general use cases, the Cache-Control header with public, max-age=0, must-revalidate is a reasonable choice:

But if the asset is immutable anyway (think of a JavaScript library of a certain version), then max-age=31536000 (1 year) and immutable instead of max-age=0 and must-revalidate is more reasonable and saves HTTP requests which would return 304 anyway.
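A minimal sketch of the two choices, assuming a hypothetical helper that picks the header value based on whether the asset is immutable:

```python
# Hypothetical sketch: choose a Cache-Control value depending on whether
# the asset is immutable (e.g. a versioned JavaScript library).
def cache_control_header(immutable: bool) -> str:
    if immutable:
        # Versioned assets never change: cache for a year, never revalidate.
        return "public, max-age=31536000, immutable"
    # General case: always revalidate; the server answers 304 if unchanged.
    return "public, max-age=0, must-revalidate"

print(cache_control_header(True))   # public, max-age=31536000, immutable
print(cache_control_header(False))  # public, max-age=0, must-revalidate
```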

Regarding assets which change only irregularly, but where prolonged use of stale cached data is not acceptable: fall back to fingerprinting, which most frameworks support. (From the HTTP point of view: including a fingerprint in the path makes it a different resource from the client's point of view, so nothing is cached yet.)
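Fingerprinting can be sketched like this (hypothetical helper, not from any specific framework): a content hash goes into the file name, so a changed asset gets a brand-new URL no client has cached yet.

```python
import hashlib
from pathlib import PurePosixPath

# Hypothetical sketch of fingerprinting: embed a content hash in the path.
# Any change to the content changes the path, i.e. creates a new resource.
def fingerprinted_path(path: str, content: bytes) -> str:
    digest = hashlib.sha256(content).hexdigest()[:8]
    p = PurePosixPath(path)
    return str(p.with_name(f"{p.stem}.{digest}{p.suffix}"))

print(fingerprinted_path("static/app.js", b"console.log('v1')"))
# -> static/app.<hash>.js, where <hash> changes whenever the content does
```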

structuring caches for relational database tasks

For a relational database the Fine-grained caching strategies of dynamic queries article is a nice read.

Starting from the access-pattern requirements in the scenario: database mutations happen more frequently than queries, and they can happen to any data.

First the easier variant without immutable data is discussed, then in-place mutations without partitioning and how to invalidate only as much cache as necessary. Especially if you use hash( search query ) XOR nonce( how you want to separate the caches ) as the lookup key for your cache.
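The hash-XOR-nonce key can be sketched as follows (hypothetical names throughout): bumping a partition's nonce invalidates exactly that partition, because every key derived from the old nonce simply stops matching.

```python
import hashlib

nonces: dict[str, int] = {}    # partition name -> current nonce
cache: dict[int, object] = {}  # lookup key -> cached query result

def lookup_key(query: str, partition: str) -> int:
    # hash(search query) XOR nonce(partition) as the cache lookup key
    h = int.from_bytes(hashlib.sha256(query.encode()).digest()[:8], "big")
    return h ^ nonces.setdefault(partition, 0)

def invalidate(partition: str) -> None:
    # Bump the nonce: all old keys for this partition become unreachable.
    nonces[partition] = nonces.get(partition, 0) + 1

cache[lookup_key("SELECT * FROM users", "users")] = ["alice"]
assert lookup_key("SELECT * FROM users", "users") in cache
invalidate("users")
assert lookup_key("SELECT * FROM users", "users") not in cache
```

Orphaned entries under old nonces are never looked up again and can be evicted lazily, e.g. by an LRU policy.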

For data with a sequential property (like dates), a cache bucket strategy might be worth considering, e.g. time intervals.
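A sketch of bucketed keys, assuming weekly buckets and a hypothetical "orders" key prefix: a range query is assembled from buckets, and a mutation only invalidates the bucket its date falls into.

```python
from datetime import date, timedelta

# Hypothetical sketch: one cache entry per calendar week.
def week_bucket(d: date) -> str:
    monday = d - timedelta(days=d.weekday())  # start of the week
    return f"orders:{monday.isoformat()}"

print(week_bucket(date(2024, 5, 15)))  # orders:2024-05-13
```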

If low latency for known queries is needed, the cache can be asynchronously updated after a mutation/write. Thus the queries will experience fewer cache misses.
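A sketch of this write-behind warming, with a hypothetical run_query stand-in for the real database call: after a write, hot queries are re-executed in the background so the next reader hits a warm cache.

```python
from concurrent.futures import Future, ThreadPoolExecutor

cache: dict[str, str] = {}
pool = ThreadPoolExecutor(max_workers=1)

def run_query(sql: str) -> str:
    return f"result of {sql}"  # stand-in for the real database call

def write(mutation_sql: str, hot_queries: list[str]) -> Future:
    # ... apply the mutation to the database here ...
    # then refresh the cache asynchronously for the known hot queries
    def refresh() -> None:
        for q in hot_queries:
            cache[q] = run_query(q)
    return pool.submit(refresh)

write("UPDATE orders SET ...", ["SELECT count(*) FROM orders"]).result()
print(cache)  # the hot query is already cached before the next read
```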

For reasonably large data sets, it makes sense to evaluate hierarchical date-based partitioning.
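One way such a hierarchy could look as cache keys (hypothetical key scheme, not from the article): a query over a whole year reuses the year bucket, while a mutation on one day only invalidates the day, its month, and its year.

```python
from datetime import date

# Hypothetical sketch of hierarchical date-based cache keys.
def hierarchy(d: date) -> list[str]:
    return [
        f"orders:{d.year}",
        f"orders:{d.year}-{d.month:02d}",
        f"orders:{d.isoformat()}",
    ]

print(hierarchy(date(2024, 5, 15)))
# ['orders:2024', 'orders:2024-05', 'orders:2024-05-15']
```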