Performance is critical to the success of any microservice. Overall performance is the result of applying performance-friendly techniques at various points in the design, development, and delivery of microservices. In many cases, however, you can make vast performance improvements through basic techniques like implementing and optimizing caching at various points between the consumers of data (users and applications) and the servers that store it. Caches can return data much faster than the disk-based databases that originate it, because caches serve data from memory and therefore provide lower-latency access. Caches are also usually located much closer to the consumers of data from a network topology perspective.
A cache can be inserted anywhere in the infrastructure where there is congestion in data delivery. In this post, we'll focus on look-aside caching, which serves as a highly performant alternative to accessing data from a microservice's backing store. We will also clarify the meaning of various terms associated with caching patterns - such as read-aside, read-through, write-through, and write-behind caches - and when to choose each pattern.
Look-Aside Cache vs. Inline Cache
The two main caching patterns are the look-aside caching pattern and the inline caching pattern. How each pattern reads and writes data is described below.
Look-aside cache

How it reads:
- The application requests data from the cache.
- The cache delivers the data, if available.
- If the data is not available, the application reads it from the backing store and writes it to the cache for future requests (hence "read-aside").

How it writes:
- The application writes new data, or updates to existing data, to both the cache and the backing store - or - all writes go to the backing store and the cached copy is invalidated.
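The steps above can be sketched in a few lines of code. This is a minimal illustration, not a production implementation: the cache and backing store are plain Python dicts standing in for, say, Redis and a relational database, and all names are assumptions made for the example.

```python
# Look-aside cache sketch: the APPLICATION coordinates between the cache
# and the backing store. Both stores are dicts here for illustration.
cache = {}
backing_store = {"user:1": {"name": "Ada"}}

def read(key):
    # 1. Ask the cache first.
    if key in cache:
        return cache[key]
    # 2. On a miss, the application itself reads the backing store...
    value = backing_store.get(key)
    # 3. ...and writes the result aside into the cache for future requests.
    if value is not None:
        cache[key] = value
    return value

def write(key, value):
    # Writes go to the backing store; the cached copy is invalidated
    # rather than updated, so the next read repopulates it fresh.
    backing_store[key] = value
    cache.pop(key, None)
```

Note that the cache is passive here: it never contacts the backing store on its own, which is what distinguishes this pattern from inline caching.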
Inline cache

How it reads:
- The application requests data from the cache.
- The cache delivers the data, if available.
- If the data is not available, the cache itself reads it from the backing store (read-through), stores a copy, and returns it to the application.

How it writes:
- The application writes to the cache, and the cache writes to the backing store: either synchronously, before acknowledging the write (write-through), or asynchronously, after acknowledging it (write-behind).

The key difference between the two patterns is who talks to the backing store: in the look-aside pattern it is the application; in the inline pattern it is the cache itself.
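For contrast, here is a sketch of an inline cache in its read-through/write-through form. As before, this is an illustrative assumption, not a real cache product's API: a dict stands in for the backing store, and the class and method names are invented for the example.

```python
# Inline cache sketch: the application talks only to the cache, and the
# CACHE coordinates with the backing store (read-through / write-through).
class InlineCache:
    def __init__(self, backing_store):
        self._data = {}            # in-memory cached copies
        self._store = backing_store  # e.g. a database; a dict here

    def get(self, key):
        # On a miss, the cache itself fetches from the backing store
        # (read-through) and keeps a copy before answering.
        if key not in self._data:
            value = self._store.get(key)
            if value is not None:
                self._data[key] = value
        return self._data.get(key)

    def put(self, key, value):
        # Write-through: update the cache and the backing store
        # synchronously before returning. A write-behind variant would
        # instead queue the store write and flush it asynchronously.
        self._data[key] = value
        self._store[key] = value
```

Because the backing-store access is hidden behind the cache's interface, the application code never changes when the pattern switches from write-through to write-behind; only the cache's internals do.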