Spring Boot Caching: From Basics to Best Practices
Supercharge your application's performance with efficient caching strategies
In today's digital landscape, milliseconds matter. When users interact with your application, every delay in response time can lead to decreased engagement and lost opportunities. This is where caching comes to the rescue. By implementing a robust caching strategy using Spring Boot, you can reduce response times significantly and provide a better user experience. In this practical guide, we'll explore how you can master Spring Boot's caching capabilities to supercharge your application's performance.
Suppose you are an entrepreneur running BuyBright – an up-and-coming e-commerce store with ambitions of taking on Amazon. You’ve done everything right – you have a curated catalog of products, your social media presence is bussin’, and people are coming in droves to your website trying to buy what you sell.
But there’s a problem. Many of your users have reported that your website takes a long time to load. They have to wait several seconds before the product page even comes up, and that’s a big problem – most of them don’t wait that long and quickly move away. You’re losing sales!
Your preliminary investigation shows that a backend service – ProductService, which is responsible for assembling data for display on your product pages – is responding very slowly, owing to increased load and increased latency between your data store and application layer. Your store is getting popular, but your infrastructure is not keeping up!
This is a good problem to have, but also a pesky one to solve. How do you approach it?
You can scale up your DB server. You can add more pods to your ProductService. You can invest in a faster interconnect between your application and DB server. Or, you can cache your query results for faster reads.
In this post, we will learn how introducing a cache can be a simple but effective way of scaling up your application’s performance by an order of magnitude, as well as talk about some best practices to follow and pitfalls to be aware of.
What is caching?
Caching is the act of storing a subset of data in a faster storage layer (a cache) so that the slower data store (a database) does not need to be accessed as frequently. Whenever a request for a piece of data is made, the application first checks whether it is available in the cache (a cache hit) and quickly returns it; otherwise it fetches the data from the slower data source (a cache miss).
Caches are usually implemented as key-value stores optimized for O(1) (or close) random access, and store data in main memory for even smaller access latencies1. One caveat of such an arrangement is that caches are not durable, that is, they tend to lose data whenever the application is restarted (or the server power-cycles).
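This hit-or-miss flow (check the fast layer, fall back to the slow store, remember the result) can be sketched in plain Java. The snippet below is a toy illustration, with a ConcurrentHashMap standing in for the cache and a hard-coded method standing in for the slow database:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CacheAsideDemo {

    // the fast storage layer: an in-memory key-value store
    private static final Map<Long, String> cache = new ConcurrentHashMap<>();

    // stand-in for a slow database query
    private static String loadFromDatabase(long id) {
        return "product-" + id;
    }

    public static String getProduct(long id) {
        String hit = cache.get(id);
        if (hit != null) {
            return hit;                      // cache hit: skip the database entirely
        }
        String fresh = loadFromDatabase(id); // cache miss: go to the slow store
        cache.put(id, fresh);                // remember it for next time
        return fresh;
    }

    public static void main(String[] args) {
        getProduct(42); // first call: miss, loads from the "database"
        getProduct(42); // second call: hit, served straight from the map
    }
}
```

Note that because the map lives in main memory, its contents vanish on restart, which is exactly the durability caveat described above.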
Think of caching like a barista’s prep station during rush hour. Instead of grinding fresh beans for each coffee order (slow), they keep some pre-ground coffee ready (fast) for quick access. The pre-ground coffee is your cache—it might not be as fresh as grinding on demand, but it’s much faster to access.
Caching in Spring Boot
Fortunately, Spring Boot provides a very easy way to introduce caching in your applications, called cache abstraction. It takes away most of the complexities involved in setting up caching, and provides a declarative syntax to mark methods to be cached. A couple of POM dependencies, some carefully placed annotations, and you are in business!
As caching in Spring Boot applies to methods, both CPU-bound and IO-bound processes can be cached with it. Moreover, the entire process of caching remains transparent to the invoker of the method—it doesn’t get to know whether the return value is coming from a cache, or is the result of an actual invocation.
To enable caching in a Spring Boot application, you need to include the spring-boot-starter-cache dependency in your POM:
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-cache</artifactId>
</dependency>
You also need to add the @EnableCaching annotation to your @SpringBootApplication class, or to any of your @Configuration classes:
@EnableCaching
@SpringBootApplication
public class YourNewSpringBootApp {
public static void main(String[] args) {
SpringApplication.run(YourNewSpringBootApp.class);
}
}
With this configuration in place, you can start caching data in your Spring Boot application. To cache the return values of a method, mark it with the @Cacheable annotation. For example, to make BuyBright’s ProductService more performant, we can start caching its return values with a single annotation:
@Cacheable("products")
public Product getProductById(long id) {
// business logic to fetch the Product object
return product;
}
In the above example, whenever getProductById is called, the @Cacheable annotation will first check whether a Product object with the requested id is present in the cache, and will return it directly if found; the method execution is skipped entirely.
If the Product object is not found in the cache, the method is executed and its return value is cached before being returned to the caller.
This is the power of Spring Boot’s cache abstraction. We were able to enable working caching behavior in our application by declaring a simple set of three annotations. Of course, a lot is happening in the background, which can be customized in many different ways—and that will form the basis for the rest of this post.
But first, let us understand what is happening behind the scenes.
Caching behind the scenes
The minimal caching implementation we did in the previous section abstracted a lot of implementation details away. Details like:
- Which caching strategy is applied?
- What is the backing store for the cache?
- What is the lifecycle of a cached object?
Much of the magic here is done by the @EnableCaching annotation. When this annotation is present in your Spring Boot app, it automatically sets up a CacheManager instance based on the specific cache-backend library found on the classpath. If no such library is found, it creates an instance of ConcurrentMapCacheManager, which sets up an in-memory cache backed by a ConcurrentHashMap.
Err… Cache Backend?
Spring Boot’s approach to caching involves a crucial distinction: while it provides a cache abstraction layer, it doesn’t include an actual cache implementation. Think of it like a universal remote control - Spring Boot gives you the interface to control different types of caches, but you’ll need to choose and connect a specific cache system (like Redis, EhCache, or Caffeine) to actually store your data. This separation allows you to easily switch between different caching solutions without changing your application code.
Spring Boot supports a number of different cache backends, including Redis, Caffeine, Hazelcast, Infinispan, Couchbase, and EhCache (via the JCache/JSR-107 bridge), among others. Each of these cache backends supports all functionality provided by Spring Boot’s cache abstraction, but they are subtly different enough to warrant careful consideration of your caching use cases before selecting one.
Spring Boot automatically configures the appropriate cache backend when it finds a dependency in the classpath. No explicit configuration is required if only one cache backend is present in the classpath2.
Caching objects
The @Cacheable annotation transforms a regular method into one with caching capabilities. At its simplest, you only need to specify the cache name through the value parameter:
@Cacheable(cacheNames = "products")
The cacheNames = prefix can be omitted if it is the only parameter being supplied:
@Cacheable("products")
When you call a method decorated with this annotation, Spring Boot follows a straightforward process:
First, it looks in the products cache for any previously computed result matching your current method arguments. If it finds a match, Spring Boot bypasses the method execution entirely and returns the cached result instead - saving valuable computation time.
If no matching result exists in the cache, Spring Boot executes the method normally, stores its return value in the cache, and then returns it to the caller. This cached result will be available for future calls with the same arguments.
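Conceptually, the proxy Spring wraps around a @Cacheable method behaves like the hand-written equivalent below. This is a simplification for illustration only; the real abstraction also handles key generation, null values, and optional synchronization:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ManualCacheable {

    // stands in for the named "products" cache
    private final Map<Long, Object> productsCache = new ConcurrentHashMap<>();

    // roughly what @Cacheable("products") adds around getProductById
    public Object getProductByIdCached(long id) {
        Object hit = productsCache.get(id);
        if (hit != null) {
            return hit;                     // previously computed: bypass the method body
        }
        Object result = getProductById(id); // cache miss: run the real method
        productsCache.put(id, result);      // store for future calls with the same argument
        return result;
    }

    private Object getProductById(long id) {
        return "Product#" + id; // stand-in for the real business logic
    }
}
```

The caller only ever sees getProductByIdCached, which is why caching remains transparent to the invoker.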
Cache Keys
At the heart of Spring Boot’s caching system lies a key-value store, where cache keys play a crucial role in retrieving and storing data. By default, Spring Boot automatically generates these keys based on a method’s parameters, but you have several options to customize this behavior.
Let’s explore how Spring Boot handles different scenarios:
For methods without parameters, Spring Boot generates an empty key:
@Cacheable("products")
public List<Product> findAllProducts() {...}
When a method has a single parameter, that parameter becomes the key:
@Cacheable("products")
public Product getProductById(Long id) {...}
For methods with multiple parameters, Spring Boot creates a composite key combining all parameters:
@Cacheable("products")
public Product getProductByNameAndCategory(
String name,
String category) {...}
You can take control of key generation using the key parameter in the @Cacheable
annotation. This allows you to specify exactly which parameters should be part of your cache key:
@Cacheable(cacheNames="products", key="#name")
public Product getProductByNameAndCategory(
String name,
String category) {...}
The key parameter accepts any valid Spring Expression Language (SpEL) expression, giving you powerful flexibility in key generation. For example, you could use string manipulation in your key:
@Cacheable(cacheNames="products", key="#name.substring(1,5)")
public Product getProductByNameAndCategory(
String name,
String category) {...}
Customizing Key Generation Further
While Spring Expression Language (SpEL) handles many caching scenarios elegantly, sometimes you need more sophisticated key generation logic. Spring Boot accommodates this through custom KeyGenerator
implementations, giving you complete control over how cache keys are constructed.
Let’s build a custom key generator that combines timestamps, method information, and parameters in a structured way:
public class MyCustomKeyGenerator implements KeyGenerator {
private static final DateTimeFormatter FORMATTER =
    DateTimeFormatter.ofPattern("yyyyMMdd-HHmmss"); // yyyy (calendar year), not YYYY (week-based year)
@Override
public Object generate(Object target, Method method, Object... params) {
var timestamp = LocalDateTime.now().format(FORMATTER);
var methodName = method.getName();
var paramsString = Arrays.stream(params)
.map(Object::toString)
.collect(Collectors.joining(","));
return Arrays.asList(timestamp, methodName, paramsString)
.stream()
.filter(str -> !str.isEmpty())
.collect(Collectors.joining("_"));
}
}
This KeyGenerator
creates cache keys with three components:
- A timestamp in yyyyMMdd-HHmmss format (e.g., “20240205-143022”)
- The name of the cached method
- A comma-separated list of all method parameters
Note that embedding a wall-clock timestamp means the generated key changes every second, so repeated calls rarely hit the cache; a generator like this only suits deduplicating bursts of identical calls within the same second, not long-lived caching.
To integrate this custom KeyGenerator
into your application, register it as a Spring bean3:
@Configuration
@EnableCaching
public class CacheConfig {
@Bean
public KeyGenerator myCustomKeyGenerator() {
return new MyCustomKeyGenerator();
}
}
Now you can use your custom generator in any cached method:
@Cacheable(cacheNames = "products", keyGenerator = "myCustomKeyGenerator")
public Product getProductDetails(String productId, String category) {
// Method implementation
}
You can only use either the key or the keyGenerator parameter in the @Cacheable annotation at a time; using both together results in an error.
More Cache Annotations
While @Cacheable
is commonly used for caching in Spring Boot, the caching abstraction provides several other powerful annotations. Let’s explore these additional options:
@CachePut
The @CachePut
annotation updates cache entries while ensuring the annotated method always executes. Unlike @Cacheable
, which may skip method execution for cached values, @CachePut
guarantees method execution and cache updates. This makes it ideal for data modification operations.
Here are two common use cases:
@CachePut(cacheNames = "products", key = "#productDto.id")
public Product insert(ProductDto productDto) {
// Method implementation
}
@CachePut(cacheNames = "products", key = "#id")
public Product update(Long id, ProductDto productDto) {
// Method implementation
}
To maintain cache consistency, always use @CachePut
on methods that modify data. This prevents serving stale data to users by keeping the cache synchronized with your data store4.
@CacheEvict
The @CacheEvict
annotation removes entries from the cache. Like @CachePut
, it always executes the underlying method while also handling cache removal. This makes it perfect for data deletion operations.
For single entry removal:
@CacheEvict(cacheNames = "products", key = "#id")
public void delete(Long id) {
// Method implementation
}
For clearing the entire cache:
@CacheEvict(cacheNames = "products", allEntries = true)
public void emptyCache() {
// Method implementation
}
Always use @CacheEvict
on delete operations to maintain cache consistency and prevent serving stale data to users.
@Caching
@Caching is a shorthand annotation for applying multiple caching behaviours to a single method. It is especially useful when you need complex caching behaviour that involves multiple caches or different caching operations.
For example, you might have a use case where, whenever a product is updated:
- It is updated in the products cache
- It is updated in the productsByCategory cache, where it is stored with a different key
- The productCount cache is evicted
To implement this, we can use the @Caching
annotation:
@Caching(
put = {
@CachePut(cacheNames = "products", key = "#product.id"),
@CachePut(cacheNames = "productsByCategory", key = "#product.categoryId")
},
evict = {
@CacheEvict(cacheNames = "productCount", allEntries = true)
}
)
public Product updateProduct(Product product) {
// Method implementation
}
You can also use multiple @Cacheable
annotations:
@Caching(cacheable = {
@Cacheable(cacheNames = "users", key = "#username"),
@Cacheable(cacheNames = "usersByEmail", key = "#email")
})
public User findUser(String username, String email) {
// Method implementation
return user;
}
Some other use cases of the @Caching annotation include:
- Keeping two (or more) caches in sync
- Building cache hierarchies where data is stored in different caches with different levels of granularity (like we saw in the example with the products and productsByCategory caches)
- Handling scenarios where a single operation affects multiple cache values
Conditional Caching
While caching can significantly improve application performance, it’s not always desirable to cache every method result. Spring Boot provides a mechanism for conditional caching through the condition
and unless
parameters.
Why use Conditional Caching?
Caching is always done on a subset of data, as cache memory is limited and is not designed to fit your entire data store in it. While it is a common practice to set up cache expiration policies in your configuration to periodically remove items from the cache, not adding unnecessary items to the cache is a powerful tool to keep cached data to a minimum.
There are many examples of data that benefits from not being cached:
- Rarely accessed data that would waste cache memory
- Frequently changing data where cache hits would be minimal
- Low-value data where the caching overhead outweighs the benefits
An effective cache stores the minimum amount of data that serves the maximum number of reads. A cached item accessed only twice reduces the database load for that item by just 50%, whereas the same item accessed 10,000 times reduces it by 99.99%.
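The arithmetic behind that claim is simple: a cached item costs exactly one database load (its first miss), and every later access is a hit, so the load reduction for that item is (accesses - 1) / accesses. A quick sketch:

```java
public class HitRatioMath {

    // load reduction for one item: 1 miss followed by (totalAccesses - 1) hits
    public static double loadReduction(long totalAccesses) {
        long misses = 1; // only the first access reaches the database
        return (double) (totalAccesses - misses) / totalAccesses;
    }

    public static void main(String[] args) {
        System.out.println(loadReduction(2));      // 50% reduction
        System.out.println(loadReduction(10_000)); // 99.99% reduction
    }
}
```

This is why conditional caching aims to keep only items with many expected repeat reads in the cache.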
Spring Boot’s conditional caching mechanism
Spring Boot provides two parameters, condition and unless, to enable conditional caching.
The condition
parameter accepts a SpEL expression that evaluates to a boolean. This evaluation occurs before method execution:
@Cacheable(
    cacheNames = "products",
    key = "#product.id",
    condition = "#product.category != 'FMCG' and #product.price > 1000"
)
public Product getProduct(Product product) {
    // Method implementation
}
When condition evaluates to:
- true: normal caching behaviour applies
- false: the method executes without any caching
On the other hand, the unless
parameter provides post-execution cache control through a SpEL expression. It’s evaluated after method execution and can override the caching decision:
@Cacheable(
cacheNames = "products",
key = "#id",
unless = "#result.stockLevel < 10"
)
public Product getProduct(Long id) {
// Method implementation
}
Control Flow
The condition
and unless
parameters work together in a specific sequence to determine whether a value should be cached. Here’s a detailed breakdown:
- Before method execution:
  - If condition is false: the method executes without caching
  - If condition is true: continue to cache evaluation
- Cache evaluation:
  - If the key exists: return the cached value
  - If the key doesn’t exist: execute the method
- After method execution (if condition was true):
  - If unless is true: don’t cache the result
  - If unless is false: cache the result
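This decision sequence can be condensed into a small toy function. Nothing below is Spring API; the boolean and the predicate stand in for the evaluated condition and unless SpEL expressions:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Predicate;
import java.util.function.Supplier;

public class ConditionUnlessFlow {

    private final Map<Object, Object> cache = new HashMap<>();

    public Object get(Object key, boolean condition,
                      Supplier<Object> method, Predicate<Object> unless) {
        if (!condition) {
            return method.get();      // condition false: no caching at all
        }
        if (cache.containsKey(key)) {
            return cache.get(key);    // condition true, key present: return cached value
        }
        Object result = method.get(); // condition true, key absent: run the method
        if (!unless.test(result)) {
            cache.put(key, result);   // unless false: store the result
        }
        return result;                // unless true: result is returned but not stored
    }
}
```

Note that unless only ever runs after a cache miss; a cached value is returned without re-evaluating it.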
Working with different cache backends
Spring Boot’s caching functionality is built on two distinct components: the Cache Abstraction and the Cache Implementation. While Spring Boot provides the abstraction layer out of the box, you’ll need to choose and include your preferred cache implementation. But don’t worry – this process is remarkably simple and, in many cases, optional.
There are a number of reasons why Spring Boot demarcates between the cache abstraction layer and its implementation:
- Easily switching between caching providers: As all caching providers are expected to implement a common interface, they can be easily switched out without changing any of the application code.
- Easy testability of your application: Test code can use a simple in-memory cache, while production deployments can leverage the power of distributed caching, again without any change to the business logic.
- Future proofing: Newer cache implementations can be easily plugged into your existing application.
If you don’t specify a cache implementation, Spring Boot gracefully falls back to using ConcurrentMapCache. Even better, when you do add a specific cache implementation to your project, Spring Boot automatically detects and configures it. For instance, if you want to use Caffeine (a high-performance caching library) as your cache backend, you simply need to add its dependency to your POM file.
<dependency>
<groupId>com.github.ben-manes.caffeine</groupId>
<artifactId>caffeine</artifactId>
<version>3.2.0</version>
</dependency>
You can also configure your cache backend by providing a configuration class:
@Configuration
public class CacheConfiguration {
@Bean
public CacheManager cacheManager() {
CaffeineCacheManager cacheManager = new CaffeineCacheManager("products");
cacheManager.setCaffeine(caffeineCacheBuilder());
return cacheManager;
}
Caffeine<Object, Object> caffeineCacheBuilder() {
return Caffeine.newBuilder()
.maximumSize(10_000)
.initialCapacity(100)
.expireAfterWrite(Duration.ofMinutes(10));
}
}
Note that these configuration options are relevant to Caffeine. Other cache backends have different configuration options, and can be found in their specific documentation.
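Many backends can also be tuned without a configuration class, directly in application.properties. For Caffeine, Spring Boot accepts a CaffeineSpec string; the property names below are standard Spring Boot, while the values are illustrative:

```properties
# pick the backend explicitly (optional when only one is on the classpath)
spring.cache.type=caffeine
# caches to create at startup
spring.cache.cache-names=products,categories
# Caffeine spec applied to the caches listed above
spring.cache.caffeine.spec=maximumSize=10000,expireAfterWrite=10m
```

Defining a CacheManager bean of your own, as above, takes precedence over these properties.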
Multiple Cache Providers
In modern applications, different parts of your system often have distinct caching requirements. Spring Boot’s flexible caching architecture allows you to address these varying needs by implementing multiple cache providers within a single application. This approach enables you to leverage the strengths of different caching solutions where they make the most sense.
For instance, you might want to use Redis, a distributed cache, for managing user sessions across multiple application instances. At the same time, you could employ a local, in-memory cache like Caffeine for storing relatively static data such as product categories. Spring Boot makes this dual-cache setup straightforward, allowing you to configure and manage multiple cache providers with minimal complexity.
Let’s explore how to implement this multi-cache architecture in a Spring Boot application:
First, you’ll need to include both cache provider dependencies in your pom.xml:
<dependency>
<groupId>com.github.ben-manes.caffeine</groupId>
<artifactId>caffeine</artifactId>
<version>3.2.0</version>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-redis</artifactId>
<version>3.4.2</version>
</dependency>
Next, create a configuration class that defines qualified beans for each cache manager. This separation allows you to configure each cache provider according to its specific requirements and use cases:
@Configuration
public class CacheConfig {
@Bean("caffeineCacheManager")
public CacheManager caffeineCacheManager() {
CaffeineCacheManager cacheManager = new CaffeineCacheManager();
cacheManager.setCacheNames(Arrays.asList("products"));
return cacheManager;
}
@Bean("redisCacheManager")
public CacheManager redisCacheManager(RedisConnectionFactory redisConnectionFactory) {
RedisCacheManager cacheManager = RedisCacheManager.builder(redisConnectionFactory)
.cacheDefaults(RedisCacheConfiguration.defaultCacheConfig()
.entryTtl(Duration.ofMinutes(10)))
.withCacheConfiguration("users",
    RedisCacheConfiguration.defaultCacheConfig()
        .entryTtl(Duration.ofMinutes(5)))
.build();
return cacheManager;
}
}
Finally, in your service classes, you can specify which cache manager to use for each operation by using the cacheManager
parameter in the caching annotations:
@Service
public class OmniService {
@Cacheable(value = "users", cacheManager = "redisCacheManager")
public User getUser(String id) {
return userRepository.findById(id);
}
@Cacheable(value = "products", cacheManager = "caffeineCacheManager")
public Product getProduct(String id) {
return productRepository.findById(id);
}
}
This multi-cache approach provides several advantages. It allows you to optimize caching strategies for different types of data, balance between performance and consistency requirements, and scale different parts of your caching infrastructure independently. However, it’s important to carefully consider your caching strategy to avoid unnecessary complexity.
Caching Best Practices
While implementing caching in your Spring Boot application seems like a straightforward piece of work (after all, it’s just a few annotations, right?), building an effective caching strategy requires careful planning and consideration. A well-architected cache can significantly improve your application’s performance, while a poorly implemented one can lead to stale data, memory issues, or even degraded performance.
Let’s dive into key areas you should consider when implementing caching in your application.
Key Design
Spring Boot’s default behavior of using all method parameters to construct cache keys, while convenient, isn’t always optimal. Understanding how to customize cache keys can significantly improve your cache’s effectiveness and memory usage.
Consider this seemingly straightforward method for fetching user preferences:
@Cacheable("userPreferences")
public UserPreferences getUserPreferences(
User user,
Locale locale
) {
// Method implementation
}
In this example, Spring Boot creates a cache key combining both the User object and Locale. However, this approach has several drawbacks. The locale parameter doesn’t influence the core data being retrieved, and including the entire User object in the key is unnecessarily verbose when only the user’s ID is needed for uniqueness.
We can improve this by explicitly specifying the cache key to comprise only the user’s ID:
@Cacheable(cacheNames = "userPreferences", key = "#user.id")
public UserPreferences getUserPreferences(
User user,
Locale locale
) {
// Method implementation
}
This optimization brings several benefits. First, it eliminates redundant cache entries that would otherwise be created for the same user accessing data with different locales. Second, it reduces the key size by using only the essential identifier rather than the entire User object5.
Here’s another example showing key optimization for a more complex scenario:
@Cacheable(
cacheNames = "orderHistory",
key = "#customer.id + '_' + #startDate.getYear() + '_' + #startDate.getMonthValue()"
)
public List<Order> getOrderHistory(
Customer customer,
LocalDate startDate,
boolean includeDetails, // Display preference - not needed in key
String requestId, // Tracking parameter - not needed in key
Locale locale // Formatting preference - not needed in key
) {
return orderRepository.findOrders(customer, startDate);
}
Thoughtful key design leads to more efficient cache utilization. By including only essential parameters in cache keys, you can:
- Reduce memory usage by storing fewer unique cache entries
- Improve cache hit rates by avoiding unnecessary key variations
- Simplify cache maintenance and monitoring
- Enhance overall application performance
When implementing caching in your Spring Boot application, take time to analyze each cached method’s parameters and determine which ones truly influence the cached data. Create cache keys that include only the essential components needed to uniquely identify the cached content. This careful attention to key design will result in a more efficient and performant caching system. Remember to regularly review your key design decisions as your application evolves to ensure they continue to meet your performance requirements.
Memory Management
One of the major tasks in managing a cache installation is managing the memory associated with it. Most cache implementations store all (or part) of their data in RAM, which is usually at a premium. There’s also a need to cap the amount of memory a cache can use, so that it does not end up consuming all available memory; a smaller cache can even perform better, since lookups and evictions stay cheap. This is why it is necessary to trim the fat when it comes to cache memory!
Spring Boot, as well as its cache implementations, provides a number of mechanisms for memory management, described below.
Cache Size Limits
One of the most critical aspects of cache configuration is setting appropriate size limits. Most cache implementations provide mechanisms to control the maximum number of entries they can hold, which directly impacts memory usage and cache effectiveness. A well-configured cache size helps prevent memory issues while maintaining optimal performance for your application.
Different cache implementations offer various ways to set size limits. Here’s how you can configure it with Caffeine, a popular caching library:
@Configuration
public class CacheConfig {
@Bean("cacheManager")
public CacheManager cacheManager() {
CaffeineCacheManager cacheManager = new CaffeineCacheManager("products");
cacheManager.setCaffeine(Caffeine.newBuilder()
.maximumSize(10_000)
.recordStats()); // Enable statistics for monitoring
return cacheManager;
}
}
For applications with diverse caching needs, you might want to configure different caches with varying size limits. This approach allows you to fine-tune each cache according to its specific requirements:
@Bean
public CacheManager cacheManager() {
    CaffeineCacheManager cacheManager = new CaffeineCacheManager();
    // CaffeineCacheManager applies a single Caffeine builder to all caches,
    // so per-cache settings are registered as individual pre-built caches
    cacheManager.registerCustomCache("products",
        Caffeine.newBuilder().maximumSize(10_000).build());
    cacheManager.registerCustomCache("categories",
        Caffeine.newBuilder().maximumSize(1_000).build());
    cacheManager.registerCustomCache("users",
        Caffeine.newBuilder().maximumSize(5_000).build());
    return cacheManager;
}
Determining the optimal cache size requires a methodical approach combining initial estimation with ongoing monitoring. A good starting point is to allocate 5-10% of your total dataset size to the cache, but this should be adjusted based on your available memory resources and anticipated peak loads.
Regular monitoring of cache performance metrics is essential for optimizing cache size. Implement monitoring to track key metrics such as hit ratios, eviction rates, and memory usage patterns:
@Scheduled(fixedRate = 1800000) // Every 30 minutes
public void monitorCacheMetrics() {
    // getCache() returns Spring's wrapper, so narrow to the Caffeine-backed cache
    // to reach the native statistics API
    CaffeineCache cache = (CaffeineCache) cacheManager.getCache("products");
    CacheStats stats = cache.getNativeCache().stats();
    log.info("Cache Statistics:");
    log.info("Hit ratio: {}", stats.hitRate());
    log.info("Eviction count: {}", stats.evictionCount());
    log.info("Average load penalty: {}", stats.averageLoadPenalty());
}
The size of your cache has significant implications for application performance. A cache that’s too small will suffer from high eviction rates and low hit ratios, ultimately increasing the load on your backend systems and raising latency for frequently accessed data. Conversely, an oversized cache can consume excessive memory, lead to longer garbage collection pauses, and potentially serve stale data while reducing system resources available for other operations.
Eviction Policies
Cache eviction policies are crucial mechanisms that help maintain the efficiency and reliability of your caching system. These policies determine how and when data should be removed from the cache, ensuring optimal memory usage and data freshness. Understanding and implementing appropriate eviction policies is essential for building a robust caching strategy.
Types of Eviction Policies
- Size-Based Eviction (activates when the cache reaches its maximum size):
  - LRU (Least Recently Used): removes items that haven’t been accessed for the longest time
  - LFU (Least Frequently Used): removes items that are accessed least often
  - Window TinyLFU: a modern algorithm that combines recency and frequency, used by Caffeine
- Time-Based Eviction (operates independently of cache size):
  - TTL (Time To Live): removes items after a specified duration from creation
  - TTI (Time To Idle): removes items after a specified duration of no access
Here’s an example of configuring Caffeine with different eviction policies:
@Configuration
public class CacheConfig {
@Bean
public CacheManager cacheManager() {
CaffeineCacheManager cacheManager = new CaffeineCacheManager();
cacheManager.setCaffeine(Caffeine.newBuilder()
.maximumSize(1000) // Size-based eviction
.expireAfterWrite(1, TimeUnit.HOURS) // TTL
.expireAfterAccess(30, TimeUnit.MINUTES)); // TTI
return cacheManager;
}
}
Choosing the right policy
The choice of eviction policy depends on your specific use case:
1. Use TTL/TTI when:
   - Data has a clear expiration time
   - Cache items become stale after a known period
   - You’re caching scheduled job results
   - Regulatory requirements mandate data freshness
2. Use LRU/LFU when:
   - Memory is the primary constraint
   - Data relevance depends on usage patterns
   - You need to maintain a fixed cache size
   - Data doesn’t have a natural expiration time
For example, with Redis the two approaches are combined across layers: Spring manages per-entry TTL, while size-based eviction is configured on the Redis server itself (for example, maxmemory 512mb and maxmemory-policy allkeys-lru in redis.conf):
@Bean
public RedisCacheManager cacheManager(RedisConnectionFactory redisConnectionFactory) {
    return RedisCacheManager.builder(redisConnectionFactory)
        .cacheDefaults(RedisCacheConfiguration.defaultCacheConfig()
            .entryTtl(Duration.ofHours(1))) // TTL managed by Spring
        .build();
}
Effective cache eviction is a balancing act between memory usage, data freshness, and application performance. While modern cache implementations like Caffeine provide sophisticated default policies, understanding the trade-offs between different eviction strategies is crucial. The right policy choice depends on your specific requirements around data freshness, memory constraints, and access patterns. Regular monitoring of cache metrics and adjustment of eviction policies ensures your caching solution remains efficient and reliable over time.
Remember that no single eviction policy fits all scenarios, and it’s common to use different policies for different types of cached data within the same application. The key is to align your eviction strategy with your data’s natural lifecycle and your application’s resource constraints.
Common Caching Pitfalls
While caching can significantly improve application performance, it’s not a silver bullet. A poorly implemented caching strategy can introduce subtle issues that degrade performance rather than enhance it. Let’s explore the most common pitfalls and learn how to avoid them.
Cache Pollution
One of the most insidious issues in caching is cache pollution, where your cache becomes cluttered with rarely-used data. This typically occurs when caching is implemented too liberally or when cache keys are poorly designed. Consider this example:
// Problematic: Caching with user-specific parameters
@Cacheable(value = "products", key = "#userId + '_' + #productId")
public Product getProduct(String userId, String productId) {
    return productRepository.findById(productId);
}

// Better: Cache only based on product ID, since product data is user-independent
@Cacheable(value = "products", key = "#productId")
public Product getProduct(String userId, String productId) {
    return productRepository.findById(productId);
}
In the problematic version, we’re creating separate cache entries for each user-product combination, even though the product data is user-independent. This leads to unnecessary cache entries and reduced cache effectiveness. The improved version caches products based solely on their IDs, resulting in better cache utilization.
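The effect on the hit rate is easy to demonstrate with a small, self-contained simulation. This is plain Java, not Spring; PollutionDemo, lruCache, and hitRate are illustrative names. With 100 users, 10 products, and a cache that holds 50 entries, user-scoped keys miss on essentially every lookup, while product-scoped keys hit almost always.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class PollutionDemo {
    // A tiny LRU cache built on LinkedHashMap's access-order mode.
    static Map<String, String> lruCache(int maxSize) {
        return new LinkedHashMap<String, String>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
                return size() > maxSize;
            }
        };
    }

    // Hit rate when each of `users` users requests each of `products`
    // products twice, under the given key scheme.
    static double hitRate(int users, int products, boolean userScopedKeys) {
        Map<String, String> cache = lruCache(50);
        int hits = 0, lookups = 0;
        for (int pass = 0; pass < 2; pass++) {
            for (int u = 0; u < users; u++) {
                for (int p = 0; p < products; p++) {
                    String key = userScopedKeys ? u + "_" + p : "p" + p;
                    lookups++;
                    if (cache.get(key) != null) hits++;
                    else cache.put(key, "product-" + p);
                }
            }
        }
        return (double) hits / lookups;
    }

    public static void main(String[] args) {
        // 1000 distinct user-product keys churn through a 50-entry cache:
        // every entry is evicted before it is ever reused.
        System.out.printf("user-scoped keys:    %.2f%n", hitRate(100, 10, true));
        // 10 product keys fit comfortably; only the first lookups miss.
        System.out.printf("product-scoped keys: %.2f%n", hitRate(100, 10, false));
    }
}
```

The polluted cache achieves a hit rate of exactly zero here, despite doing just as much work as the well-keyed one.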
Memory Leaks
Memory leaks present another significant challenge, particularly when caches grow unbounded or when cached objects maintain references that prevent garbage collection. Here’s how to handle this properly:
// Potential memory leak: no size limit or TTL (the default cache grows unbounded)
@Bean
public CacheManager leakyCacheManager() {
    CaffeineCacheManager cacheManager = new CaffeineCacheManager();
    return cacheManager;
}

// Better: Bounded cache with size limit and TTL
@Bean
public CacheManager safeCacheManager() {
    CaffeineCacheManager cacheManager = new CaffeineCacheManager();
    cacheManager.setCaffeine(Caffeine.newBuilder()
            .maximumSize(1000)
            .expireAfterWrite(1, TimeUnit.HOURS));
    return cacheManager;
}
The safeCacheManager implementation prevents unbounded growth by setting clear size limits and expiration policies. This approach ensures that your cache doesn't become a memory sink over time.
Stale Data
Perhaps the most common caching challenge is managing data freshness. Stale data occurs when cached information doesn’t reflect the current state of the underlying data source. Here’s an example of how to handle this:
// Risky: Indefinite caching without update strategy
@Cacheable("configurations")
public AppConfig getAppConfig(String configId) {
    return configRepository.findById(configId);
}

// Better: cache reads, and evict the entry whenever the configuration changes
@Cacheable(value = "configurations", key = "#configId", unless = "#result == null")
public AppConfig getAppConfig(String configId) {
    return configRepository.findById(configId);
}

@CacheEvict(value = "configurations", key = "#configId")
public AppConfig updateAppConfig(String configId, AppConfig newConfig) {
    return configRepository.save(newConfig);
}

This improved implementation pairs cached reads with explicit eviction on writes: the moment a configuration is updated, its cache entry is removed, and the next read fetches the fresh value. Combining this with a TTL on the cache itself (as configured earlier) provides a safety net for updates that bypass the service. Additionally, implementing versioning or timestamps for cache entries can help track and manage data freshness more effectively.
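One common way to implement the timestamp idea is to pair each cached value with the time it was loaded. Here is a minimal, self-contained sketch; CachedConfig, STALE_AFTER, and StalenessCheck are illustrative names, not Spring APIs.

```java
import java.time.Duration;
import java.time.Instant;

public class StalenessCheck {
    // How long a cached configuration is considered fresh (illustrative value).
    static final Duration STALE_AFTER = Duration.ofHours(1);

    // A cached value paired with the moment it was loaded.
    record CachedConfig(String value, Instant loadedAt) {}

    // True once the entry has outlived its freshness window.
    static boolean isStale(CachedConfig config, Instant now) {
        return Duration.between(config.loadedAt(), now).compareTo(STALE_AFTER) > 0;
    }

    public static void main(String[] args) {
        Instant now = Instant.parse("2024-01-01T12:00:00Z");
        CachedConfig fresh = new CachedConfig("v1", now.minus(Duration.ofMinutes(10)));
        CachedConfig stale = new CachedConfig("v1", now.minus(Duration.ofHours(2)));
        System.out.println(isStale(fresh, now)); // false: loaded 10 minutes ago
        System.out.println(isStale(stale, now)); // true: loaded 2 hours ago
    }
}
```

A service can run this check on each read and trigger a refresh when it returns true, giving finer-grained control than a cache-wide TTL alone.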
Building an effective caching system requires careful consideration of these potential pitfalls. Regular monitoring, appropriate configuration, and thorough testing form the foundation of a healthy cache implementation. Remember that caching isn’t a set-and-forget solution – it requires ongoing maintenance and adjustment as your application evolves.
Through careful design and implementation, with particular attention to key design, memory management, and data freshness, you can create a robust caching system that genuinely enhances your application’s performance while avoiding these common pitfalls.
Conclusion
Spring Boot’s caching abstraction provides a powerful yet elegant way to implement caching in your applications. Through its comprehensive annotation support and flexible backend configurations, it allows developers to focus on the business logic while the framework handles the caching complexities. The ability to switch between different cache providers without changing application code demonstrates the true power of Spring’s abstraction layer.
However, effective caching requires more than just adding a few annotations. As we’ve explored in this post, careful consideration must be given to cache key design, memory management, and data freshness. By following the best practices and avoiding common pitfalls outlined here, you can implement a robust caching strategy that significantly improves your application’s performance without introducing new problems.
- This is not entirely true, though. Some caches, like Redis, can write data to durable storage (preferably SSDs) in addition to main memory. This lets them provide durability guarantees for your cached data, and can even reduce main-memory usage while maintaining considerably faster access than traditional databases. ↩︎
- In case multiple cache backends are present on the classpath, Spring will automatically configure the first one it finds. However, you can provide explicit configuration (by defining multiple CacheManager beans) to configure them. ↩︎
- Note that the @EnableCaching annotation is required only once in the entire application. If you have already annotated the main application class (@SpringBootApplication) with @EnableCaching, then you needn't annotate the configuration class with it again. ↩︎
- How? Suppose a particular key, A, is already present in the cache, and its get method is annotated with @Cacheable. As the value is already present, Spring will skip the method's execution and directly serve the cached value. Now, if A is modified in the data store using an update method, and the value is not updated in the cache via @CachePut, subsequent calls to get will continue to return the old, stale value. ↩︎
- It can be said that if a parameter doesn't affect the cache key, it should be done away with. The locale parameter, for example, is typically used in the presentation layer to format the information received from the backend service. It is not really required as a parameter to the method, and should ideally be removed. Look how an analysis of the cache key has helped identify a redundant method parameter here! ↩︎