Salesforce · 26 min read

Using the Platform Cache in Apex

Boost Salesforce performance with the Platform Cache — org cache vs session cache, partition setup, Cache.Org and Cache.Session classes, the CacheBuilder interface, TTL strategies, eviction policies, and testing patterns.

Part 49: Using the Platform Cache in Apex

Welcome back to the Salesforce series. Over the past several posts, we have been building increasingly sophisticated Apex applications — triggers, batch jobs, SOQL queries, custom metadata lookups, and more. As your codebase grows and user traffic increases, you will inevitably run into a performance ceiling. Queries that run on every page load, metadata that gets fetched on every transaction, API responses that rarely change — all of these are candidates for caching.

Salesforce provides a built-in caching layer called the Platform Cache. It lets you store data in memory so that subsequent requests can retrieve it without hitting the database, making callouts, or recalculating expensive results. When used correctly, Platform Cache can dramatically reduce SOQL queries, speed up page loads, and lower your governor limit consumption.

This post covers everything you need to know to use Platform Cache effectively in Apex: what it is, how the two cache types differ, how to set up partitions, how to read and write cache data, how to use the CacheBuilder interface for automatic loading, and how to handle testing, monitoring, and error scenarios. Let us get started.


What Is the Platform Cache?

The Platform Cache is a key-value store that lives in Salesforce’s application server memory. Unlike the database (which stores data on disk and requires SOQL to retrieve), the cache holds data in RAM and returns it almost instantly. Think of it as a fast scratchpad that sits between your Apex code and the database.

Cache Types

Salesforce offers two distinct cache types:

  • Org Cache — Shared across all users, all sessions, and all requests in the org. Data stored in the org cache is visible to every user. Use this for data that is the same for everyone, like custom metadata records, configuration settings, or reference data that rarely changes.
  • Session Cache — Scoped to an individual user’s session. Each user gets their own isolated session cache. When the session ends (user logs out or the session times out), the session cache is cleared. Use this for user-specific data like preferences, recent search results, or personalized calculations.

Capacity Allocation

Platform Cache capacity depends on your Salesforce edition and any purchased add-ons:

Edition | Default Capacity
Enterprise Edition | 10 MB
Unlimited Edition | 30 MB
Performance Edition | 30 MB

You can purchase additional cache capacity in 10 MB increments. The total capacity is split between org cache and session cache based on how you configure your partitions (more on this below).

When to Use Platform Cache

Platform Cache is a good fit when:

  • You have read-heavy data that does not change frequently — custom metadata records, picklist values, currency conversion rates, org-wide settings.
  • You make repeated SOQL queries for the same data within a transaction or across transactions — looking up the same configuration on every trigger invocation.
  • You call external APIs that return data which stays valid for minutes or hours — exchange rates, weather data, product catalogs from an external system.
  • You perform expensive calculations that produce the same result for a given set of inputs — complex rollup summaries, permission matrices, or pricing calculations.
  • You need to share computed data across requests without writing it to a custom object — intermediate results that do not belong in the database.

When NOT to Use Platform Cache

Platform Cache is the wrong tool when:

  • The data changes on every transaction — caching data that is immediately stale wastes cache space.
  • You need transactional consistency — the cache does not participate in database transactions. If a transaction rolls back, cached data is not rolled back with it.
  • The data is security-sensitive and user-specific in the org cache — org cache is visible to all users regardless of sharing rules. Never put data in the org cache that some users should not see.
  • The data must survive restarts — Salesforce can evict cache entries at any time due to memory pressure. The cache is not durable storage.
  • The data is very large — cache capacity is limited. Storing multi-megabyte payloads defeats the purpose.

When to Use Session Cache vs Org Cache

Choosing between session cache and org cache is one of the most important design decisions when implementing caching. Here is a comparison:

Factor | Org Cache | Session Cache
Scope | All users, all sessions | One user, one session
Visibility | Every Apex request in the org | Only the user who created it
Lifetime | Until TTL expires or evicted | Until TTL expires, session ends, or evicted
Max TTL | 48 hours | 8 hours
Use case | Shared reference data | User-specific preferences
Security | No sharing/FLS enforcement | Isolated per user
Capacity | From partition allocation | From partition allocation

Decision Framework

Ask yourself these questions:

  1. Is the data the same for every user? If yes, use org cache. Examples: custom metadata records, global configuration, exchange rates.
  2. Is the data different per user? If yes, use session cache. Examples: user preferences, recently viewed records, user-specific calculations.
  3. Does the data contain sensitive information that varies by user permissions? If yes, use session cache. Org cache ignores sharing rules and field-level security.
  4. Does the data need to persist across sessions? If yes, org cache is better (up to 48-hour TTL). Session cache is cleared when the user logs out.
  5. Is the data accessed by background processes (batch, schedulable, queueable)? Use org cache. Background processes do not have a user session, so session cache is not available.
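Point 5 is easy to overlook: a batch, schedulable, or queueable job has no user session, so any Cache.Session call will fail there. As a rough sketch (the class name, key, and rate source here are hypothetical), a background refresh job works exclusively against the org cache:

public class ExchangeRateRefreshJob implements Queueable {

    public void execute(QueueableContext context) {
        // Hypothetical fetch; replace with your own callout or query
        Map<String, Decimal> rates = fetchLatestRates();

        // Org cache is the only cache type available to background jobs,
        // because queueable/batch/scheduled Apex runs without a user session
        Cache.Org.put('local.MyAppCache.exchangeRates', rates, 3600);
    }

    private Map<String, Decimal> fetchLatestRates() {
        // Placeholder data for the sketch
        return new Map<String, Decimal>{ 'EUR' => 0.92, 'GBP' => 0.79 };
    }
}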

Setting Up Cache Partitions

Before you can use Platform Cache in Apex, you must create a partition. A partition is a named segment of cache capacity that your code references when storing and retrieving data. Partitions let you organize cache usage by application or feature and control how capacity is split between org and session cache.

Step-by-Step Partition Setup

  1. Navigate to Setup in your Salesforce org.
  2. In the Quick Find box, search for Platform Cache.
  3. Click Platform Cache under the Performance section.
  4. Click New Platform Cache Partition.
  5. Enter a Label — for example, MyAppCache. The API name will auto-populate (e.g., MyAppCache).
  6. Optionally check Default Partition if you want this to be the fallback partition when no partition is specified in code.
  7. Allocate capacity:
    • Set the Org Cache size in MB (e.g., 5 MB).
    • Set the Session Cache size in MB (e.g., 5 MB).
    • The total across both must not exceed your available capacity.
  8. Click Save.

You can create multiple partitions for different applications or teams. Each partition gets its own capacity allocation, so one application cannot starve another.
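Once a partition exists, you can address it either through fully qualified key strings (shown later in this post) or by obtaining a partition object and calling put/get on it directly, which avoids repeating the prefix on every call. A minimal sketch, assuming the MyAppCache partition created above and a ratesMap variable holding the data to cache:

// Look up the partition once; this throws a cache exception if it does not exist
Cache.OrgPartition partition = Cache.Org.getPartition('local.MyAppCache');

// Keys passed to a partition object are bare key names, no prefix required
partition.put('exchangeRates', ratesMap, 3600);
Map<String, Decimal> rates = (Map<String, Decimal>) partition.get('exchangeRates');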

Namespace Considerations

If your code is in a managed package, the partition’s fully qualified name includes the namespace:

myNamespace.MyAppCache

If you are working in an unmanaged org (no namespace), you reference the partition by its API name alone:

local.MyAppCache

The local prefix is used for partitions in orgs without a namespace. Always use local. in unmanaged orgs to avoid ambiguity.


Using the Cache in Apex

Salesforce provides two primary classes for interacting with Platform Cache:

  • Cache.Org — For org cache operations.
  • Cache.Session — For session cache operations.

Both classes expose the same core methods: put(), get(), remove(), and contains().

Writing to the Org Cache

// Store a value in the org cache with the default TTL (24 hours)
Cache.Org.put('local.MyAppCache.exchangeRates', ratesMap);

// Store a value with a custom TTL of 3600 seconds (1 hour)
Cache.Org.put('local.MyAppCache.exchangeRates', ratesMap, 3600);

// Store a value with TTL, visibility, and immutability control
// The third parameter is TTL in seconds
// The fourth parameter controls cache visibility:
//   Cache.Visibility.ALL — visible to all namespaces (default)
//   Cache.Visibility.NAMESPACE — visible only within your namespace
// The fifth parameter, when true, prevents other namespaces from overwriting the entry
Cache.Org.put('local.MyAppCache.exchangeRates', ratesMap, 3600, Cache.Visibility.NAMESPACE, false);

Reading from the Org Cache

// Retrieve a value from the org cache
Map<String, Decimal> rates = (Map<String, Decimal>) Cache.Org.get('local.MyAppCache.exchangeRates');

// Always check for null — the value may have been evicted or expired
if (rates == null) {
    // Cache miss — fetch from database or API
    rates = fetchExchangeRatesFromApi();
    Cache.Org.put('local.MyAppCache.exchangeRates', rates, 3600);
}

Checking if a Key Exists

Boolean hasRates = Cache.Org.contains('local.MyAppCache.exchangeRates');

Removing a Key

Cache.Org.remove('local.MyAppCache.exchangeRates');

Writing to and Reading from the Session Cache

The session cache API is identical, but uses Cache.Session instead of Cache.Org:

// Store user-specific data in the session cache
Cache.Session.put('local.MyAppCache.recentSearches', searchTerms);

// Retrieve user-specific data
List<String> searches = (List<String>) Cache.Session.get('local.MyAppCache.recentSearches');

// Check existence
Boolean hasSearches = Cache.Session.contains('local.MyAppCache.recentSearches');

// Remove
Cache.Session.remove('local.MyAppCache.recentSearches');

TTL (Time-to-Live) Rules

  • Org cache: TTL can range from 300 seconds (5 minutes) to 172800 seconds (48 hours). Default is 86400 (24 hours).
  • Session cache: TTL can range from 300 seconds (5 minutes) to 28800 seconds (8 hours). Default is 28800.
  • If you specify a TTL below the minimum, Salesforce rounds up to 300 seconds.
  • TTL is a maximum lifetime. Salesforce can evict entries before TTL expires if the platform is under memory pressure.

Key Format Rules

Cache keys must follow these rules:

  • Consist of alphanumeric characters only; underscores and other special characters are not allowed.
  • Begin with a letter.
  • Maximum length of 50 characters.
  • Case-sensitive — myKey and MyKey are different keys.

The fully qualified key format is:

namespace.partitionName.keyName

For unmanaged orgs: local.MyAppCache.myKey


The CacheBuilder Interface

The basic put/get pattern works, but it leads to repetitive code: check for a cache miss, fetch the data, store it, return it. The CacheBuilder interface eliminates this boilerplate by letting you define a class that automatically loads data on a cache miss.

How CacheBuilder Works

  1. You create a class that implements Cache.CacheBuilder.
  2. The class has a single method: doLoad(String key) that returns the data to cache.
  3. Instead of calling Cache.Org.get(), you call Cache.Org.get(CacheBuilderImpl.class, key).
  4. If the key exists in the cache, the cached value is returned. If the key does not exist, Salesforce calls your doLoad method, caches the result, and returns it.

Implementing CacheBuilder

public class ExchangeRateCacheBuilder implements Cache.CacheBuilder {

    /**
     * Called automatically when the requested key is not in the cache.
     * The return value is stored in the cache and returned to the caller.
     */
    public Object doLoad(String key) {
        // key could be used to determine which rates to load
        // For this example, we load all exchange rates
        Map<String, Decimal> rates = new Map<String, Decimal>();

        for (Exchange_Rate__mdt rate : [
            SELECT DeveloperName, Rate__c
            FROM Exchange_Rate__mdt
            WHERE IsActive__c = true
        ]) {
            rates.put(rate.DeveloperName, rate.Rate__c);
        }

        return rates;
    }
}

Using CacheBuilder with Org Cache

// Salesforce automatically calls ExchangeRateCacheBuilder.doLoad()
// on a cache miss and stores the result
Map<String, Decimal> rates = (Map<String, Decimal>) Cache.Org.get(
    ExchangeRateCacheBuilder.class,
    'exchangeRates'
);

Notice that when using CacheBuilder, you do not include the partition prefix in the key. The data is stored in the default partition. This is why setting a default partition during setup is important when using CacheBuilder.
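If you would rather not depend on the default partition, the same builder overload is available on a partition object. A sketch, assuming the MyAppCache partition from the setup section:

// Resolve the named partition, then run the builder against it
Cache.OrgPartition partition = Cache.Org.getPartition('local.MyAppCache');
Map<String, Decimal> rates = (Map<String, Decimal>) partition.get(
    ExchangeRateCacheBuilder.class,
    'exchangeRates'
);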

Using CacheBuilder with Session Cache

Map<String, Object> userPrefs = (Map<String, Object>) Cache.Session.get(
    UserPreferencesCacheBuilder.class,
    'preferences'
);
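UserPreferencesCacheBuilder is not a built-in class. A minimal implementation, assuming the User_Preference__c custom object used in Example 4 later in this post, might look like:

public class UserPreferencesCacheBuilder implements Cache.CacheBuilder {

    public Object doLoad(String key) {
        // Session cache is already scoped per user, so UserInfo.getUserId()
        // always refers to the user whose cache entry is being loaded
        Map<String, Object> prefs = new Map<String, Object>();
        for (User_Preference__c pref : [
            SELECT Preference_Key__c, Preference_Value__c
            FROM User_Preference__c
            WHERE User__c = :UserInfo.getUserId()
        ]) {
            prefs.put(pref.Preference_Key__c, pref.Preference_Value__c);
        }
        return prefs;
    }
}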

CacheBuilder Considerations

  • The doLoad method must be idempotent — calling it multiple times with the same key should produce the same result.
  • doLoad runs in the same transaction as the caller, so it counts against governor limits.
  • If doLoad returns null, the null value is cached. Subsequent calls will return null from cache rather than calling doLoad again.
  • The cached value from CacheBuilder uses the default TTL for the cache type (24 hours for org, 8 hours for session). You cannot specify a custom TTL when using CacheBuilder.

Practical Examples

Example 1: Caching Custom Metadata Records

Custom Metadata queries do not count against SOQL limits, but they still take execution time. If your application reads the same metadata repeatedly in a single transaction or across transactions, caching avoids redundant processing.

public class AppConfigCache {

    private static final String PARTITION = 'local.MyAppCache';
    private static final String CONFIG_KEY = 'appConfig';
    private static final Integer TTL_SECONDS = 7200; // 2 hours

    /**
     * Returns a map of all active application configuration metadata records.
     * Uses org cache to avoid redundant metadata queries across users and requests.
     */
    public static Map<String, App_Config__mdt> getConfig() {
        String fullKey = PARTITION + '.' + CONFIG_KEY;
        Map<String, App_Config__mdt> configMap =
            (Map<String, App_Config__mdt>) Cache.Org.get(fullKey);

        if (configMap == null) {
            configMap = new Map<String, App_Config__mdt>();
            for (App_Config__mdt config : [
                SELECT DeveloperName, Value__c, Is_Enabled__c, Category__c
                FROM App_Config__mdt
                WHERE Is_Enabled__c = true
            ]) {
                configMap.put(config.DeveloperName, config);
            }
            Cache.Org.put(fullKey, configMap, TTL_SECONDS);
        }

        return configMap;
    }

    /**
     * Clears the config cache. Call this after deploying new metadata records.
     */
    public static void clearCache() {
        String fullKey = PARTITION + '.' + CONFIG_KEY;
        if (Cache.Org.contains(fullKey)) {
            Cache.Org.remove(fullKey);
        }
    }
}

Usage:

Map<String, App_Config__mdt> config = AppConfigCache.getConfig();
App_Config__mdt featureFlag = config.get('EnableNewCheckoutFlow');
if (featureFlag != null && featureFlag.Is_Enabled__c) {
    // Run new checkout flow logic
}

Example 2: Caching API Responses

When your Apex code calls an external API that returns data which stays valid for a period of time, caching the response avoids unnecessary callouts and helps you stay within callout limits.

public class WeatherService {

    private static final String PARTITION = 'local.MyAppCache';
    private static final Integer TTL_SECONDS = 1800; // 30 minutes

    /**
     * Returns weather data for the given city.
     * Caches the result in org cache for 30 minutes.
     */
    public static Map<String, Object> getWeather(String city) {
        // Cache keys must be alphanumeric, so strip any other characters from the city name
        String cacheKey = PARTITION + '.weather' + city.replaceAll('[^a-zA-Z0-9]', '');

        Map<String, Object> weatherData =
            (Map<String, Object>) Cache.Org.get(cacheKey);

        if (weatherData != null) {
            System.debug('Cache HIT for weather: ' + city);
            return weatherData;
        }

        System.debug('Cache MISS for weather: ' + city);
        weatherData = callWeatherApi(city);

        if (weatherData != null) {
            Cache.Org.put(cacheKey, weatherData, TTL_SECONDS);
        }

        return weatherData;
    }

    private static Map<String, Object> callWeatherApi(String city) {
        HttpRequest req = new HttpRequest();
        req.setEndpoint('callout:WeatherAPI/current?city=' + EncodingUtil.urlEncode(city, 'UTF-8'));
        req.setMethod('GET');
        req.setTimeout(10000);

        Http http = new Http();
        HttpResponse res = http.send(req);

        if (res.getStatusCode() == 200) {
            return (Map<String, Object>) JSON.deserializeUntyped(res.getBody());
        }

        System.debug('Weather API error: ' + res.getStatusCode() + ' ' + res.getBody());
        return null;
    }
}

Example 3: Caching Expensive SOQL Results

Some queries involve aggregations, subqueries, or joins across many objects. Caching the results avoids re-running the query on every request.

public class AccountStatsCache {

    private static final String PARTITION = 'local.MyAppCache';
    private static final Integer TTL_SECONDS = 900; // 15 minutes

    /**
     * Returns aggregated opportunity statistics for an account.
     * Caches the result to avoid repeated expensive aggregation queries.
     */
    public static AccountStats getStats(Id accountId) {
        String cacheKey = PARTITION + '.acctStats' + String.valueOf(accountId).left(15);

        AccountStats stats = (AccountStats) Cache.Org.get(cacheKey);

        if (stats != null) {
            return stats;
        }

        // Expensive aggregation query
        List<AggregateResult> results = [
            SELECT
                COUNT(Id) totalOpps,
                SUM(Amount) totalAmount,
                AVG(Amount) avgAmount,
                COUNT_DISTINCT(OwnerId) uniqueOwners
            FROM Opportunity
            WHERE AccountId = :accountId
                AND IsClosed = true
                AND IsWon = true
        ];

        stats = new AccountStats();
        if (!results.isEmpty()) {
            AggregateResult ar = results[0];
            stats.totalOpportunities = (Integer) ar.get('totalOpps');
            stats.totalAmount = (Decimal) ar.get('totalAmount');
            stats.averageAmount = (Decimal) ar.get('avgAmount');
            stats.uniqueOwners = (Integer) ar.get('uniqueOwners');
        }

        Cache.Org.put(cacheKey, stats, TTL_SECONDS);
        return stats;
    }

    /**
     * Invalidate the cache for a specific account.
     * Call this from a trigger when opportunities are modified.
     */
    public static void invalidate(Id accountId) {
        String cacheKey = PARTITION + '.acctStats' + String.valueOf(accountId).left(15);
        if (Cache.Org.contains(cacheKey)) {
            Cache.Org.remove(cacheKey);
        }
    }

    public class AccountStats {
        public Integer totalOpportunities = 0;
        public Decimal totalAmount = 0;
        public Decimal averageAmount = 0;
        public Integer uniqueOwners = 0;
    }
}

Example 4: Session Cache for User Preferences

public class UserPreferencesCache {

    private static final String PARTITION = 'local.MyAppCache';
    private static final Integer TTL_SECONDS = 14400; // 4 hours

    /**
     * Returns the current user's application preferences.
     * Uses session cache since preferences are user-specific.
     */
    public static Map<String, String> getPreferences() {
        String cacheKey = PARTITION + '.userPrefs';

        Map<String, String> prefs =
            (Map<String, String>) Cache.Session.get(cacheKey);

        if (prefs != null) {
            return prefs;
        }

        // Query user preferences from a custom object
        prefs = new Map<String, String>();
        for (User_Preference__c pref : [
            SELECT Preference_Key__c, Preference_Value__c
            FROM User_Preference__c
            WHERE User__c = :UserInfo.getUserId()
        ]) {
            prefs.put(pref.Preference_Key__c, pref.Preference_Value__c);
        }

        Cache.Session.put(cacheKey, prefs, TTL_SECONDS);
        return prefs;
    }

    /**
     * Update a preference and refresh the cache.
     */
    public static void setPreference(String key, String value) {
        // Update the database
        List<User_Preference__c> existing = [
            SELECT Id, Preference_Value__c
            FROM User_Preference__c
            WHERE User__c = :UserInfo.getUserId()
                AND Preference_Key__c = :key
            LIMIT 1
        ];

        if (!existing.isEmpty()) {
            existing[0].Preference_Value__c = value;
            update existing;
        } else {
            insert new User_Preference__c(
                User__c = UserInfo.getUserId(),
                Preference_Key__c = key,
                Preference_Value__c = value
            );
        }

        // Invalidate the session cache so next read picks up the change
        String cacheKey = PARTITION + '.userPrefs';
        Cache.Session.remove(cacheKey);
    }
}

Error Handling

Cache operations can throw Cache.Org.OrgCacheException or Cache.Session.SessionCacheException. Common scenarios include:

  • The partition does not exist or is misconfigured.
  • The cache capacity is exhausted.
  • The value being stored exceeds the maximum size for a single cached item (100 KB).
  • Serialization or deserialization failures.

Graceful Fallback Pattern

The most robust approach is to wrap cache operations in try-catch blocks and fall back to the direct data source on any cache failure:

public class ResilientCache {

    /**
     * Safely retrieves a value from org cache.
     * Returns null on any cache failure rather than throwing an exception.
     */
    public static Object safeGet(String key) {
        try {
            return Cache.Org.get(key);
        } catch (Cache.Org.OrgCacheException e) {
            System.debug(LoggingLevel.WARN, 'Org cache GET failed for key: ' + key + ' — ' + e.getMessage());
            return null;
        }
    }

    /**
     * Safely stores a value in org cache.
     * Silently fails on any cache error — the application continues without caching.
     */
    public static void safePut(String key, Object value, Integer ttlSeconds) {
        try {
            Cache.Org.put(key, value, ttlSeconds);
        } catch (Cache.Org.OrgCacheException e) {
            System.debug(LoggingLevel.WARN, 'Org cache PUT failed for key: ' + key + ' — ' + e.getMessage());
        }
    }

    /**
     * Safely removes a value from org cache.
     */
    public static void safeRemove(String key) {
        try {
            if (Cache.Org.contains(key)) {
                Cache.Org.remove(key);
            }
        } catch (Cache.Org.OrgCacheException e) {
            System.debug(LoggingLevel.WARN, 'Org cache REMOVE failed for key: ' + key + ' — ' + e.getMessage());
        }
    }
}

Usage with the resilient wrapper:

public static Map<String, Decimal> getExchangeRates() {
    String key = 'local.MyAppCache.exchangeRates';

    Map<String, Decimal> rates = (Map<String, Decimal>) ResilientCache.safeGet(key);

    if (rates == null) {
        rates = fetchRatesFromSource();
        ResilientCache.safePut(key, rates, 3600);
    }

    return rates;
}

This pattern ensures your application never breaks because of a cache problem. The worst case is a performance degradation, not a runtime error.


Best Practices

Key Naming Conventions

Establish a consistent naming scheme for cache keys to avoid collisions and make debugging easier:

// Pattern: featureArea + entityType + identifier (camelCase, alphanumeric only)
local.MyAppCache.billingExchangeRates
local.MyAppCache.billingTaxRatesUS
local.MyAppCache.weatherForecastSanFrancisco
local.MyAppCache.userPermissions00558000003ABCD

  • Use camelCase to separate words; cache keys must be alphanumeric, so underscores are not an option.
  • Include a prefix that identifies the feature or module.
  • For record-specific keys, append the record ID or another unique identifier.
  • Keep keys under 50 characters (the Salesforce limit).
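To make the convention hard to get wrong, you can centralize key construction in a small helper. The class below is a sketch rather than a standard utility; it strips disallowed characters and enforces the 50-character limit:

public class CacheKeyFactory {

    private static final Integer MAX_KEY_LENGTH = 50;

    /**
     * Builds a cache key of the form featurePrefix + identifier,
     * keeping only alphanumeric characters and truncating to the length limit.
     */
    public static String build(String feature, String identifier) {
        String raw = feature + identifier;
        String cleaned = raw.replaceAll('[^a-zA-Z0-9]', '');
        return cleaned.left(MAX_KEY_LENGTH);
    }
}

Usage: CacheKeyFactory.build('weatherForecast', 'San Francisco') produces 'weatherForecastSanFrancisco'.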

TTL Strategy

Choose TTL values based on how frequently the underlying data changes:

Data Type | Suggested TTL | Reasoning
Custom Metadata | 24–48 hours | Changes only on deployment
Org configuration | 4–12 hours | Changed infrequently by admins
API responses (weather, rates) | 15–60 minutes | Changes periodically
Aggregation results | 5–15 minutes | Data changes with user activity
User preferences | 4–8 hours (session) | Stable within a session

Cache Invalidation

The two hardest problems in computer science are cache invalidation, naming things, and off-by-one errors. Here are strategies for keeping your cache fresh:

  • Time-based expiration — Set a reasonable TTL and accept slightly stale data. This is the simplest approach and works for most use cases.
  • Event-driven invalidation — Clear the cache from a trigger, platform event handler, or flow when the underlying data changes. Use this for data where staleness is unacceptable.
  • Manual invalidation — Provide an admin-accessible mechanism (custom button, Lightning action) to flush the cache when needed.
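Event-driven invalidation pairs naturally with the AccountStatsCache class from Example 3. A bare-bones trigger (names assumed from that example) might look like:

trigger OpportunityCacheInvalidation on Opportunity (after insert, after update, after delete, after undelete) {
    // Collect the parent accounts affected by this DML operation
    Set<Id> accountIds = new Set<Id>();
    for (Opportunity opp : Trigger.isDelete ? Trigger.old : Trigger.new) {
        if (opp.AccountId != null) {
            accountIds.add(opp.AccountId);
        }
    }

    // Drop the cached stats so the next read recomputes them
    for (Id accountId : accountIds) {
        AccountStatsCache.invalidate(accountId);
    }
}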

Data Serialization

Everything stored in Platform Cache must be serializable. Most standard Apex types (primitives, collections, sObjects, custom classes) serialize automatically. However:

  • Transient fields in custom classes are not cached.
  • Non-serializable types like Http, HttpRequest, HttpResponse, Database.SaveResult, and Blob cannot be stored. Extract the data you need into a serializable wrapper first.
  • Be mindful of cached object size. Each cached value has a maximum size of 100 KB. Large sObject lists or deeply nested maps can exceed this.

// Bad — trying to cache an HttpResponse directly
Cache.Org.put('local.MyAppCache.apiResult', response); // Will fail

// Good — extract the data you need into a serializable structure
Map<String, Object> data = (Map<String, Object>) JSON.deserializeUntyped(response.getBody());
Cache.Org.put('local.MyAppCache.apiResult', data, 3600);

Testing Strategies

Platform Cache presents a unique testing challenge: cache operations are not supported in test context. Calling Cache.Org.get() or Cache.Org.put() in a test method throws an exception unless you take precautions.

Strategy 1: Conditional Cache Access

The simplest approach is to check Test.isRunningTest() and bypass cache operations during tests:

public class ConfigService {

    public static Map<String, String> getConfig() {
        if (!Test.isRunningTest()) {
            Map<String, String> cached =
                (Map<String, String>) Cache.Org.get('local.MyAppCache.config');
            if (cached != null) {
                return cached;
            }
        }

        // Direct data access — works in both test and production context
        Map<String, String> config = loadConfigFromDatabase();

        if (!Test.isRunningTest()) {
            Cache.Org.put('local.MyAppCache.config', config, 7200);
        }

        return config;
    }

    private static Map<String, String> loadConfigFromDatabase() {
        Map<String, String> result = new Map<String, String>();
        for (App_Config__mdt c : App_Config__mdt.getAll().values()) {
            result.put(c.DeveloperName, c.Value__c);
        }
        return result;
    }
}

Strategy 2: Abstraction Layer with Dependency Injection

A cleaner approach is to abstract cache operations behind an interface and inject a mock implementation during tests:

public interface ICacheService {
    Object get(String key);
    void put(String key, Object value, Integer ttlSeconds);
    void remove(String key);
    Boolean contains(String key);
}

Production implementation:

public class OrgCacheService implements ICacheService {
    public Object get(String key) {
        return Cache.Org.get(key);
    }
    public void put(String key, Object value, Integer ttlSeconds) {
        Cache.Org.put(key, value, ttlSeconds);
    }
    public void remove(String key) {
        Cache.Org.remove(key);
    }
    public Boolean contains(String key) {
        return Cache.Org.contains(key);
    }
}

In-memory mock for testing:

@IsTest
public class MockCacheService implements ICacheService {
    private Map<String, Object> store = new Map<String, Object>();

    public Object get(String key) {
        return store.get(key);
    }
    public void put(String key, Object value, Integer ttlSeconds) {
        store.put(key, value);
    }
    public void remove(String key) {
        store.remove(key);
    }
    public Boolean contains(String key) {
        return store.containsKey(key);
    }
}

Service class that accepts the cache implementation:

public class ProductCatalogService {

    private ICacheService cacheService;

    public ProductCatalogService(ICacheService cacheService) {
        this.cacheService = cacheService;
    }

    public List<Product2> getActiveProducts() {
        String key = 'local.MyAppCache.activeProducts';
        List<Product2> products = (List<Product2>) cacheService.get(key);

        if (products == null) {
            products = [
                SELECT Id, Name, ProductCode, IsActive
                FROM Product2
                WHERE IsActive = true
                ORDER BY Name
            ];
            cacheService.put(key, products, 3600);
        }

        return products;
    }
}

Test class:

@IsTest
private class ProductCatalogServiceTest {

    @TestSetup
    static void setup() {
        insert new List<Product2>{
            new Product2(Name = 'Widget A', IsActive = true),
            new Product2(Name = 'Widget B', IsActive = true),
            new Product2(Name = 'Widget C', IsActive = false)
        };
    }

    @IsTest
    static void testGetActiveProducts_cacheMiss() {
        MockCacheService mockCache = new MockCacheService();
        ProductCatalogService service = new ProductCatalogService(mockCache);

        Test.startTest();
        List<Product2> products = service.getActiveProducts();
        Test.stopTest();

        System.assertEquals(2, products.size(), 'Should return only active products');
        // Verify the result was cached
        System.assert(mockCache.contains('local.MyAppCache.activeProducts'),
            'Products should be stored in cache after first load');
    }

    @IsTest
    static void testGetActiveProducts_cacheHit() {
        MockCacheService mockCache = new MockCacheService();
        // Pre-populate the mock cache
        List<Product2> cachedProducts = new List<Product2>{
            new Product2(Name = 'Cached Widget')
        };
        mockCache.put('local.MyAppCache.activeProducts', cachedProducts, 3600);

        ProductCatalogService service = new ProductCatalogService(mockCache);

        Test.startTest();
        List<Product2> products = service.getActiveProducts();
        Test.stopTest();

        System.assertEquals(1, products.size(), 'Should return cached products');
        System.assertEquals('Cached Widget', products[0].Name,
            'Should return the pre-cached data');
    }
}

Strategy 3: Using the Partition in Tests

If you have a partition configured and want to run integration-style tests that actually hit the cache, you can use Cache.Org.getPartition() to check if the partition exists:

@IsTest
static void testCacheIntegration() {
    // Check if partition exists before testing
    try {
        Cache.OrgPartition partition = Cache.Org.getPartition('local.MyAppCache');
        // Partition exists — run cache-specific assertions
        partition.put('testKey', 'testValue', 300);
        System.assertEquals('testValue', (String) partition.get('testKey'));
        partition.remove('testKey');
    } catch (Cache.Org.OrgCacheException e) {
        // Partition does not exist in this test context — skip cache assertions
        System.debug('Cache partition not available in test context, skipping cache assertions');
    }
}

Cache Diagnostics and Monitoring

Checking Cache Usage in Setup

Navigate to Setup > Platform Cache to view:

  • Total capacity allocated to each partition.
  • Current usage (how much of the allocated capacity is in use).
  • Org vs session breakdown.

Programmatic Diagnostics

You can inspect partition details from Apex code:

// Get a reference to a partition
Cache.OrgPartition orgPartition = Cache.Org.getPartition('local.MyAppCache');

// Check the total capacity allocated to this partition
System.debug('Org partition capacity: ' + orgPartition.getCapacity());

// List the keys currently stored in the partition — useful for debugging
Set<String> keys = orgPartition.getKeys();
System.debug('Keys in partition (' + keys.size() + '): ' + keys);

Monitoring with Debug Logs

Add strategic debug statements around cache operations to track hit/miss ratios:

public class CacheMonitor {

    private static Integer hits = 0;
    private static Integer misses = 0;

    public static Object getWithMonitoring(String key) {
        Object value = Cache.Org.get(key);
        if (value != null) {
            hits++;
            System.debug(LoggingLevel.INFO, 'CACHE HIT [' + key + '] — Total hits: ' + hits);
        } else {
            misses++;
            System.debug(LoggingLevel.INFO, 'CACHE MISS [' + key + '] — Total misses: ' + misses);
        }
        return value;
    }

    public static String getStats() {
        Integer total = hits + misses;
        Decimal hitRate = total > 0 ? (Decimal.valueOf(hits) / total * 100).setScale(1) : 0;
        return 'Cache stats — Hits: ' + hits + ', Misses: ' + misses + ', Hit rate: ' + hitRate + '%';
    }
}
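
In practice you route your reads through this monitor and then print the running counters at the end of a transaction. A quick Anonymous Apex sketch (the key name is a hypothetical example):

```apex
// 'local.MyAppCache.orgSettings' is a hypothetical key used for illustration
Object settings = CacheMonitor.getWithMonitoring('local.MyAppCache.orgSettings');
Object settingsAgain = CacheMonitor.getWithMonitoring('local.MyAppCache.orgSettings');

// Inspect the running hit/miss counters in the debug log
System.debug(CacheMonitor.getStats());
```

Resetting the counters between requests is left out here; since the statics live only for the duration of one transaction, each request starts from zero anyway.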

Eviction Policies

Salesforce uses a Least Recently Used (LRU) eviction policy. When a partition runs out of capacity:

  1. The platform identifies the entry that was accessed least recently.
  2. That entry is removed to make room for the new one.
  3. This happens automatically — you do not control eviction directly.

This means high-frequency keys naturally stay in the cache, while rarely accessed keys get evicted first. Design your caching strategy around this: cache the data you access most often, and accept that infrequently accessed data may not survive in the cache.
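
Because any entry can disappear between reads, treat every get() as a potential miss and re-warm the cache on demand. A minimal sketch, assuming a partition named local.MyAppCache and a hypothetical loadRegionList() loader:

```apex
public class RegionCache {
    private static final String KEY = 'local.MyAppCache.regions';

    public static List<String> getRegions() {
        // The entry may have been evicted (LRU) or never loaded, so check first
        List<String> regions = (List<String>) Cache.Org.get(KEY);
        if (regions == null) {
            regions = loadRegionList();          // fall back to the source of truth
            Cache.Org.put(KEY, regions, 3600);   // re-warm the cache (1-hour TTL)
        }
        return regions;
    }

    private static List<String> loadRegionList() {
        // Hypothetical loader; in practice a SOQL query or callout
        return new List<String>{ 'EMEA', 'APAC', 'AMER' };
    }
}
```

This read-through pattern makes eviction invisible to callers: an evicted key costs one reload, nothing more.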


Section Notes

Capacity Limits Summary

  • Individual cached value: Maximum 100 KB.
  • Partition capacity: Allocated from your org’s total cache capacity.
  • Total org capacity: Varies by edition (10–30 MB default, purchasable in 10 MB increments).
  • Number of partitions: No hard limit, but total capacity across all partitions cannot exceed your org’s allocation.

Gotchas and Edge Cases

  • Cache is not durable storage. Salesforce can clear the cache at any time. Your application must always function correctly when the cache is empty.
  • No session cache in asynchronous Apex. Background jobs (@future, Queueable, Batch) do not run with a user session, so Cache.Session calls throw an exception. Use Cache.Org in asynchronous contexts.
  • Serialization size matters. A list of sObjects with many fields can easily exceed 100 KB. Cache only the fields you need, or store serialized JSON strings instead of full sObjects.
  • Cache is shared across transactions. Two concurrent transactions can read the same org cache entry. If both modify it and write it back, the last write wins. There is no locking mechanism.
  • Default partition is required for CacheBuilder. If you use the CacheBuilder interface, you must have a default partition configured. Otherwise, Salesforce does not know where to store the data.
  • Namespace prefix in managed packages. If your code runs inside a managed package, always use the package's namespace prefix when referencing partitions. In an unmanaged org, use the local. prefix.
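
To guard against the 100 KB per-value limit mentioned above, you can measure the serialized payload before writing it. A sketch, assuming a partition named local.MyAppCache; note that String.length() counts characters, which matches bytes only for ASCII payloads:

```apex
public class SafeCacheWriter {
    private static final Integer MAX_VALUE_CHARS = 102400; // ~100 KB per-value limit

    public static Boolean tryPut(String key, Object value, Integer ttlSecs) {
        // Serialize once to estimate the stored size
        String payload = JSON.serialize(value);
        if (payload.length() > MAX_VALUE_CHARS) {
            System.debug(LoggingLevel.WARN, 'Skipping cache put for ' + key +
                ': payload is ' + payload.length() + ' chars, over the 100 KB limit');
            return false;
        }
        // Store the JSON string rather than the full sObject graph
        Cache.Org.put(key, payload, ttlSecs);
        return true;
    }
}
```

Callers then deserialize with JSON.deserialize() on the way out, which also sidesteps caching sObject fields you never read.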

Cache vs Other Storage Options

| Feature | Platform Cache | Custom Settings | Custom Metadata | Custom Objects |
| --- | --- | --- | --- | --- |
| Speed | Fastest (in-memory) | Fast (cached by platform) | Fast (cached by platform) | Slowest (SOQL required) |
| Durability | None (volatile) | Durable | Durable | Durable |
| Capacity | 10–30+ MB | Limited | Limited | Large |
| Updatable in Apex | Yes | Yes (hierarchy only) | No (metadata deploy only) | Yes |
| Counts against SOQL | No | No | No | Yes |
| Best for | Temporary performance boost | User/org preferences | App configuration | Business data |

Summary

Platform Cache is a powerful tool for improving Salesforce application performance. The key takeaways:

  1. Org cache is for shared, non-sensitive data that every user can see. Session cache is for user-specific, session-scoped data.
  2. Partitions must be created in Setup before using cache in Apex. Use the local. prefix in unmanaged orgs.
  3. The put/get/remove/contains methods on Cache.Org and Cache.Session are your primary API.
  4. The CacheBuilder interface eliminates boilerplate by automatically loading data on cache misses.
  5. Always code defensively — wrap cache operations in try-catch, provide fallback logic, and never assume the cache contains your data.
  6. Platform Cache is not available in standard test context. Use conditional checks, dependency injection, or partition-level testing to handle this.
  7. Monitor your cache usage through Setup and debug logs to ensure you are getting value from your cache capacity.

The cache is not a silver bullet. It trades memory for speed and adds complexity to your application. Use it when profiling shows a clear performance bottleneck, not as a premature optimization. Start with the simplest caching strategy — a short TTL with graceful fallback — and iterate from there.
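
That simplest strategy, a short TTL with graceful fallback, can be as small as this sketch (the partition name and query are illustrative assumptions):

```apex
// Short TTL with graceful fallback: correct with or without the cache
public static List<Account> getTopAccounts() {
    List<Account> accounts;
    try {
        accounts = (List<Account>) Cache.Org.get('local.MyAppCache.topAccounts');
    } catch (Cache.Org.OrgCacheException e) {
        accounts = null; // cache unavailable, behave as a miss
    }
    if (accounts == null) {
        accounts = [SELECT Id, Name FROM Account ORDER BY AnnualRevenue DESC NULLS LAST LIMIT 10];
        try {
            Cache.Org.put('local.MyAppCache.topAccounts', accounts, 300); // 5-minute TTL
        } catch (Exception e) {
            // A failed put is non-fatal; we already have the data
        }
    }
    return accounts;
}
```

Every takeaway from the list above shows up here: defensive try-catch, a fallback query, and no assumption that the cache holds anything.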


In the next post, Part 50, we will begin a new section on integrations. We will cover The Basics of Integrations in Apex — how Salesforce communicates with external systems using HTTP callouts, Named Credentials, and the fundamentals of REST and SOAP APIs. See you there.