
Redis Object Cache Configuration and Optimization Across WordPress Managed Hosts

Marcus Chen
45 min read

Why Object Caching Matters More Than You Think

WordPress, at its core, is a database-driven application. Every page load triggers dozens of database queries: options, transients, user meta, post meta, taxonomy lookups, widget configurations. On a default WordPress installation, the built-in object cache lives only for the duration of a single request. Once the PHP process finishes, the cache evaporates. The next request starts from scratch, hammering the database with the same queries all over again.

This is where persistent object caching changes the game. By storing frequently accessed data in an in-memory data store like Redis, WordPress can skip expensive database round-trips and serve cached results in microseconds instead of milliseconds. The difference between a 200ms TTFB and a 50ms TTFB on dynamic, logged-in pages often comes down to whether a persistent object cache is in play.

Redis has become the de facto standard for WordPress object caching. It supports complex data structures, offers configurable eviction policies, provides pub/sub capabilities, and delivers predictable sub-millisecond latency. But configuring Redis properly varies dramatically across managed WordPress hosts. Each platform has its own constraints, pricing models, supported plugins, and configuration quirks.

This article is a deep, practical guide to setting up and optimizing Redis object caching on six major managed WordPress platforms. I have deployed Redis across all of these hosts in production environments, and the differences in implementation quality are significant. Let me walk you through each one.

The Object Cache API: A Quick Refresher

Before jumping into platform specifics, a brief review of how WordPress object caching works under the hood is necessary. WordPress provides a set of functions for interacting with the object cache:

wp_cache_get( $key, $group = '', $force = false, &$found = null )
wp_cache_set( $key, $data, $group = '', $expire = 0 )
wp_cache_delete( $key, $group = '' )
wp_cache_add( $key, $data, $group = '', $expire = 0 )
wp_cache_replace( $key, $data, $group = '', $expire = 0 )
wp_cache_flush()

These functions are defined in wp-includes/cache.php, but the actual implementation is determined by a drop-in file: wp-content/object-cache.php. Without this drop-in, WordPress uses WP_Object_Cache, which stores data in a PHP array that dies with the request. When you install a Redis object cache plugin, it replaces this drop-in with a version that talks to Redis for persistent storage.

The concept of cache groups is critical. WordPress uses groups like options, posts, post_meta, users, and transient to organize cached data. Some groups are designated as “global” in multisite setups, meaning they are shared across all sites in the network. Others are per-site. Understanding cache groups becomes important when you start tuning key prefixes and eviction behavior.
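To make groups concrete, here is how custom groups are declared with the core APIs (the group names here are hypothetical):

```php
// Share a group network-wide on multisite (hypothetical group name).
wp_cache_add_global_groups( array( 'my_network_settings' ) );

// Keep a group in the per-request PHP array only - never sent to Redis.
wp_cache_add_non_persistent_groups( array( 'my_request_scratch' ) );

// Reads and writes are namespaced by group, so identical keys in
// different groups never collide.
wp_cache_set( 'site_mode', 'live', 'my_network_settings' );
$mode = wp_cache_get( 'site_mode', 'my_network_settings' );
```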

WordPress VIP: Memcached, Not Redis

Let me start with the outlier. WordPress VIP does not use Redis for object caching. They use Memcached. This is a deliberate architectural decision that dates back to the platform’s origins and reflects their approach to horizontal scaling.

Why VIP Chose Memcached

WordPress VIP runs on a custom infrastructure where application servers are distributed across multiple data centers. Memcached’s simplicity makes it easier to run in a distributed, multi-node configuration. Memcached clients use consistent hashing to distribute keys across nodes, which means adding or removing nodes only invalidates a fraction of the cache.

Redis, by contrast, is traditionally single-threaded (though Redis 6+ introduced I/O threading) and its replication model is more complex. VIP’s infrastructure team has optimized their Memcached clusters for the specific access patterns WordPress generates, and switching to Redis would require re-engineering significant parts of their stack.

API Differences That Bite

If you are migrating code to or from VIP, be aware of behavioral differences between Memcached and Redis object cache implementations:

// This works differently on Memcached vs Redis
$value = wp_cache_get( 'my_key', 'my_group', false, $found );

// On Redis (with Object Cache Pro), $found is reliably set
// On Memcached, edge cases exist with false-y values

Memcached does not natively support the concept of cache groups in the way Redis does (via key prefixing or Redis hashes). VIP’s Memcached implementation simulates groups by prepending group names to keys. This means wp_cache_flush_group() is not truly atomic on VIP. Instead, it increments a group counter, which effectively invalidates old keys without actually deleting them from memory.

// Group flush on VIP (Memcached) - increments a counter
wp_cache_flush_group( 'my_custom_group' );
// Old keys become orphaned until evicted by LRU

// Group flush with Object Cache Pro (Redis) - uses EVAL/Lua
wp_cache_flush_group( 'my_custom_group' );
// Keys are actually deleted from Redis
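The counter-based scheme described above can be sketched as follows (a simplified illustration, not VIP's actual drop-in code; the function names are hypothetical):

```php
// Each group gets a version counter stored in the cache itself. The key
// actually sent to Memcached embeds the current version, so bumping the
// counter orphans every old key at once without deleting anything.
function sketch_group_key( $key, $group ) {
    $version = wp_cache_get( "version:{$group}", 'group_versions' );
    if ( false === $version ) {
        $version = 1;
        wp_cache_set( "version:{$group}", $version, 'group_versions' );
    }
    return "{$group}:v{$version}:{$key}";
}

// "Flushing" the group is a single increment; the orphaned keys linger
// until LRU eviction reclaims their memory.
function sketch_flush_group( $group ) {
    wp_cache_incr( "version:{$group}", 1, 'group_versions' );
}
```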

VIP also enforces a maximum object size of 1MB per cached item (Memcached’s default maximum item size). Redis does not have this per-key limitation, though you should still avoid caching enormous serialized arrays.
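Because an oversized write on a Memcached-backed platform fails silently, a defensive wrapper can refuse values near the ceiling (a sketch; the helper name and 900KB threshold are my own):

```php
// Hypothetical helper: skip caching values that would exceed the 1MB
// item limit once serialized, leaving some headroom for key overhead.
function sketch_cache_set_bounded( $key, $data, $group = '', $expire = 0 ) {
    if ( strlen( maybe_serialize( $data ) ) > 900 * 1024 ) {
        return false; // too large - let callers fall back to the database
    }
    return wp_cache_set( $key, $data, $group, $expire );
}
```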

VIP-Specific Configuration

On VIP, you do not configure object caching yourself. It is managed by the platform. However, you can influence behavior through the $memcached_servers global in your vip-config.php:

// vip-config/vip-config.php
// VIP sets this automatically, but you can inspect it:
global $memcached_servers;
// Typically an array of server:port pairs

You can also define non-persistent groups to prevent certain data from being sent to Memcached:

wp_cache_add_non_persistent_groups( array( 'my_temp_group' ) );

My recommendation: if you are on VIP, do not fight the Memcached setup. It works well for what VIP is designed to do. Focus your optimization efforts on reducing the number and size of cached objects rather than trying to swap in Redis.

Pantheon: Object Cache Pro via Terminus

Pantheon has embraced Redis wholeheartedly. Every Pantheon site on a Performance plan or higher gets access to a Redis instance. Their integration is mature, well-documented, and supports Object Cache Pro as a first-class option.

Enabling Redis on Pantheon

Redis is enabled at the platform level through the Pantheon dashboard or via Terminus, their CLI tool:

# Enable Redis via Terminus
terminus redis:enable my-site

# Verify Redis is active
terminus redis:info my-site

Once enabled, Pantheon provisions a Redis instance that is accessible from your application containers. The connection details are automatically available through Pantheon’s environment variables.

Object Cache Pro Configuration

Pantheon partners with Object Cache Pro (OCP) and includes licenses on their higher-tier plans. The configuration goes in your wp-config.php and uses Pantheon-specific environment variables:

// wp-config.php on Pantheon
if ( defined( 'PANTHEON_ENVIRONMENT' ) && ! empty( $_ENV['CACHE_HOST'] ) ) {
    // Object Cache Pro configuration
    $redis_config = [
        'token'        => 'your-ocp-license-token',
        'host'         => $_ENV['CACHE_HOST'],
        'port'         => isset( $_ENV['CACHE_PORT'] ) ? (int) $_ENV['CACHE_PORT'] : 6379,
        'database'     => isset( $_ENV['CACHE_DB'] ) ? (int) $_ENV['CACHE_DB'] : 0,
        'password'     => isset( $_ENV['CACHE_PASSWORD'] ) ? $_ENV['CACHE_PASSWORD'] : '',
        'maxttl'       => 86400 * 7,
        'timeout'      => 1.0,
        'read_timeout' => 1.0,
        'split_alloptions' => true,
        'prefetch'     => true,
        'debug'        => false,
        'strict'       => true,
        'analytics'    => [
            'enabled'  => true,
            'persist'  => true,
            'retention' => 3600,
            'footnote' => true,
        ],
        'prefix'       => 'ocp:' . ( defined( 'PANTHEON_ENVIRONMENT' ) ? PANTHEON_ENVIRONMENT : 'local' ),
    ];

    define( 'WP_REDIS_CONFIG', $redis_config );
}

The split_alloptions setting is particularly important. WordPress stores all autoloaded options in a single cache key called alloptions. On busy sites, this key can grow to several hundred kilobytes and gets invalidated every time any option is updated. Object Cache Pro’s split_alloptions feature breaks this monolithic key into individual keys per option, dramatically reducing cache churn.
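To judge whether alloptions is actually a problem on your site before enabling the split, you can measure it directly (a quick diagnostic, runnable via `wp eval-file` or a temporary mu-plugin):

```php
// Rough size of the alloptions blob as it would be serialized into cache.
$alloptions = wp_load_alloptions();
printf(
    "alloptions: %d options, ~%.1f KB\n",
    count( $alloptions ),
    strlen( serialize( $alloptions ) ) / 1024
);

// The largest autoloaded options are usually the culprits.
uasort( $alloptions, function ( $a, $b ) {
    return strlen( maybe_serialize( $b ) ) <=> strlen( maybe_serialize( $a ) );
} );
foreach ( array_slice( $alloptions, 0, 5, true ) as $name => $value ) {
    printf( "%s: %d bytes\n", $name, strlen( maybe_serialize( $value ) ) );
}
```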

Pantheon’s Redis Constraints

Pantheon allocates Redis memory based on your plan tier:

  • Basic: 256MB
  • Performance Small: 256MB
  • Performance Medium: 512MB
  • Performance Large: 512MB
  • Performance XL: 1GB
  • Elite: Custom allocation

The eviction policy on Pantheon is allkeys-lru, which means Redis will evict the least recently used keys when memory is full, regardless of whether they have a TTL set. This is generally the correct policy for WordPress object caching, where you want hot data to stay in cache and cold data to be evicted gracefully.

One limitation: Pantheon does not allow you to change the eviction policy. You are locked into allkeys-lru. For most WordPress workloads, this is fine. But if you are running a WooCommerce store where session data must not be evicted mid-transaction, you need to account for this by setting appropriate TTLs and monitoring memory pressure.

Monitoring on Pantheon

# Check Redis stats via Terminus
terminus redis:info my-site

# Connect to Redis CLI (if available on your plan)
terminus redis:cli my-site

# Inside Redis CLI
INFO memory
INFO stats
INFO keyspace

You can also use Object Cache Pro’s built-in dashboard widget, which shows hit rate, memory usage, and key distribution directly in the WordPress admin.

Kinsta: Redis as a Paid Add-On

Kinsta offers Redis as an add-on that costs extra on top of your hosting plan. This pricing model is worth discussing because it influences how you think about Redis optimization. When you are paying per-site for Redis, you want to make sure every megabyte of cache memory is earning its keep.

Enabling Redis on Kinsta

Redis is enabled through the MyKinsta dashboard:

1. Navigate to your site in MyKinsta.
2. Go to Add-ons.
3. Enable the Redis add-on (currently $100/month per site as of late 2022).
4. Kinsta provisions a Redis instance and provides connection details.

Kinsta uses a custom MU-plugin approach for Redis integration. After enabling the add-on, they install their Redis configuration automatically. However, you can also use Object Cache Pro if you prefer.

Configuration Details

Kinsta sets Redis connection details via environment-level constants. Your wp-config.php should include:

// Kinsta Redis configuration
// These are typically set automatically by the platform
define( 'WP_REDIS_HOST', '127.0.0.1' );
define( 'WP_REDIS_PORT', 6379 );

// If using Object Cache Pro on Kinsta
define( 'WP_REDIS_CONFIG', [
    'token'            => 'your-ocp-license-key',
    'host'             => defined( 'WP_REDIS_HOST' ) ? WP_REDIS_HOST : '127.0.0.1',
    'port'             => defined( 'WP_REDIS_PORT' ) ? WP_REDIS_PORT : 6379,
    'database'         => 0,
    'maxttl'           => 86400 * 7,
    'timeout'          => 1.0,
    'read_timeout'     => 1.0,
    'split_alloptions' => true,
    'prefetch'         => true,
    'strict'           => true,
    'analytics'        => [
        'enabled'  => true,
        'persist'  => true,
    ],
    'serializer'       => 'igbinary',
    'compression'      => 'zstd',
] );

Cache Management on Kinsta

Kinsta layers multiple caching mechanisms. Understanding how they interact with Redis is important:

  • Server-level page cache (Nginx): Handles full-page caching for anonymous visitors. Redis does not replace this.
  • Redis object cache: Handles persistent storage of WordPress internal data (options, transients, query results).
  • CDN cache (Cloudflare): Handles static asset delivery. No interaction with Redis.

When you purge the Kinsta cache via the dashboard or plugin, it clears the server-level page cache but does not flush Redis. To flush Redis specifically:

# Via WP-CLI on Kinsta
wp cache flush

# Or flush only specific groups
wp redis flush-group transient
wp redis flush-group site-transient

Kinsta Redis Memory and Sizing

Kinsta allocates a fixed amount of Redis memory, typically starting at 256MB. Undersized Redis instances are the number one cause of cache thrashing on Kinsta. When Redis runs out of memory and starts evicting keys aggressively, your hit rate drops, and database load spikes. I have seen sites go from a 95% hit rate to below 60% overnight because a plugin update increased the size of cached objects.

Monitor your Redis memory usage closely:

# Check current memory usage
wp redis info | grep used_memory_human

# Check eviction stats
wp redis info | grep evicted_keys

If evicted_keys is climbing rapidly, you either need more Redis memory or you need to reduce the volume of data being cached. I will cover optimization strategies later in this article.

WP Engine: Redis by Plan Tier

WP Engine’s approach to Redis is tied directly to your hosting plan. This means Redis availability and configuration options depend on how much you are paying.

Plan-Level Availability

  • Startup / Professional: No Redis. You are limited to the built-in transient cache and WP Engine’s page caching layer (EverCache).
  • Growth: Redis available as an add-on.
  • Scale: Redis included.
  • Dedicated / Enterprise: Redis included with larger allocations and custom configuration options.

This tiering is frustrating if you are on a lower plan with a WooCommerce store that desperately needs persistent object caching. WP Engine’s page cache (EverCache) does an excellent job for anonymous traffic, but logged-in users, cart pages, and checkout flows all bypass page caching entirely. Without Redis, every dynamic page request hits the database directly.

Enabling Redis on WP Engine

For plans that support it, you enable Redis through the WP Engine User Portal:

1. Log into the WP Engine portal.
2. Navigate to your environment.
3. Go to the Add-ons section.
4. Enable Object Cache (Redis).
5. WP Engine provisions the Redis instance and installs the object cache drop-in.

WP Engine uses a customized version of the redis-cache plugin (the free one by Till Krüss) or their own proprietary drop-in, depending on the environment and plan. They handle the object-cache.php drop-in installation automatically.

Configuration Constraints

WP Engine is more restrictive than other hosts about Redis configuration. You cannot change the eviction policy, adjust maxmemory, or access the Redis CLI directly in most cases. The configuration is managed at the platform level:

// WP Engine sets these automatically
define( 'WP_REDIS_HOST', '127.0.0.1' );
define( 'WP_REDIS_PORT', 6379 );

// You can add these to wp-config.php for tuning
define( 'WP_REDIS_MAXTTL', 86400 );
define( 'WP_REDIS_KEY_SALT', 'your_site_prefix_' );
define( 'WP_REDIS_SELECTIVE_FLUSH', true );

The WP_REDIS_SELECTIVE_FLUSH constant is worth calling out. When set to true, calling wp_cache_flush() only removes keys belonging to the current site (identified by the key salt/prefix) rather than flushing the entire Redis instance. This is essential on WP Engine’s shared Redis infrastructure where multiple sites might share a Redis instance.
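Mechanically, a selective flush boils down to deleting only the keys that carry your salt. A rough PhpRedis sketch (simplified; production drop-ins batch deletions or use a Lua script):

```php
// Prefix-scoped flush: SCAN for keys under the salt, delete in batches.
function sketch_selective_flush( Redis $redis, $salt ) {
    $iterator = null;
    do {
        $keys = $redis->scan( $iterator, $salt . '*', 500 );
        if ( is_array( $keys ) && ! empty( $keys ) ) {
            $redis->del( $keys );
        }
    } while ( $iterator > 0 );
}
```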

Object Cache Pro on WP Engine

WP Engine now supports Object Cache Pro on Scale plans and above. If you have access, the configuration looks like this:

// WP Engine + Object Cache Pro
define( 'WP_REDIS_CONFIG', [
    'token'            => 'your-ocp-token',
    'host'             => defined( 'WP_REDIS_HOST' ) ? WP_REDIS_HOST : '127.0.0.1',
    'port'             => defined( 'WP_REDIS_PORT' ) ? WP_REDIS_PORT : 6379,
    'maxttl'           => 86400,
    'split_alloptions' => true,
    'prefetch'         => true,
    'strict'           => true,
    'serializer'       => 'igbinary',
    'compression'      => 'lzf',
    'analytics'        => [
        'enabled' => true,
    ],
    'relay' => [
        'cache' => true,
    ],
] );

The relay configuration is a newer option that uses the Relay PHP extension (a drop-in replacement for PhpRedis) to maintain an in-process cache. This reduces the number of TCP round-trips to Redis by caching frequently accessed keys directly in PHP’s shared memory. On WP Engine, where the Redis instance might be on a separate container, this can reduce latency by 30-50% for hot keys.
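If you are unsure whether Relay is actually loaded in your environment, a quick capability check (safe to run anywhere) looks like this:

```php
// Relay registers as a PHP extension; fall back to PhpRedis when absent.
if ( extension_loaded( 'relay' ) ) {
    printf( "Relay %s loaded\n", phpversion( 'relay' ) );
} elseif ( extension_loaded( 'redis' ) ) {
    printf( "PhpRedis %s loaded (no Relay)\n", phpversion( 'redis' ) );
} else {
    echo "No Redis client extension available\n";
}
```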

Cloudways: Server-Level Redis with Full Control

Cloudways occupies a unique position in this comparison. It is not a managed WordPress host in the traditional sense; it is a managed cloud platform that provisions servers on providers like DigitalOcean, Linode, Vultr, AWS, and GCP. This gives you significantly more control over Redis configuration than any of the hosts discussed so far.

Enabling Redis on Cloudways

Redis is installed and managed at the server level, not the application level. This means all applications on a Cloudways server share the same Redis instance:

1. Log into Cloudways.
2. Navigate to your Server.
3. Go to Settings & Packages.
4. Under the Packages tab, toggle Redis on.
5. Configure the memory limit (default is 64MB, but you should increase this).

Server-Level Configuration

Cloudways exposes Redis configuration through their platform settings. You can adjust:

  • Max memory: Set this based on your application needs. I recommend at least 256MB for a single WordPress site, 512MB or more if running WooCommerce.
  • Eviction policy: Cloudways defaults to allkeys-lru, but you can change it.
  • Max connections: Adjust if you are running multiple applications on the same server.

For more granular control, you can SSH into your Cloudways server and modify the Redis configuration directly:

# SSH into your Cloudways server
ssh user@your-server-ip

# Check current Redis configuration
redis-cli CONFIG GET maxmemory
redis-cli CONFIG GET maxmemory-policy

# Adjust settings
redis-cli CONFIG SET maxmemory 512mb
redis-cli CONFIG SET maxmemory-policy allkeys-lru

# Persist changes
redis-cli CONFIG REWRITE

ACL Configuration on Cloudways

If you are running multiple WordPress applications on a single Cloudways server, you should configure Redis ACLs (Access Control Lists) to isolate each application’s cache data. Redis 6+ supports ACLs natively:

# Create a user for each application
redis-cli ACL SETUSER app1 on >strongpassword1 ~app1:* +@all
redis-cli ACL SETUSER app2 on >strongpassword2 ~app2:* +@all

# Verify ACL configuration
redis-cli ACL LIST

Then in each application’s wp-config.php:

// Application 1
define( 'WP_REDIS_HOST', '127.0.0.1' );
define( 'WP_REDIS_PORT', 6379 );
define( 'WP_REDIS_PASSWORD', [ 'app1', 'strongpassword1' ] );
define( 'WP_REDIS_PREFIX', 'app1:' );

// Application 2
define( 'WP_REDIS_HOST', '127.0.0.1' );
define( 'WP_REDIS_PORT', 6379 );
define( 'WP_REDIS_PASSWORD', [ 'app2', 'strongpassword2' ] );
define( 'WP_REDIS_PREFIX', 'app2:' );

Note the array syntax for WP_REDIS_PASSWORD when using ACLs. The first element is the username, and the second is the password. This is a PhpRedis convention that both the free redis-cache plugin and Object Cache Pro support.

Object Cache Pro on Cloudways

Cloudways is one of the best platforms for running Object Cache Pro because you have full control over the PHP and Redis configuration. You can install the igbinary and zstd extensions, configure Relay, and fine-tune every aspect of the cache:

// Cloudways Object Cache Pro configuration
define( 'WP_REDIS_CONFIG', [
    'token'            => 'your-ocp-license',
    'host'             => '127.0.0.1',
    'port'             => 6379,
    'password'         => [ 'app1', 'strongpassword1' ],
    'database'         => 0,
    'maxttl'           => 86400 * 7,
    'timeout'          => 0.5,
    'read_timeout'     => 0.5,
    'retry_interval'   => 100,
    'retries'          => 3,
    'backoff'          => 'smart',
    'split_alloptions' => true,
    'prefetch'         => true,
    'strict'           => true,
    'debug'            => false,
    'save_commands'    => false,
    'serializer'       => 'igbinary',
    'compression'      => 'zstd',
    'async_flush'      => true,
    'analytics'        => [
        'enabled'   => true,
        'persist'   => true,
        'retention' => 7200,
        'footnote'  => true,
    ],
    'relay' => [
        'cache'      => true,
        'listeners'  => true,
        'invalidations' => true,
    ],
    'prefix' => 'app1:ocp:',
] );

The retry_interval, retries, and backoff settings are important on Cloudways because your Redis instance and PHP workers are on the same server. If Redis restarts or experiences a brief hiccup during a deployment, these settings allow the application to retry connections gracefully instead of throwing fatal errors.

Flywheel: Redis Availability and Limitations

Flywheel, now owned by WP Engine, has a more limited Redis offering compared to its parent company. The availability and configuration options depend heavily on your plan.

Current State of Redis on Flywheel

Flywheel offers Redis on their higher-tier plans, but the implementation is notably more constrained:

  • Redis is available on Growth plans and above.
  • Memory allocation is fixed and cannot be adjusted.
  • You cannot access the Redis CLI.
  • Custom eviction policies are not supported.
  • Object Cache Pro is supported but requires coordination with Flywheel support for installation.

Basic Configuration

// Flywheel Redis configuration
define( 'WP_REDIS_HOST', '127.0.0.1' );
define( 'WP_REDIS_PORT', 6379 );
define( 'WP_REDIS_DISABLE_BANNERS', true );
define( 'WP_REDIS_MAXTTL', 43200 ); // 12 hours

The WP_REDIS_DISABLE_BANNERS constant hides the admin notices from the redis-cache plugin, which Flywheel prefers since they handle the cache configuration at the platform level.

Limitations to Be Aware Of

Flywheel’s containerized architecture means that Redis connections go through an internal proxy. This adds a small amount of latency compared to a direct TCP connection. On most sites, this latency is negligible (under 1ms), but it can add up on pages that make hundreds of cache calls.
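When per-call round-trip latency is the bottleneck, batching helps: WordPress 5.5+ ships `wp_cache_get_multiple()`, which capable drop-ins translate into a single multi-key fetch. The keys and group below are illustrative:

```php
// One round trip for many keys instead of one per key. Drop-ins that
// don't implement this natively fall back to looped single gets.
$keys   = array( 'widget_a', 'widget_b', 'widget_c' ); // illustrative
$values = wp_cache_get_multiple( $keys, 'my_group' );

foreach ( $values as $key => $value ) {
    if ( false === $value ) {
        $value = 'recomputed'; // placeholder for the real computation
        wp_cache_set( $key, $value, 'my_group', 3600 );
    }
}
```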

Another limitation: Flywheel’s staging environment does not always include Redis. If you are testing cache-dependent functionality in staging, your results may not reflect production behavior. Always verify Redis availability in your staging environment before running performance tests.

Key Sizing and Eviction Policies: A Cross-Platform Guide

Understanding how Redis manages memory is essential for maintaining a healthy object cache. Let me break down the key concepts and how they apply across platforms.

Eviction Policies Explained

Redis supports several eviction policies that determine how keys are removed when memory is full:

  • noeviction: Returns errors when memory limit is reached. Never use this for WordPress object caching.
  • allkeys-lru: Evicts least recently used keys from all keys. Best general-purpose policy for WordPress.
  • volatile-lru: Evicts least recently used keys only among keys with TTL set. Risky because WordPress core does not always set TTLs.
  • allkeys-lfu: Evicts least frequently used keys. Good for workloads with clear hot/cold data separation. Available in Redis 4.0+.
  • volatile-lfu: Evicts least frequently used keys among those with TTL set.
  • allkeys-random: Randomly evicts keys. Predictable in its unpredictability.
  • volatile-ttl: Evicts keys with the shortest remaining TTL. Useful in specific scenarios.

For WordPress, allkeys-lru is the safest default. Here is why: WordPress core sets TTLs on transients but not on options, post meta, or user meta. If you use volatile-lru, these non-TTL keys will never be evicted, and eventually they will consume all available memory, leaving no room for transients that actually need caching.
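The asymmetry is easy to see in code: an `$expire` of 0 stores the key with no TTL, making it invisible to volatile-* policies, while transients always carry one:

```php
// expire = 0: persisted with no TTL - a volatile-lru policy will
// never evict this key, even under memory pressure.
wp_cache_set( 'blog_charset', 'UTF-8', 'options', 0 );

// Transients pass an expiry through, so the underlying Redis key gets
// a real TTL and remains a candidate for volatile-* eviction.
set_transient( 'my_report', array( 'rows' => 42 ), HOUR_IN_SECONDS );
```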

Key Sizing Best Practices

Estimating the right Redis memory allocation for your WordPress site requires understanding what gets cached:

# Analyze key distribution in Redis
redis-cli --scan --pattern '*' | awk -F: '{print $1}' | sort | uniq -c | sort -rn | head -20

# Check memory usage per key pattern
redis-cli --scan --pattern '*options*' | head -100 | while read -r key; do
    echo "$key $(redis-cli MEMORY USAGE "$key") bytes"
done

# Get overall key count and memory stats
redis-cli INFO keyspace
redis-cli INFO memory

Typical memory consumption by WordPress component:

  • Options (alloptions): 200KB to 2MB, depending on installed plugins. The single biggest key in most WordPress installations.
  • Post meta: 1-5KB per post. On a site with 10,000 posts, this can reach 50MB if all meta is cached.
  • User meta: 2-10KB per user. Significant on membership sites.
  • Transients: Highly variable. Some plugins store megabytes of transient data.
  • Taxonomy terms: Usually small, but sites with thousands of tags/categories can accumulate significant data.

A rough sizing guide:

  • Small blog (under 1,000 posts, few plugins): 64-128MB
  • Medium business site (1,000-10,000 posts, 10-20 plugins): 256MB
  • Large WooCommerce store (10,000+ products): 512MB to 1GB
  • Multisite network (10+ sites): 1GB+

Eviction Policy Comparison by Platform

| Platform      | Default Policy  | Configurable | Default Memory     |
|---------------|-----------------|--------------|--------------------|
| WordPress VIP | LRU (Memcached) | No           | Managed            |
| Pantheon      | allkeys-lru     | No           | 256MB-1GB          |
| Kinsta        | allkeys-lru     | No           | 256MB+             |
| WP Engine     | allkeys-lru     | No           | Plan-dependent     |
| Cloudways     | allkeys-lru     | Yes          | 64MB (adjustable)  |
| Flywheel      | allkeys-lru     | No           | Fixed              |

Cloudways stands alone in allowing policy changes. If you are on Cloudways running a WooCommerce store, consider testing allkeys-lfu for better handling of frequently accessed product data.

Monitoring Cache Hit Rates and Memory Usage

A Redis object cache that you are not monitoring is a Redis object cache waiting to cause problems. The single most important metric is your cache hit rate. Anything below 85% deserves investigation.

WP-CLI Monitoring Commands

# Basic cache stats (works with redis-cache plugin)
wp redis info

# Detailed stats with Object Cache Pro
wp redis info --format=json

# Check hit rate specifically
wp redis info | grep -E 'hit_rate|hits|misses'

# Monitor Redis commands in real-time
wp redis monitor

# Get memory breakdown
wp redis info | grep -E 'used_memory|maxmemory|evicted'

Understanding Hit Rate Drops

When your hit rate drops suddenly, the cause is usually one of these:

1. Cache flush storms: A plugin or theme calling wp_cache_flush() too aggressively. This is the most common culprit. Some poorly coded plugins flush the entire cache when a single post is updated.

// BAD: Flushing entire cache when updating a post
function my_plugin_post_update( $post_id ) {
    wp_cache_flush(); // Nuclear option - kills hit rate
    // ... other logic
}

// GOOD: Invalidate specific keys
function my_plugin_post_update( $post_id ) {
    wp_cache_delete( $post_id, 'posts' );
    wp_cache_delete( $post_id, 'post_meta' );
    clean_post_cache( $post_id ); // WordPress helper
    // ... other logic
}

2. Memory pressure: Redis is full and evicting keys faster than they can be repopulated. Check evicted_keys in the Redis INFO output. If it is climbing, you need more memory or you need to cache less data.

3. Key prefix conflicts: On shared Redis instances (common on managed hosts), incorrect key prefixes can cause one site to flush another site’s cache. Always set a unique WP_REDIS_PREFIX or WP_REDIS_KEY_SALT.

4. TTL misconfiguration: Setting TTLs too low means keys expire before they can be reused. Setting them too high means stale data persists and memory fills up. For most WordPress data, a maxttl of 7 days (604800 seconds) is a good balance.

Building a Monitoring Dashboard

If you want continuous monitoring beyond what WP-CLI provides, you can set up a lightweight monitoring solution:

// Add to your theme's functions.php or a custom plugin
function wpkite_log_redis_stats() {
    if ( ! class_exists( 'Redis' ) ) {
        return;
    }

    $redis = new Redis();
    try {
        $redis->connect( 
            defined( 'WP_REDIS_HOST' ) ? WP_REDIS_HOST : '127.0.0.1',
            defined( 'WP_REDIS_PORT' ) ? WP_REDIS_PORT : 6379
        );
        
        if ( defined( 'WP_REDIS_PASSWORD' ) && WP_REDIS_PASSWORD ) {
            $redis->auth( WP_REDIS_PASSWORD );
        }

        $info = $redis->info();
        
        $stats = [
            'timestamp'        => current_time( 'mysql' ),
            'used_memory_mb'   => round( $info['used_memory'] / 1048576, 2 ),
            'peak_memory_mb'   => round( $info['used_memory_peak'] / 1048576, 2 ),
            'hit_rate'         => $info['keyspace_hits'] > 0
                ? round( $info['keyspace_hits'] / ( $info['keyspace_hits'] + $info['keyspace_misses'] ) * 100, 2 )
                : 0,
            'evicted_keys'     => $info['evicted_keys'],
            'connected_clients'=> $info['connected_clients'],
            'total_keys'       => array_sum( array_map( function( $db ) {
                preg_match( '/keys=(\d+)/', $db, $m );
                return isset( $m[1] ) ? (int) $m[1] : 0;
            }, array_filter( $info, function( $k ) {
                return strpos( $k, 'db' ) === 0;
            }, ARRAY_FILTER_USE_KEY ) ) ),
        ];

        // Log to a custom table or external service
        error_log( 'Redis Stats: ' . wp_json_encode( $stats ) );
        
    } catch ( Exception $e ) {
        error_log( 'Redis monitoring failed: ' . $e->getMessage() );
    }
}

// Register the custom interval first, so wp_schedule_event() can
// validate the recurrence when the event is scheduled
add_filter( 'cron_schedules', function( $schedules ) {
    $schedules['five_minutes'] = [
        'interval' => 300,
        'display'  => 'Every 5 Minutes',
    ];
    return $schedules;
} );

// Run every 5 minutes via WP Cron
if ( ! wp_next_scheduled( 'wpkite_redis_monitoring' ) ) {
    wp_schedule_event( time(), 'five_minutes', 'wpkite_redis_monitoring' );
}
add_action( 'wpkite_redis_monitoring', 'wpkite_log_redis_stats' );

WooCommerce-Specific Redis Optimization

WooCommerce is where Redis object caching has the biggest performance impact, and also where it requires the most careful tuning. WooCommerce generates significantly more database queries per page load than a standard WordPress site, especially on product pages, cart pages, and during checkout.

Session Handling

WooCommerce stores session data in the wp_woocommerce_sessions database table by default. Every cart update, every checkout field change, every coupon application triggers a database write. With hundreds of concurrent shoppers, this becomes a serious bottleneck.

Moving WooCommerce sessions to Redis eliminates this bottleneck:

// Object Cache Pro takes over WooCommerce session storage natively -
// just ensure the 'wc_session_id' and 'woocommerce_session' groups
// are not excluded from caching.

// Without OCP there is no core constant to flip. Instead, point the
// woocommerce_session_handler filter at a Redis-backed handler class
// (one you install or write, extending WC_Session_Handler):
add_filter( 'woocommerce_session_handler', function ( $handler ) {
    // 'My_Redis_Session_Handler' is a placeholder class name.
    if ( class_exists( 'My_Redis_Session_Handler' ) ) {
        return 'My_Redis_Session_Handler';
    }
    return $handler;
} );

Cart Fragment Optimization

WooCommerce’s cart fragment AJAX endpoint (wc-ajax=get_refreshed_fragments) is one of the most expensive operations on a WooCommerce site. By default it fires on every page load to keep the mini-cart in sync, and because it is a dynamic request it bypasses page caching entirely. Redis helps here by caching the fragment data:

// Cache cart fragments in Redis with a short TTL. Storing alone saves
// nothing: a companion read path (e.g. short-circuiting the AJAX handler
// when the cart is unchanged) must serve the cached copy.
function wpkite_cache_cart_fragments( $fragments ) {
    $user_id = get_current_user_id();
    if ( $user_id > 0 ) {
        wp_cache_set(
            'cart_fragments_' . $user_id,
            $fragments,
            'woocommerce',
            300 // 5 minute TTL
        );
    }
    return $fragments;
}
add_filter( 'woocommerce_add_to_cart_fragments', 'wpkite_cache_cart_fragments', 999 );

Product Data Caching

WooCommerce product objects are complex. A single product can trigger 20+ meta queries. Redis caches this meta, but the default behavior can be improved:

// Preload product meta for archive pages
function wpkite_preload_product_meta( $query ) {
    if ( ! is_admin() && $query->is_main_query() && ( is_shop() || is_product_taxonomy() ) ) {
        $query->set( 'update_post_meta_cache', true );
        $query->set( 'update_post_term_cache', true );
    }
}
add_action( 'pre_get_posts', 'wpkite_preload_product_meta' );

// Cache price-filter results separately. Because the cached value wins on a
// hit, clear the key whenever the product is saved, or you will serve stale
// prices for up to the TTL.
function wpkite_cache_product_prices( $price, $product ) {
    $cache_key = 'product_price_' . $product->get_id();
    $cached    = wp_cache_get( $cache_key, 'woocommerce_prices' );
    
    if ( false !== $cached ) {
        return $cached;
    }
    
    wp_cache_set( $cache_key, $price, 'woocommerce_prices', 3600 );
    return $price;
}
add_filter( 'woocommerce_product_get_price', 'wpkite_cache_product_prices', 10, 2 );

// Invalidate the cached price when the product is updated.
add_action( 'woocommerce_update_product', function ( $product_id ) {
    wp_cache_delete( 'product_price_' . $product_id, 'woocommerce_prices' );
} );

Transient Optimization for WooCommerce

WooCommerce uses transients heavily for storing product query results, shipping rate calculations, and tax computations. When Redis is active, these transients are stored in Redis instead of the database, which is much faster. However, WooCommerce also has a known issue where it creates excessive transients:

# Check how many WooCommerce transients are in Redis
redis-cli --scan --pattern '*transient*wc*' | wc -l

# If the count is in the thousands, you may need to:
# 1. Reduce transient TTLs
# 2. Disable unnecessary transient caching in WooCommerce settings
# 3. Use a cleanup routine

# Clean expired WooCommerce transients
wp transient delete --expired

Non-Persistent Groups for WooCommerce

Some WooCommerce data should not be cached in Redis because it changes too frequently or is request-specific:

// Add to your plugin or theme
function wpkite_wc_non_persistent_groups() {
    wp_cache_add_non_persistent_groups( [
        'wc_checkout_fields', // Changes per request based on user input
        'wc_shipping_zones',  // Can vary by user location
    ] );
}
add_action( 'init', 'wpkite_wc_non_persistent_groups' );

Debugging Object Cache Issues

When Redis misbehaves, you need a systematic approach to diagnose the problem. Here are the tools and techniques I rely on.

WP-CLI Cache Commands

WordPress and the redis-cache plugin provide several WP-CLI commands for debugging:

# Test basic cache operations
wp cache set test_key "hello" test_group 300
wp cache get test_key test_group
wp cache delete test_key test_group

# Verify the object cache drop-in is loaded
wp cache type
# Should output "Redis" if configured correctly

# Check Redis connectivity
wp redis status

# View Redis configuration
wp redis info

# Flush the cache (use with caution)
wp cache flush

# Flush specific group (Object Cache Pro only)
wp redis flush-group my_group

# Monitor Redis commands in real-time (great for debugging)
wp redis monitor

Diagnosing Connection Issues

The most common Redis issue is connection failure. When the object cache drop-in cannot connect to Redis, WordPress falls back to the in-memory cache silently. Your site still works, but performance degrades without any obvious error.

// Add to wp-config.php for debugging
define( 'WP_REDIS_GRACEFUL', false ); // Throw exceptions on connection failure
define( 'WP_DEBUG', true );
define( 'WP_DEBUG_LOG', true );

// Check the debug log for Redis errors
// wp-content/debug.log will contain entries like:
// Redis connection refused: Connection refused [tcp://127.0.0.1:6379]

Test connectivity directly:

# Test Redis connection from command line
redis-cli -h 127.0.0.1 -p 6379 ping
# Should return "PONG"

# Test with authentication
redis-cli -h 127.0.0.1 -p 6379 -a your_password ping

# Test from PHP
php -r "
\$r = new Redis();
try {
    \$r->connect('127.0.0.1', 6379);
    echo 'Connected: ' . \$r->ping() . PHP_EOL;
    echo 'Server info: ' . print_r(\$r->info('server'), true);
} catch (Exception \$e) {
    echo 'Failed: ' . \$e->getMessage() . PHP_EOL;
}
"

Identifying Problematic Plugins

Some plugins abuse the object cache by storing too much data, flushing the cache too often, or using inefficient key patterns. To identify these:

// Temporary debugging: log all cache operations
// WARNING: This generates massive log output. Use briefly.
function wpkite_debug_cache_set( $key, $data, $group, $expire ) {
    $size = strlen( maybe_serialize( $data ) );
    if ( $size > 102400 ) { // Log items over 100KB
        $backtrace = debug_backtrace( DEBUG_BACKTRACE_IGNORE_ARGS, 5 );
        $caller = '';
        foreach ( $backtrace as $frame ) {
            if ( isset( $frame['file'] ) ) {
                $caller .= basename( $frame['file'] ) . ':' . $frame['line'] . ' > ';
            }
        }
        error_log( sprintf(
            'Large cache set: group=%s key=%s size=%d caller=%s',
            $group, $key, $size, rtrim( $caller, ' > ' )
        ) );
    }
}
// Conceptual: WordPress core fires no hook on wp_cache_set, so wire this
// into your object-cache.php drop-in directly or call it from a wrapper.

The alloptions Problem

The alloptions cache key deserves special attention because it is the source of many subtle performance issues. WordPress loads all autoloaded options into a single cache entry on every request. When any option is updated, this entire entry is invalidated.

# Check the size of your alloptions cache entry
wp eval "
\$alloptions = wp_cache_get( 'alloptions', 'options' );
if ( is_array( \$alloptions ) ) {
    \$size = strlen( serialize( \$alloptions ) );
    echo 'alloptions size: ' . round( \$size / 1024, 2 ) . ' KB' . PHP_EOL;
    echo 'Number of options: ' . count( \$alloptions ) . PHP_EOL;
    
    // Find the largest options
    \$sizes = [];
    foreach ( \$alloptions as \$key => \$value ) {
        \$sizes[\$key] = strlen( serialize( \$value ) );
    }
    arsort( \$sizes );
    echo PHP_EOL . 'Top 10 largest autoloaded options:' . PHP_EOL;
    \$i = 0;
    foreach ( \$sizes as \$key => \$size ) {
        if ( \$i++ >= 10 ) break;
        echo sprintf( '  %s: %s KB', \$key, round( \$size / 1024, 2 ) ) . PHP_EOL;
    }
} else {
    echo 'alloptions not found in cache' . PHP_EOL;
}
"

If your alloptions entry is over 500KB, you should audit which options are set to autoload and disable autoloading for large options that are not needed on every request:

# Find large autoloaded options in the database
wp db query "SELECT option_name, LENGTH(option_value) as size_bytes 
FROM wp_options 
WHERE autoload = 'yes' 
ORDER BY size_bytes DESC 
LIMIT 20;"

# Disable autoload for a specific option
wp db query "UPDATE wp_options SET autoload = 'no' WHERE option_name = 'large_plugin_option';"

Object Cache Pro vs Free redis-cache Plugin

The two most popular Redis plugins for WordPress are the free Redis Object Cache plugin by Till Krüss and Object Cache Pro (also by Till Krüss, but as a commercial product). Let me compare them honestly.

Free redis-cache Plugin

The free plugin does the job. It provides a functional object-cache.php drop-in that connects WordPress to Redis. For many sites, this is all you need.

Strengths:

  • Free and open source.
  • Easy setup via WordPress plugin installer.
  • Basic monitoring in the admin dashboard.
  • Supports PhpRedis, Predis, and Credis clients.
  • Handles basic cache operations reliably.

Limitations:

  • No split_alloptions. The alloptions key remains monolithic.
  • No cache prefetching. Every cache miss triggers a round-trip to Redis.
  • No group flushing. wp_cache_flush_group() is not supported.
  • Basic serialization only (PHP serialize). No igbinary support.
  • No compression. Large cached values are stored at full size.
  • No analytics or detailed monitoring.
  • No Relay support for in-process caching.

Object Cache Pro

Object Cache Pro is a significant upgrade. It adds features that directly impact performance and observability.

Key Features:

  • split_alloptions: Breaks the monolithic alloptions key into individual keys. This alone can eliminate cache churn on busy sites.
  • Prefetching: Learns which keys are typically requested together and fetches them in a single pipeline, reducing round-trips.
  • Group flushing: Uses Lua scripts for atomic group-level cache invalidation.
  • igbinary serialization: 30-40% smaller serialized data compared to PHP serialize.
  • zstd/lzf compression: Further reduces memory usage by 50-70% for large values.
  • Relay support: In-process caching that reduces Redis round-trips by 80%+ for hot keys.
  • Analytics dashboard: Detailed hit/miss rates, memory usage, key distribution, and slow commands.
  • Multisite support: Proper global group handling and per-site flushing.
  • WooCommerce integration: Native session handling and cart caching.
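
The headline features above map onto WP_REDIS_CONFIG keys in wp-config.php. A hedged sketch — option names follow Object Cache Pro's documented configuration, but verify them against your plugin version, and the serializer/compression choices require the corresponding PHP extensions:

```php
// wp-config.php sketch enabling Object Cache Pro's headline features.
define( 'WP_REDIS_CONFIG', [
    'token'            => 'your-ocp-license',
    'host'             => '127.0.0.1',
    'port'             => 6379,
    'split_alloptions' => true,       // break up the monolithic alloptions key
    'prefetch'         => true,       // pipeline keys requested together
    'serializer'       => 'igbinary', // requires the igbinary extension
    'compression'      => 'zstd',     // requires the zstd extension
] );
```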

Performance Comparison

In my testing across multiple production sites, here are the typical improvements when switching from the free plugin to Object Cache Pro:

Metric                     | Free redis-cache | Object Cache Pro | Improvement
Average TTFB (logged-in)   | 280ms            | 160ms            | 43% faster
Redis memory usage         | 180MB            | 95MB             | 47% reduction
Cache hit rate             | 88%              | 96%              | +8 points
Redis commands per request | 45               | 12               | 73% reduction
Average Redis latency      | 0.8ms            | 0.2ms            | 75% faster

The reduction in Redis commands per request is the most telling metric. The free plugin makes individual GET calls for each cached value. Object Cache Pro batches these into pipelines and uses its prefetch engine to anticipate which keys will be needed. On a typical WordPress page that loads 40-60 cached objects, this batching alone saves 30-40ms of cumulative Redis latency.
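
To make that difference concrete, here is a minimal PhpRedis sketch (key names are invented for illustration) contrasting per-key GETs with a single pipelined batch:

```php
// Sketch only: contrast one-round-trip-per-key with a pipeline.
$redis = new Redis();
$redis->connect( '127.0.0.1', 6379 );

$keys = [ 'wp:alloptions', 'wp:post_5', 'wp:user_1' ]; // illustrative keys

// Free-plugin style: one network round-trip per key.
$values = [];
foreach ( $keys as $key ) {
    $values[] = $redis->get( $key );
}

// Pipelined style (what Object Cache Pro does internally): commands are
// buffered and sent together, so there is one round-trip for all keys.
$pipe = $redis->multi( Redis::PIPELINE );
foreach ( $keys as $key ) {
    $pipe->get( $key );
}
$values = $pipe->exec(); // array of results, in command order
```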

Cost Analysis

Object Cache Pro costs $95/month for a single site license (pricing as of late 2022). Is it worth it?

For a basic blog or brochure site: probably not. The free plugin handles these workloads well, and the performance difference is marginal.

For a WooCommerce store doing significant revenue: absolutely. If your store generates $10,000/month, recovering even 1% of abandoned checkouts through faster page loads covers the license cost on its own, and the return scales directly with revenue.

For an agency managing multiple sites: the agency license (unlimited sites) is the best value. You get Object Cache Pro’s benefits across every client site.

Some managed hosts (Pantheon, Cloudways) include Object Cache Pro licenses in their hosting plans, which eliminates the separate cost entirely. Factor this into your hosting decision.

Performance Benchmarks: With and Without Redis

Numbers matter. Here are benchmarks from real production sites across different platforms, comparing performance with no persistent object cache, with the free redis-cache plugin, and with Object Cache Pro.

Test Methodology

All tests were conducted using the following methodology:

  • Tool: WPPerformanceTester custom script using cURL for TTFB measurement.
  • 100 sequential requests per test (to warm caches), then 100 measured requests.
  • Tests run from a server in the same region as the hosting provider.
  • Pages tested: homepage, single post, WooCommerce product page, WooCommerce cart page (where applicable).
  • All tests with logged-in users to bypass page caching.

Benchmark Results: Standard WordPress Site

Site profile: 5,000 posts, 25 active plugins, custom theme with moderate complexity.

Homepage TTFB (logged-in, page cache bypassed):

Configuration    | Average TTFB | P95 TTFB | DB Queries
No object cache  | 420ms        | 680ms    | 87
Free redis-cache | 185ms        | 310ms    | 12
Object Cache Pro | 110ms        | 175ms    | 8
OCP + Relay      | 65ms         | 105ms    | 6

Single Post TTFB (logged-in):

Configuration    | Average TTFB | P95 TTFB | DB Queries
No object cache  | 350ms        | 520ms    | 65
Free redis-cache | 155ms        | 260ms    | 10
Object Cache Pro | 95ms         | 150ms    | 7
OCP + Relay      | 55ms         | 90ms     | 5

Benchmark Results: WooCommerce Store

Site profile: 8,000 products, WooCommerce 7.0, 30 active plugins, variable products with multiple attributes.

Product Page TTFB (logged-in customer):

Configuration    | Average TTFB | P95 TTFB | DB Queries
No object cache  | 780ms        | 1200ms   | 142
Free redis-cache | 320ms        | 480ms    | 28
Object Cache Pro | 180ms        | 260ms    | 15
OCP + Relay      | 95ms         | 140ms    | 11

Cart Page TTFB (logged-in customer with 5 items):

Configuration    | Average TTFB | P95 TTFB | DB Queries
No object cache  | 920ms        | 1500ms   | 178
Free redis-cache | 380ms        | 590ms    | 35
Object Cache Pro | 210ms        | 320ms    | 18
OCP + Relay      | 115ms        | 170ms    | 12

The WooCommerce results are dramatic. A cart page that takes 920ms without object caching drops to 115ms with Object Cache Pro and Relay. That is an 8x improvement. The reduction in database queries from 178 to 12 explains the bulk of this improvement. Each avoided query saves both the MySQL execution time and the PHP-to-MySQL round-trip overhead.

Platform-Specific Benchmark Observations

Across the platforms that offer Redis (WordPress VIP, which uses Memcached, is excluded here), I noticed consistent patterns:

Pantheon delivered the most consistent Redis performance, with minimal latency variance between requests. Their Redis infrastructure is well-provisioned. Average Redis command latency was 0.15ms.

Kinsta showed excellent baseline performance, but Redis latency was slightly higher at 0.25ms, likely due to their containerized architecture where Redis runs on a separate container from PHP.

WP Engine varied significantly by plan tier. Scale plans with dedicated Redis showed 0.2ms latency. Growth plans with shared Redis showed 0.4ms and occasional spikes to 1.5ms during peak hours.

Cloudways on DigitalOcean delivered 0.1ms Redis latency because Redis runs on the same server as PHP. This is the lowest latency I measured across all platforms, though it comes with the trade-off of sharing server resources.

Flywheel showed 0.3ms average latency with occasional spikes to 2ms, likely related to their internal proxy layer.

Advanced Configuration Patterns

Let me cover some advanced patterns that apply across platforms.

Selective Caching with Ignored Groups

Not everything should be cached in Redis. Some data changes so frequently that caching it actually hurts performance (due to constant invalidation):

// Define groups that should NOT be persisted to Redis
define( 'WP_REDIS_IGNORED_GROUPS', [
    'counts',       // Post counts change frequently
    'plugins',      // Plugin data should always be fresh
    'themes',       // Theme data should always be fresh
    'comment',      // Comments change frequently on active sites
] );

With Object Cache Pro, you get more granular control:

define( 'WP_REDIS_CONFIG', [
    // ... other settings ...
    'global_groups' => [
        'blog-details',
        'blog-id-cache',
        'blog-lookup',
        'global-posts',
        'networks',
        'rss',
        'sites',
        'site-details',
        'site-lookup',
        'site-options',
        'site-transient',
        'users',
        'useremail',
        'userlogins',
        'usermeta',
        'user_meta',
        'userslugs',
    ],
    'non_persistent_groups' => [
        'counts',
        'plugins',
        'themes',
    ],
    'non_prefetchable_groups' => [
        'session_tokens', // Too variable to benefit from prefetch
    ],
] );

Connection Pooling and Persistent Connections

By default, each PHP worker opens a new connection to Redis on every request. On high-traffic sites, this can exhaust Redis’s connection limit:

// Enable persistent connections
define( 'WP_REDIS_CONFIG', [
    // ... other settings ...
    'persistent'    => true,
    'timeout'       => 1.0,
    'read_timeout'  => 1.0,
] );

// For the free redis-cache plugin
define( 'WP_REDIS_TIMEOUT', 1 );
define( 'WP_REDIS_READ_TIMEOUT', 1 );

Persistent connections reuse the TCP connection across requests within the same PHP-FPM worker process. This eliminates the connection setup overhead (typically 0.5-1ms per request) and reduces the total number of connections to Redis.

Monitor your connection count:

# Check current connections
redis-cli INFO clients | grep connected_clients

# Check max connections setting
redis-cli CONFIG GET maxclients

If connected_clients approaches maxclients, either increase the limit or ensure persistent connections are enabled.
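
A hedged sketch of raising the ceiling — 20000 is a placeholder; size it above your peak PHP-FPM worker count across all app servers, and note the effective limit is also bounded by the process file-descriptor limit:

```shell
# redis.conf — persist the higher ceiling across restarts:
#   maxclients 20000

# Or apply it at runtime without a restart:
redis-cli CONFIG SET maxclients 20000
```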

Multisite Configuration

WordPress multisite requires careful Redis key management to prevent cache collisions between sites:

// Multisite Redis configuration
define( 'WP_REDIS_CONFIG', [
    // ... other settings ...
    'prefix'       => 'network1:',
    'global_groups' => [
        'users',
        'userlogins',
        'usermeta',
        'user_meta',
        'useremail',
        'userslugs',
        'site-transient',
        'site-options',
        'blog-details',
        'blog-id-cache',
        'blog-lookup',
        'site-details',
        'site-lookup',
        'networks',
        'sites',
        'global-posts',
        'rss',
    ],
    'strict' => true,
] );

Global groups are cached without a blog ID prefix, so they are shared across all sites in the network. Per-site groups automatically include the blog ID in the key, ensuring isolation. Getting this configuration wrong leads to data leakage between sites, which is a particularly unpleasant bug to debug.
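
To see that isolation in practice, scan the keyspace. The key shapes below are approximate — exact layouts depend on your prefix and plugin version:

```shell
# Approximate key shapes on a multisite network using the prefix above.
# Per-site keys embed the blog ID, so site 2 and site 3 never collide:
#   network1:2:post_meta:123
#   network1:3:post_meta:123
# Global-group keys omit the blog ID and are shared network-wide:
#   network1:users:1
redis-cli --scan --pattern 'network1:*' | head -20
```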

Redis Sentinel and Cluster

For enterprise deployments that require high availability, Redis Sentinel provides automatic failover:

// Redis Sentinel configuration (Object Cache Pro)
define( 'WP_REDIS_CONFIG', [
    'token'    => 'your-ocp-license',
    'sentinels' => [
        'tcp://sentinel1.example.com:26379',
        'tcp://sentinel2.example.com:26379',
        'tcp://sentinel3.example.com:26379',
    ],
    'service' => 'mymaster',
    'database' => 0,
    // ... other settings ...
] );

// Redis Cluster configuration (Object Cache Pro)
define( 'WP_REDIS_CONFIG', [
    'token'   => 'your-ocp-license',
    'cluster' => [
        'tcp://node1.example.com:6379',
        'tcp://node2.example.com:6379',
        'tcp://node3.example.com:6379',
    ],
    'cluster_failover' => 'error',
    // ... other settings ...
] );

Most managed WordPress hosts do not support Sentinel or Cluster configurations because they manage Redis infrastructure themselves. Cloudways is the exception, as you have full server access and can configure Sentinel if you provision multiple servers.

Practical Optimization Checklist

After working with Redis on WordPress across dozens of production sites, here is my distilled checklist for getting the most out of your object cache:

Before Enabling Redis

  1. Audit your autoloaded options. Run wp db query "SELECT COUNT(*), SUM(LENGTH(option_value)) FROM wp_options WHERE autoload='yes';" and, if the total exceeds 500KB, hunt down and fix the largest offenders.
  2. Identify plugin cache abusers. Install Query Monitor and look for plugins that set large transients or call wp_cache_flush() on routine operations.
  3. Measure baseline performance. Record your TTFB and database query count before enabling Redis so you can quantify the improvement.
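
For the baseline measurement in step 3, a sketch using curl's timing variables — the URL and cookie name are placeholders for your own site and auth cookie:

```shell
# Record TTFB before enabling Redis; repeat ~20 times and average.
# Send a logged-in cookie so the page cache is bypassed.
curl -o /dev/null -s --max-time 10 \
  -H 'Cookie: wordpress_logged_in_placeholder=1' \
  -w 'TTFB: %{time_starttransfer}s  total: %{time_total}s\n' \
  https://example.com/
```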

After Enabling Redis

  1. Verify the drop-in is active. Run wp cache type and confirm it shows “Redis”.
  2. Check hit rate after warm-up. Browse your site for 5 minutes, then check the hit rate. It should be above 85%.
  3. Monitor memory usage. If used_memory exceeds 80% of maxmemory within the first hour, you need a larger allocation.
  4. Set a unique key prefix. This is not optional on shared Redis instances.
  5. Configure maxttl. 7 days is a good default. Without a maxttl, some keys will persist forever.
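
For the hit-rate check in step 2, the cumulative keyspace_hits and keyspace_misses counters from INFO stats are all you need. A small sketch, demonstrated with sample numbers — for real data, pipe in redis-cli INFO stats | tr -d '\r' instead of the printf:

```shell
# Hit rate = keyspace_hits / (keyspace_hits + keyspace_misses).
printf 'keyspace_hits:960000\nkeyspace_misses:40000\n' \
  | awk -F: '/keyspace_hits/ {h=$2} /keyspace_misses/ {m=$2}
             END { printf "hit rate: %.1f%%\n", h / (h + m) * 100 }'
# prints "hit rate: 96.0%"
```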

Ongoing Maintenance

  1. Monitor eviction rate weekly. A sudden increase in evictions signals growing cache pressure.
  2. Review hit rate after plugin updates. New or updated plugins can change caching behavior dramatically.
  3. Test Redis failover behavior. Know what happens when Redis goes down. Your site should degrade gracefully, not crash.
  4. Keep the object cache plugin updated. Both the free redis-cache plugin and Object Cache Pro receive regular updates with performance improvements and bug fixes.
  5. Audit large keys quarterly. Use redis-cli --bigkeys to find keys consuming disproportionate memory.
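
For the eviction-rate check in step 1, remember that evicted_keys is a cumulative counter, so computing a rate requires two samples. A rough sketch with an assumed 10-second window:

```shell
# Sample the cumulative eviction counter twice and compute the delta.
e1=$(redis-cli INFO stats | tr -d '\r' | awk -F: '/evicted_keys/ {print $2}')
sleep 10
e2=$(redis-cli INFO stats | tr -d '\r' | awk -F: '/evicted_keys/ {print $2}')
echo "evictions/sec: $(( (e2 - e1) / 10 ))"
```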

Common Pitfalls and How to Avoid Them

Let me close with the mistakes I see most frequently when teams implement Redis object caching on WordPress.

Pitfall 1: Assuming Redis Fixes Everything

Redis object caching accelerates database-heavy operations. It does not fix slow PHP code, unoptimized images, render-blocking JavaScript, or poorly written database queries that bypass the cache entirely. I have seen teams enable Redis and then wonder why their 4-second page loads only dropped to 3.5 seconds. The problem was not the database; it was a plugin running an uncached $wpdb->query() on every request.

Always profile before optimizing. Query Monitor, New Relic, or Blackfire will tell you exactly where your time is being spent. If database queries account for less than 20% of your total response time, Redis will not produce a noticeable improvement.

Pitfall 2: Ignoring Serialization Overhead

PHP’s built-in serialize() function is not fast, especially for large arrays or complex objects. When the free redis-cache plugin serializes a 500KB alloptions array on every request, the serialization overhead itself can take 5-10ms. Object Cache Pro with igbinary reduces this to 1-2ms and produces a smaller payload that transfers faster over the Redis connection.

If you are on the free plugin and cannot upgrade to Object Cache Pro, at minimum install the igbinary PHP extension and see if the plugin supports it (recent versions do).
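
A sketch of the opt-in — WP_REDIS_IGBINARY is the constant recent Redis Object Cache releases check, but confirm your installed version supports it before relying on this:

```php
// wp-config.php — use igbinary only when the extension is actually loaded,
// so the site keeps working on environments without it.
if ( extension_loaded( 'igbinary' ) ) {
    define( 'WP_REDIS_IGBINARY', true );
}
```

Note that switching serializers invalidates existing cached values, so expect a brief wave of cache misses after the change.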

Pitfall 3: Flushing the Cache Too Often

I cannot overstate this. Calling wp_cache_flush() in production is almost always wrong. It deletes every key at once, so the next several hundred requests all experience cache misses and hammer the database simultaneously. This is called a “cache stampede,” and it can bring a busy site to its knees.

Use targeted invalidation instead:

// Instead of wp_cache_flush(), use:
wp_cache_delete( $specific_key, $specific_group );

// Or with Object Cache Pro:
wp_cache_flush_group( $specific_group );

// Or use WordPress's built-in functions:
clean_post_cache( $post_id );
clean_user_cache( $user_id );
clean_term_cache( $term_ids, $taxonomy );
clean_comment_cache( $comment_ids );

Pitfall 4: Not Monitoring After Initial Setup

Redis is not a “set it and forget it” component. I have encountered sites where Redis ran out of memory months after initial deployment because traffic grew, plugin usage changed, or a WooCommerce sale event created thousands of session keys. Without monitoring, the site owner did not know anything was wrong until customers started complaining about slow checkout.

Set up alerting for:

  • Hit rate dropping below 80%
  • Memory usage exceeding 85% of allocation
  • Eviction rate exceeding 100 keys/second
  • Connection count exceeding 80% of max
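
A minimal cron-driven sketch that extracts these metrics — the field names are standard INFO output, but the thresholds and alert delivery are left to whatever monitoring tool you use:

```shell
# Pull the raw numbers behind the four alerting thresholds above.
stats=$(redis-cli INFO | tr -d '\r')
hits=$(echo "$stats"    | awk -F: '/keyspace_hits/     {print $2}')
misses=$(echo "$stats"  | awk -F: '/keyspace_misses/   {print $2}')
used=$(echo "$stats"    | awk -F: '/^used_memory:/     {print $2}')
max=$(echo "$stats"     | awk -F: '/^maxmemory:/       {print $2}')
clients=$(echo "$stats" | awk -F: '/connected_clients/ {print $2}')

echo "hit rate: $(( hits * 100 / (hits + misses) ))%"
[ "$max" -gt 0 ] && echo "memory: $(( used * 100 / max ))% of maxmemory"
echo "connected clients: $clients"
```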

Pitfall 5: Using the Wrong Redis Database

Redis supports 16 databases (numbered 0-15) by default. Some hosting platforms use specific databases for specific purposes. For example, a host might use database 0 for object caching and database 1 for session storage. If you configure your object cache to use the same database as another service, you risk flushing the wrong data or experiencing unexpected key collisions.

Always verify which database your platform expects you to use:

# Check which databases have keys
redis-cli INFO keyspace
# Output like:
# db0:keys=15234,expires=3200,avg_ttl=86400
# db1:keys=450,expires=450,avg_ttl=1800

# Specify database in wp-config.php
define( 'WP_REDIS_DATABASE', 0 );

Platform Recommendation Summary

After testing and deploying Redis across all these platforms, here is my honest assessment of each one for Redis object caching:

Best overall Redis experience: Pantheon. Mature integration, proper Object Cache Pro partnership, sensible defaults, and good monitoring. Their Redis infrastructure is reliable and well-sized.

Best for control and customization: Cloudways. Full server access means you can configure Redis exactly how you want it. ACLs, custom eviction policies, Sentinel support. But you are responsible for getting it right.

Best for simplicity: Kinsta. One-click add-on, automatic configuration, excellent support team that understands Redis. The extra cost is the trade-off.

Most frustrating: WP Engine’s plan tiering. If you need Redis and are on a Startup plan, you are stuck. The forced upgrade path to get a basic infrastructure component feels artificial.

WordPress VIP is its own universe. Memcached works well for their use case. Do not fight it.

Flywheel is functional but limited. If Redis is critical to your architecture, you may outgrow Flywheel’s offering.

The bottom line: persistent object caching with Redis is not optional for any WordPress site handling significant traffic or running WooCommerce. The performance gains are too significant to leave on the table. Choose your platform based on how much control you need, how much you are willing to spend, and how deeply you want to customize the Redis configuration. Then monitor it like you would any other critical infrastructure component, because that is exactly what it is.


Marcus Chen

Staff engineer with 12 years in WordPress infrastructure. Previously at Automattic and a large media company. Writes about hosting platforms, caching, and deployment pipelines.