Performance

Advanced WordPress Caching Architecture: Combining Nginx FastCGI, Redis Object Cache, and Cloudflare APO

Sarah Kim
54 min read

Every WordPress site sits behind an invisible chain of decisions. When a visitor requests a page, that request passes through multiple systems before a response arrives. Each system along the path has the opportunity to answer the request from stored data rather than generating a fresh response. This is caching, and the difference between a well-cached WordPress site and one running without any caching strategy can be measured in seconds of load time, thousands of dollars in hosting costs, and millions of lost page views.

This article breaks down the full caching architecture for a production WordPress site using three specific technologies: Nginx FastCGI cache at the server level, Redis as a persistent object cache, and Cloudflare APO (Automatic Platform Optimization) at the edge. These three layers work together to reduce origin server load by 90% or more while keeping content fresh through a coordinated invalidation chain.

We will walk through real configuration files, examine the decision logic at each layer, address WooCommerce-specific patterns, and build debugging strategies for identifying which layer is serving stale content.

The Multi-Layer Caching Mental Model

Think of caching as a series of checkpoints between the visitor and your WordPress PHP code. Each checkpoint can either answer the request directly or pass it along to the next layer. The closer a cache layer is to the visitor, the faster the response and the less work your server performs.

Here are the four layers, ordered from closest to the visitor to closest to the database:

Layer 1: Edge Cache (Cloudflare APO)
This sits in Cloudflare’s global network of 300+ data centers. When a visitor in Tokyo requests your page, the Cloudflare POP (Point of Presence) in Tokyo can serve the cached HTML without the request ever reaching your origin server. Latency drops to single-digit milliseconds.

Layer 2: Server Cache (Nginx FastCGI)
If the request reaches your origin server, Nginx can serve a cached copy of the full HTML page from disk or memory without ever invoking PHP. The response time is typically 1-5ms from this layer.

Layer 3: Application Cache (Redis Object Cache)
If PHP must execute (because the page is dynamic or the server cache missed), Redis stores the results of expensive database queries and computed values. Instead of running 50-200 MySQL queries per page load, WordPress retrieves pre-computed data from Redis in microseconds.

Layer 4: Database Query Cache (MySQL/MariaDB)
The database itself caches data: the InnoDB buffer pool keeps hot pages in memory, and older versions offered a dedicated query cache (removed entirely in MySQL 8.0). This layer is less controllable and less effective than the layers above, and we will not focus on it here because Redis largely replaces its function for WordPress workloads.

The mental model works like this: each layer absorbs a percentage of traffic so that the layers behind it handle progressively less work. In a well-tuned setup:

– Cloudflare APO handles 70-85% of all page requests at the edge
– Nginx FastCGI cache handles 10-20% at the server level (requests that bypass the edge)
– Redis object cache reduces database queries by 80-95% for the remaining dynamic requests
– MySQL handles only the queries that cannot be cached (writes, real-time data)
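To see how those percentages compound, here is the arithmetic for a hypothetical 100,000-request day. The hit rates are illustrative assumptions in the mid-range of the figures above, not measurements:

```shell
# Worked example: 100,000 requests/day through the layers.
# All hit rates below are assumed mid-range figures for illustration.
awk 'BEGIN {
    total   = 100000
    edge    = total * 80 / 100           # 80% served by Cloudflare APO
    server  = (total - edge) * 75 / 100  # 75% of the rest served by Nginx
    php     = total - edge - server      # requests that execute PHP
    queries = php * 90                   # ~90 MySQL queries per uncached page
    mysql   = queries * 10 / 100         # queries left after a 90% Redis hit ratio
    printf "edge=%d server=%d php=%d mysql_queries=%d\n", edge, server, php, mysql
}'
# edge=80000 server=15000 php=5000 mysql_queries=45000
```

Only 5,000 of 100,000 requests execute PHP, and Redis cuts the database work on those by another order of magnitude, which is where the 90%-plus origin load reduction comes from.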

When Each Cache Layer Activates

Understanding when each layer fires is critical for debugging and for designing your cache rules. Here is the decision tree for a typical page request.

Edge Layer Decision (Cloudflare APO)

Cloudflare APO checks its Workers KV store for a cached copy of the requested URL. It serves from cache when:

– The URL matches a previously cached HTML page
– The visitor does not have a WordPress login cookie (wordpress_logged_in_*)
– The visitor does not have a WooCommerce cart cookie (woocommerce_items_in_cart)
– The cache entry has not expired or been purged

If any of those conditions fail, the request passes to your origin server.
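The cookie checks above can be sketched as a small shell function. This is only an illustration of the decision logic, not Cloudflare's actual Worker code:

```shell
# Sketch of APO's bypass decision: given the request's Cookie header,
# either bypass the edge cache or proceed to the KV lookup.
apo_decision() {
    case "$1" in
        *wordpress_logged_in*|*woocommerce_items_in_cart*) echo "BYPASS" ;;
        *) echo "LOOKUP" ;;
    esac
}

apo_decision "wordpress_logged_in_abc123=1"   # BYPASS
apo_decision "woocommerce_items_in_cart=2"    # BYPASS
apo_decision "_ga=GA1.2.123456789"            # LOOKUP
```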

Server Layer Decision (Nginx FastCGI)

When a request reaches Nginx, the FastCGI cache module checks for a cached response. It serves from cache when:

– The request method is GET or HEAD (not POST)
– No cookies indicate a logged-in user or active cart session
– The $skip_cache variable is not set to 1
– A valid cache file exists for the request URI
– The cache file has not expired

If the cache file does not exist or conditions require skipping, Nginx passes the request to PHP-FPM.

Application Layer Decision (Redis Object Cache)

Redis operates at a different granularity. It does not cache full pages. Instead, it caches individual data objects that WordPress requests through the wp_cache API. Every call to wp_cache_get() checks Redis first. On a hit, the data returns from Redis in under a millisecond. On a miss, WordPress queries MySQL, and the result is stored in Redis for subsequent requests.

Redis activates on every dynamic page load. There is no bypass logic at this layer because its purpose is to accelerate PHP execution, not to replace it.

Nginx FastCGI Cache: Configuration and Skip Rules

The Nginx FastCGI cache stores full rendered HTML pages on disk. When a cache hit occurs, Nginx serves the file directly without touching PHP-FPM at all. This is the single most impactful caching layer for logged-out visitors because it eliminates PHP execution entirely.

Defining the Cache Zone

The cache zone is defined in the http block of your Nginx configuration, typically in /etc/nginx/nginx.conf or a dedicated include file:

# /etc/nginx/conf.d/fastcgi-cache.conf

# Define the cache zone
# levels: directory structure depth (reduces files per directory)
# keys_zone: shared memory zone for cache keys (name:size)
# max_size: maximum disk space for cached files
# inactive: purge entries not accessed within this window
# use_temp_path: write directly to cache path (avoids cross-device moves)

fastcgi_cache_path /var/cache/nginx/fastcgi
    levels=1:2
    keys_zone=WORDPRESS:256m
    max_size=4g
    inactive=7d
    use_temp_path=off;

# Define the cache key structure
fastcgi_cache_key "$scheme$request_method$host$request_uri";

# Ignore upstream cache-control headers from WordPress
# (WordPress sends no-cache by default for logged-in users, which we handle separately)
fastcgi_ignore_headers Cache-Control Expires Set-Cookie;

The keys_zone parameter deserves attention. The 256m of shared memory holds cache keys and metadata, not the cached content itself. Each cache entry uses roughly 128 bytes of key zone memory, so 256MB supports approximately two million cached URLs. The actual cached HTML files live on disk within the /var/cache/nginx/fastcgi directory.

The levels=1:2 parameter creates a two-level directory hierarchy. Instead of dumping millions of files into one directory (which destroys filesystem performance), Nginx creates subdirectories based on the hash of the cache key. A typical cached file path looks like /var/cache/nginx/fastcgi/3/a8/b2f7e4d1c9a8b3f7e4d1c9a8b3f7ea83.
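You can reproduce that path yourself: the file name is the MD5 hash of the cache key, and levels=1:2 takes the last character of the hash, then the two characters before it, as directory names. A sketch, assuming the fastcgi_cache_key layout defined above:

```shell
# Compute the on-disk cache path for a request, matching
# fastcgi_cache_key "$scheme$request_method$host$request_uri"
key='httpsGETexample.com/blog/my-post/'
hash=$(printf '%s' "$key" | md5sum | awk '{print $1}')
level1=$(echo "$hash" | cut -c32)      # last character of the 32-char hash
level2=$(echo "$hash" | cut -c30-31)   # the two characters before it
echo "/var/cache/nginx/fastcgi/$level1/$level2/$hash"
```

The same computation is what a filesystem-based purge uses to locate the file to delete for a given URL.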

Skip Rules for Logged-In Users and Dynamic Pages

The most critical part of FastCGI cache configuration is deciding which requests to skip. Getting this wrong either serves cached content to logged-in users (showing the wrong dashboard or admin bar) or caches pages that should be dynamic (cart pages, checkout flows).

# /etc/nginx/conf.d/skip-cache-rules.conf

# Default: do not skip cache
set $skip_cache 0;

# Skip cache for POST requests
if ($request_method = POST) {
    set $skip_cache 1;
}

# Skip cache for URLs with query strings
# (optional: you may want to cache some query string pages)
if ($query_string != "") {
    set $skip_cache 1;
}

# Skip cache for WordPress core dynamic pages
if ($request_uri ~* "/wp-admin/|/xmlrpc\.php|wp-.*\.php|/feed/|index\.php|sitemap(_index)?\.xml") {
    set $skip_cache 1;
}

# Skip cache for logged-in users (check WordPress cookies)
if ($http_cookie ~* "comment_author|wordpress_[a-f0-9]+|wp-postpass|wordpress_no_cache|wordpress_logged_in") {
    set $skip_cache 1;
}

# Skip cache for WooCommerce-specific cookies and pages
if ($http_cookie ~* "woocommerce_items_in_cart|woocommerce_cart_hash|wp_woocommerce_session") {
    set $skip_cache 1;
}

if ($request_uri ~* "/cart/|/checkout/|/my-account/|/addons/|/store/order|/store/cancel") {
    set $skip_cache 1;
}

Applying the Cache in the Server Block

With the zone and skip rules defined, the server block ties everything together:

# /etc/nginx/sites-available/wordpress.conf

server {
    listen 443 ssl http2;
    server_name example.com www.example.com;

    root /var/www/example.com/public;
    index index.php;

    # SSL configuration (abbreviated)
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # Include skip cache rules
    include /etc/nginx/conf.d/skip-cache-rules.conf;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        try_files $uri =404;

        # FastCGI pass to PHP-FPM
        fastcgi_pass unix:/run/php/php8.2-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;

        # FastCGI cache directives
        fastcgi_cache WORDPRESS;
        fastcgi_cache_valid 200 301 302 60m;
        fastcgi_cache_valid 404 1m;
        fastcgi_cache_bypass $skip_cache;
        fastcgi_no_cache $skip_cache;

        # Add header to identify cache status in responses
        add_header X-FastCGI-Cache $upstream_cache_status;

        # Serve stale content while revalidating in background
        fastcgi_cache_use_stale error timeout updating invalid_header http_500 http_503;
        fastcgi_cache_background_update on;
        fastcgi_cache_lock on;
        fastcgi_cache_lock_timeout 5s;
    }

    # Static file caching
    location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg|woff|woff2|ttf|eot)$ {
        expires 365d;
        add_header Cache-Control "public, immutable";
        access_log off;
    }
}

The fastcgi_cache_use_stale directive is particularly valuable. When a cache entry expires, the first visitor triggers a background update while still receiving the stale (but still recent) cached version. This prevents the “thundering herd” problem where hundreds of simultaneous requests hammer PHP because the cache just expired.

The fastcgi_cache_lock directive ensures that when multiple requests arrive for the same uncached URL simultaneously, only one request goes to PHP-FPM. The rest wait for that single request to populate the cache, then serve from it.

Cache Purging with Nginx Helper

Caching is only half the equation. You also need a reliable way to purge cached pages when content changes. The Nginx Helper WordPress plugin (maintained by rtCamp) integrates with the FastCGI cache to purge entries when posts are updated:

# Required: compile Nginx with the ngx_cache_purge module
# or use the filesystem-based purge method

# If using ngx_cache_purge module, add this location block:
location ~ /purge(/.*) {
    allow 127.0.0.1;
    deny all;
    fastcgi_cache_purge WORDPRESS "$scheme$request_method$host$1";
}

If the ngx_cache_purge module is not available (common on distributions that ship stock Nginx), configure Nginx Helper to use the filesystem-based purge method. It directly deletes cache files from /var/cache/nginx/fastcgi based on the URL hash. This requires the web server user (typically www-data) to have write access to the cache directory.

Understanding the FastCGI Cache File Format

Each cached response is stored as a single file on disk. The file contains the full HTTP response, including headers and body. When Nginx serves from cache, it reads the file and sends it to the client without any processing. The file format starts with a binary header containing metadata (cache key, expiration time, HTTP status), followed by the raw HTTP response headers, a blank line, and then the response body.

You can inspect a cached file manually to verify what was stored:

# Read the cached file (the binary header will show as garbled text, but headers and HTML are readable)
head -50 /var/cache/nginx/fastcgi/3/a8/b2f7e4d1c9a8b3f7e4d1c9a8b3f7ea83

This is useful when debugging whether a stale version is stored in the cache file itself or whether the staleness originates from a different layer.

Redis Object Cache: The Application Layer

While Nginx FastCGI cache eliminates PHP for full-page requests, Redis object cache accelerates the PHP execution that still needs to happen. Every WordPress page load involves dozens to hundreds of database queries: options, post meta, term relationships, user data, transients, and more. Redis stores these results in memory so they do not hit MySQL on subsequent requests.

How the wp_cache API Works

WordPress has a built-in object caching API centered on four functions:

// Store a value
wp_cache_set( 'my_key', $data, 'my_group', 3600 );

// Retrieve a value
$data = wp_cache_get( 'my_key', 'my_group' );

// Delete a value
wp_cache_delete( 'my_key', 'my_group' );

// Store only if the key does not exist
wp_cache_add( 'my_key', $data, 'my_group', 3600 );

By default, WordPress uses an in-memory array for object caching. This means cached values only persist for the duration of a single request. When you install a persistent object cache drop-in like Redis Object Cache (by developer Till Kruss), the wp_cache_* functions are redirected to Redis. Values persist across requests, across page loads, and across multiple PHP-FPM workers.

The performance difference is dramatic. A typical WordPress page load without persistent object cache runs 80-200 MySQL queries. With Redis, the first page load runs those queries and stores results in Redis. Subsequent loads for the same data retrieve everything from Redis, often reducing MySQL queries to 5-15 per page (just the writes and uncacheable lookups).

Cache Groups and Their Purpose

WordPress organizes cached data into groups. Understanding groups is essential for selective cache flushing:

// WordPress core groups:
wp_cache_get( 1, 'posts' );           // Post objects
wp_cache_get( 1, 'post_meta' );       // Post metadata
wp_cache_get( 1, 'terms' );           // Taxonomy terms
wp_cache_get( 1, 'term_meta' );       // Term metadata
wp_cache_get( 'alloptions', 'options' ); // Autoloaded options
wp_cache_get( 1, 'users' );           // User objects
wp_cache_get( 1, 'user_meta' );       // User metadata
wp_cache_get( 'widget_recent_posts', 'transient' ); // Transients

// WooCommerce adds its own groups:
wp_cache_get( 1, 'products' );        // Product objects
wp_cache_get( 1, 'order_items' );     // Order line items
wp_cache_get( 'shipping_zones', 'transient' ); // Shipping data

Each group maps to a Redis key prefix. When you call wp_cache_get( 42, 'posts' ), the Redis Object Cache plugin translates this to a Redis key like wp:posts:42 (the exact format depends on configuration). This prefix structure enables group-level operations.

Global Cache Groups vs. Per-Site Groups

In a WordPress multisite installation, the distinction between global and per-site cache groups becomes critical. Global groups share data across all sites in the network, while per-site groups are prefixed with the blog ID to keep data isolated.

// Register global groups (shared across all sites in multisite)
wp_cache_add_global_groups( array(
    'users',
    'userlogins',
    'usermeta',
    'site-transient',
    'site-options',
    'blog-lookup',
    'blog-details',
    'site-details',
    'rss',
) );

// Per-site groups are automatically prefixed: wp:{blog_id}:posts:42
// Global groups skip the blog_id prefix: wp:users:42

On a multisite with 50 sites, a full cache flush clears data for every site. Understanding this distinction helps you flush only what changed. If you update a user profile, you only need to clear the global users and usermeta groups. If you update a post on site 12, you only need to clear site 12’s posts and post_meta groups.
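The prefix structure makes targeted flushes scriptable. This sketch filters a hard-coded sample key list to show which keys a post update on site 12 would need to clear; on a live server you would pipe redis-cli --scan output through the same grep:

```shell
# Which keys does a post update on site 12 touch?
# (sample keys in the wp:{blog_id}:group:id and wp:group:id formats)
printf '%s\n' \
    'wp:12:posts:42' \
    'wp:12:post_meta:42' \
    'wp:7:posts:42' \
    'wp:users:3' \
| grep -E '^wp:12:(posts|post_meta):'
# wp:12:posts:42
# wp:12:post_meta:42
```

Note how site 7's post cache and the global users group survive the filter untouched.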

Redis Configuration for WordPress

The Redis server configuration should be tuned for WordPress workloads:

# /etc/redis/redis.conf (relevant sections)

# Bind to localhost only (security)
bind 127.0.0.1
port 6379

# Set a password (always, even on localhost)
requirepass your_strong_redis_password_here

# Memory limit: size based on your dataset
# A typical WordPress site uses 50-200MB of object cache
maxmemory 512mb

# Eviction policy: remove least recently used keys when memory is full
# allkeys-lru works well for WordPress because all keys are cache data
maxmemory-policy allkeys-lru

# Persistence: RDB snapshots for crash recovery
# Not critical for cache (data can be regenerated) but reduces warmup time
save 900 1
save 300 10
save 60 10000

# Disable AOF for cache-only workloads (reduces disk I/O)
appendonly no

# TCP keepalive: detect dead connections
tcp-keepalive 300

# Max clients: match your PHP-FPM worker count plus some headroom
maxclients 512

The WordPress-side configuration lives in wp-config.php:

// wp-config.php Redis configuration

// Redis connection details
define( 'WP_REDIS_HOST', '127.0.0.1' );
define( 'WP_REDIS_PORT', 6379 );
define( 'WP_REDIS_PASSWORD', 'your_strong_redis_password_here' );
define( 'WP_REDIS_DATABASE', 0 ); // Use database 0 (0-15 available)

// Key prefix: essential for multisite or multiple WP installs sharing one Redis
define( 'WP_REDIS_PREFIX', 'wp_prod:' );

// Timeout settings
define( 'WP_REDIS_TIMEOUT', 1 );        // Connection timeout in seconds
define( 'WP_REDIS_READ_TIMEOUT', 1 );   // Read timeout in seconds

// Disable Redis for specific cache groups if needed
// define( 'WP_REDIS_IGNORED_GROUPS', ['counts', 'plugins'] );

// Enable igbinary serialization for a smaller memory footprint
// (requires the igbinary PHP extension)
// define( 'WP_REDIS_IGBINARY', true );

Selective Cache Flushing

A full Redis flush (wp cache flush or FLUSHDB) is rarely necessary and creates a temporary performance penalty while the cache warms back up. Instead, use selective flushing to clear only the data that changed:

// Clear a specific post's cached data
wp_cache_delete( $post_id, 'posts' );
wp_cache_delete( $post_id, 'post_meta' );
clean_post_cache( $post_id ); // WordPress helper that clears related caches

// Clear all transients (useful after plugin updates)
// This targets only the 'transient' group, leaving post/user caches intact
global $wp_object_cache;
if ( method_exists( $wp_object_cache, 'delete_group' ) ) {
    $wp_object_cache->delete_group( 'transient' );
}

// Clear term caches after taxonomy changes
clean_term_cache( $term_id, $taxonomy );

// WooCommerce: clear product cache after price update
wc_delete_product_transients( $product_id );
clean_post_cache( $product_id );

You can also manage Redis directly through the CLI for bulk operations:

# Connect to Redis CLI
redis-cli -a your_strong_redis_password_here

# View all WordPress keys
KEYS wp_prod:*

# Count keys per group
# (do not run KEYS in production on large datasets; use SCAN instead)
redis-cli -a your_password --scan --pattern 'wp_prod:posts:*' | wc -l
redis-cli -a your_password --scan --pattern 'wp_prod:options:*' | wc -l

# Delete all keys in a specific group
redis-cli -a your_password --scan --pattern 'wp_prod:transient:*' | xargs redis-cli -a your_password DEL

# Monitor real-time Redis commands (debugging only, impacts performance)
redis-cli -a your_password MONITOR

# Check memory usage
redis-cli -a your_password INFO memory

Redis Performance Monitoring

Track these Redis metrics to verify your object cache is working correctly:

# Key Redis metrics to monitor
redis-cli -a your_password INFO stats

# Look for:
# keyspace_hits: number of successful key lookups (should be high)
# keyspace_misses: number of failed key lookups (should be low relative to hits)
# Hit ratio = keyspace_hits / (keyspace_hits + keyspace_misses)
# Target: 85-95% hit ratio

# evicted_keys: keys removed due to maxmemory policy
# If this number grows rapidly, increase maxmemory

# connected_clients: current client connections
# Should roughly match active PHP-FPM workers

A healthy WordPress Redis cache typically shows a hit ratio above 90%. If your hit ratio is below 80%, investigate whether plugins are setting very short TTLs, whether your maxmemory is too low (causing excessive evictions), or whether a plugin is calling wp_cache_flush() too frequently.
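The hit ratio is easy to compute from the raw counters. The awk program below runs against sample numbers; in practice you would pipe the output of redis-cli -a your_password INFO stats into the same program:

```shell
# Compute the object cache hit ratio from "INFO stats" counters
printf 'keyspace_hits:184230\r\nkeyspace_misses:9417\r\n' \
| tr -d '\r' \
| awk -F: '
    /^keyspace_hits/   { hits = $2 }
    /^keyspace_misses/ { misses = $2 }
    END { printf "hit ratio: %.1f%%\n", 100 * hits / (hits + misses) }
'
# hit ratio: 95.1%
```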

Redis Sentinel and High Availability

For sites where downtime is unacceptable, Redis Sentinel provides automatic failover. Sentinel monitors your Redis primary instance and promotes a replica to primary if the original goes down. The WordPress Redis Object Cache plugin supports Sentinel natively:

// wp-config.php Sentinel configuration
define( 'WP_REDIS_SENTINEL', 'mymaster' );
define( 'WP_REDIS_SERVERS', array(
    'tcp://sentinel-1.example.com:26379',
    'tcp://sentinel-2.example.com:26379',
    'tcp://sentinel-3.example.com:26379',
) );
define( 'WP_REDIS_PASSWORD', 'your_strong_redis_password_here' );

The plugin connects to one of the Sentinel instances, asks which server is the current primary, and connects to that. If the primary fails, Sentinel promotes a replica within seconds, and the next PHP request reconnects automatically. This adds latency for the reconnection request (typically 50-100ms), but subsequent requests resume normal Redis speed.

Cloudflare APO: Edge Caching for WordPress

Cloudflare APO (Automatic Platform Optimization) is purpose-built for WordPress. Unlike standard Cloudflare caching that only handles static assets (images, CSS, JS), APO caches full HTML pages at the edge. It uses Cloudflare Workers and Workers KV to store and serve complete page responses from the nearest Cloudflare data center.

How APO Differs from Standard Cloudflare Caching

Standard Cloudflare caching with default page rules has significant limitations for WordPress:

– HTML pages are not cached by default (Cloudflare returns them from origin)
– Setting a “Cache Everything” page rule caches HTML but does not understand WordPress login states
– Logged-in users may receive cached versions of pages meant for anonymous visitors
– Cache invalidation requires manual purge or API calls

APO solves these problems by integrating directly with WordPress through the Cloudflare plugin:

– Automatically detects WordPress login cookies and bypasses cache for authenticated users
– Stores HTML in Workers KV (a globally distributed key-value store) rather than standard CDN cache
– Listens for WordPress post update hooks to trigger selective edge purges
– Handles WooCommerce session cookies correctly (when configured)

APO Architecture: Workers KV

When APO is enabled, every cacheable page is stored in Cloudflare Workers KV. Here is how the flow works:

1. Anonymous visitor requests example.com/blog/my-post/
2. The Cloudflare Worker intercepts the request at the edge
3. Worker checks if the visitor has any WordPress cookies (logged-in, cart, etc.)
4. If no cookies: Worker checks KV for a cached version of this URL
5. If KV hit: Worker returns the cached HTML directly (response time: 5-25ms globally)
6. If KV miss: Worker fetches from origin, caches the response in KV, and returns it
7. If cookies present: Worker passes the request to origin, bypassing cache entirely

The Workers KV storage is eventually consistent with a propagation time of roughly 60 seconds globally. This means that after a cache purge, some edge locations may still serve the old version for up to a minute. For most content sites, this is acceptable. For sites where instant consistency matters (stock prices, live scores), APO should be disabled on those specific paths.

APO and Vary Headers: Device-Specific Caching

One of the less documented aspects of APO is how it handles device-specific content. If your WordPress theme sends a Vary: User-Agent header, APO will store separate cache entries for different user agents. This can explode your cache storage because every unique user agent string creates a new cache entry. The practical effect is abysmal cache hit ratios since most visitors have slightly different user agent strings.

The solution: never send Vary: User-Agent if you are using responsive CSS. Use the APO “Cache By Device Type” option only if you serve genuinely different HTML to mobile and desktop visitors. When enabled, APO buckets visitors into “mobile” and “desktop” and stores two versions per URL. This is far better than varying by the full user agent string.

Configuring APO with the Cloudflare Plugin

Install the Cloudflare WordPress plugin and enable APO in the plugin settings. The plugin communicates with the Cloudflare API to:

– Register the site for APO
– Set up the Worker routes
– Configure bypass rules for WordPress cookies
– Hook into save_post, edit_post, delete_post, and transition_post_status to trigger purges

The plugin settings in wp-admin are straightforward, but the underlying API calls look like this:

// What the Cloudflare plugin does internally on post save:
// (simplified for illustration)

add_action( 'save_post', function( $post_id, $post ) {
    if ( wp_is_post_revision( $post_id ) || wp_is_post_autosave( $post_id ) ) {
        return;
    }

    $urls_to_purge = array(
        get_permalink( $post_id ),
        home_url( '/' ),
        get_post_type_archive_link( $post->post_type ),
    );

    // Add category and tag archive URLs
    $terms = wp_get_post_terms( $post_id, array( 'category', 'post_tag' ) );
    foreach ( $terms as $term ) {
        $urls_to_purge[] = get_term_link( $term );
    }

    // Purge via Cloudflare API
    $api_url = "https://api.cloudflare.com/client/v4/zones/{$zone_id}/purge_cache";
    wp_remote_request( $api_url, array(
        'method'  => 'POST',
        'headers' => array(
            'Authorization' => "Bearer {$api_token}",
            'Content-Type'  => 'application/json',
        ),
        'body'    => wp_json_encode( array( 'files' => $urls_to_purge ) ),
    ));
}, 10, 2 );

APO-Specific Headers

Cloudflare APO adds response headers that help with debugging:

# Key APO response headers:

cf-cache-status: HIT          # Served from Cloudflare cache
cf-cache-status: MISS         # Fetched from origin, now cached
cf-cache-status: BYPASS       # Not cached (logged-in user, excluded path)
cf-cache-status: DYNAMIC      # APO not active for this request

cf-apo-via: tcache            # Served from APO's Workers KV cache
cf-edge-cache: cache,platform=wordpress  # APO is active for this site

APO Limitations and Workarounds

APO is not a silver bullet. Several limitations require attention:

30-URL purge limit per API call. Each purge request can specify a maximum of 30 URLs. If a post belongs to 15 categories and 10 tags, a single post update may need to purge 25+ URLs (the post itself, the homepage, each category archive, each tag archive, paginated pages). For posts with many taxonomy terms, you may need to batch your purge calls or fall back to purging everything.

No query string awareness. APO caches URLs with query strings separately from the base URL. If your site uses pagination via query strings (?paged=2), each paginated page is a separate cache entry. This is usually fine, but it means purging page 1 of a category archive does not automatically purge page 2 through page 10. Your purge logic must enumerate paginated URLs.
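Both limitations can be handled with enumeration plus batching: build the full URL list, paginated pages included, then split it into groups of 30. A shell sketch; the ZONE_ID and API_TOKEN placeholders are assumptions, and the actual API call is left as a comment because it needs live credentials:

```shell
# Enumerate a category archive plus its paginated pages,
# then batch the URLs in groups of 30 for the purge API
base='https://example.com/category/news/'
urls=$({ printf '%s\n' "$base"; seq 2 45 | sed "s|^|${base}?paged=|"; })

echo "$urls" | xargs -n 30 | while read -r batch; do
    count=$(echo "$batch" | wc -w | tr -d ' ')
    echo "purging batch of $count URLs"
    # curl -s -X POST "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/purge_cache" \
    #   -H "Authorization: Bearer $API_TOKEN" \
    #   -H "Content-Type: application/json" \
    #   --data "{\"files\":[...]}"   # JSON-encode this batch here
done
# purging batch of 30 URLs
# purging batch of 15 URLs
```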

TTL is controlled by Cloudflare, not by your headers. APO sets its own cache TTL (typically 30 days) and ignores your origin Cache-Control headers for HTML pages. The only way to control freshness is through explicit purges. This is different from standard Cloudflare caching, where you can set TTL via headers or page rules.

The Cache Invalidation Chain: When Content Updates

Cache invalidation is famously one of the two hard problems in computer science. With three cache layers, a single content update must propagate through all of them. Here is exactly what happens when an editor publishes a post update.

Step 1: WordPress save_post Hook Fires

When the post is saved, WordPress fires the save_post action. Multiple cache-related callbacks are attached to this hook:

// Order of cache invalidation on save_post:

// 1. WordPress core clears internal object cache for this post
clean_post_cache( $post_id );
// This calls wp_cache_delete for the post object, post meta, and related queries

// 2. Redis Object Cache drop-in removes these keys from Redis
// Keys deleted: wp_prod:posts:{$post_id}, wp_prod:post_meta:{$post_id}, etc.

// 3. Nginx Helper plugin purges FastCGI cache files
// Deletes: /var/cache/nginx/fastcgi/{hash_of_post_url}
// Also purges: homepage, archive pages, category pages, tag pages, feed

// 4. Cloudflare plugin sends API purge request
// Purges: post URL, homepage, relevant archive/taxonomy URLs from Workers KV

Step 2: Propagation Timing

Each layer clears at different speeds:

Redis: Instant. The DEL command completes in microseconds.
Nginx FastCGI: Instant. The cache file is unlinked from disk immediately.
Cloudflare APO: 5-30 seconds. The API call takes a moment, and Workers KV propagation across all edge locations takes additional time.

This means there is a brief window (up to 60 seconds) where the edge may serve stale content while the origin is already serving the updated version. For most sites, this gap is invisible to users. If you need tighter consistency, you can add a Cloudflare Worker that checks for a version header from the origin and revalidates when it detects a mismatch.

Step 3: Cache Warming

After purging, the next visitor to each purged URL triggers a cache miss, and the full generation chain runs:

1. Cloudflare APO: MISS, passes to origin
2. Nginx FastCGI: MISS, passes to PHP-FPM
3. PHP runs, Redis serves object cache hits (reducing MySQL load)
4. PHP returns rendered HTML
5. Nginx stores the HTML in FastCGI cache
6. Cloudflare stores the HTML in Workers KV

For high-traffic sites, you may want to proactively warm the cache after purges rather than waiting for the first visitor:

// Cache warming after post update
add_action( 'save_post', function( $post_id ) {
    if ( wp_is_post_revision( $post_id ) || wp_is_post_autosave( $post_id ) ) {
        return;
    }

    // Schedule cache warming 30 seconds after save
    // (gives time for all purges to complete)
    wp_schedule_single_event( time() + 30, 'wpkite_warm_cache', array( $post_id ) );
}, 20 );

add_action( 'wpkite_warm_cache', function( $post_id ) {
    $urls = array(
        get_permalink( $post_id ),
        home_url( '/' ),
    );

    foreach ( $urls as $url ) {
        wp_remote_get( $url, array(
            'timeout'   => 15,
            'blocking'  => false,
            'sslverify' => false,
            'headers'   => array( 'X-Cache-Warm' => '1' ),
        ));
    }
});

Custom Invalidation for Complex Scenarios

Some WordPress setups require invalidation logic that goes beyond what the standard plugins handle. Consider a site with a “Popular Posts” widget that displays the top 10 posts by view count. When view counts change, the widget content changes, but no save_post hook fires. You need a custom invalidation hook:

// Custom invalidation for view-count-based widgets
// (registered for both logged-in and logged-out visitors)
function wpkite_track_post_view() {
    $post_id = absint( $_POST['post_id'] ?? 0 );

    // Update view count (stored in post meta)
    $views = (int) get_post_meta( $post_id, 'post_views', true ) + 1;
    update_post_meta( $post_id, 'post_views', $views );

    // Every 100 views, recalculate the popular posts list
    if ( $views % 100 === 0 ) {
        // Clear the transient that stores the popular posts query
        delete_transient( 'popular_posts_widget' );

        // Purge pages that display the popular posts widget
        // (homepage, sidebar-containing pages)
        if ( function_exists( 'rt_nginx_helper_purge_url' ) ) {
            rt_nginx_helper_purge_url( home_url( '/' ) );
        }
    }

    wp_die();
}
add_action( 'wp_ajax_track_post_view', 'wpkite_track_post_view' );
add_action( 'wp_ajax_nopriv_track_post_view', 'wpkite_track_post_view' );

Debugging: Identifying Which Layer Serves Stale Content

When someone reports that they see outdated content, you need to quickly identify which cache layer is responsible. Here is a systematic debugging approach using response headers.

Reading Response Headers

Use curl with verbose headers to inspect the full response chain:

# Check all cache headers for a URL
curl -sI https://example.com/blog/my-post/ | grep -iE 'cache|cf-|x-|age'

# Expected output for a fully cached response:
# cf-cache-status: HIT
# cf-apo-via: tcache
# x-fastcgi-cache: HIT
# age: 3600
# cache-control: public, max-age=3600

# Expected output for origin-served (cache miss):
# cf-cache-status: MISS
# x-fastcgi-cache: MISS
# cache-control: no-cache

Decision Tree for Stale Content

Scenario 1: cf-cache-status: HIT, content is stale
The problem is at the Cloudflare edge. The Cloudflare plugin did not purge this URL, or KV propagation has not completed. Fix: manually purge the URL through the Cloudflare dashboard or API.

Scenario 2: cf-cache-status: MISS, x-fastcgi-cache: HIT, content is stale
Cloudflare fetched fresh from origin, but Nginx served a stale cached page. The Nginx Helper plugin did not purge this URL. Fix: check the Nginx Helper settings, verify file permissions on the cache directory, and manually clear the FastCGI cache.

Scenario 3: Both show MISS, but content is still stale
The issue is in PHP/MySQL or Redis. Possible causes:
– Redis has a stale object cached. Run wp cache flush or target the specific key.
– A plugin is caching rendered HTML in a transient. Clear transients.
– The browser itself is caching. Check for aggressive cache-control headers.
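This decision tree can be scripted. The helper below is a sketch (the `classify_cache` name is hypothetical): it reads raw response headers on stdin, extracts the `cf-cache-status` and `x-fastcgi-cache` values, and names the layer that answered.

```shell
#!/usr/bin/env bash

# Classify which cache layer served a response, from its headers on stdin.
classify_cache() {
    local hdrs cf fcgi
    hdrs=$(cat)
    # Pull the two cache headers out, tolerating mixed case and CRLF endings
    cf=$(printf '%s\n' "$hdrs" | awk -F': *' 'tolower($1)=="cf-cache-status"{print $2}' | tr -d '\r')
    fcgi=$(printf '%s\n' "$hdrs" | awk -F': *' 'tolower($1)=="x-fastcgi-cache"{print $2}' | tr -d '\r')
    if [ "$cf" = "HIT" ]; then
        echo "Cloudflare edge"
    elif [ "$fcgi" = "HIT" ]; then
        echo "Nginx FastCGI cache"
    else
        echo "origin PHP (cf=${cf:-none}, fastcgi=${fcgi:-none})"
    fi
}

# Usage: curl -sI https://example.com/blog/my-post/ | classify_cache
printf 'cf-cache-status: MISS\nx-fastcgi-cache: HIT\n' | classify_cache
# → Nginx FastCGI cache
```

If the result disagrees with what the visitor reports seeing, apply the matching scenario from the decision tree above.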

Useful Debugging Commands

# Bypass all caches and hit origin directly
curl -sI https://example.com/blog/my-post/ \
  -H "Cache-Control: no-cache" \
  -H "Pragma: no-cache"

# Check Nginx FastCGI cache status specifically (bypass Cloudflare)
# Add the origin IP to /etc/hosts temporarily, or use:
curl -sI https://example.com/blog/my-post/ \
  --resolve example.com:443:YOUR_ORIGIN_IP

# Check Redis for a specific post's cached data
redis-cli -a your_password GET "wp_prod:posts:42"

# Check Redis key TTL
redis-cli -a your_password TTL "wp_prod:posts:42"

# View the Nginx cache file for a specific URL
# First, compute the cache key hash (the key format must match
# your fastcgi_cache_key directive exactly):
echo -n "httpsGETexample.com/blog/my-post/" | md5sum
# Then find the file, which is named after that hash:
find /var/cache/nginx/fastcgi -name "<md5-hash-from-above>"

# Tail Nginx access log to see cache status in real time
# (requires custom log format with $upstream_cache_status)
tail -f /var/log/nginx/access.log | grep "my-post"
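The `levels=1:2` setting in `fastcgi_cache_path` determines exactly where that file lands: the last character of the MD5 hash names the first directory level and the two characters before it name the second. A sketch of the full path computation, assuming the cache path and key format used throughout this article:

```shell
#!/usr/bin/env bash

# Compute the on-disk path of an Nginx FastCGI cache entry,
# assuming fastcgi_cache_key "$scheme$request_method$host$request_uri"
# and fastcgi_cache_path ... levels=1:2.
key="httpsGETexample.com/blog/my-post/"
hash=$(printf '%s' "$key" | md5sum | awk '{print $1}')
level1=${hash: -1}      # last character of the hash
level2=${hash: -3:2}    # the two characters before the last one
echo "/var/cache/nginx/fastcgi/${level1}/${level2}/${hash}"
```

The first line of the file stores the cache key in plain text, so `head -2 <file>` is a quick way to confirm you found the right entry.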

Custom Nginx Log Format for Cache Debugging

Add cache status to your Nginx log format for ongoing monitoring:

# In the http block of nginx.conf:
log_format cache_debug '$remote_addr - $remote_user [$time_local] '
                       '"$request" $status $body_bytes_sent '
                       '"$http_referer" "$http_user_agent" '
                       'cache_status=$upstream_cache_status '
                       'request_time=$request_time '
                       'upstream_time=$upstream_response_time';

# In the server block:
access_log /var/log/nginx/cache-debug.log cache_debug;

This log format lets you analyze cache hit rates over time:

# Calculate FastCGI cache hit ratio from logs
awk '{for(i=1;i<=NF;i++) if($i ~ /cache_status=/) print $i}' /var/log/nginx/cache-debug.log \
  | sort | uniq -c | sort -rn

# Expected output for a well-cached site:
#  45231 cache_status=HIT
#   3892 cache_status=MISS
#   1203 cache_status=BYPASS
#    421 cache_status=-

Building a Cache Status Dashboard

For ongoing visibility, build a simple cache monitoring page that queries each layer and reports status. Here is a lightweight PHP script that you can place behind authentication:

// cache-status.php (place in a non-public directory, protect with auth)

// Bootstrap WordPress so home_url() and wp_remote_get() are available
// outside the normal request flow (adjust the path to your install)
require_once '/var/www/example.com/public/wp-load.php';

// 1. Check Redis connectivity and stats
$redis = new Redis();
$redis->connect( '127.0.0.1', 6379 );
$redis->auth( 'your_strong_redis_password_here' );
$info = $redis->info( 'stats' );
$memory = $redis->info( 'memory' );

$hits = $info['keyspace_hits'];
$misses = $info['keyspace_misses'];
$hit_ratio = $hits / max( $hits + $misses, 1 ) * 100;
$used_memory = $memory['used_memory_human'];
$evicted = $info['evicted_keys'];

echo "Redis Hit Ratio: " . round( $hit_ratio, 1 ) . "%\n";
echo "Redis Memory: {$used_memory}\n";
echo "Redis Evicted Keys: {$evicted}\n\n";

// 2. Check Nginx FastCGI cache directory size
$cache_size = trim( shell_exec( 'du -sh /var/cache/nginx/fastcgi 2>/dev/null' ) );
echo "Nginx Cache Size: {$cache_size}\n";

// 3. Count cached files
$file_count = trim( shell_exec( 'find /var/cache/nginx/fastcgi -type f 2>/dev/null | wc -l' ) );
echo "Nginx Cached Pages: {$file_count}\n\n";

// 4. Test a specific URL through the cache chain
$test_url = home_url( '/' );
$response = wp_remote_get( $test_url );
$headers = wp_remote_retrieve_headers( $response );
echo "Homepage Cache Status:\n";
echo "  CF: " . ( $headers['cf-cache-status'] ?? 'N/A' ) . "\n";
echo "  Nginx: " . ( $headers['x-fastcgi-cache'] ?? 'N/A' ) . "\n";

WooCommerce Caching Patterns

WooCommerce adds significant complexity to caching because it mixes public product pages with private session-dependent pages. The cart, checkout, and my-account pages must never be cached, while product pages, shop archives, and category pages should be cached aggressively.

The WooCommerce Cookie Problem

WooCommerce sets session cookies as soon as a visitor adds an item to the cart. The key cookies are:

- woocommerce_items_in_cart: set to "1" when items are in the cart
- woocommerce_cart_hash: hash of cart contents
- wp_woocommerce_session_{hash}: server-side session identifier

The standard caching approach is: if any of these cookies are present, bypass all caching. This works but creates a problem. Once a visitor adds a single item to their cart, they lose all caching benefits for the rest of their session, even on product pages that are identical for all visitors.
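You can sanity-check that bypass logic outside Nginx. This sketch mirrors the cookie regex from the configuration in plain shell; the cookie values are made up:

```shell
#!/usr/bin/env bash

# Does this visitor's Cookie header trigger the WooCommerce cache bypass?
# Same pattern as the Nginx $http_cookie test, evaluated with grep.
cookie='tz=UTC; woocommerce_items_in_cart=1; wp_woocommerce_session_abc=1'

if printf '%s' "$cookie" \
    | grep -qiE 'woocommerce_items_in_cart|woocommerce_cart_hash|wp_woocommerce_session'; then
    echo "skip_cache=1"   # any cart/session cookie forces a bypass
else
    echo "skip_cache=0"
fi
```

Swap in a real `Cookie` header captured from your browser's dev tools to confirm which of your visitors' sessions will bypass the cache.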

Advanced WooCommerce Caching Strategy

A better approach caches product pages even for visitors with cart sessions, while excluding only truly dynamic pages:

# /etc/nginx/conf.d/woocommerce-cache-rules.conf

# Define pages that must NEVER be cached regardless of cookies
set $woo_skip 0;
if ($request_uri ~* "^/(cart|checkout|my-account|store/order|store/cancel)") {
    set $woo_skip 1;
}

# For WooCommerce AJAX endpoints
if ($request_uri ~* "wc-ajax=") {
    set $woo_skip 1;
}

# Only skip product/shop pages for logged-in WordPress users
# NOT for WooCommerce cart-only sessions
# This allows cart holders to still get cached product pages
set $woo_logged_in 0;
if ($http_cookie ~* "wordpress_logged_in") {
    set $woo_logged_in 1;
}

# Combine the conditions in the PHP location block:
# Skip cache if: WooCommerce dynamic page OR logged-in user OR POST request
set $final_skip_cache 0;
if ($woo_skip = 1) {
    set $final_skip_cache 1;
}
if ($woo_logged_in = 1) {
    set $final_skip_cache 1;
}
if ($request_method = POST) {
    set $final_skip_cache 1;
}

There is a subtlety here. If you cache product pages for users with cart sessions, the cart widget in the header (showing item count) will display incorrect data from the cached version. You have two options to handle this:

Option A: Fragment caching with AJAX
Cache the full page but load the cart widget contents via AJAX after page load. WooCommerce already supports this with its cart fragments script (registered under the wc-cart-fragments handle). When a cached page loads, the cart fragment AJAX call runs and updates the mini-cart with the correct item count and contents.

This is the preferred approach because it maximizes cache hits while keeping the cart accurate.

Option B: ESI (Edge Side Includes)
If you use Varnish instead of Nginx FastCGI, you can use ESI to assemble pages from cached fragments plus dynamic fragments. Nginx does not support ESI natively, making this option more complex to implement.

WooCommerce-Specific Cache Invalidation

Product data changes more frequently than blog content due to inventory updates, price changes, and sale events. Set shorter cache TTLs for WooCommerce pages:

# In the Nginx server block, differentiate TTLs by URL pattern:

location ~ \.php$ {
    # Default cache TTL
    fastcgi_cache_valid 200 60m;

    # Nginx cannot vary fastcgi_cache_valid per request via variables,
    # so per-URL "set $cache_ttl ..." blocks here would be dead code.
    # Instead, let PHP choose the TTL with the X-Accel-Expires header,
    # which overrides fastcgi_cache_valid for that response.
    # In your WooCommerce template or functions.php:
    # header('X-Accel-Expires: 900'); // 15 minutes for product pages

    fastcgi_cache WORDPRESS;
    fastcgi_cache_bypass $final_skip_cache;
    fastcgi_no_cache $final_skip_cache;
    # ... rest of FastCGI config
}

Since Nginx does not support variable-based fastcgi_cache_valid, implement dynamic TTLs through the X-Accel-Expires header in PHP:

// In your theme's functions.php or a custom plugin:

add_action( 'template_redirect', function() {
    if ( is_admin() || is_user_logged_in() ) {
        return;
    }

    // Set cache TTL based on content type
    if ( is_singular( 'product' ) ) {
        header( 'X-Accel-Expires: 900' ); // 15 minutes
    } elseif ( is_shop() || is_product_category() || is_product_tag() ) {
        header( 'X-Accel-Expires: 600' ); // 10 minutes
    } elseif ( is_front_page() || is_home() ) {
        header( 'X-Accel-Expires: 1800' ); // 30 minutes
    } else {
        header( 'X-Accel-Expires: 3600' ); // 1 hour
    }
});

// Hook into WooCommerce product save to purge related caches
add_action( 'woocommerce_update_product', function( $product_id ) {
    // Clear the product page
    clean_post_cache( $product_id );

    // Clear the shop page
    $shop_page_id = wc_get_page_id( 'shop' );
    if ( $shop_page_id > 0 ) {
        clean_post_cache( $shop_page_id );
    }

    // Clear product category archives
    $terms = wp_get_post_terms( $product_id, 'product_cat' );
    foreach ( $terms as $term ) {
        clean_term_cache( $term->term_id, 'product_cat' );
    }

    // Clear WooCommerce transients
    wc_delete_product_transients( $product_id );
}, 10, 1 );

Handling WooCommerce Price Updates and Sale Schedules

WooCommerce allows scheduling sale prices with start and end dates. When a sale starts or ends, the product page should display the correct price immediately. But if the page is cached, visitors see the old price until the cache expires or is purged.

The fix involves hooking into the WooCommerce scheduled sale cron job:

// Purge product caches when scheduled sales start or end
add_action( 'woocommerce_scheduled_sales', function() {
    // Get products whose sale just started
    $sale_products = wc_get_product_ids_on_sale();

    foreach ( $sale_products as $product_id ) {
        clean_post_cache( $product_id );
        wc_delete_product_transients( $product_id );

        // Purge Nginx cache for this product URL
        if ( function_exists( 'rt_nginx_helper_purge_url' ) ) {
            rt_nginx_helper_purge_url( get_permalink( $product_id ) );
        }
    }

    // Also purge the shop and category pages
    $shop_page_id = wc_get_page_id( 'shop' );
    if ( $shop_page_id > 0 ) {
        clean_post_cache( $shop_page_id );
    }
}, 20 );

Measuring Cache Effectiveness

You cannot improve what you do not measure. Here are the key metrics for each cache layer and how to collect them.

Metric 1: Cache Hit Ratio

The cache hit ratio is the percentage of requests served from cache versus total requests. Track it for each layer independently.

Cloudflare APO hit ratio:
Log into the Cloudflare dashboard and navigate to Analytics > Performance. The "Cached Requests" chart shows the percentage of requests served from edge cache. For a content-heavy WordPress site, target 70-85% cache hit ratio at the edge. E-commerce sites with more logged-in traffic will see lower ratios, typically 40-60%.

You can also query the Cloudflare API for precise data:

# Cloudflare Analytics API query
curl -s "https://api.cloudflare.com/client/v4/zones/{zone_id}/analytics/dashboard?since=-1440&until=0" \
  -H "Authorization: Bearer {api_token}" \
  | jq '.result.totals.requests'

# Returns:
# {
#   "all": 1250000,
#   "cached": 937500,    # 75% hit ratio
#   "uncached": 312500
# }

Nginx FastCGI hit ratio:
Parse the custom log format described earlier:

# Calculate hit ratio for the last 24 hours
awk '/cache_status=HIT/{hit++} /cache_status=MISS/{miss++} /cache_status=BYPASS/{bypass++} END{total=hit+miss+bypass; printf "HIT: %d (%.1f%%)\nMISS: %d (%.1f%%)\nBYPASS: %d (%.1f%%)\nTotal: %d\n", hit, hit/total*100, miss, miss/total*100, bypass, bypass/total*100, total}' /var/log/nginx/cache-debug.log

Redis hit ratio:

# Get Redis hit ratio
redis-cli -a your_password INFO stats | grep keyspace
# keyspace_hits:4892731
# keyspace_misses:523891
# Hit ratio: 4892731 / (4892731 + 523891) = 90.3%
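The division can be done in one pipeline. This sketch feeds the example numbers through awk; in production, pipe `redis-cli -a your_password INFO stats` in instead of `printf`:

```shell
#!/usr/bin/env bash

# Compute the Redis hit ratio from INFO-style output.
# The printf stands in for `redis-cli -a your_password INFO stats`.
printf 'keyspace_hits:4892731\nkeyspace_misses:523891\n' \
  | awk -F: '/keyspace_hits/{h=$2} /keyspace_misses/{m=$2} \
             END{printf "%.1f%%\n", h/(h+m)*100}'
# → 90.3%
```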

Metric 2: Time to First Byte (TTFB)

TTFB measures the time between the client sending a request and receiving the first byte of the response. It is the most direct measure of your caching effectiveness from the user's perspective.

# Measure TTFB with curl
curl -o /dev/null -s -w "TTFB: %{time_starttransfer}s\nTotal: %{time_total}s\nDNS: %{time_namelookup}s\nConnect: %{time_connect}s\nSSL: %{time_appconnect}s\n" https://example.com/

# Expected results by cache layer:
# Cloudflare APO HIT:      TTFB: 0.015-0.050s (15-50ms)
# Nginx FastCGI HIT:       TTFB: 0.005-0.020s (5-20ms from same region)
# PHP with Redis:           TTFB: 0.100-0.300s (100-300ms)
# PHP without any cache:    TTFB: 0.500-2.000s (500ms-2s)

For ongoing TTFB monitoring, use synthetic monitoring tools (Uptime Robot, Pingdom, or a custom cron job) that record TTFB from multiple global locations. Track the P50 and P95 TTFB values over time.
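If you append each TTFB sample to a log file, the percentiles fall out of a short sort/awk pipeline. A sketch using a small inline sample and the nearest-rank method:

```shell
#!/usr/bin/env bash

# Compute P50 and P95 from TTFB samples (seconds, one per line).
# The printf stands in for `cat ttfb-samples.log`.
printf '0.12\n0.15\n0.09\n0.31\n0.11\n0.14\n0.10\n0.45\n0.13\n0.12\n' \
  | sort -n \
  | awk '{a[NR]=$1} END {
      # nearest-rank percentile: index = round(N * p)
      printf "P50=%ss P95=%ss\n", a[int(NR*0.50+0.5)], a[int(NR*0.95+0.5)]
    }'
# → P50=0.12s P95=0.45s
```

Note the gap between P50 and P95: the median reflects cache hits, while the tail reflects cache misses that fell through to PHP.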

Metric 3: Origin Offload Percentage

This measures how much traffic your origin server avoids handling. Calculate it as:

Origin Offload = 1 - (Origin Requests / Total Requests)

# If Cloudflare handled 1,000,000 requests and 150,000 reached origin:
# Origin Offload = 1 - (150,000 / 1,000,000) = 85%

# Of those 150,000 that reached origin, if Nginx FastCGI served 120,000:
# PHP Offload = 1 - (30,000 / 150,000) = 80%

# Final: only 30,000 out of 1,000,000 requests (3%) actually executed PHP
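The same arithmetic as a runnable sketch, using the example request counts from above:

```shell
#!/usr/bin/env bash

# Origin offload math: what fraction of traffic never reaches each layer?
total=1000000    # requests handled by Cloudflare
origin=150000    # requests that reached the origin server
php=30000        # requests that actually executed PHP

edge_offload=$(( 100 - origin * 100 / total ))   # edge absorbed the rest
php_offload=$(( 100 - php * 100 / origin ))      # Nginx absorbed the rest
echo "Edge offload: ${edge_offload}% ; PHP offload: ${php_offload}%"
# → Edge offload: 85% ; PHP offload: 80%
```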

Monitor origin offload through your hosting provider's bandwidth metrics or through Nginx access logs:

# Count requests that reached PHP-FPM (cache MISS + BYPASS)
awk '/cache_status=MISS|cache_status=BYPASS/{count++} END{print count}' /var/log/nginx/cache-debug.log

# Compare with total Cloudflare requests from the API
# The difference is your edge offload

Metric 4: Cache Warmth After Deploy

After a deployment or full cache purge, track how long it takes for your cache to reach steady-state hit ratios. This is your "cache warmth" metric. For most WordPress sites:

- Redis warms within 5-10 minutes of normal traffic
- Nginx FastCGI warms within 15-30 minutes for top pages
- Cloudflare APO warms within 1-2 hours for the long tail of URLs

If your site has thousands of pages, proactive cache warming after deploys is worth the effort:

#!/bin/bash
# cache-warm.sh - Warm caches after deployment

SITEMAP_URL="https://example.com/sitemap_index.xml"
# Note: an index sitemap lists child sitemaps, not pages; either fetch
# those in turn or point SITEMAP_URL at a page-level sitemap directly.

# Extract all URLs from the sitemap
urls=$(curl -s "$SITEMAP_URL" \
  | grep -oP '<loc>\K[^<]+' \
  | head -500)  # Limit to top 500 URLs

# Warm each URL with a brief delay to avoid overwhelming origin
for url in $urls; do
    curl -s -o /dev/null -w "%{http_code} %{time_total}s $url\n" "$url" &

    # Run 10 parallel requests, then wait
    if (( $(jobs -r | wc -l) >= 10 )); then
        wait -n
    fi
done
wait

echo "Cache warming complete."

Complete Configuration Files for the Full Stack

Below are production-ready configuration files that tie together all three caching layers. These files assume Ubuntu 22.04, Nginx 1.24+, PHP 8.2-FPM, Redis 7.x, and the WordPress plugins Nginx Helper and Redis Object Cache.

Nginx Main Configuration

# /etc/nginx/nginx.conf

user www-data;
worker_processes auto;
pid /run/nginx.pid;

events {
    worker_connections 2048;
    multi_accept on;
    use epoll;
}

http {
    # Basic settings
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    server_tokens off;
    client_max_body_size 64m;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Gzip compression
    gzip on;
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 5;
    gzip_min_length 256;
    gzip_types
        application/atom+xml
        application/javascript
        application/json
        application/ld+json
        application/manifest+json
        application/rss+xml
        application/vnd.geo+json
        application/vnd.ms-fontobject
        application/x-font-ttf
        application/x-web-app-manifest+json
        application/xhtml+xml
        application/xml
        font/opentype
        image/bmp
        image/svg+xml
        image/x-icon
        text/cache-manifest
        text/css
        text/plain
        text/vcard
        text/vnd.rim.location.xloc
        text/vtt
        text/x-component
        text/x-cross-domain-policy
        text/xml;

    # FastCGI cache zone
    fastcgi_cache_path /var/cache/nginx/fastcgi
        levels=1:2
        keys_zone=WORDPRESS:256m
        max_size=4g
        inactive=7d
        use_temp_path=off;

    fastcgi_cache_key "$scheme$request_method$host$request_uri";
    fastcgi_ignore_headers Cache-Control Expires Set-Cookie;

    # Rate limiting zone for wp-login.php
    limit_req_zone $binary_remote_addr zone=wplogin:10m rate=1r/s;

    # Logging
    log_format cache_debug '$remote_addr - [$time_local] '
                           '"$request" $status $body_bytes_sent '
                           'cache=$upstream_cache_status '
                           'rt=$request_time ut=$upstream_response_time';

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    # Include site configs
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

WordPress Site Server Block

# /etc/nginx/sites-available/example.com.conf

# Redirect HTTP to HTTPS
server {
    listen 80;
    server_name example.com www.example.com;
    return 301 https://example.com$request_uri;
}

# Redirect www to non-www
server {
    listen 443 ssl http2;
    server_name www.example.com;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    return 301 https://example.com$request_uri;
}

# Main server block
server {
    listen 443 ssl http2;
    server_name example.com;

    root /var/www/example.com/public;
    index index.php;

    # SSL
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384;
    ssl_prefer_server_ciphers off;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 1d;
    ssl_session_tickets off;

    # Security headers
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header Referrer-Policy "no-referrer-when-downgrade" always;

    # Access logging with cache status
    access_log /var/log/nginx/example.com.access.log cache_debug;
    error_log /var/log/nginx/example.com.error.log;

    # === Cache Skip Rules ===

    set $skip_cache 0;

    # POST requests
    if ($request_method = POST) {
        set $skip_cache 1;
    }

    # Query strings (comment this out if you want to cache paginated/filtered pages)
    if ($query_string != "") {
        set $skip_cache 1;
    }

    # WordPress backend and dynamic endpoints
    if ($request_uri ~* "/wp-admin/|/xmlrpc\.php|wp-.*\.php|/feed/|index\.php|sitemap(_index)?\.xml") {
        set $skip_cache 1;
    }

    # Logged-in users
    if ($http_cookie ~* "comment_author|wordpress_[a-f0-9]+|wp-postpass|wordpress_no_cache|wordpress_logged_in") {
        set $skip_cache 1;
    }

    # WooCommerce cookies
    if ($http_cookie ~* "woocommerce_items_in_cart|woocommerce_cart_hash|wp_woocommerce_session") {
        set $skip_cache 1;
    }

    # WooCommerce dynamic pages
    if ($request_uri ~* "/cart/|/checkout/|/my-account/|/addons/|wc-ajax=") {
        set $skip_cache 1;
    }

    # === Locations ===

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    # PHP handling with FastCGI cache
    location ~ \.php$ {
        # Regex locations match in declaration order, so the uploads deny
        # block further down never sees .php requests; refuse them here.
        if ($request_uri ~* "/uploads/.*\.php") {
            return 403;
        }

        try_files $uri =404;

        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/run/php/php8.2-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;

        # FastCGI cache
        fastcgi_cache WORDPRESS;
        fastcgi_cache_valid 200 301 302 60m;
        fastcgi_cache_valid 404 1m;
        fastcgi_cache_bypass $skip_cache;
        fastcgi_no_cache $skip_cache;

        # Cache status header
        # (an add_header inside a location suppresses add_header directives
        # inherited from the server block; repeat the security headers here
        # if you need them on PHP responses)
        add_header X-FastCGI-Cache $upstream_cache_status;

        # Stale serving and locking
        fastcgi_cache_use_stale error timeout updating invalid_header http_500 http_503;
        fastcgi_cache_background_update on;
        fastcgi_cache_lock on;
        fastcgi_cache_lock_timeout 5s;

        # Buffers
        fastcgi_buffer_size 32k;
        fastcgi_buffers 16 16k;
        fastcgi_busy_buffers_size 64k;
    }

    # FastCGI cache purge endpoint (requires ngx_cache_purge module)
    location ~ /purge(/.*) {
        allow 127.0.0.1;
        deny all;
        fastcgi_cache_purge WORDPRESS "$scheme$request_method$host$1";
    }

    # Static assets
    location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg|webp|avif|woff|woff2|ttf|eot|mp4|webm)$ {
        expires 365d;
        add_header Cache-Control "public, immutable";
        access_log off;
        log_not_found off;
    }

    # Block access to sensitive files
    # (exact-match "=" locations are checked before regex locations,
    # so these win over the "location ~ \.php$" handler declared above)
    location ~ /\.ht {
        deny all;
    }

    location = /wp-config.php {
        deny all;
    }

    location = /xmlrpc.php {
        deny all;
    }

    # Rate limit wp-login
    location = /wp-login.php {
        limit_req zone=wplogin burst=3 nodelay;

        fastcgi_pass unix:/run/php/php8.2-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }

    # Deny access to uploads PHP execution
    location ~* /uploads/.*\.php$ {
        deny all;
    }

    # Robots.txt
    location = /robots.txt {
        access_log off;
        log_not_found off;
    }

    # Favicon
    location = /favicon.ico {
        access_log off;
        log_not_found off;
        expires 30d;
    }
}

Redis Server Configuration

# /etc/redis/redis.conf (WordPress-optimized)

bind 127.0.0.1
port 6379
requirepass your_strong_redis_password_here

# Memory
maxmemory 512mb
maxmemory-policy allkeys-lru
maxmemory-samples 10

# Persistence (lightweight RDB for cache warmth preservation)
save 900 1
save 300 10
save 60 10000
dbfilename dump.rdb
dir /var/lib/redis

# Disable AOF (not needed for cache)
appendonly no

# Networking
tcp-backlog 511
tcp-keepalive 300
timeout 0
maxclients 512

# Performance
hz 10
dynamic-hz yes
lazyfree-lazy-eviction yes
lazyfree-lazy-expire yes
lazyfree-lazy-server-del yes

# Logging
loglevel notice
logfile /var/log/redis/redis-server.log

# Security
rename-command FLUSHALL ""
rename-command FLUSHDB ""
rename-command CONFIG "CONFIG_b4f8a2c9d7e1"
# Note: renaming FLUSHALL/FLUSHDB means you must use the Redis CLI
# with the renamed commands or temporarily restore them for maintenance

WordPress wp-config.php Cache Settings

// Add these to wp-config.php before "That's all, stop editing!"

// Redis Object Cache configuration
define( 'WP_REDIS_HOST', '127.0.0.1' );
define( 'WP_REDIS_PORT', 6379 );
define( 'WP_REDIS_PASSWORD', 'your_strong_redis_password_here' );
define( 'WP_REDIS_DATABASE', 0 );
define( 'WP_REDIS_PREFIX', 'wp_prod:' );
define( 'WP_REDIS_TIMEOUT', 1 );
define( 'WP_REDIS_READ_TIMEOUT', 1 );
define( 'WP_REDIS_MAXTTL', 86400 ); // Maximum TTL: 24 hours

// Page cache settings (used by cache plugins)
define( 'WP_CACHE', true );

// Disable WordPress internal cron (use system cron instead)
// This prevents cache-busting from wp-cron.php requests
define( 'DISABLE_WP_CRON', true );

// Increase memory limit for PHP
define( 'WP_MEMORY_LIMIT', '256M' );
define( 'WP_MAX_MEMORY_LIMIT', '512M' );

PHP-FPM Pool Configuration

The PHP-FPM configuration impacts caching indirectly. If your PHP workers are overloaded, uncached requests queue up and TTFB spikes. Size your pool based on your available RAM after accounting for Redis and MySQL:
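A rough sizing rule: divide the RAM left over for PHP by the average resident size of one worker (measure yours with `ps` or `smem`). The numbers below are illustrative assumptions, not recommendations:

```shell
#!/usr/bin/env bash

# Back-of-the-envelope pm.max_children sizing.
# Assumptions: 8 GB server, ~1 GB MySQL, 512 MB Redis, ~512 MB OS/other,
# and ~80 MB average resident size per PHP-FPM worker.
awk 'BEGIN {
    avail_mb = 8192 - 1024 - 512 - 512   # RAM left for PHP workers
    per_worker_mb = 80                    # measured average worker size
    printf "pm.max_children ~= %d\n", avail_mb / per_worker_mb
}'
# → pm.max_children ~= 76
```

The configuration below uses a deliberately conservative 32 to leave headroom for traffic spikes and the filesystem page cache.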

; /etc/php/8.2/fpm/pool.d/www.conf (relevant sections)

[www]
user = www-data
group = www-data

listen = /run/php/php8.2-fpm.sock
listen.owner = www-data
listen.group = www-data

; Process management: static gives predictable memory usage
pm = static
pm.max_children = 32

; For servers with less RAM, use ondemand:
; pm = ondemand
; pm.max_children = 16
; pm.process_idle_timeout = 10s
; pm.max_requests = 1000

; OPcache settings (critical for PHP performance)
; These go in /etc/php/8.2/fpm/conf.d/10-opcache.ini
; opcache.enable=1
; opcache.memory_consumption=256
; opcache.interned_strings_buffer=16
; opcache.max_accelerated_files=10000
; opcache.revalidate_freq=60
; opcache.save_comments=1
; opcache.enable_file_override=1

System Cron for WordPress

Since we disabled wp-cron.php (which runs on every page load and bypasses cache), set up a system cron job:

# /etc/cron.d/wordpress
# Run WordPress cron every 5 minutes
*/5 * * * * www-data /usr/bin/php /var/www/example.com/public/wp-cron.php > /dev/null 2>&1

Cloudflare Configuration via API

While most Cloudflare settings are configured through the dashboard, here are the API calls for the critical cache-related settings:

#!/bin/bash
# cloudflare-setup.sh - Configure Cloudflare cache settings

ZONE_ID="your_zone_id"
API_TOKEN="your_api_token"
API_BASE="https://api.cloudflare.com/client/v4/zones/${ZONE_ID}"
HEADERS=(-H "Authorization: Bearer ${API_TOKEN}" -H "Content-Type: application/json")

# Enable APO
curl -s -X PATCH "${API_BASE}/settings/automatic_platform_optimization" \
  "${HEADERS[@]}" \
  --data '{"value":{"enabled":true,"cf":true,"wordpress":true,"wp_plugin":true,"hostnames":["example.com","www.example.com"],"cache_by_device_type":false}}'

# Set Browser Cache TTL (respect existing headers)
curl -s -X PATCH "${API_BASE}/settings/browser_cache_ttl" \
  "${HEADERS[@]}" \
  --data '{"value":0}'

# Enable Always Online
curl -s -X PATCH "${API_BASE}/settings/always_online" \
  "${HEADERS[@]}" \
  --data '{"value":"on"}'

# Set Cache Level to Standard
# ("aggressive" is the API value for the dashboard's "Standard" setting)
curl -s -X PATCH "${API_BASE}/settings/cache_level" \
  "${HEADERS[@]}" \
  --data '{"value":"aggressive"}'

# Enable Tiered Caching (reduces origin requests)
curl -s -X PATCH "${API_BASE}/argo/tiered_caching" \
  "${HEADERS[@]}" \
  --data '{"value":"on"}'

echo "Cloudflare configuration complete."

Putting It All Together: The Full Request Lifecycle

Let us trace a single page request through the entire stack to see how all three layers interact.

Request: Anonymous Visitor Loads a Blog Post

1. DNS Resolution
The visitor's browser resolves example.com to a Cloudflare IP address (e.g., 104.21.x.x). The request goes to the nearest Cloudflare POP.

2. Cloudflare Edge (APO)
The Cloudflare Worker runs. It checks:
- No wordpress_logged_in cookie? Correct, proceed.
- No woocommerce_items_in_cart cookie? Correct, proceed.
- Cached copy in Workers KV? Yes, found.
- Response: cf-cache-status: HIT, cf-apo-via: tcache
- TTFB: 18ms. The request never reaches the origin.

3. What If APO Cache Was Empty?
If Workers KV had no entry (first visit after purge), the Worker forwards the request to the origin server.

4. Nginx (Origin)
Nginx receives the request. It evaluates skip rules:
- GET request? Yes.
- No login cookies? Correct.
- No WooCommerce cookies? Correct.
- Cache file exists? Suppose yes.
- Response: X-FastCGI-Cache: HIT
- TTFB: 8ms from server. Cloudflare adds this to its KV store for future requests.

5. What If Nginx Cache Was Also Empty?
Nginx passes the request to PHP-FPM.

6. PHP-FPM + Redis
WordPress boots. It needs data:
- wp_cache_get('alloptions', 'options') - Redis HIT (0.1ms)
- wp_cache_get($post_id, 'posts') - Redis HIT (0.1ms)
- wp_cache_get($post_id, 'post_meta') - Redis HIT (0.1ms)
- Theme template runs, sidebar widgets load, menu builds
- Total: 12 Redis hits, 8 MySQL queries (for uncached items)
- PHP returns rendered HTML in 180ms

7. Response Flows Back
- PHP-FPM returns HTML to Nginx
- Nginx stores the HTML in FastCGI cache
- Nginx returns the response to Cloudflare
- Cloudflare APO stores the HTML in Workers KV
- The visitor receives the page

On the next request for the same URL, step 2 serves it in 18ms.

Request: Logged-In Admin Edits the Post

1. Edit Page Load
- Cloudflare: BYPASS (sees wordpress_logged_in cookie)
- Nginx: BYPASS (same cookie detected)
- PHP runs with full Redis acceleration
- Admin sees the edit screen

2. Post Save
- WordPress fires save_post
- clean_post_cache() removes Redis keys for this post
- Nginx Helper deletes FastCGI cache files for the post URL, homepage, archives
- Cloudflare plugin sends purge API call for the same URLs

3. Next Anonymous Visit
- Cloudflare: MISS (entry was purged), fetches from origin
- Nginx: MISS (entry was purged), passes to PHP
- PHP runs with Redis (most data still cached, only the updated post was flushed)
- Fresh HTML generated, stored in both Nginx and Cloudflare caches
- All subsequent anonymous visits: served from cache

Common Pitfalls and How to Avoid Them

Pitfall 1: Caching wp-admin or Login Pages

If your skip rules are misconfigured, logged-in users may see cached pages that lack the admin bar or show another user's dashboard. Always verify skip rules by testing with both logged-in and logged-out sessions. The X-FastCGI-Cache header should show BYPASS for any authenticated request.

Pitfall 2: Plugin Conflicts with Object Cache

Some plugins call wp_cache_flush() on every page load or during routine operations. This destroys your Redis hit ratio. Use the Redis CLI MONITOR command briefly (it impacts performance, so do not leave it running) to identify plugins that flush excessively:

redis-cli -a your_password MONITOR | grep -i "flushdb\|flushall\|del wp_prod"

If you find a plugin flushing too aggressively, contact the developer or replace the plugin.

Pitfall 3: Query String Cache Fragmentation

Marketing tools append query strings like ?utm_source=twitter&utm_medium=social to URLs. If you cache query string variations, your cache stores dozens of copies of the same page. Either strip tracking parameters in Nginx before the cache key is computed, or skip caching for query strings entirely (as shown in our configuration).

To strip specific parameters while still caching:

# Strip UTM, fbclid, and gclid parameters from the cache key.
# Place this in the server block before the PHP location.
# Limitation: this regex only matches when tracking parameters are the
# ONLY query string; mixed query strings fall through unchanged.

set $clean_uri $request_uri;
if ($clean_uri ~ "^(.+)\?(utm_[a-z]+=[^&]*&?|fbclid=[^&]*&?|gclid=[^&]*&?)+$") {
    set $clean_uri $1;
}

# Override the cache key to use the cleaned URI
fastcgi_cache_key "$scheme$request_method$host$clean_uri";

Pitfall 4: Not Monitoring After Setup

Caching is not a "set and forget" system. Plugins update, WordPress updates, and server configurations change. Set up monitoring alerts for:

- FastCGI cache hit ratio dropping below 70%
- Redis memory usage exceeding 80% of maxmemory
- Redis evicted keys count increasing rapidly
- TTFB P95 exceeding 500ms
- PHP-FPM active processes consistently at max_children
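The Redis side of this list is easy to script. A sketch that derives the keyspace hit ratio from INFO stats and warns below a threshold (the 90% figure is an example; set it from your own baseline):

```shell
#!/bin/bash
# Compute the Redis keyspace hit ratio from INFO stats and warn when
# it falls below a threshold. Password and threshold are examples.
THRESHOLD=90

stats=$(redis-cli -a your_password INFO stats 2>/dev/null)
# The $2+0 coercion also strips the trailing \r that INFO lines carry
hits=$(echo "$stats" | awk -F: '/^keyspace_hits:/ {print $2+0}')
misses=$(echo "$stats" | awk -F: '/^keyspace_misses:/ {print $2+0}')

total=$((hits + misses))
if [ "$total" -gt 0 ]; then
  ratio=$((100 * hits / total))
  echo "Redis hit ratio: ${ratio}%"
  if [ "$ratio" -lt "$THRESHOLD" ]; then
    echo "WARNING: hit ratio below ${THRESHOLD}%"
  fi
fi
```

Wire the warning branch into whatever alerting channel you already use (email, Slack webhook, PagerDuty) rather than letting it print to a log nobody reads.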

Pitfall 5: Forgetting About Mobile and Desktop Variants

If your theme serves different HTML to mobile and desktop users (not just responsive CSS, but actually different markup), you need to vary the cache by device type. Add the device type to your Nginx cache key:

# Detect mobile devices
# (map blocks are only valid in the http context, not inside a server block)
map $http_user_agent $is_mobile {
    default 0;
    "~*Mobile|Android|iPhone|iPad|iPod|BlackBerry|Windows Phone" 1;
}

# Include device type in cache key
fastcgi_cache_key "$scheme$request_method$host$request_uri$is_mobile";

For Cloudflare APO, enable "Cache By Device Type" in the APO settings. This tells APO to store separate cache entries for mobile and desktop visitors.

Most modern WordPress themes use responsive design and serve identical HTML to all devices, in which case device-based cache splitting is unnecessary and wastes cache space. Only enable it if your theme genuinely serves different HTML.

Pitfall 6: OPcache Invalidation After Deploys

This pitfall catches many teams. PHP OPcache caches compiled PHP bytecode in shared memory. When you deploy new PHP files, OPcache may continue serving the old bytecode until it detects the file modification time has changed. If your deployment process preserves timestamps (some rsync configurations do this), OPcache never invalidates.

The fix: reload PHP-FPM after every deployment, or call opcache_reset() from within the web server's PHP context. Note that running opcache_reset() from the PHP CLI does not work: the CLI has its own separate OPcache, so the call never touches PHP-FPM's shared memory. This is separate from your content caching stack, but it causes identical symptoms: stale content despite all other caches being purged.

# In your deployment script, after files are synced:
sudo systemctl reload php8.2-fpm

# Caution: php -r "opcache_reset();" is NOT a substitute. It runs in a
# separate CLI process and clears only the CLI's own OPcache, never
# PHP-FPM's. To reset without a reload, invoke opcache_reset() through
# a web request, or use a tool such as cachetool that connects to the
# FPM socket directly.
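Since opcache_reset() only takes effect when it runs inside an FPM worker, one common deployment pattern is to drop a short-lived, unguessably named PHP file into the webroot, request it over HTTP, and remove it. A sketch, with example paths and domain:

```shell
#!/bin/bash
# Reset PHP-FPM's OPcache from the web context. The webroot path and
# domain are examples; adapt them to your deployment.
WEBROOT=/var/www/example.com/public
TOKEN=$(openssl rand -hex 16)

# Write a one-off reset script with a random name so it cannot be
# guessed and triggered by outsiders
cat > "${WEBROOT}/opcache-reset-${TOKEN}.php" <<'PHP'
<?php
// Runs inside an FPM worker, so this clears the pool's shared OPcache
echo opcache_reset() ? "OK" : "FAILED";
PHP

curl -s "https://example.com/opcache-reset-${TOKEN}.php"
rm -f "${WEBROOT}/opcache-reset-${TOKEN}.php"
```

The random token plus immediate deletion keeps the reset endpoint from becoming a standing attack surface.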

Performance Benchmarks: Before and After

To ground these configurations in reality, here are typical performance numbers from a WordPress site running WooCommerce with approximately 500 products, 200 blog posts, and 50,000 monthly visitors. These numbers were measured using server-side timing and Cloudflare analytics.

Before Caching (Stock WordPress + Managed Hosting)

- Average TTFB: 1,200ms
- P95 TTFB: 3,400ms
- MySQL queries per page: 120-180
- PHP memory per request: 48MB
- Max concurrent users before degradation: ~50

After Redis Object Cache Only

- Average TTFB: 380ms (68% improvement)
- P95 TTFB: 890ms
- MySQL queries per page: 15-25
- PHP memory per request: 38MB
- Max concurrent users before degradation: ~150

After Redis + Nginx FastCGI Cache

- Average TTFB for cached pages: 8ms (99.3% improvement)
- Average TTFB for uncached pages: 380ms (same as Redis-only)
- FastCGI cache hit ratio: 78%
- PHP executions reduced by 78%
- Max concurrent users before degradation: ~800

After Redis + Nginx FastCGI + Cloudflare APO

- Average global TTFB for cached pages: 22ms
- Average TTFB for origin-served pages: 8ms (Nginx cache) or 380ms (PHP)
- Cloudflare cache hit ratio: 74%
- Combined origin offload: 94%
- PHP executions: 6% of total requests
- Max concurrent users before degradation: ~5,000+
- Monthly origin bandwidth: reduced from 180GB to 11GB

The numbers tell a clear story. Each layer multiplies the effectiveness of the previous one. Redis alone cuts TTFB by two thirds. Adding Nginx FastCGI cache eliminates PHP for most requests. Adding Cloudflare APO moves that elimination to the edge, delivering sub-30ms responses globally.
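To reproduce this kind of TTFB measurement on your own site, curl's per-phase timing variables are sufficient; a few repeated runs smooth over cold-cache outliers (the URL is a placeholder):

```shell
#!/bin/bash
# Measure TTFB (time to first byte) for a URL across several runs.
# The first run may be a cache MISS; later runs show steady-state HITs.
for i in 1 2 3 4 5; do
  curl -s -o /dev/null \
    -w "run $i: TTFB %{time_starttransfer}s (total %{time_total}s)\n" \
    https://example.com/
done
```

Compare the numbers with and without a cache-busting query string to see the cached and uncached paths side by side.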

Maintenance and Ongoing Operations

Once your caching stack is running, establish these operational procedures.

Weekly Checks

- Review Redis INFO output for eviction rates and memory usage
- Check Nginx cache disk usage: du -sh /var/cache/nginx/fastcgi
- Review Cloudflare analytics for cache hit ratio trends
- Check PHP-FPM status for worker saturation
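These checks fit in a single script run from cron. A sketch, where the Redis password and the FPM status path are assumptions to adapt (the last block requires pm.status_path to be enabled in your FPM pool config):

```shell
#!/bin/bash
# weekly-check.sh - one-screen cache health summary

echo "=== Redis memory and evictions ==="
redis-cli -a your_password INFO memory 2>/dev/null \
  | grep -E 'used_memory_human|maxmemory_human'
redis-cli -a your_password INFO stats 2>/dev/null | grep evicted_keys

echo "=== Nginx FastCGI cache disk usage ==="
du -sh /var/cache/nginx/fastcgi

echo "=== PHP-FPM pool status ==="
# Assumes pm.status_path = /fpm-status is enabled in the pool config
curl -s http://127.0.0.1/fpm-status 2>/dev/null \
  | grep -E 'active processes|max children reached'
```

A nonzero "max children reached" count is the earliest warning that PHP-FPM is saturating and the pool size needs revisiting.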

After WordPress Core Updates

- Flush Redis object cache: wp cache flush
- Purge Nginx FastCGI cache: rm -rf /var/cache/nginx/fastcgi/*
- Purge Cloudflare cache: Dashboard > Purge Everything
- Run cache warming script
- Monitor TTFB and error rates for 30 minutes

After Plugin Updates

- Flush Redis transients (not full flush): target the transient group
- Monitor Redis hit ratio for 24 hours (some plugins change caching behavior on update)
- Check for new wp_cache_flush() calls in updated plugin code
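A targeted transient flush can be done by scanning Redis for the transient group's keys, assuming the wp_prod key prefix used earlier in this article (adjust to yours); recent WP-CLI builds with a compatible object-cache drop-in may also offer `wp cache flush-group transient`:

```shell
#!/bin/bash
# Delete only transient keys from Redis, leaving the rest of the
# object cache warm. SCAN is non-blocking, unlike KEYS.
redis-cli -a your_password --scan --pattern 'wp_prod:transient:*' \
  | xargs -r -n 100 redis-cli -a your_password DEL

redis-cli -a your_password --scan --pattern 'wp_prod:site-transient:*' \
  | xargs -r -n 100 redis-cli -a your_password DEL
```

Batching 100 keys per DEL keeps each command short so Redis never blocks on a single huge deletion.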

After Content Migrations or Bulk Imports

- Full Redis flush (justified because large amounts of data changed)
- Full Nginx purge
- Full Cloudflare purge
- Run cache warming script with higher concurrency
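After a full purge the warming pass needs to touch every public URL. A minimal sketch of such a script, assuming the WordPress 5.5+ core sitemap index at /wp-sitemap.xml and an example domain (adjust both, and the worker count, for your site):

```shell
#!/bin/bash
# cache-warm sketch: walk the sitemap index, then fetch every URL with
# parallel workers so Nginx and Cloudflare repopulate after a full purge.
SITE="https://example.com"
CONCURRENCY=8   # raise for the post-bulk-import pass

# The core sitemap is an index of sub-sitemaps; resolve both levels
curl -s "${SITE}/wp-sitemap.xml" \
  | grep -oP '(?<=<loc>)[^<]+' \
  | while read -r submap; do
      curl -s "$submap" | grep -oP '(?<=<loc>)[^<]+'
    done \
  | xargs -r -P "$CONCURRENCY" -n 1 -I {} \
      curl -s -o /dev/null -w "%{http_code} {}\n" {}
```

Keep concurrency modest on a shared origin: the whole point of warming is to fill caches, not to create a self-inflicted traffic spike.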

Emergency: Stale Content Visible to Users

Run this sequence to clear all layers immediately:

#!/bin/bash
# emergency-purge.sh - Nuclear option: clear all cache layers
# Requires ZONE_ID and API_TOKEN to be set in the environment.
set -euo pipefail

echo "Flushing Redis..."
wp cache flush --path=/var/www/example.com/public

echo "Purging Nginx FastCGI cache..."
rm -rf /var/cache/nginx/fastcgi/*
nginx -s reload

echo "Purging Cloudflare cache..."
curl -s -X POST "https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/purge_cache" \
  -H "Authorization: Bearer ${API_TOKEN}" \
  -H "Content-Type: application/json" \
  --data '{"purge_everything":true}'

echo "All caches purged. Running cache warm..."
bash /opt/scripts/cache-warm.sh

echo "Done."

This should be rare. If you need to run this frequently, your invalidation chain has a bug that needs fixing.

The caching architecture described here is not theoretical. These configurations run in production on WordPress sites serving millions of page views per month. The key principles are straightforward: cache as close to the visitor as possible, skip the cache only when you must, invalidate precisely when content changes, and monitor continuously to catch regressions before users notice them. Each layer has a specific job, and together they transform WordPress from a platform that struggles at scale into one that handles serious traffic with minimal server resources.

Share this article

Sarah Kim

Systems administrator and WordPress hosting specialist. Has managed infrastructure at two managed WordPress hosting companies. Writes about server stacks, caching, and monitoring.