
WordPress VIP’s Multi-Layer Caching Architecture: Edge Cache, Object Cache, and Query Cache Explained

Marcus Chen
38 min read

Why Caching on WordPress VIP Is Not What You Think

Most WordPress developers arrive at WordPress VIP expecting something familiar. They have worked with WP Super Cache, W3 Total Cache, or maybe a Redis object cache plugin on a managed host. They assume VIP’s caching is just a fancier version of the same stack. That assumption will cost you time and debugging headaches.

WordPress VIP runs a proprietary, multi-layer caching architecture that behaves differently from anything you have used on standard WordPress hosting. The platform processes billions of pageviews per month for publishers like TechCrunch, Salesforce, Facebook Newsroom, and The Sun. To handle that kind of traffic, VIP built a caching system with three distinct layers: an edge cache (often compared to Varnish but significantly customized), a Memcached object cache, and a MySQL query cache. Each layer has its own rules, limitations, and failure modes.

This article breaks down all three layers in detail. You will learn how each one works, how they interact, when they help, when they hurt, and how to control them with actual code. If you are building on VIP or evaluating it for a high-traffic project, this is the reference guide you need.

The Edge Cache: VIP’s First Line of Defense

Every HTTP request to a WordPress VIP site hits the edge cache before it ever reaches PHP. This is the single most impactful layer in the entire stack. When the edge cache serves a response, your WordPress application does zero work. No PHP execution, no database queries, no object cache lookups. The response comes directly from memory at the network edge.

How the Edge Cache Actually Works

VIP’s edge cache is often described as “Varnish-like,” but that label undersells its complexity. The platform uses a globally distributed reverse proxy layer that sits in front of every application container. Requests arrive at the nearest point of presence, and the edge evaluates whether it has a valid cached copy of the requested resource.

For anonymous visitors (users without a wordpress_logged_in cookie), the edge cache serves pages with an average response time of 50 to 100 milliseconds. Compare that to an uncached WordPress page load, which typically takes 800ms to 2 seconds depending on theme complexity and plugin count. That is a 10x to 20x improvement before you write a single line of optimization code.

The edge cache uses a time-to-live (TTL) model. By default, VIP sets a TTL of 5 minutes (300 seconds) for most page responses. This means that after the first anonymous request generates a full WordPress response, subsequent requests to the same URL within the next 5 minutes receive the cached copy.

You can verify this behavior by examining response headers:

curl -I https://your-site.go-vip.net/

HTTP/2 200
X-Cache: hit
Age: 147
Cache-Control: max-age=300, must-revalidate
X-Cache-Group: normal

The X-Cache: hit header confirms the response came from the edge. The Age: 147 header tells you the cached object is 147 seconds old. Since the TTL is 300 seconds, this object has about 153 seconds of life remaining. X-Cache-Group: normal indicates this is a standard page response, not a special resource type.

When X-Cache: miss appears, the request passed through to WordPress. This happens when the cache entry has expired, when the URL has never been requested before, or when cache-busting conditions are present.

TTL Configuration and Override Strategies

The default 300-second TTL works well for most content sites, but you can override it. VIP exposes the TTL through the Cache-Control header in PHP:

function adjust_cache_ttl() {
    if ( is_front_page() ) {
        header( 'Cache-Control: max-age=60, must-revalidate' );
    }

    if ( is_singular( 'post' ) ) {
        $post_age = time() - get_post_time( 'U', true, get_queried_object() );
        if ( $post_age > WEEK_IN_SECONDS ) {
            header( 'Cache-Control: max-age=1800, must-revalidate' );
        }
    }
}
// Conditional tags like is_front_page() are not reliable on send_headers,
// which fires before the main query runs, so hook template_redirect instead.
add_action( 'template_redirect', 'adjust_cache_ttl' );

This example sets a 60-second TTL for the homepage (where freshness matters most) and a 30-minute TTL for posts older than one week (where content rarely changes). The key insight is that TTL strategy should match content volatility. A breaking news homepage needs a short TTL. An evergreen blog post published three years ago can safely cache for hours.

VIP does enforce a maximum TTL ceiling. You cannot set max-age higher than 3600 seconds (one hour) on the Go platform, though this limit has been raised for specific enterprise accounts by request. Setting a value higher than the ceiling silently caps it.
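Since values above the ceiling are capped without warning, it can help to clamp TTLs in one place so the intent stays visible in code review. A minimal sketch, assuming the 3600-second ceiling described above (the helper and constant names are illustrative, not part of VIP's API):

```php
// Hypothetical helper: clamp the requested max-age to the assumed platform
// ceiling so over-long TTLs are explicit in code rather than silently capped.
define( 'MY_EDGE_TTL_CEILING', 3600 ); // assumed platform maximum

function my_send_cache_ttl( $seconds ) {
    $ttl = min( (int) $seconds, MY_EDGE_TTL_CEILING );
    header( sprintf( 'Cache-Control: max-age=%d, must-revalidate', $ttl ) );
}

// Usage: my_send_cache_ttl( 2 * HOUR_IN_SECONDS ) sends max-age=3600.
```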

What Bypasses the Edge Cache

Several conditions cause the edge cache to skip entirely, passing the request straight through to PHP:

Logged-in users. Any request with a wordpress_logged_in_* cookie bypasses the edge cache completely. This is the single most common reason developers see slow responses during testing. If you are logged into wp-admin and loading the frontend, you are never hitting the edge cache.

POST requests. All POST requests bypass cache. This includes form submissions, AJAX calls using POST method, and XML-RPC requests.

Specific cookies. Beyond the login cookie, the presence of comment_author_*, wp-postpass_* (password-protected posts), and certain custom cookies will bypass the cache. VIP publishes a list of recognized cookies, but the exact set can vary.

Query strings. By default, most query strings cause a cache miss. However, VIP strips known tracking parameters (utm_source, utm_medium, utm_campaign, fbclid, gclid) before cache key computation. This prevents analytics parameters from fragmenting your cache.

A common mistake is to use custom query parameters for content filtering and then wonder why the edge cache hit rate is low. Each unique query string generates a separate cache entry:

// BAD: Fragments the edge cache with thousands of entries
https://example.com/products/?color=red&size=large&sort=price

// BETTER: Use URL path segments that are fewer and more predictable
https://example.com/products/red/large/
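One way to move filters out of the query string is a custom rewrite rule that maps path segments to query vars. A hedged sketch: the `product_color` and `product_size` vars and the `/products/` structure are illustrative, not anything VIP prescribes.

```php
// Map /products/{color}/{size}/ onto query vars so each filter combination
// is a distinct, cache-friendly URL path instead of a query string.
add_action( 'init', function () {
    add_rewrite_rule(
        '^products/([^/]+)/([^/]+)/?$',
        'index.php?post_type=product&product_color=$matches[1]&product_size=$matches[2]',
        'top'
    );
} );

// Register the custom query vars so WordPress passes them through.
add_filter( 'query_vars', function ( $vars ) {
    $vars[] = 'product_color';
    $vars[] = 'product_size';
    return $vars;
} );
```

Remember to flush rewrite rules once after deploying (for example, `wp rewrite flush` via WP-CLI).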

Cache Purging from Code

When content changes, you need stale cache entries removed. VIP provides the wpcom_vip_purge_edge_cache_for_url() function for targeted purges:

function purge_related_pages_on_publish( $new_status, $old_status, $post ) {
    if ( 'publish' !== $new_status ) {
        return;
    }

    // Purge the post URL itself
    wpcom_vip_purge_edge_cache_for_url( get_permalink( $post ) );

    // Purge the homepage
    wpcom_vip_purge_edge_cache_for_url( home_url( '/' ) );

    // Purge the relevant category archive pages
    $categories = get_the_category( $post->ID );
    foreach ( $categories as $category ) {
        wpcom_vip_purge_edge_cache_for_url( get_category_link( $category->term_id ) );
    }
}
add_action( 'transition_post_status', 'purge_related_pages_on_publish', 10, 3 );

Note that WordPress VIP already handles basic purges automatically when posts are published or updated. The transition_post_status hook fires built-in purge logic for the post permalink and some associated archives. The example above shows how to extend that logic for custom requirements.

Avoid calling purge functions in a loop for hundreds of URLs. VIP rate-limits purge requests, and excessive purging defeats the purpose of caching. If you find yourself needing to purge large numbers of URLs simultaneously, reconsider your data architecture.
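When an editorial action genuinely invalidates many URLs, one pattern is to queue them and drain the queue in small batches on cron rather than purging inline. A sketch under the assumption that a modest batch size stays below VIP's purge rate limits (the option name, hook name, and batch size are all illustrative):

```php
// Queue URLs for purging instead of purging inline.
function my_queue_edge_purge( $urls ) {
    $queue = get_option( 'my_purge_queue', array() );
    $queue = array_unique( array_merge( $queue, (array) $urls ) );
    update_option( 'my_purge_queue', $queue, false ); // no autoload

    if ( ! wp_next_scheduled( 'my_process_purge_queue' ) ) {
        wp_schedule_single_event( time() + 60, 'my_process_purge_queue' );
    }
}

// Drain the queue a few URLs at a time.
add_action( 'my_process_purge_queue', function () {
    $queue = get_option( 'my_purge_queue', array() );
    $batch = array_splice( $queue, 0, 10 ); // illustrative batch size

    foreach ( $batch as $url ) {
        wpcom_vip_purge_edge_cache_for_url( $url );
    }

    update_option( 'my_purge_queue', $queue, false );
    if ( ! empty( $queue ) ) {
        wp_schedule_single_event( time() + 60, 'my_process_purge_queue' );
    }
} );
```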

Memcached Object Cache: The Application-Level Workhorse

Below the edge cache sits the object cache layer. On WordPress VIP, this is Memcached, not Redis. This distinction matters because Memcached and Redis have fundamentally different data structure support, persistence models, and memory management strategies.

The object cache stores the results of expensive PHP operations so they do not need to be recomputed on every request. Database query results, API responses, computed values, and serialized objects all live here. When the edge cache misses and PHP actually executes, the object cache is what prevents your application from hammering the database.

How WordPress Core Uses the Object Cache

WordPress core is heavily object-cached by default. The WP_Object_Cache class wraps every major data retrieval function. When you call get_post(), WordPress checks the object cache first. If the post data is cached, it returns immediately without touching MySQL. If not, it queries the database, stores the result in the object cache, and then returns it.

Here is a simplified view of what happens inside get_post():

// Simplified internal logic of get_post()
$cache_key = $post_id;
$cache_group = 'posts';

$post = wp_cache_get( $cache_key, $cache_group );

if ( false === $post ) {
    global $wpdb;
    $post = $wpdb->get_row(
        $wpdb->prepare( "SELECT * FROM $wpdb->posts WHERE ID = %d", $post_id )
    );
    wp_cache_set( $cache_key, $post, $cache_group );
}

return $post;

Core uses this pattern for posts, terms, users, options, metadata, and more. The options cache group is particularly important because WordPress loads all autoloaded options into the object cache on every request via wp_load_alloptions(). On VIP, this single cache entry can be substantial if your site has hundreds of autoloaded options.

The 1MB Object Size Limit

Memcached enforces a maximum object size of 1MB. This is not a VIP-specific limitation; it is Memcached's default constraint. If you attempt to store an object larger than 1MB, the wp_cache_set() call fails silently: it returns false, but raises no error, warning, or exception. The data simply does not get cached.

This limit catches teams off guard more than almost any other VIP constraint. Here are the most common scenarios where it bites:

Large option values. If your plugin stores a massive serialized array in wp_options with autoload enabled, and that array exceeds 1MB when serialized, the entire alloptions cache entry can fail. This forces WordPress to query the options table on every single request, degrading performance dramatically.

// Check the size of your alloptions cache
$alloptions = wp_load_alloptions();
$size = strlen( serialize( $alloptions ) );
error_log( 'Alloptions size: ' . size_format( $size ) );

// If this exceeds ~900KB, you are in danger territory

Complex WP_Query results. A query returning 200 posts with full metadata can easily exceed 1MB when serialized. VIP recommends limiting queries to reasonable numbers and avoiding 'posts_per_page' => -1 entirely.

Transients. On VIP, transients are stored in the object cache, not in the wp_options table. If you store a large API response as a transient, it hits the same 1MB ceiling.
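Because the failure is silent, it is worth guarding large transients explicitly. A sketch that checks the serialized size before caching an API response; the 900KB threshold leaves headroom below the 1MB limit, and the transient name is hypothetical:

```php
// Guard against silently failing to cache an oversized API response.
function my_cache_api_response( $body ) {
    $size = strlen( maybe_serialize( $body ) );

    if ( $size > 900 * KB_IN_BYTES ) {
        // Too big for Memcached: trim, chunk, or skip caching entirely.
        error_log( 'API response too large to cache: ' . size_format( $size ) );
        return false;
    }

    return set_transient( 'my_api_response', $body, 15 * MINUTE_IN_SECONDS );
}
```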

The fix for oversized cache objects usually involves one of three strategies: break the data into smaller chunks, store only the IDs and hydrate as needed, or reduce the data you are caching in the first place.

// PROBLEM: Caching full post objects for a large result set
$posts = get_posts( array(
    'posts_per_page' => 500,
    'post_type'      => 'product',
) );
wp_cache_set( 'all_products', $posts, 'my_plugin', HOUR_IN_SECONDS );

// SOLUTION: Cache only IDs, fetch posts individually (they cache themselves)
$post_ids = get_posts( array(
    'posts_per_page' => 500,
    'post_type'      => 'product',
    'fields'         => 'ids',
) );
wp_cache_set( 'all_product_ids', $post_ids, 'my_plugin', HOUR_IN_SECONDS );

// Later, when you need the full objects:
$cached_ids = wp_cache_get( 'all_product_ids', 'my_plugin' );
if ( $cached_ids ) {
    // _prime_post_caches will batch-load these efficiently
    _prime_post_caches( $cached_ids );
    $posts = array_map( 'get_post', $cached_ids );
}

Cache Groups and Namespacing

Memcached on VIP uses flat key namespacing. Cache groups in WordPress translate to key prefixes in Memcached. The full Memcached key looks something like {blog_id}:{group}:{key}. Understanding this is critical for multisite installations where cache isolation between sites depends on the blog ID prefix.

For custom cache usage, always use a unique group name to avoid collisions:

// Good: namespaced cache group
wp_cache_set( 'user_42_score', $score, 'myplugin_scores', 3600 );
$score = wp_cache_get( 'user_42_score', 'myplugin_scores' );

// Bad: using default empty group
wp_cache_set( 'user_42_score', $score );
// This lands in the 'default' group and can collide with other plugins

Non-Persistent Cache Groups

VIP defines certain cache groups as non-persistent, meaning they exist only for the duration of a single PHP request and are never stored in Memcached. The counts and plugins groups are examples. You can register your own non-persistent groups:

wp_cache_add_non_persistent_groups( array( 'my_temp_data' ) );

// Data in 'my_temp_data' only lives for this request
// Useful for expensive computations you reference multiple times
// in a single page load but don't want persisted
wp_cache_set( 'computed_thing', $result, 'my_temp_data' );

Non-persistent groups are useful when you need request-level deduplication without polluting the shared Memcached pool. For instance, if a sidebar widget computes the same value that a template tag also computes, a non-persistent cache prevents the duplicate work within that single request without storing throwaway data in Memcached.
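The widget-plus-template-tag scenario can be handled with a single accessor in front of the non-persistent group. A sketch, with `my_compute_expensive_value()` standing in for whatever the real computation is:

```php
wp_cache_add_non_persistent_groups( array( 'my_temp_data' ) );

// Both the widget and the template tag call this accessor; the expensive
// computation runs at most once per request and is never written to Memcached.
function my_get_expensive_value() {
    $value = wp_cache_get( 'expensive_value', 'my_temp_data' );

    if ( false === $value ) {
        $value = my_compute_expensive_value(); // hypothetical computation
        wp_cache_set( 'expensive_value', $value, 'my_temp_data' );
    }

    return $value;
}
```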

Eviction Strategies and Memory Pressure

Memcached uses a Least Recently Used (LRU) eviction policy per slab class. When memory is full, the least recently accessed items get evicted to make room for new ones. On VIP, the Memcached pool size varies by plan tier, but typical allocations range from 512MB to several gigabytes for enterprise accounts.

You cannot directly monitor Memcached memory usage from your application code on VIP. However, you can infer cache pressure by tracking cache miss rates. VIP’s monitoring infrastructure exposes these metrics through their dashboard, and you can also instrument your code:

// Illustrative helper: call it from your own wrapper around wp_cache_get(),
// passing the key, group, and the value wp_cache_get() returned.
function track_cache_effectiveness( $key, $group, $result ) {
    static $stats = array( 'hits' => 0, 'misses' => 0 );

    if ( false !== $result ) {
        $stats['hits']++;
    } else {
        $stats['misses']++;
    }

    // Log periodically, not on every call
    if ( ( $stats['hits'] + $stats['misses'] ) % 1000 === 0 ) {
        error_log( sprintf(
            'Object cache stats - Hits: %d, Misses: %d, Rate: %.1f%%',
            $stats['hits'],
            $stats['misses'],
            $stats['hits'] / ( $stats['hits'] + $stats['misses'] ) * 100
        ) );
    }
}

A healthy VIP site should see object cache hit rates above 90%. If your hit rate drops below 80%, you likely have one of these problems: objects too large to cache (the silent 1MB failure), extremely high cardinality in cache keys (thousands of unique keys created per request), or short TTLs causing excessive churn.
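High key cardinality usually comes from embedding volatile values in cache keys. A hedged before/after sketch, with `my_build_report()` as a stand-in for the expensive work:

```php
// BAD: the timestamp makes every key unique, so nothing is ever reused
// and the pool fills with dead entries.
$key = 'report_' . $user_id . '_' . time();

// BETTER: bucket volatile components so keys repeat within a window.
// Here results are shared per user within each 5-minute bucket.
$bucket = floor( time() / ( 5 * MINUTE_IN_SECONDS ) );
$key    = 'report_' . $user_id . '_' . $bucket;

$report = wp_cache_get( $key, 'my_reports' );
if ( false === $report ) {
    $report = my_build_report( $user_id ); // hypothetical expensive build
    wp_cache_set( $key, $report, 'my_reports', 5 * MINUTE_IN_SECONDS );
}
```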

The MySQL Query Cache: The Misunderstood Layer

The MySQL query cache is the third caching layer and by far the most misunderstood. Many developers do not even know it exists, and those who do often have incorrect mental models of how it behaves.

How the Query Cache Works

MySQL’s query cache stores the complete result set of SELECT queries, keyed by the exact SQL string. When an identical query arrives, MySQL returns the cached result without re-executing the query plan or touching any table data. The speed difference is significant: a cached query can return in microseconds compared to milliseconds for even a well-indexed uncached query. One caveat worth knowing: MySQL deprecated the query cache in 5.7.20 and removed it entirely in 8.0, so this layer only exists on stacks still running older MySQL versions.

However, “identical” means byte-for-byte identical. These two queries are different cache entries:

SELECT * FROM wp_posts WHERE post_status = 'publish' ORDER BY post_date DESC LIMIT 10
SELECT * FROM wp_posts WHERE post_status = 'publish' ORDER BY post_date DESC LIMIT  10

The extra space before “10” creates a separate cache entry. WordPress’s query generation is consistent enough that this rarely causes problems in practice, but it explains why dynamically constructed queries with variable whitespace can have poor cache hit rates.
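If you do assemble SQL dynamically, routing construction through $wpdb->prepare() keeps the generated string byte-stable across requests. A hedged sketch:

```php
// Both invocations of this prepared statement emit the identical SQL string,
// so they share a single query cache entry.
$sql = $wpdb->prepare(
    "SELECT ID FROM {$wpdb->posts} WHERE post_status = %s ORDER BY post_date DESC LIMIT %d",
    'publish',
    10
);
$ids = $wpdb->get_col( $sql );

// By contrast, hand-concatenated SQL with variable whitespace fragments
// the cache: these two strings are separate cache entries.
$limit = 10;
$sql_a = "SELECT ID FROM {$wpdb->posts} WHERE post_status = 'publish' LIMIT " . $limit;
$sql_b = "SELECT ID FROM {$wpdb->posts} WHERE post_status = 'publish' LIMIT  " . $limit; // extra space
```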

When the Query Cache Hurts Performance

The query cache has a critical invalidation rule: when any write operation (INSERT, UPDATE, DELETE) occurs on a table, every cached query that references that table is immediately invalidated. All of them. Not just the queries affected by the change; literally every cached result from that table.

For a high-traffic WordPress site with frequent writes, this behavior turns the query cache from an optimization into overhead. Consider what happens when someone publishes a comment:

1. WordPress INSERTs the comment into wp_comments.
2. MySQL invalidates every cached query referencing wp_comments.
3. WordPress UPDATEs the comment count on wp_posts.
4. MySQL invalidates every cached query referencing wp_posts.
5. For the next few milliseconds, dozens or hundreds of concurrent requests all experience cache misses simultaneously.

This “thundering herd” pattern means the query cache can actually increase database load during write-heavy periods. The cache fill and invalidation operations themselves consume CPU and create mutex contention.

VIP is aware of this tradeoff. The query cache configuration on VIP production environments is tuned conservatively. The cache size is kept moderate, and the platform relies more heavily on the object cache and edge cache to absorb read traffic.

Practical Implications for VIP Developers

You cannot configure the MySQL query cache directly on VIP. It is managed at the infrastructure level. But understanding its behavior helps you write better queries:

// This query benefits from the query cache because it hits wp_options,
// which is written to infrequently
$result = $wpdb->get_var( "SELECT option_value FROM $wpdb->options WHERE option_name = 'blogname'" );

// This query rarely benefits because wp_postmeta is written to constantly
// on active sites (post views, custom field updates, etc.)
$result = $wpdb->get_results(
    "SELECT * FROM $wpdb->postmeta WHERE meta_key = '_popular_count' ORDER BY meta_value+0 DESC LIMIT 20"
);

The practical advice: do not rely on the MySQL query cache for performance-critical paths. Use the object cache explicitly for data you need to be fast. The query cache is a bonus when it works, not a strategy to build on.
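Applying that advice to the popular-posts query above: wrap it in an explicit object cache check so the database only sees it when the cached copy is missing. A sketch in which the key, group, and TTL are illustrative:

```php
function my_get_popular_post_meta() {
    global $wpdb;

    $results = wp_cache_get( 'popular_counts', 'my_plugin' );

    if ( false === $results ) {
        $results = $wpdb->get_results(
            "SELECT post_id, meta_value FROM $wpdb->postmeta
             WHERE meta_key = '_popular_count'
             ORDER BY meta_value+0 DESC LIMIT 20"
        );
        // Short TTL: popularity data tolerates a few minutes of staleness.
        wp_cache_set( 'popular_counts', $results, 'my_plugin', 5 * MINUTE_IN_SECONDS );
    }

    return $results;
}
```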

Cache Personalization: Serving Unique Content from Cached Pages

One of VIP’s most powerful and underused features is the Cache Personalization API, previously called “Vary Cache on Cookies” or VIP’s segmented caching. This system lets you serve different cached versions of the same URL to different user segments without bypassing the edge cache entirely.

The Problem: Personalization vs. Performance

Standard edge caching assumes every anonymous visitor sees the same page. But modern sites often need to personalize content: geolocation-based pricing, A/B test variants, returning visitor messages, or regional content differences. Without cache personalization, you have two bad options: bypass the edge cache for personalized pages (slow) or serve everyone the same generic content (poor UX).

How VIP Cache Personalization Works

VIP’s solution uses a segmentation cookie to create cache variants. Instead of one cached version per URL, you can have multiple versions, each keyed to a segment value. The edge cache stores and serves the correct variant based on the cookie.

Here is how to implement geographic-based cache variants:

// In a VIP mu-plugin or theme functions.php
function set_geo_cache_segment() {
    // VIP exposes geographic data via request headers; the exact header
    // name can vary by platform version, so confirm it in the VIP docs.
    $country = isset( $_SERVER['HTTP_X_COUNTRY_CODE'] )
        ? sanitize_text_field( wp_unslash( $_SERVER['HTTP_X_COUNTRY_CODE'] ) )
        : 'US';

    // Group countries into regions to limit variant count
    $region_map = array(
        'US' => 'north-america',
        'CA' => 'north-america',
        'MX' => 'north-america',
        'GB' => 'europe',
        'DE' => 'europe',
        'FR' => 'europe',
        'JP' => 'asia-pacific',
        'AU' => 'asia-pacific',
    );

    $region = isset( $region_map[ $country ] ) ? $region_map[ $country ] : 'rest-of-world';

    // Set the segmentation cookie that VIP's edge cache reads
    if ( ! isset( $_COOKIE['vip_go_seg'] ) || $_COOKIE['vip_go_seg'] !== $region ) {
        setcookie( 'vip_go_seg', $region, 0, '/' );
    }
}
add_action( 'init', 'set_geo_cache_segment' );

With this in place, the edge cache maintains separate cached copies: one for north-america, one for europe, one for asia-pacific, and one for rest-of-world. A visitor from Germany gets the European variant. A visitor from Japan gets the Asia-Pacific variant. Both are served from the edge cache at full speed.
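On the rendering side, the template reads the same cookie to decide which variant to emit; each region's output is then cached as its own edge variant. A sketch assuming the vip_go_seg cookie set above (the banner partials are hypothetical):

```php
// In a template: render region-specific content based on the segment cookie.
$segment = isset( $_COOKIE['vip_go_seg'] )
    ? sanitize_key( $_COOKIE['vip_go_seg'] )
    : 'rest-of-world';

if ( 'europe' === $segment ) {
    get_template_part( 'partials/banner', 'europe' ); // hypothetical partial
} else {
    get_template_part( 'partials/banner', 'default' );
}
```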

Segment Cardinality: The Critical Constraint

The number of segments you create directly multiplies your cache storage requirements and reduces your cache hit rate. If you have 4 regional segments and 1000 unique URLs, you now have 4000 cache entries instead of 1000. Hit rate drops proportionally because traffic is split across variants.

VIP strongly recommends keeping segment count below 10. Going above that number starts to significantly erode the benefit of edge caching. If you need per-user personalization (where each user sees completely unique content), cache personalization is the wrong tool. Use client-side JavaScript to modify cached pages after load, or use the REST API to fetch personalized fragments.

// Client-side personalization for user-specific content
// The page is fully cached; JS fetches the personalized bit
document.addEventListener( 'DOMContentLoaded', function() {
    fetch( '/wp-json/mysite/v1/user-greeting', {
        credentials: 'same-origin'
    })
    .then( function( response ) { return response.json(); } )
    .then( function( data ) {
        var el = document.getElementById( 'personalized-greeting' );
        if ( el && data.message ) {
            el.textContent = data.message;
        }
    });
});

This pattern keeps the base page in the edge cache while personalizing specific elements via uncached API calls. The initial page load is fast (50ms from edge), and the personalized fragment loads asynchronously (200-400ms via REST API). The perceived performance is excellent because the page structure and primary content render immediately.
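The server half of this pattern is a small REST endpoint that explicitly opts out of edge caching, since its response is per-user. A sketch matching the /mysite/v1/user-greeting route the JavaScript assumes; the greeting logic is illustrative:

```php
// Per-user endpoint: must not be edge cached, so send no-cache headers.
add_action( 'rest_api_init', function () {
    register_rest_route( 'mysite/v1', '/user-greeting', array(
        'methods'             => 'GET',
        'permission_callback' => '__return_true',
        'callback'            => function () {
            $user    = wp_get_current_user();
            $message = $user->exists()
                ? sprintf( 'Welcome back, %s!', $user->display_name )
                : 'Welcome!';

            $response = new WP_REST_Response( array( 'message' => $message ) );
            $response->header( 'Cache-Control', 'no-cache, private' );

            return $response;
        },
    ) );
} );
```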

Stale-While-Revalidate Patterns

VIP’s edge cache supports stale-while-revalidate behavior, where the edge serves a stale (expired) cached response to the current request while simultaneously fetching a fresh copy in the background for the next request. This eliminates the “first request after expiry is slow” problem.

You can influence this behavior through response headers:

function set_stale_while_revalidate_headers() {
    if ( is_singular( 'post' ) && ! is_user_logged_in() ) {
        // Serve cached version for 5 minutes, but allow stale for up to 1 hour
        // while revalidating in the background. Note: must-revalidate would
        // forbid serving stale responses, so it must be omitted here.
        header( 'Cache-Control: max-age=300, stale-while-revalidate=3600' );
    }
}
// Hook template_redirect so conditional tags are available.
add_action( 'template_redirect', 'set_stale_while_revalidate_headers' );

With stale-while-revalidate=3600, even if the 5-minute TTL expires, visitors still get the stale cached version instantly while the edge fetches a new copy. The fresh version appears on the next request. This approach is particularly effective for content that updates periodically but does not require real-time freshness, like news articles that get minor edits after publication.

Debugging Cache Behavior with Response Headers

When something is not caching as expected on VIP, response headers are your primary diagnostic tool. You cannot SSH into the edge cache servers or tail Varnish logs. Headers are the window into cache behavior.

Essential Headers to Check

curl -s -D - -o /dev/null https://your-site.go-vip.net/some-page/

Key headers in the response:

X-Cache: The most important header. Values are hit (served from edge cache), miss (passed to PHP), or pass (intentionally bypassed, e.g., for logged-in users).

Age: How many seconds the cached object has existed. If Age: 0 and X-Cache: miss, you just generated a fresh response. If Age: 295 and your TTL is 300, the cached object is about to expire.

Cache-Control: Shows the TTL configuration for the response. max-age=300 means the edge will cache this for 300 seconds. no-cache or private means the edge will not cache it at all.

X-Cache-Group: Indicates the cache group, which VIP uses to categorize cached responses. Common values include normal (standard pages), single (single post/page), home (homepage), and feed (RSS feeds).

Vary: Shows which request attributes create cache variants. Vary: Accept-Encoding is standard (separate cache for gzip vs. uncompressed). If you see Vary: Cookie, that means the cache is varying on cookie values, which can fragment caching.

A Real Debugging Session

Let me walk through a real debugging scenario. You deploy a new feature, and your client reports the site feels slow. Here is the investigation:

# Step 1: Check a random page as anonymous user
curl -s -D - -o /dev/null https://your-site.go-vip.net/about/

# Expected: X-Cache: hit, Age: > 0
# Actual: X-Cache: miss on every request

# Step 2: Check what Cache-Control header your app sends
curl -s -D - -o /dev/null https://your-site.go-vip.net/about/ | grep -i cache

# Found: Cache-Control: no-cache, must-revalidate, max-age=0
# Something in your code is sending no-cache headers!

# Step 3: Search your codebase
grep -rn "no-cache" wp-content/themes/ wp-content/plugins/

# Found: A plugin is calling nocache_headers() on every request
# in an init hook, not just on authenticated pages

The fix: ensure nocache_headers() is only called when necessary, such as on pages that genuinely cannot be cached (checkout pages, account dashboards).

// BAD: This kills edge caching for every page
add_action( 'init', function() {
    nocache_headers();
});

// GOOD: Only disable caching on specific pages
add_action( 'template_redirect', function() {
    if ( is_page( 'my-account' ) || is_page( 'checkout' ) ) {
        nocache_headers();
    }
});

Automated Cache Header Monitoring

For ongoing monitoring, you can build a simple script that checks cache headers across your site’s key pages:

// cache-monitor.php - Run via WP-CLI or cron
function wpkite_monitor_cache_headers() {
    $urls = array(
        home_url( '/' ),
        home_url( '/blog/' ),
        home_url( '/about/' ),
        home_url( '/pricing/' ),
    );

    foreach ( $urls as $url ) {
        $response = wp_remote_get( $url, array(
            'timeout'   => 10,
            'cookies'   => array(), // No cookies = anonymous request
        ) );

        if ( is_wp_error( $response ) ) {
            error_log( "Cache check failed for $url: " . $response->get_error_message() );
            continue;
        }

        $cache_status = wp_remote_retrieve_header( $response, 'x-cache' );
        $age          = wp_remote_retrieve_header( $response, 'age' );
        $cache_ctrl   = wp_remote_retrieve_header( $response, 'cache-control' );

        if ( 'miss' === $cache_status ) {
            error_log( sprintf(
                'CACHE MISS: %s (Cache-Control: %s)',
                $url,
                $cache_ctrl
            ) );
        }
    }
}

if ( defined( 'WP_CLI' ) && WP_CLI ) {
    WP_CLI::add_command( 'cache-monitor', function() {
        wpkite_monitor_cache_headers();
        WP_CLI::success( 'Cache header check complete. See error log for misses.' );
    });
}

Caching Differences: REST API vs. WPGraphQL on VIP

If you are building a headless WordPress site on VIP, understanding how caching differs between the REST API and WPGraphQL is essential. They behave very differently at the edge cache layer.

REST API Caching

WordPress REST API endpoints on VIP receive edge caching by default for GET requests. The behavior mirrors standard page caching: anonymous requests are cached, authenticated requests bypass cache. Default TTL applies.

// REST API responses are cached at the edge
// GET /wp-json/wp/v2/posts returns cached data for anonymous users

// You can control caching on custom endpoints
register_rest_route( 'mysite/v1', '/featured-posts', array(
    'methods'             => 'GET',
    'callback'            => 'get_featured_posts',
    'permission_callback' => '__return_true',
) );

function get_featured_posts( $request ) {
    // This response will be edge-cached like any other GET request

    $posts = get_posts( array(
        'posts_per_page' => 5,
        'meta_key'       => '_is_featured',
        'meta_value'     => '1',
    ) );

    $data = array();
    foreach ( $posts as $post ) {
        $data[] = array(
            'id'    => $post->ID,
            'title' => get_the_title( $post ),
            'url'   => get_permalink( $post ),
        );
    }

    $response = new WP_REST_Response( $data );

    // Set a custom TTL for this endpoint
    $response->header( 'Cache-Control', 'max-age=600' );

    return $response;
}

One gotcha: REST API responses that include nonces or user-specific data will vary per user. If your REST endpoint returns a nonce, it effectively cannot be cached at the edge because every user gets a different nonce value. Design your API endpoints to separate public data (cacheable) from authenticated data (not cacheable).

WPGraphQL Caching Challenges

WPGraphQL presents a fundamentally different caching challenge. By default, WPGraphQL uses POST requests for queries. POST requests bypass the edge cache entirely. This means your GraphQL queries hit PHP on every single request.

For a headless site making dozens of GraphQL queries per page load, this is a serious performance problem. A typical Next.js frontend might make 5 to 15 GraphQL requests to build a single page. Without edge caching, each of those requests takes 200 to 800ms, adding up to seconds of total API time.

The solution is to use GET-based persisted queries:

// In your WPGraphQL configuration, enable GET requests
// and use query IDs instead of full query strings

// Register a persisted query
add_filter( 'graphql_request_data', function( $data ) {
    // Map query IDs to actual queries
    $persisted_queries = array(
        'featured-posts' => '
            query FeaturedPosts {
                posts(where: { metaQuery: { key: "_is_featured", value: "1" } }, first: 5) {
                    nodes {
                        id
                        title
                        uri
                    }
                }
            }
        ',
    );

    if ( isset( $data['queryId'] ) && isset( $persisted_queries[ $data['queryId'] ] ) ) {
        $data['query'] = $persisted_queries[ $data['queryId'] ];
    }

    return $data;
});

With GET-based persisted queries, the edge cache can cache GraphQL responses just like REST responses. The URL becomes the cache key, and identical queries return cached results. VIP has published specific guidance on this approach, and the WPGraphQL Smart Cache plugin provides additional tooling.
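From any PHP client or server-side render layer, a persisted query then becomes a plain, cacheable GET with the query ID in the URL. A sketch using the queryId parameter from the filter above; the endpoint URL is illustrative, so verify the parameter handling against your WPGraphQL configuration:

```php
// Build the GET URL for a persisted query. Identical URLs share one edge
// cache entry, so repeat requests are served without touching PHP.
$url = add_query_arg(
    array( 'queryId' => 'featured-posts' ),
    'https://example.com/graphql'
);

$response = wp_remote_get( $url );
if ( ! is_wp_error( $response ) ) {
    $data = json_decode( wp_remote_retrieve_body( $response ), true );
}
```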

The alternative approach is to cache at the application level using the object cache:

// Sketches only: WPGraphQL hook names and signatures vary across versions,
// so verify these against the plugin version you run.

// Write side: after a query executes, store the full response.
add_filter( 'graphql_return_response', function( $response, $schema, $operation, $query, $variables, $request ) {
    $cache_key = 'gql_' . md5( $query . wp_json_encode( $variables ) );
    wp_cache_set( $cache_key, $response, 'graphql_responses', 5 * MINUTE_IN_SECONDS );

    return $response;
}, 10, 6 );

// Read side: before resolving a field, return a cached value when present.
// (A complete implementation would also populate this cache after each
// resolver runs; that write is omitted here for brevity.)
add_filter( 'graphql_before_resolve', function( $result, $source, $args, $context, $info ) {
    $cache_key = 'gql_field_' . md5( $info->fieldName . wp_json_encode( $args ) );
    $cached    = wp_cache_get( $cache_key, 'graphql_fields' );

    return ( false !== $cached ) ? $cached : $result;
}, 10, 5 );

Network Waterfall Comparison

To put real numbers on this, here is what I measured on a VIP site running a headless Next.js frontend:

Without edge caching (POST-based GraphQL):
– 12 GraphQL queries per page
– Average query time: 340ms
– Total API time: ~4,080ms
– Time to First Byte on frontend: ~4.5 seconds

With GET-based persisted queries (edge cached):
– Same 12 queries
– Average query time: 45ms (edge cache hits)
– Total API time: ~540ms
– Time to First Byte on frontend: ~800ms

That is a 5.6x improvement. The site went from painfully slow to snappy, entirely by enabling edge cache compatibility for GraphQL.

Environment-Specific Caching Behavior

VIP provides multiple environments: production, staging (called “preprod”), develop, and local development. Caching behavior differs significantly across these environments, and not understanding these differences leads to false conclusions during testing.

Production

Full caching stack active. Edge cache, object cache, and query cache are all operational. This is the only environment where you see true production caching behavior. Cache pool sizes are at their allocated maximums. CDN nodes are fully populated.

Staging (Preprod)

Edge caching is active but operates on a smaller pool. Cache hit rates may be lower because staging sees less traffic (fewer requests to populate the cache). Object cache is active with a smaller Memcached allocation. The database may be a snapshot from production, so query cache behavior should be similar.

One critical difference: VIP purges the edge cache on staging environments more aggressively than on production. If you are testing cache-dependent features on staging and seeing inconsistent behavior, this is likely why.

Develop and Feature Branch Environments

These environments have limited caching. Edge caching may be disabled or heavily reduced. Object cache is active but with minimal memory allocation. These environments are designed for functional testing, not performance testing.

Local Development with VIP Dev-Env

VIP’s local development environment (vip dev-env) runs a stripped-down version of the stack. Object cache is available through a local Memcached instance, but there is no edge cache. This means your local development experience will always feel different from production.

To simulate edge cache behavior locally, you can add cache headers inspection to your workflow:

// Add to your local development mu-plugin
if ( defined( 'VIP_GO_APP_ENVIRONMENT' ) && 'local' === VIP_GO_APP_ENVIRONMENT ) {
    add_action( 'send_headers', function() {
        // Simulate what the edge cache would do
        if ( is_user_logged_in() ) {
            header( 'X-Cache-Simulation: pass (logged-in user)' );
        } elseif ( 'POST' === $_SERVER['REQUEST_METHOD'] ) {
            header( 'X-Cache-Simulation: pass (POST request)' );
        } else {
            header( 'X-Cache-Simulation: would-cache (TTL: 300s)' );
        }
    });
}

This does not actually cache anything, but it helps you identify which pages would or would not be cached in production. I have found this particularly useful during code review: you can quickly verify that new features are not accidentally sending no-cache headers.

Common Caching Pitfalls on VIP

After working on multiple VIP projects, I have compiled the most common caching mistakes I see teams make. Some of these are obvious in hindsight. Others are genuinely subtle.

Pitfall 1: Using PHP Sessions

PHP sessions set a cookie on every request, which causes the edge cache to bypass for that user. VIP does not support PHP sessions at all, and they are blocked at the platform level. But some plugins attempt to use them, fail silently, or fall back to alternative session mechanisms that still interfere with caching.

If you need session-like behavior, use a combination of cookies (for segment identification) and the object cache (for data storage):

// Instead of $_SESSION['cart_count'] = 5;
// Use:
function set_user_cart_count( $user_id, $count ) {
    wp_cache_set( "cart_count_{$user_id}", $count, 'user_carts', DAY_IN_SECONDS );
}

function get_user_cart_count( $user_id ) {
    return wp_cache_get( "cart_count_{$user_id}", 'user_carts' );
}

Pitfall 2: Unintentional Vary Headers

If your code or a plugin adds a Vary: Cookie header, the edge cache creates separate entries for every unique cookie combination. Since WordPress sets multiple cookies with unique values per user, this effectively disables edge caching. One site I worked on had a performance plugin that added Vary: Cookie to “improve caching.” It had the opposite effect on VIP, reducing the edge cache hit rate from 94% to 12%.

// Find and remove problematic Vary headers
add_filter( 'wp_headers', function( $headers ) {
    // Remove Vary: Cookie if a plugin added it
    if ( isset( $headers['Vary'] ) && false !== strpos( $headers['Vary'], 'Cookie' ) ) {
        unset( $headers['Vary'] );
    }
    return $headers;
});

Pitfall 3: Cache Stampede on Cold Start

When you deploy new code on VIP, the edge cache is purged. For the first few seconds after deployment, every request hits PHP. If your site gets thousands of requests per second, you get a cache stampede: thousands of concurrent PHP processes all trying to build the same pages simultaneously.

VIP’s infrastructure handles this reasonably well, but your application code should be resilient. Use cache locks to prevent stampedes on expensive operations:

function get_expensive_data() {
    $cache_key = 'expensive_computation';
    $cache_group = 'mysite_data';
    $lock_key = $cache_key . '_lock';

    // Try to get cached data
    $data = wp_cache_get( $cache_key, $cache_group );
    if ( false !== $data ) {
        return $data;
    }

    // Try to acquire a lock
    $lock = wp_cache_add( $lock_key, 1, $cache_group, 30 );

    if ( ! $lock ) {
        // Another process is computing this data
        // Return stale data if available, or a default
        $stale = wp_cache_get( $cache_key . '_stale', $cache_group );
        return false !== $stale ? $stale : get_default_data();
    }

    // We have the lock; compute the data
    $data = actually_compute_expensive_data();

    // Store fresh data and a stale copy
    wp_cache_set( $cache_key, $data, $cache_group, HOUR_IN_SECONDS );
    wp_cache_set( $cache_key . '_stale', $data, $cache_group, DAY_IN_SECONDS );

    // Release lock
    wp_cache_delete( $lock_key, $cache_group );

    return $data;
}

This pattern ensures only one process computes the expensive data. All other concurrent requests get stale data or a default, preventing the stampede.

Pitfall 4: Forgetting About Admin-Ajax.php

Requests to admin-ajax.php always bypass the edge cache: the endpoint lives under /wp-admin/, which is never edge cached, and the requests are typically POSTs that carry authentication cookies anyway. If your frontend makes heavy use of admin-ajax for public-facing features, you are missing out on edge caching entirely.

Migrate public AJAX endpoints to the REST API instead:

// BEFORE: admin-ajax.php handler (never cached at edge)
add_action( 'wp_ajax_nopriv_get_popular_posts', 'ajax_get_popular_posts' );
function ajax_get_popular_posts() {
    $posts = get_popular_posts();
    wp_send_json_success( $posts );
}

// AFTER: REST API endpoint (cached at edge for anonymous users)
add_action( 'rest_api_init', function() {
    register_rest_route( 'mysite/v1', '/popular-posts', array(
        'methods'             => 'GET',
        'callback'            => 'rest_get_popular_posts',
        'permission_callback' => '__return_true',
    ) );
});

function rest_get_popular_posts( $request ) {
    $posts = get_popular_posts();
    return new WP_REST_Response( $posts );
}

Pitfall 5: Excessive Transient Writes

Since transients on VIP live in Memcached (not the database), setting transients with very short TTLs creates high churn in the object cache. I have seen plugins that set transients with 60-second TTLs for data that is requested thousands of times per minute. Each expiration triggers a fresh database query and a Memcached write. The object cache becomes a bottleneck instead of a buffer.

If your data changes frequently and is requested heavily, consider whether a longer TTL with proactive invalidation is more appropriate than a short TTL with passive expiration.
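A sketch of that trade-off, using a hypothetical mysite_query_trending_posts() as the expensive query: cache for hours, and delete the key the moment a post's publish status changes, instead of letting a short TTL churn thousands of times an hour:

```php
// Long TTL plus proactive invalidation, instead of a 60-second transient.
function mysite_get_trending_posts() {
    $cached = wp_cache_get( 'trending_posts', 'mysite_data' );
    if ( false !== $cached ) {
        return $cached;
    }

    $posts = mysite_query_trending_posts(); // the expensive query (illustrative name)
    wp_cache_set( 'trending_posts', $posts, 'mysite_data', 12 * HOUR_IN_SECONDS );

    return $posts;
}

// Invalidate when content actually changes, not on a timer.
add_action( 'transition_post_status', function( $new_status, $old_status, $post ) {
    if ( 'publish' === $new_status || 'publish' === $old_status ) {
        wp_cache_delete( 'trending_posts', 'mysite_data' );
    }
}, 10, 3 );
```

The data is at most one publish event stale, and the expensive query runs once per invalidation rather than once per minute.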

Pitfall 6: Not Accounting for Multisite Cache Isolation

On VIP multisite installations, each site in the network has its own cache namespace (prefixed with the blog ID). This is usually correct behavior, but it means shared data between sites gets cached redundantly. If sites 1, 2, and 3 all query the same global data, it gets cached three times in Memcached.

For shared data across a multisite network, use the switch_to_blog() pattern carefully, or cache in a global group:

// Register a global cache group (shared across all sites in multisite)
function register_global_cache_groups() {
    wp_cache_add_global_groups( array( 'mysite_global' ) );
}
add_action( 'init', 'register_global_cache_groups' );

// Data cached in 'mysite_global' is accessible from any site in the network
wp_cache_set( 'network_announcement', $announcement, 'mysite_global', HOUR_IN_SECONDS );

// Any site can read it without switch_to_blog()
$announcement = wp_cache_get( 'network_announcement', 'mysite_global' );

Performance Benchmarks: Real-World Impact

Numbers tell the story better than theory. Here are benchmarks from actual VIP projects I have worked on or reviewed, anonymized but based on real measurements.

Case Study 1: News Publisher (50M monthly pageviews)

Before optimization:
– Edge cache hit rate: 67%
– Average TTFB (anonymous): 890ms
– Average TTFB (logged-in editors): 2,400ms
– Database queries per uncached page: 127
– Object cache hit rate: 78%

Issues found:
– A social sharing plugin added Vary: Cookie to all responses
– Homepage made 3 uncached REST API calls via admin-ajax
– A “trending posts” widget used a 30-second transient that generated complex MySQL JOINs

After optimization:
– Edge cache hit rate: 96%
– Average TTFB (anonymous): 62ms
– Average TTFB (logged-in editors): 1,100ms
– Database queries per uncached page: 43
– Object cache hit rate: 94%

The single biggest improvement came from removing the Vary: Cookie header. That alone pushed the edge cache hit rate from 67% to 91%. Converting admin-ajax calls to REST API endpoints added another 3 percentage points. Replacing the 30-second transient with a 10-minute cached value (invalidated on post publish) handled the rest.

Case Study 2: SaaS Marketing Site (2M monthly pageviews)

Before optimization:
– Edge cache hit rate: 44%
– Average TTFB: 1,200ms
– Pages with cache-busting query strings: 38% of traffic

Issues found:
– A/B testing tool appended unique query parameters per visitor
– Contact form plugin called session_start() on every page
– Pricing page generated real-time calculations on every load

After optimization:
– Edge cache hit rate: 93%
– Average TTFB: 71ms
– Pages with cache-busting query strings: 2% of traffic

The A/B testing tool was reconfigured to use VIP’s cache personalization with 3 segments instead of unique query strings per visitor. The session plugin was replaced with a cookie-based alternative. Pricing calculations were moved to a client-side JavaScript module that fetched base data from a cached REST endpoint.

Case Study 3: E-commerce Catalog (8M monthly pageviews)

Before optimization:
– Product page TTFB: 2,100ms
– Cart/checkout TTFB: 3,400ms
– Object cache size: constantly hitting eviction threshold
– Memcached eviction rate: ~40,000 items/hour

Issues found:
– Product pages cached full WooCommerce product objects (often exceeding 1MB)
– Cart fragments endpoint was called via admin-ajax POST on every page
– A custom “recently viewed” feature stored per-user data in 10,000+ individual cache keys

After optimization:
– Product page TTFB: 180ms (from edge cache)
– Cart/checkout TTFB: 1,200ms (inherently uncacheable, but reduced queries)
– Memcached eviction rate: ~2,000 items/hour

The product object caching was redesigned to store only product IDs and lightweight metadata, staying well under the 1MB limit. Cart fragments were moved to a REST endpoint with GET for the initial state (cacheable) and POST only for mutations. The “recently viewed” feature was moved entirely to client-side localStorage, eliminating thousands of Memcached entries.
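As a sketch of that lightweight-metadata redesign (assuming WooCommerce's wc_get_product(); mysite_get_product_card() is an illustrative name), cache only the handful of fields a listing actually renders:

```php
// Cache a small array of display fields instead of the full product object,
// which can exceed Memcached's 1MB object limit.
function mysite_get_product_card( $product_id ) {
    $cache_key = "product_card_{$product_id}";
    $card      = wp_cache_get( $cache_key, 'mysite_products' );

    if ( false === $card ) {
        $product = wc_get_product( $product_id );
        $card    = array(
            'id'    => $product_id,
            'name'  => $product->get_name(),
            'price' => $product->get_price(),
            'url'   => get_permalink( $product_id ),
        );
        wp_cache_set( $cache_key, $card, 'mysite_products', HOUR_IN_SECONDS );
    }

    return $card;
}
```

A few hundred bytes per product instead of a megabyte-scale object keeps eviction pressure low and every entry safely under the size limit.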

VIP-CLI Cache Commands

VIP provides command-line tools for cache management. These are invaluable during deployments and debugging.

# Purge the entire edge cache for an application
vip app --app=your-app purge-cache

# Show application and environment details
vip app --app=your-app --env=production

# Access WP-CLI on VIP (to interact with object cache)
vip wp --app=your-app -- cache flush

# Get a specific cached value
vip wp --app=your-app -- cache get my_key my_group

# Set a cache value for testing
vip wp --app=your-app -- cache set test_key "test_value" my_group 3600

# Delete a specific cache entry
vip wp --app=your-app -- cache delete my_key my_group

Be extremely cautious with cache flush on production. It drops every object cache entry, which triggers a thundering herd as all concurrent requests simultaneously hit the database to rebuild their cached data. On a high-traffic site, a cache flush during peak hours can cause a brief but intense load spike. Schedule cache flushes for low-traffic periods whenever possible.

For edge cache purges, the purge-cache command clears the entire CDN cache for your application. This is a nuclear option. Use targeted URL purges from code (the wpcom_vip_purge_edge_cache_for_url() function discussed earlier) whenever you can.
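A targeted purge might look like the following sketch, hooked to publish-state changes; adjust the URL list to whichever pages actually render the post:

```php
// Purge only the URLs a published post affects, not the entire edge cache.
add_action( 'transition_post_status', function( $new_status, $old_status, $post ) {
    if ( 'publish' !== $new_status && 'publish' !== $old_status ) {
        return;
    }

    wpcom_vip_purge_edge_cache_for_url( get_permalink( $post ) );
    wpcom_vip_purge_edge_cache_for_url( home_url( '/' ) ); // homepage lists recent posts
}, 10, 3 );
```

Every purged URL is a cold cache entry that must be rebuilt, so keep the list to pages that genuinely changed.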

Building a Cache-Aware Development Workflow

The best VIP teams I have worked with build cache awareness into their development process from the start, not as an afterthought during performance firefighting.

Pre-Deployment Cache Checklist

Before deploying any feature to VIP production, run through this checklist:

1. Does the feature add any new cookies? New cookies can bypass or fragment the edge cache.
2. Does the feature use nocache_headers()? If so, is it scoped to only the pages that genuinely need it?
3. Does the feature store data larger than 500KB in a single cache entry? Test serialized size explicitly.
4. Does the feature create cache keys with high cardinality (e.g., per-user, per-session, per-request-parameter)? Estimate the total number of unique cache keys.
5. Does the feature make admin-ajax calls that could be REST API GET requests instead?
6. Does the feature use transients with very short TTLs? Consider whether proactive invalidation with longer TTLs is more appropriate.
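For item 3, it helps to measure the serialized size explicitly before writing; a small guard sketch (mysite_safe_cache_set() is an illustrative wrapper, not a platform API):

```php
// Refuse to cache values that approach Memcached's 1MB object limit.
function mysite_safe_cache_set( $key, $value, $group, $ttl ) {
    $size = strlen( serialize( $value ) );

    if ( $size > 500 * 1024 ) { // leave headroom well below the 1MB hard limit
        trigger_error( "Skipping cache set for {$key}: {$size} bytes", E_USER_WARNING );
        return false;
    }

    return wp_cache_set( $key, $value, $group, $ttl );
}
```

Failing loudly in development is far cheaper than silently dropped cache writes in production.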

Performance Testing Cache Behavior

Use VIP’s preprod environment for performance testing, but remember that the cache pool is smaller than production. Your test results represent a lower bound; production will usually perform better due to larger cache allocations and more traffic keeping the cache warm.

# Simple load test to warm the cache and measure behavior
# Using Apache Bench (ab) as an example
ab -n 1000 -c 50 https://your-site.go-vip.net/

# Then check cache headers on subsequent requests
for i in $(seq 1 10); do
    echo "Request $i:"
    curl -s -o /dev/null -w "HTTP %{http_code}, Time: %{time_total}s\n" \
        -H "Accept-Encoding: gzip" \
        https://your-site.go-vip.net/
done

The first request in your loop should show the longest response time (cache miss). Subsequent requests should be significantly faster (cache hits). If all 10 requests show similar response times, something is preventing caching.

Monitoring in Production

VIP’s dashboard provides cache metrics, but you should supplement them with application-level monitoring. The qm/cache panel in the Query Monitor plugin (which is available on VIP) shows object cache statistics for each request, including hits, misses, and the total amount of data transferred to and from Memcached.

For edge cache monitoring, track the X-Cache header in your synthetic monitoring. Tools like Pingdom, Datadog Synthetic Monitoring, or a custom script can hit your key URLs periodically and alert when the cache hit rate drops below a threshold.
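A minimal sketch of the alerting math: collect X-Cache values from your synthetic checks and compute the hit rate. The samples below are hard-coded for illustration; in practice you would pipe in values scraped from `curl -sI` output by your monitoring script.

```shell
# Compute an edge cache hit rate from a list of X-Cache header values.
samples="HIT
HIT
MISS
HIT"

hits=$(printf '%s\n' "$samples" | grep -c '^HIT')
total=$(printf '%s\n' "$samples" | wc -l)
hit_rate=$(( 100 * hits / total ))
echo "edge cache hit rate: ${hit_rate}%"
```

Alert when the computed rate drops below your baseline (the article's case studies suggest 90 to 95 percent is achievable on well-behaved sites).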

Putting It All Together

WordPress VIP’s caching architecture is a layered system where each layer compensates for the limitations of the layers below it. The edge cache eliminates PHP execution entirely for anonymous users. The object cache prevents redundant database queries when PHP does execute. The MySQL query cache provides a last line of defense for identical queries that make it to the database.

The most performant VIP sites maximize edge cache hit rates above 95%, keep object cache hit rates above 90%, and treat the query cache as a bonus rather than a strategy. They use cache personalization sparingly (fewer than 10 segments), design their data access patterns around Memcached’s 1MB object limit, and use the REST API instead of admin-ajax for public-facing data endpoints.

The mental model that works best: think of your VIP application as an origin server that should be called as rarely as possible. Every request that reaches PHP is a request your caching layers failed to handle. Design your features with that perspective, and performance follows naturally.

When you do find yourself debugging cache issues, response headers are your best friend. X-Cache, Age, Cache-Control, and Vary tell you exactly what each caching layer is doing. Log them, monitor them, and build alerting around them.

The difference between a well-cached VIP site and a poorly-cached one is not marginal. It is the difference between 60ms response times and 2-second response times. It is the difference between handling a traffic spike gracefully and watching your error rate climb. The three caching layers are your most powerful tools. Learn them, respect their constraints, and use them intentionally.

Marcus Chen

Staff engineer with 12 years in WordPress infrastructure. Previously at Automattic and a large media company. Writes about hosting platforms, caching, and deployment pipelines.