WordPress Object Cache Internals: From alloptions to Cache Stampedes
The Boot Sequence: How WordPress Initializes Its Object Cache
Every WordPress request begins with a question the system answers before almost anything else: is there a persistent object cache available? The function responsible for answering that question is wp_start_object_cache(), found in wp-includes/load.php. Understanding what happens inside this function unlocks a great deal of knowledge about how caching works at every layer of the WordPress stack.
When WordPress boots, the loading sequence in wp-settings.php calls wp_start_object_cache() very early, before plugins load and before the theme initializes. The function first checks whether a file named object-cache.php exists inside the wp-content directory. This file is what WordPress calls a “drop-in” plugin. Unlike regular plugins, drop-ins occupy a specific filename in a specific directory and load automatically without activation.
If object-cache.php exists, WordPress includes it directly. This drop-in is expected to define a WP_Object_Cache class that replaces the default implementation. After including the drop-in, WordPress calls wp_cache_init(), which typically instantiates the replacement cache class and assigns it to the global $wp_object_cache variable.
If no drop-in exists, WordPress falls back to its built-in WP_Object_Cache class defined in wp-includes/class-wp-object-cache.php. This default class stores everything in a PHP array for the duration of a single request. Nothing persists between requests. Every page load starts from zero, queries the database, and builds up the in-memory cache from scratch.
Here is the simplified logic:
function wp_start_object_cache() {
    if ( file_exists( WP_CONTENT_DIR . '/object-cache.php' ) ) {
        require_once WP_CONTENT_DIR . '/object-cache.php';
    } elseif ( ! class_exists( 'WP_Object_Cache' ) ) {
        require_once ABSPATH . WPINC . '/class-wp-object-cache.php';
    }

    if ( function_exists( 'wp_cache_init' ) ) {
        wp_cache_init();
    }

    if ( function_exists( 'wp_cache_add_global_groups' ) ) {
        wp_cache_add_global_groups( array(
            'blog-details',
            'blog-id-cache',
            'blog-lookup',
            'global-posts',
            'networks',
            'rss',
            'sites',
            'site-details',
            'site-lookup',
            'site-options',
            'site-transient',
            'users',
            'useremail',
            'userlogins',
            'usermeta',
            'user_meta',
            'userslugs',
        ) );
    }
}
The critical takeaway: the drop-in completely replaces the default cache class. There is no inheritance requirement, no interface contract enforced at the language level. The drop-in just needs to provide the same set of functions (wp_cache_get(), wp_cache_set(), wp_cache_delete(), wp_cache_add(), wp_cache_flush(), and so on) that WordPress core expects. This design gives persistent cache backends enormous flexibility, but it also means a buggy drop-in can break an entire site at the earliest possible moment in the boot process.
After the cache initializes, wp_start_object_cache() registers global cache groups. These groups matter significantly in multisite, which we will cover in detail later. For single-site installs, the global groups registration still happens but has minimal practical impact.
WP_Object_Cache Class Internals: Groups, Keys, and Blog Prefixes
The default WP_Object_Cache class in WordPress core is surprisingly straightforward. Its primary data structure is a nested associative array stored in the $cache property. The first level of keys represents cache groups. The second level represents individual cache keys within those groups. The stored values can be any serializable PHP type.
class WP_Object_Cache {
    private $cache           = array();
    public $cache_hits       = 0;
    public $cache_misses     = 0;
    protected $global_groups = array();
    private $blog_prefix;
    // ...
}
When you call wp_cache_set( 'my_key', $data, 'my_group' ), the default implementation stores it at $this->cache['my_group']['my_key']. If you omit the group, WordPress uses the 'default' group. Reads also increment the internal hit/miss counters, which you can inspect later through the global $wp_object_cache object.
Cache Key Construction
In multisite installations, cache keys need to be scoped per site. The WP_Object_Cache class handles this through its $blog_prefix property, which holds the current blog ID followed by a colon. When you call wp_cache_get( 'my_key', 'options' ), the class internally constructs the storage key by prepending the blog prefix. On blog ID 5, the internal key becomes 5:my_key within the options group.
This prefix-based isolation keeps site 5’s options separate from site 12’s options within the same cache backend. Without it, every site in a multisite network would read each other’s cached data, producing catastrophic cross-contamination.
Global Groups: The Exception to Blog Prefixing
Some data is shared across an entire multisite network. User data, for instance, belongs to the network, not to individual sites. The $global_groups property tracks which cache groups should skip blog-prefix scoping. When a cache group is registered as global, the key is stored without the blog prefix, making it accessible regardless of which site in the network triggered the cache read.
The function wp_cache_add_global_groups() registers these groups. WordPress core registers a set of default global groups during wp_start_object_cache() including 'users', 'usermeta', 'user_meta', 'userlogins', 'useremail', 'userslugs', 'site-transient', 'networks', 'sites', and several others.
When the cache class processes a get or set operation, it checks whether the group is global:
protected function _key( $key, $group ) {
    if ( empty( $group ) ) {
        $group = 'default';
    }

    if ( isset( $this->global_groups[ $group ] ) ) {
        $prefix = '';
    } else {
        $prefix = $this->blog_prefix;
    }

    return $prefix . $key;
}
This key-construction logic is the single most important piece of the multisite caching puzzle. Get it wrong in a custom drop-in, and you will spend days tracking down phantom data corruption across sites.
Non-Persistent Groups
Some persistent cache drop-ins introduce another concept: non-persistent groups. These are cache groups that should only live in the local PHP array, never written to Redis, Memcached, or any external store. Common examples include 'counts' and 'plugins'. The reasoning varies. Sometimes the data changes too frequently to benefit from external caching. Sometimes the data is request-specific and meaningless to other processes.
The default WP_Object_Cache class does not implement non-persistent groups because it stores everything in PHP memory anyway. But persistent cache drop-ins from plugins like Redis Object Cache or Memcached Object Cache typically support a wp_cache_add_non_persistent_groups() function or equivalent configuration to handle this.
The alloptions Problem: One Massive Cache Key Per Site
Among all the data WordPress caches, one entry causes more operational headaches than any other: alloptions. This single cache key, stored in the 'options' group, contains every autoloaded option from the wp_options table for the current site. On a typical WordPress site, this key holds a serialized associative array with hundreds or thousands of key-value pairs. On sites with many plugins, the serialized payload can exceed 1 MB.
The alloptions key exists because WordPress made a deliberate performance trade-off many years ago. Rather than running a separate database query for each option value, WordPress loads all autoloaded options in a single SELECT * FROM wp_options WHERE autoload = 'yes' query and stuffs the results into one cache entry. For a site with 200 autoloaded options, this converts 200 potential database queries into one query plus one cache read.
On paper, this is smart. In practice, it creates several painful problems.
Problem 1: Serialization Size
Every time the alloptions cache is read, the entire serialized blob must be unserialized. Every time an autoloaded option changes, the entire blob must be re-serialized and written back. For a 500 KB alloptions entry, this means every update_option() call on an autoloaded option triggers a full serialize/unserialize cycle of half a megabyte of data. On a busy site, this adds measurable CPU overhead.
With a persistent object cache, the problem compounds. The 500 KB blob must be transmitted over the network to Redis or Memcached, stored there, and transmitted back on every subsequent request. Network bandwidth and latency multiply the cost.
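The cost of this cycle is easy to reproduce outside WordPress. The standalone sketch below (option names and sizes are invented) builds an options array of roughly the scale described above and times one serialize/unserialize round trip:

```php
<?php
// Synthetic illustration (not WordPress code): build an options array in the
// size range discussed above and time one serialize/unserialize round trip.
$options = array();
for ( $i = 0; $i < 500; $i++ ) {
    $options[ "option_{$i}" ] = str_repeat( 'x', 1024 ); // ~1 KB per option
}

$start = microtime( true );
$blob  = serialize( $options );
$back  = unserialize( $blob );
$ms    = ( microtime( true ) - $start ) * 1000;

printf( "blob: %d bytes, serialize+unserialize: %.2f ms\n", strlen( $blob ), $ms );
```

A few milliseconds per cycle sounds small, but it is paid on every cold alloptions read and on every autoloaded-option write, on every request that triggers one.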
Problem 2: Cache Invalidation Granularity
Because all autoloaded options share one cache key, changing any single option invalidates the entire alloptions cache entry. If a plugin updates an innocuous option once per minute, the entire alloptions blob is flushed and rebuilt once per minute. Every request during that rebuild window pays the cost of a database query and full re-serialization.
This behavior means that a single poorly behaved plugin that frequently writes to autoloaded options can degrade cache efficiency for the entire site. The plugin might only be updating a tiny 20-byte value, but it forces a full rebuild of a structure that could contain thousands of values totaling hundreds of kilobytes.
Problem 3: Multisite Multiplication
On a multisite network with 500 sites, each site has its own alloptions key. If the network serves traffic across many sites simultaneously, the cache backend stores 500 separate alloptions blobs. A Redis instance holding these for a large network can allocate significant memory just for options data. Worse, if all 500 sites use the same plugins, their alloptions entries contain substantial duplication, but there is no deduplication mechanism.
How Plugins Make It Worse
Many plugins register options with autoload = 'yes' when they do not need to. An option that only matters on the plugin’s settings page in wp-admin has no business being autoloaded on every frontend request. But the default value of the $autoload parameter in add_option() is 'yes', so developers who do not think carefully about it end up bloating alloptions by default.
WordPress 6.6 reworked the autoload system, moving beyond plain 'yes'/'no' to values including 'on', 'off', 'auto-on', and 'auto-off', and letting core decide whether to autoload options that do not state a preference. But the fundamental one-big-key architecture remains.
The best thing you can do for alloptions performance is audit your wp_options table:
SELECT option_name, LENGTH(option_value) as size
FROM wp_options
WHERE autoload = 'yes'
ORDER BY size DESC
LIMIT 30;
If you see transient data, serialized plugin settings arrays of enormous size, or options that clearly belong to deactivated plugins, clean them up. Change their autoload value to 'no' or delete them outright.
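If the audit turns up an oversized value you want to keep but not autoload, the change can be made directly in SQL (the option name below is a placeholder for whatever your audit surfaced):

```sql
UPDATE wp_options
SET autoload = 'no'
WHERE option_name = 'some_bulky_plugin_setting';
```

Because this bypasses update_option(), WordPress will not invalidate the cached alloptions entry for you; flush the object cache afterward so the blob is rebuilt without the demoted option.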
wp_load_alloptions() Lifecycle: From Boot to Cache Hit
The function wp_load_alloptions() is called extremely early during WordPress initialization and is the mechanism by which the alloptions cache entry gets populated. Understanding its lifecycle reveals how WordPress balances database queries against cache reads.
Here is the flow, step by step.
Step 1: Check the Object Cache
The function first calls wp_cache_get( 'alloptions', 'options' ). If a persistent object cache is active and the alloptions key exists, this returns the full associative array immediately. No database query is needed. This is the fast path, and on a healthy site with warm cache, this is what happens on every request.
Step 2: Cache Miss, Query the Database
If the cache returns false (a miss), wp_load_alloptions() runs the database query:
$alloptions_db = $wpdb->get_results(
    $wpdb->prepare(
        "SELECT option_name, option_value FROM $wpdb->options WHERE autoload IN ( " .
            implode( ', ', array_fill( 0, count( $autoload_values ), '%s' ) ) . " )",
        $autoload_values
    )
);
This query pulls every row from wp_options where autoload is enabled. The results are assembled into an associative array keyed by option_name.
Step 3: Prime the Cache
After building the array, the function calls wp_cache_add( 'alloptions', $alloptions, 'options' ). This stores the full array in the object cache. For persistent backends, it is now written to Redis or Memcached. Subsequent requests will hit Step 1 and return immediately.
Step 4: Return the Array
The function returns the associative array. From this point forward, any call to get_option() for an autoloaded option consults the in-memory copy of this array rather than making another database query or cache read.
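The four steps condense into a short flow. Here is a standalone simulation (the cache and database helpers are stand-ins, not WordPress APIs) showing that only the first call touches the database:

```php
<?php
// Stand-ins for the object cache and the database (illustrative only).
$fake_cache = array();
$db_queries = 0;

function cache_get( $key ) {
    global $fake_cache;
    return $fake_cache[ $key ] ?? false;
}

function cache_add( $key, $value ) {
    global $fake_cache;
    if ( ! isset( $fake_cache[ $key ] ) ) {
        $fake_cache[ $key ] = $value;
    }
}

function db_load_autoloaded_options() {
    global $db_queries;
    $db_queries++;
    return array( 'siteurl' => 'https://example.com', 'blogname' => 'Demo' );
}

function load_alloptions() {
    // Step 1: check the object cache.
    $alloptions = cache_get( 'alloptions' );

    if ( false === $alloptions ) {
        // Step 2: cache miss -- query the database.
        $alloptions = db_load_autoloaded_options();
        // Step 3: prime the cache for subsequent reads.
        cache_add( 'alloptions', $alloptions );
    }

    // Step 4: return the array.
    return $alloptions;
}

load_alloptions(); // first call: one DB query
load_alloptions(); // second call: served from cache
echo $db_queries;  // prints 1
```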
The Notoptions Cache
There is a companion cache key called notoptions in the 'options' group. This stores option names that WordPress has been asked about but that do not exist. Without this negative cache, every call to get_option( 'nonexistent_thing' ) would trigger a database query. The notoptions cache prevents that by recording known misses.
The get_option() function checks both alloptions and notoptions before falling back to a direct database query:
function get_option( $option, $default_value = false ) {
    // ...
    $alloptions = wp_load_alloptions();

    if ( isset( $alloptions[ $option ] ) ) {
        $value = $alloptions[ $option ];
    } else {
        $value = wp_cache_get( $option, 'options' );

        if ( false === $value ) {
            // Check notoptions.
            $notoptions = wp_cache_get( 'notoptions', 'options' );

            if ( isset( $notoptions[ $option ] ) ) {
                return apply_filters( "default_option_{$option}", $default_value, $option, false );
            }

            // Direct DB query for non-autoloaded options.
            $row = $wpdb->get_row(
                $wpdb->prepare(
                    "SELECT option_value FROM $wpdb->options WHERE option_name = %s LIMIT 1",
                    $option
                )
            );
            // ...
        }
    }
    // ...
}
This three-tier lookup (alloptions array, individual option cache, notoptions negative cache) is what makes the options system performant despite its age. The design is pragmatic. It handles the common cases (autoloaded options) very efficiently while still supporting options that are not autoloaded.
Cache Stampedes: When Redis Restarts and 1000 Requests Hit Cold Cache
A cache stampede (sometimes called a thundering herd) is one of the most dangerous operational scenarios for a WordPress site running a persistent object cache. It happens when a heavily used cache key expires or disappears, and many concurrent requests simultaneously attempt to rebuild it.
Here is a concrete scenario. You have a WooCommerce site handling 200 requests per second. Redis is your persistent object cache. The alloptions key, the user session data, and hundreds of transients all live in Redis. Someone restarts Redis for maintenance, or Redis runs out of memory and evicts keys, or the network between your web servers and Redis blips for three seconds.
When Redis comes back (or when the evicted keys are first requested), every single incoming PHP process discovers a cache miss at roughly the same time. Let us trace what happens.
The Stampede Unfolds
T+0ms: Redis is back. 50 PHP-FPM workers handle incoming requests simultaneously.
T+1ms: All 50 workers call wp_load_alloptions(). All 50 get cache misses. All 50 issue the same SELECT * FROM wp_options WHERE autoload = 'yes' query.
T+5ms: The database now has 50 identical queries competing for the same rows. On a site with a large alloptions set, each query takes maybe 10-20ms. The database connection pool fills up. Query latency climbs.
T+10ms: All 50 workers get their query results and simultaneously call wp_cache_set(). Redis receives 50 SET commands for the same key in rapid succession. This is wasteful but not directly harmful.
T+15ms: Meanwhile, each of those 50 requests also needs user data, post data, taxonomy data, and transients. Each cache group experiences its own stampede. The database is now fielding hundreds of queries that would normally be served from cache.
T+50ms: The next wave of 50 requests arrives. Some of them benefit from the cache writes of the first wave, but some still miss because the first wave has not finished writing yet. The stampede partially repeats.
T+200ms: Database CPU spikes to 100%. Query latency rises from 5ms to 500ms. PHP-FPM workers start stacking up, each waiting on slow database responses. The FPM pool exhausts its available workers. Nginx starts queuing requests. Response times go from 200ms to 5 seconds. Some requests start timing out.
T+2s: If the site does not have enough PHP workers and database connections to absorb the spike, it enters a cascading failure. New requests arrive faster than old ones complete. The queue grows. Health checks start failing. Load balancers may pull the server out of rotation, concentrating even more traffic on the remaining servers.
This entire sequence can happen in under five seconds. The root cause was a momentary cache loss, but the effect was a full site outage.
Why alloptions Makes Stampedes Worse
The alloptions key is particularly dangerous in stampede scenarios because it is requested by every single WordPress request, it is relatively expensive to rebuild (a query that returns hundreds of rows and must be serialized), and it is large (meaning the serialization and network transfer cost is nontrivial). It is the single most stampede-prone cache key in WordPress.
Mitigation Strategies
Cache Locking: The most effective stampede mitigation is lock-based rebuilding. When a cache miss occurs, the first process acquires a lock (typically a Redis key with a short TTL), rebuilds the cache, and releases the lock. Other processes that miss the cache check for the lock and either wait briefly or serve stale data. The Redis Object Cache plugin by Till Kruss implements a form of this.
// Pseudocode for lock-based cache rebuilding.
$value = wp_cache_get( 'alloptions', 'options' );

if ( false === $value ) {
    $lock_key = 'alloptions_lock';
    $lock     = wp_cache_add( $lock_key, 1, 'options', 30 );

    if ( $lock ) {
        // We won the lock: rebuild the cache.
        $value = rebuild_alloptions_from_db();
        wp_cache_add( 'alloptions', $value, 'options' );
        wp_cache_delete( $lock_key, 'options' );
    } else {
        // Another process is rebuilding. Wait briefly, then re-read.
        usleep( 50000 ); // 50 ms
        $value = wp_cache_get( 'alloptions', 'options' );
    }
}
Stale-While-Revalidate: Some advanced cache implementations store the data with a soft TTL and a hard TTL. When the soft TTL expires, one process rebuilds the cache in the background while other processes continue serving the stale (but still valid) data. This eliminates the stampede entirely because there is no window where the data is unavailable.
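A minimal sketch of the soft-TTL idea (illustrative, not the API of any particular plugin): the expiry timestamp travels with the value, so a reader can detect staleness without losing access to the data:

```php
<?php
// Sketch of a soft-TTL wrapper. The hard TTL is whatever the backend
// enforces; the soft TTL is stored alongside the value itself.
function swr_wrap( $value, $soft_ttl ) {
    return array(
        'value'        => $value,
        'soft_expires' => time() + $soft_ttl,
    );
}

function swr_read( $entry, &$needs_refresh ) {
    $needs_refresh = ( time() >= $entry['soft_expires'] );
    return $entry['value']; // stale data is still served
}

$fresh = swr_wrap( 'payload', 60 );
$stale = false;
$v1    = swr_read( $fresh, $stale );   // $v1 = 'payload', $stale = false

$expired = swr_wrap( 'payload', -1 );  // soft TTL already in the past
$v2      = swr_read( $expired, $stale ); // $v2 = 'payload', $stale = true
```

In a real implementation, the one reader that observes $needs_refresh === true would take a rebuild lock, as in the pseudocode above, while every other reader keeps serving the stale value.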
Pre-warming: Before restarting Redis, pre-warm the new instance with critical cache keys. This requires tooling that can dump and restore specific keys or run the cache-building queries proactively.
Connection Limits: Configure your database to handle burst connections gracefully. Set MariaDB/MySQL’s max_connections high enough that a stampede does not exhaust available connections. Use connection pooling (like ProxySQL) to queue excess connections rather than rejecting them.
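As a starting point, the relevant knobs live in the server config (the path is the typical Debian/Ubuntu MariaDB location, and the values are purely illustrative; size them against your PHP-FPM worker counts and available RAM):

```ini
# /etc/mysql/mariadb.conf.d/50-server.cnf -- illustrative values only
[mysqld]
max_connections   = 500
thread_cache_size = 100
wait_timeout      = 60
```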
wp_cache_add_global_groups() and Multisite Cache Isolation
In a multisite WordPress network, cache isolation between sites is not optional. Without it, Site A could read Site B’s options, posts, or user meta from cache, producing data leaks and corruption. The mechanism that prevents this is the blog prefix system we discussed earlier, combined with the global groups registry.
wp_cache_add_global_groups() accepts an array of group names and marks them as global. The function is called in wp_start_object_cache() with core’s default global groups, and plugins can call it to register additional global groups.
function wp_cache_add_global_groups( $groups ) {
    global $wp_object_cache;

    if ( is_callable( array( $wp_object_cache, 'add_global_groups' ) ) ) {
        $wp_object_cache->add_global_groups( $groups );
    }
}
The implementation inside the cache class typically just adds the group names to a lookup array:
public function add_global_groups( $groups ) {
    $groups = (array) $groups;

    $groups              = array_fill_keys( $groups, true );
    $this->global_groups = array_merge( $this->global_groups, $groups );
}
When Global Groups Go Wrong
A common multisite bug occurs when a persistent cache drop-in fails to implement add_global_groups() or implements it incorrectly. In that situation, user data gets scoped per-site. A user logged into Site A might not appear logged in on Site B because their user meta is cached under different keys on each site. Or, conversely, if global groups are applied too broadly, site-specific data like options or posts might leak between sites.
Another subtle issue arises with switch_to_blog(). When WordPress switches the current blog context (common in multisite themes and plugins), the cache class must update its $blog_prefix to match the new blog. The default class handles this in its switch_to_blog() method:
public function switch_to_blog( $blog_id ) {
    $blog_id           = (int) $blog_id;
    $this->blog_prefix = $blog_id . ':';
}
Custom drop-ins must replicate this behavior. If they do not, switch_to_blog() calls will not actually switch the cache context, and cross-site data contamination occurs silently.
Practical Multisite Cache Key Example
Consider a multisite network where Site 1 is the main site and Site 7 is a subsite. When Site 7 handles a request:
- wp_cache_get( 'alloptions', 'options' ) internally looks up key 7:alloptions in group options
- wp_cache_get( 3, 'users' ) internally looks up key 3 in group users (no blog prefix, because users is a global group)
- wp_cache_get( 42, 'posts' ) internally looks up key 7:42 in group posts
When the same request calls switch_to_blog( 1 ) and then wp_cache_get( 'alloptions', 'options' ), the key becomes 1:alloptions. The user cache lookups remain unchanged because users are global.
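Those lookups can be reproduced with a few lines that mirror the prefixing rules described earlier (a standalone sketch, not the core class):

```php
<?php
// Standalone re-implementation of the prefixing rules described above.
$global_groups = array( 'users' => true, 'usermeta' => true );
$blog_prefix   = '7:'; // current site is blog 7

function cache_key( $key, $group, $blog_prefix, $global_groups ) {
    if ( isset( $global_groups[ $group ] ) ) {
        return (string) $key;       // global groups skip the prefix
    }
    return $blog_prefix . $key;     // site-scoped groups get "blogid:"
}

echo cache_key( 'alloptions', 'options', $blog_prefix, $global_groups ), "\n"; // 7:alloptions
echo cache_key( 3, 'users', $blog_prefix, $global_groups ), "\n";              // 3
echo cache_key( 42, 'posts', $blog_prefix, $global_groups ), "\n";             // 7:42

$blog_prefix = '1:'; // after switch_to_blog( 1 )
echo cache_key( 'alloptions', 'options', $blog_prefix, $global_groups ), "\n"; // 1:alloptions
```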
Cache Warming and Pre-Population Strategies
Cache warming is the practice of proactively populating the object cache before real traffic needs the data. This is most important after a cache flush, a Redis restart, a deployment, or a server scaling event that adds new nodes.
The Naive Approach: Crawl the Site
The simplest warming strategy is to send HTTP requests to your own site immediately after a cache-clearing event. Each request forces WordPress to boot, miss the cache, query the database, and write the results back to cache. After enough requests hit enough pages, the most frequently used cache keys are warm.
#!/bin/bash
# Simple cache warming script
SITE_URL="https://example.com"
# Warm the homepage (primes alloptions, site options, menus, widgets)
curl -s -o /dev/null "$SITE_URL/"
# Warm key pages
curl -s -o /dev/null "$SITE_URL/shop/"
curl -s -o /dev/null "$SITE_URL/blog/"
curl -s -o /dev/null "$SITE_URL/wp-admin/" -b cookies.txt
# Warm popular posts
for slug in "top-post-1" "top-post-2" "top-post-3"; do
    curl -s -o /dev/null "$SITE_URL/$slug/"
done
This approach works but is slow and imprecise. You warm whatever data each page needs, but you do not control the order or prioritize critical keys.
The Targeted Approach: Direct Cache Priming
A more surgical strategy is to write a WP-CLI command or mu-plugin that directly primes specific cache keys. For alloptions, you can simply call wp_load_alloptions(). For post caches, you can call _prime_post_caches() with arrays of post IDs. For term caches, _prime_term_caches().
// WP-CLI command to warm critical caches.
if ( defined( 'WP_CLI' ) && WP_CLI ) {
    WP_CLI::add_command( 'cache warm', function () {
        // Prime alloptions.
        wp_load_alloptions();
        WP_CLI::log( 'Primed alloptions.' );

        // Prime recent posts.
        $posts = get_posts( array(
            'numberposts'            => 100,
            'post_type'              => 'post',
            'post_status'            => 'publish',
            'update_post_meta_cache' => true,
            'update_post_term_cache' => true,
        ) );
        WP_CLI::log( 'Primed ' . count( $posts ) . ' posts with meta and terms.' );

        // Prime user data for active users.
        $users = get_users( array(
            'number'  => 50,
            'orderby' => 'ID',
            'order'   => 'DESC',
        ) );
        WP_CLI::log( 'Primed ' . count( $users ) . ' user records.' );

        // Prime nav menus.
        $locations = get_nav_menu_locations();
        foreach ( $locations as $location => $menu_id ) {
            wp_get_nav_menu_items( $menu_id );
        }
        WP_CLI::log( 'Primed ' . count( $locations ) . ' navigation menus.' );

        WP_CLI::success( 'Cache warming complete.' );
    } );
}
Multisite Warming
On multisite, cache warming gets more involved because each site has its own set of cache keys. A thorough warming script must iterate over sites:
if ( is_multisite() ) {
    $sites = get_sites( array( 'number' => 0 ) );

    foreach ( $sites as $site ) {
        switch_to_blog( $site->blog_id );
        wp_load_alloptions();
        // Prime other site-specific caches...
        restore_current_blog();

        WP_CLI::log( "Warmed site {$site->blog_id}: {$site->domain}{$site->path}" );
    }
}
Warming on Deploy
The best place to trigger cache warming is in your deployment pipeline. After deploying new code and flushing the cache (which many deployment tools do automatically), run the warming script before opening the server to traffic. If you use a load balancer, keep the new server out of the pool until warming completes, then add it.
Monitoring Cache Hit Ratios and Diagnosing Cache Churn
A persistent object cache is only valuable if it actually serves hits. If your cache hit ratio is below 90%, something is wrong. If it is below 70%, the cache may be causing more harm than good due to the overhead of serialization, network round-trips, and connection management.
Measuring Hit Ratio
The WP_Object_Cache class tracks hits and misses in its $cache_hits and $cache_misses properties. You can access these through the global object:
// Add to footer or debug bar
global $wp_object_cache;
$hits = $wp_object_cache->cache_hits;
$misses = $wp_object_cache->cache_misses;
$total = $hits + $misses;
$ratio = $total > 0 ? round( ( $hits / $total ) * 100, 2 ) : 0;
error_log( "Object cache: {$hits} hits, {$misses} misses, {$ratio}% hit ratio" );
For Redis specifically, the INFO stats command gives you server-level hit and miss counters:
$ redis-cli INFO stats | grep keyspace
keyspace_hits:48293741
keyspace_misses:2847392
This gives you a global view across all applications sharing the Redis instance. For WordPress-specific metrics, use the per-request counters.
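Turning those two counters into a ratio is worth scripting for a dashboard. A sketch, using the sample output above as the input string:

```php
<?php
// Parse the two counters out of `redis-cli INFO stats` output and compute
// the server-wide hit ratio. $info holds the sample output shown above.
$info = "keyspace_hits:48293741\nkeyspace_misses:2847392\n";

preg_match( '/keyspace_hits:(\d+)/', $info, $h );
preg_match( '/keyspace_misses:(\d+)/', $info, $m );

$hits   = (int) $h[1];
$misses = (int) $m[1];
$ratio  = $hits / ( $hits + $misses ) * 100;

printf( "hit ratio: %.1f%%\n", $ratio ); // ~94.4% for the sample counters
```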
What Causes Low Hit Ratios
Cache Churn: Plugins or code that writes to the cache too frequently. Every write invalidates the previous value and forces the next read to deserialize fresh data. If a plugin calls wp_cache_set() in a loop on every request, it churns the cache without benefit.
Oversized Keys: If cache values are very large, they are more expensive to store and retrieve. They also consume disproportionate memory, potentially causing evictions of other keys. The alloptions key is the prime offender here, but large transients and serialized post meta arrays also contribute.
Short TTLs: Transients with very short expiration times expire before they can serve many hits. If a transient expires every 60 seconds but is only read once per second, the best possible hit ratio for that key is roughly 59/60. Multiply this across hundreds of short-lived transients and the aggregate hit ratio drops.
Memory Pressure: If Redis or Memcached does not have enough memory, it evicts keys using its eviction policy (typically LRU). Evicted keys register as misses on the next read. If your cache instance is constantly at its memory limit, critical keys may be evicted to make room for less important ones.
Diagnosing Churn with Redis MONITOR
The MONITOR command in Redis streams every command the server processes in real time. This is extremely useful for identifying which keys are being written too frequently:
$ redis-cli MONITOR | grep -E "SET|DEL" | head -100
Look for keys that appear repeatedly within short time windows. If you see the same key being SET dozens of times per second, something is churning it. Trace the key name back to its WordPress cache group and key to find the responsible code.
Be careful with MONITOR in production. It adds overhead and can generate enormous output on busy servers. Use it briefly, capture the output, and analyze it offline.
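Rather than eyeballing the live stream, capture a slice of it to a file and aggregate it offline. The pipeline below ranks keys by write frequency; the sample log mimics MONITOR's line format, where double quotes delimit the command and its arguments:

```shell
# Build a tiny sample of MONITOR output (real logs come from
# `redis-cli MONITOR > monitor.log`, captured briefly).
cat > monitor.log <<'EOF'
1700000000.000001 [0 127.0.0.1:51234] "SET" "wp_:options:alloptions" "payload"
1700000000.000002 [0 127.0.0.1:51234] "GET" "wp_:options:alloptions"
1700000000.000003 [0 127.0.0.1:51235] "SET" "wp_:options:alloptions" "payload"
1700000000.000004 [0 127.0.0.1:51236] "SET" "wp_:transient:foo" "x"
EOF

# Double quotes delimit MONITOR fields: $2 is the command, $4 the key.
# Rank keys by how often they are written.
awk -F'"' '$2 == "SET" || $2 == "DEL" { print $4 }' monitor.log \
  | sort | uniq -c | sort -rn | head -10
```

For the sample log this puts wp_:options:alloptions first with two writes; on a churning production capture, the top entries point straight at the responsible cache keys.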
Redis Memory Analysis
To understand memory distribution across cache keys, use redis-cli --bigkeys for a quick scan of the largest keys:
$ redis-cli --bigkeys
# Scanning the entire keyspace to find biggest keys
...
[00.00%] Biggest string found so far '"wp_:options:alloptions"' with 487232 bytes
[12.44%] Biggest string found so far '"wp_:transient:feed_ac0..."' with 892441 bytes
This output immediately tells you which cache entries consume the most memory. If a transient holding an RSS feed parse result is consuming 900 KB, that is a candidate for optimization or removal.
For more detailed analysis, tools like redis-rdb-tools can parse a Redis RDB dump and produce per-key memory reports, sorted by size, with namespace breakdowns.
The Persistent Object Cache Drop-In Architecture
The drop-in architecture is one of WordPress’s oldest extension mechanisms. Unlike plugins (which use hooks and filters), drop-ins replace core functionality entirely by occupying predefined filenames in the wp-content directory. The object cache drop-in (object-cache.php) is the most widely used drop-in, but others exist: advanced-cache.php (page caching), db.php (database class replacement), maintenance.php (custom maintenance mode), and several more.
How Drop-Ins Differ from Plugins
Drop-ins load before plugins. In the WordPress boot sequence, wp_start_object_cache() runs in wp-settings.php before wp_get_active_and_valid_plugins() is called. This means the object cache drop-in cannot use any plugin APIs, WordPress hooks, or theme functions. It operates in a minimal environment where only the most basic WordPress constants and a handful of core files are available.
This early loading is what makes drop-ins powerful. The object cache must be available before anything else because virtually every WordPress function relies on it. But it also makes drop-ins fragile. A fatal error in object-cache.php crashes the entire site with no graceful fallback. There is no deactivation UI. You must delete or rename the file via FTP, SSH, or file manager to recover.
What a Drop-In Must Implement
WordPress core expects the following functions to be available after including the drop-in. These are not methods on a class but standalone functions that wrap the underlying cache implementation:
- wp_cache_init() – Initialize the cache. Called by wp_start_object_cache().
- wp_cache_get( $key, $group = '', $force = false, &$found = null ) – Retrieve a value.
- wp_cache_set( $key, $data, $group = '', $expire = 0 ) – Store a value.
- wp_cache_add( $key, $data, $group = '', $expire = 0 ) – Store only if the key does not exist.
- wp_cache_replace( $key, $data, $group = '', $expire = 0 ) – Store only if the key exists.
- wp_cache_delete( $key, $group = '' ) – Remove a value.
- wp_cache_flush() – Remove all values.
- wp_cache_add_global_groups( $groups ) – Register global cache groups.
- wp_cache_add_non_persistent_groups( $groups ) – Register groups that should not persist.
- wp_cache_switch_to_blog( $blog_id ) – Switch the blog context for multisite.
- wp_cache_close() – Clean up connections. Called at shutdown.
A minimal drop-in skeleton looks like this:
// wp-content/object-cache.php
function wp_cache_init() {
global $wp_object_cache;
$wp_object_cache = new My_Custom_Cache();
}
function wp_cache_get( $key, $group = '', $force = false, &$found = null ) {
global $wp_object_cache;
return $wp_object_cache->get( $key, $group, $force, $found );
}
function wp_cache_set( $key, $data, $group = '', $expire = 0 ) {
global $wp_object_cache;
return $wp_object_cache->set( $key, $data, $group, $expire );
}
function wp_cache_add( $key, $data, $group = '', $expire = 0 ) {
global $wp_object_cache;
return $wp_object_cache->add( $key, $data, $group, $expire );
}
// ... remaining functions follow the same delegation pattern
class My_Custom_Cache {
private $local_cache = array();
private $global_groups = array();
private $non_persistent_groups = array();
private $blog_prefix = '';
public function __construct() {
// Connect to Redis, Memcached, etc.
$this->blog_prefix = is_multisite() ? get_current_blog_id() . ':' : '';
}
public function get( $key, $group = 'default', $force = false, &$found = null ) {
$internal_key = $this->build_key( $key, $group );
// Check local (per-request) cache first
if ( ! $force && isset( $this->local_cache[ $group ][ $internal_key ] ) ) {
$found = true;
return $this->local_cache[ $group ][ $internal_key ];
}
// Skip external lookup for non-persistent groups
if ( isset( $this->non_persistent_groups[ $group ] ) ) {
$found = false;
return false;
}
// Fetch from external backend
$value = $this->fetch_from_backend( $internal_key );
if ( $value !== false ) {
$found = true;
$this->local_cache[ $group ][ $internal_key ] = $value;
return $value;
}
$found = false;
return false;
}
// ... set, add, delete, flush methods
private function build_key( $key, $group ) {
// Include the group in the key so that identical keys in different
// groups do not collide in the shared backend
if ( isset( $this->global_groups[ $group ] ) ) {
return $group . ':' . $key;
}
return $this->blog_prefix . $group . ':' . $key;
}
}
Popular Drop-In Implementations
The most widely deployed persistent cache drop-ins in the WordPress ecosystem are:
Redis Object Cache (by Till Kruss): The most popular Redis-backed drop-in. It supports key prefixing, global groups, non-persistent groups, connection timeouts, retry logic, and selective flushing. It stores PHP values in Redis using serialize()/unserialize() by default but supports igbinary serialization for better performance.
Memcached Object Cache (by Automattic): Used at scale on WordPress.com VIP and similar platforms. It uses the PHP Memcached extension (not the older Memcache extension; the two are distinct). It supports consistent hashing across multiple Memcached servers, automatic failover, and the binary protocol for efficiency.
APCu Object Cache Backend: Uses PHP’s APCu shared memory cache. Faster than Redis or Memcached for single-server setups because there is no network overhead. However, it does not work in multi-server environments because APCu’s shared memory is local to each server: every web server would hold its own independent, divergent cache.
WP Redis (by Pantheon): A Redis drop-in optimized for the Pantheon hosting platform but usable elsewhere. It includes cache key salting for deployments and built-in support for the WP_REDIS_USE_CACHE_GROUPS constant.
The $found Parameter
One subtle but important detail in the cache API is the $found parameter on wp_cache_get(). This is a pass-by-reference boolean that indicates whether the key was actually found in the cache. Why is this necessary when the function returns false on a miss?
Because false is a legitimate cached value. If you cache false for a key and later retrieve it, the return value is false. Without the $found parameter, you cannot distinguish between “the key exists and its value is false” and “the key does not exist.” This distinction matters for negative caching patterns and option lookups.
$value = wp_cache_get( 'might_be_false', 'my_group', false, $found );
if ( $found ) {
// Key exists. $value might be false, but that's the real cached value.
return $value;
} else {
// Key does not exist. Rebuild and cache it.
$value = expensive_computation();
wp_cache_set( 'might_be_false', $value, 'my_group', 3600 );
return $value;
}
Drop-in authors must implement this correctly. If the $found parameter is always set to true or always ignored, subtle caching bugs will appear in core and plugin code that relies on it.
Practical Code Examples for Cache Management Patterns
The following patterns represent battle-tested approaches to working with the WordPress object cache in production environments. Each pattern addresses a specific real-world scenario.
Pattern 1: Lazy-Loading with Cache
The most common caching pattern wraps an expensive operation with a cache check:
function get_popular_posts( $count = 10 ) {
$cache_key = 'popular_posts_' . $count;
$cached = wp_cache_get( $cache_key, 'my_plugin' );
if ( false !== $cached ) {
return $cached;
}
$posts = new WP_Query( array(
'posts_per_page' => $count,
'meta_key' => 'post_views_count',
'orderby' => 'meta_value_num',
'order' => 'DESC',
'post_type' => 'post',
'post_status' => 'publish',
'no_found_rows' => true,
) );
$result = $posts->posts;
wp_cache_set( $cache_key, $result, 'my_plugin', HOUR_IN_SECONDS );
return $result;
}
Key details: the function uses HOUR_IN_SECONDS (3600) as the TTL, the cache group is plugin-specific to avoid key collisions, and no_found_rows is set to true to avoid the extra SQL_CALC_FOUND_ROWS overhead since we do not need pagination data.
Pattern 2: Cache Invalidation on Data Change
Caching is easy. Invalidation is hard. The safest approach is to delete the cache key whenever the underlying data changes:
// Invalidate when a post is published, updated, or deleted
add_action( 'save_post', 'invalidate_popular_posts_cache' );
add_action( 'deleted_post', 'invalidate_popular_posts_cache' );
add_action( 'trashed_post', 'invalidate_popular_posts_cache' );
function invalidate_popular_posts_cache( $post_id ) {
if ( wp_is_post_revision( $post_id ) ) {
return;
}
// Delete all variants of the cached query
$counts = array( 5, 10, 20 );
foreach ( $counts as $count ) {
wp_cache_delete( 'popular_posts_' . $count, 'my_plugin' );
}
}
The revision check prevents unnecessary invalidation when WordPress auto-saves drafts. The loop handles multiple cache key variants that the lazy-loading function might create with different $count values.
Pattern 3: Cache Key Versioning
Sometimes you need to invalidate many related cache keys at once without knowing all their names. Cache key versioning solves this by incorporating a version number into the key. Incrementing the version effectively invalidates all keys in the group:
function get_cache_version( $namespace ) {
$version = wp_cache_get( $namespace . '_version', 'my_plugin' );
if ( false === $version ) {
$version = 1;
wp_cache_set( $namespace . '_version', $version, 'my_plugin' );
}
return $version;
}
function bump_cache_version( $namespace ) {
if ( false === wp_cache_incr( $namespace . '_version', 1, 'my_plugin' ) ) {
// The version key was evicted or never set; reset to a fresh value
// so that previously written versioned keys are still invalidated
wp_cache_set( $namespace . '_version', time(), 'my_plugin' );
}
}
function get_user_stats( $user_id ) {
$version = get_cache_version( 'user_stats' );
$cache_key = "user_stats_{$user_id}_v{$version}";
$cached = wp_cache_get( $cache_key, 'my_plugin' );
if ( false !== $cached ) {
return $cached;
}
$stats = compute_expensive_user_stats( $user_id );
wp_cache_set( $cache_key, $stats, 'my_plugin', DAY_IN_SECONDS );
return $stats;
}
// When user stats data changes, bump the version
// This invalidates ALL user_stats cache keys at once
add_action( 'my_plugin_stats_updated', function() {
bump_cache_version( 'user_stats' );
});
The old versioned keys still exist in the cache but will never be read again. They expire naturally when their TTL passes. This pattern trades a small amount of memory waste for the ability to perform mass invalidation without iterating over keys.
Pattern 4: wp_cache_add() for Race-Free Initialization
The wp_cache_add() function only writes if the key does not already exist. This is the correct function to use when you want to set a default value without overwriting a value that another process may have already set:
function initialize_daily_counter() {
$today = date( 'Y-m-d' );
$key = 'daily_counter_' . $today;
// add() is atomic in Redis/Memcached.
// If another request already initialized this, add() returns false
// and the existing value is preserved.
wp_cache_add( $key, 0, 'my_plugin_counters', DAY_IN_SECONDS );
}
function increment_daily_counter() {
$today = date( 'Y-m-d' );
$key = 'daily_counter_' . $today;
initialize_daily_counter();
// incr() is also atomic, safe for concurrent access
return wp_cache_incr( $key, 1, 'my_plugin_counters' );
}
The atomicity of wp_cache_add() in Redis and Memcached backends makes this pattern safe under concurrent access. Two requests hitting this code simultaneously will not overwrite each other’s counter initialization.
Pattern 5: Preloading Related Data
WordPress core includes several internal functions for batch-preloading cache data. Understanding and using these can dramatically reduce query counts:
function render_post_list( $post_ids ) {
// Prime all post caches in a single query
_prime_post_caches( $post_ids, true, true );
// Now individual get_post() calls hit cache instead of DB
foreach ( $post_ids as $post_id ) {
$post = get_post( $post_id );
// Render the post...
}
}
// For custom queries where you have post IDs but need meta and terms:
function prime_custom_query_caches( $post_ids ) {
// This function runs:
// 1. A single query for all post objects
// 2. A single query for all term relationships
// 3. A single query for all post meta
_prime_post_caches( $post_ids, true, true );
// The second parameter (true) enables term cache priming
// The third parameter (true) enables post meta priming
}
// For user data:
function prime_user_caches_for_comments( $comments ) {
$user_ids = array_unique( array_filter( wp_list_pluck( $comments, 'user_id' ) ) );
if ( ! empty( $user_ids ) ) {
// Prime user caches in batch
cache_users( $user_ids );
}
}
The _prime_post_caches() function is used internally by WP_Query but can be called directly when you have post IDs from a custom source (like a search engine or an external API). The cache_users() function similarly batch-loads user data.
Pattern 6: Transient API vs. Direct Object Cache
WordPress offers two caching APIs: the Transients API (set_transient()/get_transient()) and the Object Cache API (wp_cache_set()/wp_cache_get()). Understanding when to use each is critical.
When a persistent object cache is active, set_transient() stores data in the object cache, not in the database. The transient functions are wrappers that check wp_using_ext_object_cache() and route accordingly. With a persistent cache, set_transient( 'foo', $data, 3600 ) is functionally equivalent to wp_cache_set( 'foo', $data, 'transient', 3600 ).
Without a persistent object cache, transients fall back to the wp_options table. The transient value is stored as an option with the name _transient_foo, and the expiration is stored in a companion option _transient_timeout_foo. This means every transient operation becomes a database read/write.
The practical guidance:
- Use set_transient() when the data must survive beyond a single request even on sites without a persistent cache. Transients in the database are slower than transients in Redis, but they still avoid recomputation.
- Use wp_cache_set() when the data is only useful within the current request or when you know a persistent cache is available. Data stored with wp_cache_set() on a site without a persistent cache vanishes at the end of the request.
- Avoid set_transient() with no expiration on sites without persistent caches. Non-expiring transients in the options table are never garbage-collected by WordPress and accumulate indefinitely. This bloats the wp_options table and degrades autoload performance.
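A transient-backed version of the lazy-loading pattern from Pattern 1 is sketched below. The stand-in definitions and the fetch_exchange_rates() data source are illustrative only, so the sketch runs outside WordPress; on a real site the function_exists() guard is false and core’s transient functions are used:

```php
// Stand-ins so this sketch runs outside WordPress. On a real site the
// function_exists() guard is false and core's implementations are used.
if ( ! function_exists( 'get_transient' ) ) {
	define( 'HOUR_IN_SECONDS', 3600 );
	$GLOBALS['demo_transients'] = array();
	function get_transient( $key ) {
		return $GLOBALS['demo_transients'][ $key ] ?? false;
	}
	function set_transient( $key, $value, $expiration = 0 ) {
		$GLOBALS['demo_transients'][ $key ] = $value;
		return true;
	}
}

// Hypothetical expensive data source, for illustration only.
function fetch_exchange_rates() {
	static $calls = 0;
	$calls++;
	return array( 'EUR' => 0.92, 'GBP' => 0.79, 'api_calls' => $calls );
}

function get_exchange_rates() {
	$cached = get_transient( 'my_plugin_exchange_rates' );
	if ( false !== $cached ) {
		return $cached; // Served from the object cache or wp_options
	}
	$rates = fetch_exchange_rates();
	// Always pass an expiration: a transient stored without one is
	// autoloaded and never garbage-collected on database-backed sites.
	set_transient( 'my_plugin_exchange_rates', $rates, HOUR_IN_SECONDS );
	return $rates;
}
```

The shape is identical to the object-cache version; the transient layer simply adds the database fallback for sites without a persistent cache.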
Pattern 7: Monitoring Cache Performance in Production
For production monitoring, log cache statistics to your observability stack on every request:
add_action( 'shutdown', function() {
if ( ! function_exists( 'wp_cache_get' ) ) {
return;
}
global $wp_object_cache;
if ( ! is_object( $wp_object_cache ) ) {
return;
}
$stats = array(
'hits' => $wp_object_cache->cache_hits,
'misses' => $wp_object_cache->cache_misses,
'ratio' => 0,
);
$total = $stats['hits'] + $stats['misses'];
if ( $total > 0 ) {
$stats['ratio'] = round( ( $stats['hits'] / $total ) * 100, 2 );
}
// Send to StatsD, Datadog, New Relic, or custom logging
if ( function_exists( 'statsd_gauge' ) ) {
statsd_gauge( 'wordpress.cache.hits', $stats['hits'] );
statsd_gauge( 'wordpress.cache.misses', $stats['misses'] );
statsd_gauge( 'wordpress.cache.ratio', $stats['ratio'] );
}
// Or log to error_log for simple setups
if ( defined( 'WP_DEBUG' ) && WP_DEBUG ) {
error_log( sprintf(
'Object cache: %d hits, %d misses, %.1f%% ratio, URI: %s',
$stats['hits'],
$stats['misses'],
$stats['ratio'],
$_SERVER['REQUEST_URI'] ?? 'cli'
) );
}
});
Track these metrics over time. A sudden drop in hit ratio often indicates a plugin update that changed caching behavior, a configuration change that affected cache TTLs, or a memory pressure issue causing evictions. A steadily declining ratio might indicate growing data volume that is outpacing cache memory allocation.
Pattern 8: Group-Based Flush
WordPress core’s wp_cache_flush() clears the entire cache. There is no built-in mechanism to flush a single group. However, some persistent cache backends support group-based flushing through custom methods:
// Prefer a real group flush when the backend supports it (WP 6.1+)
if ( function_exists( 'wp_cache_supports' ) && wp_cache_supports( 'flush_group' ) ) {
wp_cache_flush_group( 'my_plugin' );
} else {
// Fallback: use cache key versioning to simulate a group flush
bump_cache_version( 'my_plugin' );
}
WordPress 6.1 introduced wp_cache_flush_group() as a formal part of the cache API, but it only works if the underlying drop-in supports it. The function wp_cache_supports( 'flush_group' ) tells you whether the current backend has this capability.
Pattern 9: Handling Cache Failures Gracefully
A persistent cache is an external dependency. Like any external dependency, it can fail. Your code should handle cache failures without crashing:
function get_data_with_fallback( $key, $group, $rebuild_callback ) {
// Attempt to read from cache
$found = false;
$value = wp_cache_get( $key, $group, false, $found );
if ( $found ) {
return $value;
}
// Cache miss or cache failure. Rebuild.
$value = call_user_func( $rebuild_callback );
// Attempt to write to cache. If this fails (Redis down, etc.),
// we still return the freshly computed value.
wp_cache_set( $key, $value, $group, HOUR_IN_SECONDS );
return $value;
}
// Usage:
$posts = get_data_with_fallback(
'featured_posts',
'my_plugin',
function() {
return get_posts( array(
'numberposts' => 5,
'meta_key' => '_featured',
'meta_value' => '1',
) );
}
);
This pattern ensures that a Redis outage degrades performance (more database queries) but does not cause errors. The site stays functional, just slower. This graceful degradation is essential for production WordPress sites.
Pattern 10: Debugging Cache Contents
When troubleshooting cache issues, you often need to inspect what is actually stored:
// Dump all cache groups and their key counts (default WP_Object_Cache only)
function debug_cache_contents() {
global $wp_object_cache;
if ( ! isset( $wp_object_cache->cache ) ) {
// Persistent cache drop-in may not expose internal cache array
return 'Cache internals not accessible.';
}
$report = array();
foreach ( $wp_object_cache->cache as $group => $keys ) {
$report[ $group ] = array(
'key_count' => count( $keys ),
'total_size' => strlen( serialize( $keys ) ),
'keys' => array_keys( $keys ),
);
}
// Sort by total size descending
uasort( $report, function( $a, $b ) {
return $b['total_size'] - $a['total_size'];
});
return $report;
}
// For Redis, inspect keys directly:
// redis-cli KEYS "wp_:options:*" | head -20
// redis-cli DEBUG OBJECT "wp_:options:alloptions"
// redis-cli MEMORY USAGE "wp_:options:alloptions"
On the Redis side, the MEMORY USAGE command (available in Redis 4.0+) tells you exactly how many bytes a specific key consumes, including overhead. This is more accurate than STRLEN because it accounts for Redis’s internal data structure overhead.
Advanced Topics: Cache Coherency and Atomic Operations
Beyond basic get/set patterns, several advanced concerns affect production WordPress caching at scale.
Cache Coherency Across Web Servers
When multiple web servers share a single Redis or Memcached instance, cache coherency is handled by the centralized backend. All servers read from and write to the same store, so a cache write on Server A is immediately visible to Server B. This is one of the primary reasons to use a persistent external cache rather than a local in-process cache like APCu in multi-server deployments.
However, each PHP process also maintains its own local (in-memory) copy of cached values for the duration of the request. This means that if Server A updates a cache key after Server B has already read it during the same request, Server B will use the stale local copy until its request ends. This is generally acceptable because a single HTTP request typically completes in under a second, and the stale data is at most one request old.
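When per-request staleness is unacceptable for a specific read, the $force parameter on wp_cache_get() bypasses the request-local copy and re-reads from the backend, as in the drop-in get() sketched earlier. The minimal stand-in below mirrors that logic to show the effect; the class name and properties are illustrative, not the real WP_Object_Cache:

```php
// Minimal stand-in mirroring the drop-in get() shown earlier; the class
// name and properties are illustrative, not the real WP_Object_Cache.
class Demo_Cache {
	private $local  = array(); // per-request copy
	public $backend = array(); // simulates the shared Redis/Memcached store

	public function get( $key, $force = false ) {
		if ( ! $force && isset( $this->local[ $key ] ) ) {
			return $this->local[ $key ]; // may be stale within this request
		}
		$value = $this->backend[ $key ] ?? false;
		if ( false !== $value ) {
			$this->local[ $key ] = $value;
		}
		return $value;
	}
}

$cache = new Demo_Cache();
$cache->backend['inventory_count'] = 10;
$a = $cache->get( 'inventory_count' );       // 10, now cached locally
$cache->backend['inventory_count'] = 7;      // another server writes
$b = $cache->get( 'inventory_count' );       // still 10: stale local copy
$c = $cache->get( 'inventory_count', true ); // 7: $force re-reads the backend
```

Use forced reads sparingly: each one pays the full backend round-trip that the local copy exists to avoid.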
Atomic Increment and Decrement
The wp_cache_incr() and wp_cache_decr() functions perform atomic increment and decrement operations. In Redis and Memcached, these map to the native INCR/DECR commands, which are guaranteed atomic even under concurrent access. The default PHP-array cache implements these with simple arithmetic, which is safe because a single PHP process is single-threaded.
These functions are the correct way to implement counters, rate limiters, and similar patterns:
function rate_limit_check( $user_id, $max_per_minute = 60 ) {
$key = 'rate_limit_' . $user_id;
$group = 'rate_limiting';
// add() is atomic: exactly one concurrent request creates the counter
if ( wp_cache_add( $key, 1, $group, 60 ) ) {
return true; // First request this minute
}
// incr() is atomic, so concurrent requests cannot lose counts
$current = wp_cache_incr( $key, 1, $group );
return $current <= $max_per_minute;
}
The 60-second TTL creates a fixed window that resets automatically when the key expires. Because the counter operations are atomic, two simultaneous requests will not produce a race condition where the count is incremented by only 1 instead of 2.
The wp_using_ext_object_cache() Function
Sometimes you need to know at runtime whether a persistent object cache is active. The function wp_using_ext_object_cache() returns true if a drop-in is loaded and false if the default in-memory cache is in use. You can also pass a boolean to force-set the state, which is useful in testing:
if ( wp_using_ext_object_cache() ) {
// Persistent cache available.
// Transients go to object cache, not database.
// Cache survives across requests.
} else {
// No persistent cache.
// Transients go to wp_options.
// wp_cache_set() data dies at end of request.
}
This function is particularly useful when deciding between transients and direct cache operations, or when implementing fallback behavior for sites without persistent caching.
Performance Characteristics and Benchmarks
Understanding the relative performance of different cache backends helps you make informed infrastructure decisions.
Redis vs. Memcached
Both Redis and Memcached are excellent choices for WordPress persistent caching. The performance difference between them for simple key-value operations is minimal. In benchmarks, both handle hundreds of thousands of operations per second on modern hardware with sub-millisecond latency.
The meaningful differences lie in features, not raw speed:
- Data structures: Redis supports lists, sets, sorted sets, hashes, and streams. Memcached only supports simple key-value pairs. For WordPress object caching, this difference rarely matters because WordPress stores serialized PHP values as opaque strings.
- Persistence: Redis can persist data to disk (RDB snapshots and AOF logs). Memcached stores everything in memory only. For WordPress caching, persistence is nice to have (survives Redis restarts) but not essential.
- Eviction policies: Both support LRU eviction, but Redis offers more granular eviction policies (volatile-lru, allkeys-lru, volatile-ttl, noeviction, etc.).
- Replication: Redis has built-in primary-replica replication. Memcached does not; you need a client library that supports consistent hashing across multiple Memcached servers.
- Memory efficiency: Memcached can be slightly more memory-efficient for simple key-value storage because Redis’s data structure overhead adds a few bytes per key.
For most WordPress sites, Redis is the better choice due to its broader feature set, persistence options, and the maturity of the Redis Object Cache plugin.
Local vs. Network Caching Latency
The performance gap between local (same-machine) and network caching is significant:
- PHP array (default WP_Object_Cache): ~0.001ms per operation. Essentially free, but data does not persist.
- APCu (local shared memory): ~0.01ms per operation. 10x slower than a PHP array but still very fast because there is no network round-trip.
- Redis (local socket): ~0.1ms per operation. Using a Unix socket instead of TCP eliminates network overhead.
- Redis (local TCP): ~0.2ms per operation. Connecting over TCP to localhost adds a small overhead.
- Redis (network): ~0.5-2ms per operation. Network latency dominates. Each cache get/set involves a round-trip to the Redis server.
On a WordPress page load that makes 200 cache operations, the difference between local-socket Redis (0.1ms x 200 = 20ms) and network Redis (1ms x 200 = 200ms) is 180ms of page load time. This is why running Redis on the same machine as PHP, connected via Unix socket, is strongly recommended for performance-sensitive sites.
Serialization Format Impact
The PHP serialize()/unserialize() functions are the default serialization mechanism for WordPress object cache values. The igbinary PHP extension offers an alternative serializer that produces smaller output and runs faster.
Benchmarks on typical WordPress data show igbinary producing serialized output 30-50% smaller than PHP’s native serializer, with serialization speed roughly 2x faster. For a site where the cache processes megabytes of data per request, switching to igbinary can reduce both Redis memory consumption and serialization CPU time meaningfully.
The Redis Object Cache plugin supports igbinary through the WP_REDIS_IGBINARY constant:
// wp-config.php
define( 'WP_REDIS_IGBINARY', true );
Putting It All Together: A Production Cache Architecture
Given everything covered above, here is what a well-designed WordPress caching architecture looks like in production.
Infrastructure: Redis runs on the same machine as PHP-FPM, connected via Unix socket. Redis is configured with maxmemory set to a generous limit (512 MB or more for large sites) and maxmemory-policy set to allkeys-lru. RDB persistence is enabled for crash recovery.
Drop-in: The Redis Object Cache plugin is installed, configured with igbinary serialization, and deployed as the object-cache.php drop-in.
wp-config.php:
define( 'WP_REDIS_HOST', '/var/run/redis/redis.sock' );
define( 'WP_REDIS_PORT', 0 ); // 0 for Unix socket
define( 'WP_REDIS_PREFIX', 'wp_mysite_' );
define( 'WP_REDIS_DATABASE', 0 );
define( 'WP_REDIS_TIMEOUT', 1 );
define( 'WP_REDIS_READ_TIMEOUT', 1 );
define( 'WP_REDIS_IGBINARY', true );
define( 'WP_REDIS_MAXTTL', 86400 ); // 24 hours max TTL
Monitoring: Cache hit ratio is tracked per request via the shutdown hook pattern shown earlier. Redis memory usage, connected clients, and evicted keys are monitored via Prometheus or Datadog. Alerts fire if the hit ratio drops below 90% or if evictions exceed a threshold.
Warming: A WP-CLI command runs as part of the deployment pipeline to prime alloptions, navigation menus, and the most-visited pages’ caches before the server accepts production traffic.
alloptions hygiene: A monthly audit query identifies oversized or unnecessary autoloaded options. Plugins that add options with autoload = 'yes' without justification get reported for review.
Stampede protection: The Redis Object Cache plugin’s built-in lock mechanism prevents multiple processes from rebuilding the same key simultaneously. For critical custom cache keys, the application code uses wp_cache_add() as a lock to coordinate rebuilds.
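The add()-as-lock idea mentioned above can be sketched as follows. The demo_cache_* helpers are stand-ins for the wp_cache_* functions so the sketch runs outside WordPress; real code would call wp_cache_add(), wp_cache_set(), and wp_cache_delete() directly:

```php
// Sketch of add()-as-lock stampede protection. The demo_cache_* helpers
// stand in for the wp_cache_* functions so the sketch runs outside WordPress.
$GLOBALS['demo_store'] = array();
function demo_cache_get( $key ) {
	return $GLOBALS['demo_store'][ $key ] ?? false;
}
function demo_cache_set( $key, $value ) {
	$GLOBALS['demo_store'][ $key ] = $value;
}
function demo_cache_add( $key, $value ) {
	if ( isset( $GLOBALS['demo_store'][ $key ] ) ) {
		return false; // Key exists: another process holds the lock
	}
	$GLOBALS['demo_store'][ $key ] = $value;
	return true;
}
function demo_cache_delete( $key ) {
	unset( $GLOBALS['demo_store'][ $key ] );
}

function get_with_stampede_lock( $key, callable $rebuild ) {
	$value = demo_cache_get( $key );
	if ( false !== $value ) {
		return $value;
	}
	// add() is atomic on Redis/Memcached: only one process wins the lock
	if ( demo_cache_add( $key . ':lock', 1 ) ) {
		$value = $rebuild();
		demo_cache_set( $key, $value );
		demo_cache_delete( $key . ':lock' );
		return $value;
	}
	// Lost the race: real code would sleep briefly and re-read, or serve
	// stale data; here we simply rebuild without caching.
	return $rebuild();
}
```

In production the lock key should also carry a short TTL so a crashed rebuilder cannot hold the lock forever.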
This architecture handles thousands of requests per second with sub-200ms response times, gracefully degrades during Redis maintenance windows, and provides the observability needed to detect and diagnose caching issues before they affect users. The WordPress object cache is a deceptively simple system, but getting it right at scale requires attention to every layer discussed in this article, from the boot sequence that loads the drop-in to the monitoring that tells you whether it is actually helping.
Rachel Torres
Senior WordPress developer and core contributor. Specializes in WordPress internals, performance optimization, and PHP best practices. Runs a WordPress consultancy in Austin, Texas.