
Cloudways Autonomous vs. Flexible: A Technical Deep Dive for WordPress Developers

Sarah Kim
45 min read

Why This Comparison Matters

Cloudways has operated as a managed cloud hosting platform since 2012, abstracting away server management for developers who want performance without the sysadmin overhead. For years, the product was straightforward: pick a cloud provider (DigitalOcean, Vultr, AWS, Google Cloud, or Linode), choose your server size, and deploy. That model still exists today under what Cloudways now calls the Flexible plan.

But in late 2022, Cloudways introduced something fundamentally different: the Autonomous plan. Built on Kubernetes, backed by Cloudflare Enterprise, and designed around auto-scaling, Autonomous represents a genuine architectural shift rather than a simple rebrand or pricing change.

The problem? Most comparisons between these two plans focus on pricing tables and feature checklists. That approach fails WordPress developers who need to understand what is actually happening at the infrastructure level. How PHP workers are allocated, how caching layers interact, how database connections scale, and when the cost math actually favors one plan over the other: these are the questions that matter.

This article breaks down both plans from a systems architecture perspective. We will examine how each handles traffic spikes, how caching stacks differ, where the real performance bottlenecks live, and how to model costs for real-world WordPress and WooCommerce workloads. If you manage client sites, run a WooCommerce store, or build WordPress applications that need to handle unpredictable traffic, this comparison will give you the technical foundation to make an informed decision.

Architecture: Kubernetes Pods vs. Traditional VPS

The most significant difference between Autonomous and Flexible is not a feature toggle or a pricing model. It is the underlying infrastructure architecture.

Flexible: The Traditional VPS Model

Cloudways Flexible runs on virtual private servers from your chosen cloud provider. When you spin up a server on DigitalOcean through Cloudways, you are getting a DigitalOcean Droplet with Cloudways’ managed stack installed on top. The same applies to Vultr, Linode, AWS, and Google Cloud.

Each server has fixed resources: a set number of CPU cores, a fixed amount of RAM, and allocated storage. You can host multiple WordPress applications on a single server, but they all share those fixed resources. If one application spikes in traffic and consumes 90% of the server’s RAM, every other application on that server suffers.

The server stack on Flexible follows a classic reverse-proxy pattern:

Client Request
    → Varnish (full-page cache)
        → Nginx (reverse proxy / static files)
            → Apache (PHP processing via mod_php or PHP-FPM)
                → MariaDB / MySQL

This stack is well-understood, battle-tested, and highly configurable. It has also been the standard for managed WordPress hosting for over a decade. The downside is that scaling means either vertical scaling (upgrading to a bigger server) or manually cloning servers and distributing load.

Autonomous: Kubernetes-Based Container Orchestration

Autonomous runs on Kubernetes. Each WordPress application deploys as a containerized workload, with distinct pods handling different functions: web serving, PHP processing, database operations, and caching. The infrastructure layer manages resource allocation dynamically, spinning pods up and down based on actual demand.

Here is a simplified view of how an Autonomous application’s architecture looks:

Client Request
    → Cloudflare Enterprise CDN (edge caching, WAF, DDoS protection)
        → Kubernetes Ingress Controller
            → Web Server Pod(s) (Nginx-based)
                → PHP Worker Pod(s) (PHP-FPM)
                    → Database Pod (MariaDB + optional read replicas)
                    → Redis Pod (Object Cache Pro)

The Kubernetes layer means your application is not tied to a single virtual machine. When traffic spikes, the orchestrator can allocate additional PHP worker pods within seconds. When traffic drops, those pods are deallocated. You do not pay for idle capacity in the same way you do with a fixed VPS.

This distinction has cascading effects on everything from PHP worker allocation to database scaling to caching behavior. Let us examine each layer.

PHP Workers: Fixed Allocation vs. Dynamic Scaling

PHP workers are arguably the most important performance factor in a WordPress hosting environment, and the one most site owners never think about. Each PHP worker handles one request at a time. If your server has 5 PHP workers and 10 requests arrive simultaneously, 5 requests get processed immediately while the other 5 wait in a queue. If the queue fills up, visitors see 503 errors or extremely slow page loads.
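A toy model makes the queueing math concrete. This assumes a uniform service time per request, which real workloads are not, but the shape of the problem is the same:

```python
# Toy model of a PHP worker pool: each worker handles one request at a time.
# Service time is assumed uniform for simplicity (real requests vary widely).

def queue_delay(workers: int, concurrent_requests: int, service_time_s: float) -> float:
    """Wait time for the last request when a burst arrives all at once."""
    if concurrent_requests <= workers:
        return 0.0
    # Requests are served in waves of `workers`; the last request starts
    # only after all earlier waves finish.
    waves_before_last = (concurrent_requests - 1) // workers
    return waves_before_last * service_time_s

# 5 workers, 10 simultaneous requests, 0.5 s per request:
# requests 6-10 wait one full wave before a worker frees up.
print(queue_delay(5, 10, 0.5))  # 0.5
print(queue_delay(5, 5, 0.5))   # 0.0
```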

Flexible: Fixed PHP Worker Pools

On Cloudways Flexible, PHP workers are configured per application but constrained by your server’s total resources. The number of workers you can run depends on your server size, and each worker consumes a fixed amount of RAM (typically 30-60MB per worker for a standard WordPress site, but WooCommerce with heavy plugins can push that to 128MB or more per worker).

You configure PHP-FPM settings through the Cloudways dashboard or directly via SSH:

# Typical PHP-FPM pool configuration on Cloudways Flexible
# Located at: /etc/php/8.1/fpm/pool.d/www.conf

[www]
pm = dynamic
pm.max_children = 10
pm.start_servers = 3
pm.min_spare_servers = 2
pm.max_spare_servers = 5
pm.max_requests = 500

; Memory limit per worker
php_admin_value[memory_limit] = 256M

With the dynamic process manager, PHP-FPM starts with pm.start_servers workers and scales between pm.min_spare_servers and pm.max_spare_servers based on demand, never exceeding pm.max_children. The ceiling is hard. If you need more than 10 concurrent PHP processes, you either reconfigure (risking OOM kills if your server does not have enough RAM) or upgrade to a larger server.

For a 2GB RAM DigitalOcean server running a single WordPress site, a realistic maximum is around 10-15 PHP workers depending on your per-worker memory consumption. A 4GB server might handle 20-30. These numbers drop significantly if you are running WooCommerce with plugins like WooCommerce Subscriptions, WooCommerce Bookings, or heavy page builders.
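Those ceilings follow from simple division. A back-of-envelope sizing sketch, where the reserved-memory figure (what MariaDB, Nginx, Apache, Varnish, and the OS consume) is an assumption you should measure on your own server:

```python
def max_php_workers(server_ram_mb: int, reserved_mb: int, per_worker_mb: int) -> int:
    """Rough ceiling on pm.max_children given the RAM left over for PHP-FPM.

    reserved_mb approximates the rest of the stack's footprint (assumed here).
    """
    available = server_ram_mb - reserved_mb
    return max(available // per_worker_mb, 0)

# 2 GB server, ~1.2 GB reserved for the rest of the stack, 60 MB per worker:
print(max_php_workers(2048, 1200, 60))   # 14
# WooCommerce with heavy plugins at 128 MB per worker on the same server:
print(max_php_workers(2048, 1200, 128))  # 6
```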

Autonomous: Effectively Unlimited PHP Workers

Autonomous takes a fundamentally different approach. Cloudways describes the PHP worker allocation as “unlimited,” which is technically marketing language, but the practical reality is close. Because PHP processing runs in Kubernetes pods that scale horizontally, the system can spin up additional worker pods when concurrent requests increase.

The mechanics work like this:

1. Your application starts with a baseline allocation of PHP worker pods.
2. Kubernetes monitors CPU and memory utilization across those pods.
3. When utilization exceeds a threshold (typically 70-80%), the Horizontal Pod Autoscaler (HPA) triggers, spinning up additional pods.
4. New pods become available within 10-30 seconds.
5. When load decreases, excess pods are terminated after a cooldown period.

For developers, this means you do not need to calculate PHP worker counts or worry about running out of workers during a flash sale or viral traffic spike. The infrastructure handles scaling automatically. The tradeoff is that you surrender granular control over PHP-FPM pool settings, which experienced sysadmins might find limiting.

Practical Impact: WooCommerce During a Flash Sale

Consider a WooCommerce store running a Black Friday promotion. On Flexible with a 4GB server and 20 PHP workers, the site can handle roughly 20 concurrent uncached requests (cart additions, checkout processes, account lookups). If 50 customers hit checkout simultaneously, 30 of them wait. If the queue grows beyond the backlog limit, some customers get errors.

On Autonomous, the same traffic spike triggers pod autoscaling. Within 30 seconds of detecting increased load, additional PHP worker pods spin up. The store handles 50 concurrent checkout processes without queuing. When the rush subsides at 2 AM, those extra pods spin down, and you stop paying for them.

The difference is not theoretical. For traffic-variable sites, this is the core value proposition of Autonomous.

PHP Version Management: Per-Application vs. Server-Wide

This is a detail that matters enormously for agencies and developers managing multiple client sites, and it is often overlooked in comparisons.

Flexible: Server-Wide PHP Version

On Cloudways Flexible, PHP version is configured at the server level. All applications on a given server share the same PHP version. If you want to run one site on PHP 8.1 and another on PHP 8.2, you need two separate servers.

In practice, this creates friction. Suppose you manage 12 client sites across three Cloudways Flexible servers. One client needs PHP 8.0 because their theme breaks on 8.1. Another client wants PHP 8.2 for performance improvements. You end up with servers grouped by PHP version rather than by logical workload, which complicates resource allocation and increases costs.

You can change the PHP version through the Cloudways dashboard under Server Settings, or via the Cloudways API:

# Change PHP version via Cloudways API (Flexible)
# (request shape shown schematically; check the Cloudways API reference
#  for the exact endpoint and payload)
curl -X PUT "https://api.cloudways.com/api/v1/server/{server_id}" \
  -H "Authorization: Bearer {api_token}" \
  -H "Content-Type: application/json" \
  -d '{
    "php_version": "php82"
  }'

This change applies to every application on that server. There is no way around it on Flexible.

Autonomous: Per-Application PHP Version

Because each application on Autonomous runs in its own containerized environment, PHP version is configurable per application. Site A can run PHP 8.0, Site B can run PHP 8.2, and Site C can run PHP 8.3, all under the same account with no server-level conflicts.

This is a direct consequence of the Kubernetes architecture. Each application’s PHP runtime is encapsulated in its own pod, isolated from other applications. Upgrading one application’s PHP version has zero impact on any other application.

For agencies, this eliminates the “PHP version grouping” problem entirely. You manage applications based on client needs, not infrastructure constraints.

Database Architecture: Single Instance vs. Read Replicas

WordPress is notoriously database-heavy. Every page load on a default WordPress installation triggers between 20 and 80 database queries. WooCommerce pushes that number much higher, especially on product archive pages, cart operations, and order processing. Complex queries from plugins like Advanced Custom Fields, WPBakery, or Elementor can add dozens more.

Flexible: Single MariaDB Instance

On Cloudways Flexible, each server runs a single MariaDB instance. All applications on that server share the same database server process. You can (and should) configure MariaDB settings for optimization:

# Key MariaDB settings to tune on Cloudways Flexible
# Accessible via Cloudways Dashboard → Server → Manage Services

[mysqld]
innodb_buffer_pool_size = 1024M    # Set to ~70% of available RAM for DB
innodb_log_file_size = 256M
innodb_flush_log_at_trx_commit = 2  # Slight durability tradeoff for speed
query_cache_type = 0                # Disable query cache (deprecated, use Redis)
max_connections = 150
tmp_table_size = 64M
max_heap_table_size = 64M
join_buffer_size = 4M
sort_buffer_size = 4M

The innodb_buffer_pool_size setting is the single most impactful database tuning parameter. On a 4GB server where you are also running PHP, Nginx, Apache, and Varnish, you might allocate 1-1.5GB to the InnoDB buffer pool. On a dedicated database server, you could go higher.

The constraint here is that reads and writes go to the same instance. During heavy write operations (order processing, bulk imports, cron jobs), read performance degrades because the database engine is handling both workloads on the same CPU and disk I/O channels.

Autonomous: Read Replicas and Redis Object Cache Pro

Autonomous introduces database read replicas and ships with Redis Object Cache Pro as a default integration. This is a significant architectural advantage.

Read replicas work by maintaining one or more copies of your database that handle SELECT queries while the primary instance handles writes (INSERT, UPDATE, DELETE). WordPress’s query pattern is overwhelmingly read-heavy: on most sites, 90-95% of database queries are reads. By directing those reads to replica instances, you dramatically reduce load on the primary database.

The application-level configuration for read replicas typically involves the HyperDB plugin or a similar database abstraction layer:

// Simplified read replica configuration concept
// In practice, Cloudways Autonomous handles this transparently

$wpdb->add_database(array(
    'host'     => 'primary-db.internal',
    'user'     => 'wp_user',
    'password' => 'secure_password',
    'name'     => 'wp_database',
    'write'    => 1,  // Primary handles writes
    'read'     => 1,
));

$wpdb->add_database(array(
    'host'     => 'replica-db.internal',
    'user'     => 'wp_user',
    'password' => 'secure_password',
    'name'     => 'wp_database',
    'write'    => 0,  // Replica handles reads only
    'read'     => 2,  // Higher priority for reads
));

On Autonomous, Cloudways manages the read replica configuration transparently. You do not need to install HyperDB or configure database routing manually. The platform handles query routing at the infrastructure level.
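The routing decision itself is conceptually simple. A minimal sketch of read/write splitting, where the hostnames and the naive SQL inspection are purely illustrative; production routers (HyperDB, ProxySQL, and whatever Cloudways runs internally) handle transactions and replication lag far more carefully:

```python
# Conceptual query router for read/write splitting. Hostnames are hypothetical.

READ_VERBS = ("select", "show", "describe", "explain")

def route(sql: str, in_transaction: bool = False) -> str:
    """Return which database host should run the query."""
    verb = sql.lstrip().split(None, 1)[0].lower()
    # Writes, and reads inside a write transaction, must hit the primary
    # so they see their own uncommitted changes.
    if in_transaction or verb not in READ_VERBS:
        return "primary-db.internal"
    return "replica-db.internal"

print(route("SELECT * FROM wp_posts WHERE ID = 1"))       # replica-db.internal
print(route("UPDATE wp_options SET option_value = 'x'"))  # primary-db.internal
print(route("SELECT 1", in_transaction=True))             # primary-db.internal
```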

Redis Object Cache Pro is included by default on Autonomous plans. This is not the free Redis Object Cache plugin from the WordPress repository. Object Cache Pro is a commercial plugin (normally $95/month) that provides significantly better WordPress object caching with features like:

  • Relay integration for shared-memory caching (reduces Redis roundtrips by 80-90%)
  • Prefetching of cache keys to prevent cache stampedes
  • Lazy loading of alloptions to reduce memory overhead
  • Granular analytics and diagnostics dashboard
  • Multisite support with per-site cache isolation

On Flexible, you can install Redis and use the free object cache plugin, but you do not get Object Cache Pro unless you purchase a separate license. The performance difference is measurable: Object Cache Pro typically reduces database queries by 80-95% compared to no object caching, and by 20-40% compared to the free Redis plugin, primarily through its prefetching and relay optimizations.
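To see why that query reduction matters, here is a back-of-envelope model of page generation time. All figures are assumptions for illustration, not Cloudways measurements:

```python
# Rough model: page generation = fixed PHP cost + per-query database cost.
# base_php_ms, query counts, and per_query_ms are illustrative assumptions.

def page_gen_ms(base_php_ms: float, queries: int, per_query_ms: float) -> float:
    return base_php_ms + queries * per_query_ms

uncached = page_gen_ms(80, 100, 1.5)  # no object cache: 100 queries hit MariaDB
cached = page_gen_ms(80, 10, 1.5)     # ~90% query reduction leaves ~10 queries
print(uncached)  # 230.0
print(cached)    # 95.0
```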

CDN and Edge Caching: Cloudflare Enterprise vs. Add-On

Content delivery is where the user-facing performance differences between Autonomous and Flexible become most visible. Page load times, Time to First Byte (TTFB), and global reach all depend on how effectively your CDN layer works.

Autonomous: Cloudflare Enterprise by Default

Every Autonomous application gets Cloudflare Enterprise CDN integration out of the box. This is not the free or Pro Cloudflare plan. Enterprise includes:

  • Full-page edge caching for WordPress, including HTML pages (not just static assets)
  • Tiered caching that reduces origin pulls by routing cache misses through regional upper-tier data centers before hitting your origin
  • Argo Smart Routing that optimizes packet paths across Cloudflare’s network, reducing latency by 30% on average
  • Enterprise WAF with managed rulesets for WordPress-specific threats
  • DDoS protection with 155+ Tbps network capacity
  • HTTP/3 and QUIC support for faster connection establishment
  • Early Hints (103) for preloading critical resources before the HTML response arrives
  • Image optimization including Polish and WebP/AVIF conversion

The full-page edge caching is the key differentiator. On a traditional hosting setup, even with a CDN for static assets, HTML pages still need to be generated by PHP on every uncached request. With Cloudflare Enterprise edge caching on Autonomous, HTML pages are cached at 275+ edge locations globally. A visitor in Tokyo gets served a cached HTML page from a Tokyo edge node, never touching your origin server.

Cache-Control headers for Autonomous typically look like this:

HTTP/2 200 OK
Content-Type: text/html; charset=UTF-8
Cache-Control: public, max-age=600, s-maxage=31536000
CF-Cache-Status: HIT
CF-Ray: 76a1b2c3d4e5f6a7-NRT
Age: 142
Server: cloudflare

The s-maxage=31536000 tells Cloudflare’s edge to cache the page for up to a year, while max-age=600 tells browsers to cache it for 10 minutes. When content changes, cache purging happens via API integration between WordPress and Cloudflare.
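The browser/edge split follows standard HTTP caching semantics (RFC 9111): s-maxage applies only to shared caches and overrides max-age there. A minimal sketch of the freshness decision:

```python
# Freshness check per RFC 9111: shared caches (CDN edges) honor s-maxage
# when present; browsers use max-age. Age is how long the response has
# sat in the cache.

def is_fresh(age_s, max_age_s, s_maxage_s, shared_cache):
    if shared_cache and s_maxage_s is not None:
        return age_s < s_maxage_s
    return age_s < max_age_s

# Cache-Control: public, max-age=600, s-maxage=31536000 with Age: 142
print(is_fresh(142, 600, 31536000, shared_cache=True))   # True  (edge still fresh)
print(is_fresh(142, 600, 31536000, shared_cache=False))  # True  (browser still fresh)
print(is_fresh(900, 600, 31536000, shared_cache=False))  # False (browser revalidates)
```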

Flexible: Cloudflare Add-On or Self-Managed

Cloudways Flexible offers a Cloudflare CDN add-on, but it is not the Enterprise tier. The add-on provides basic CDN functionality, DNS management, and some optimization features. It does not include full-page HTML caching at the edge, Enterprise WAF rulesets, Argo Smart Routing, or tiered caching.

You can, of course, set up your own Cloudflare account (free, Pro, or Business) in front of a Flexible server. Many experienced WordPress developers do exactly this. But configuring full-page caching for WordPress on Cloudflare requires careful setup:

# Cloudflare Page Rules for WordPress full-page caching (manual setup)
# These require Cloudflare Business or Enterprise plan for best results

# Rule 1: Bypass cache for WordPress admin
URL: *example.com/wp-admin/*
Setting: Cache Level = Bypass

# Rule 2: Bypass cache for logged-in users
URL: *example.com/*
Setting: Cache Level = Bypass (when cookie matches wordpress_logged_in_*)

# Rule 3: Bypass cache for WooCommerce dynamic pages
URL: *example.com/cart/*
Setting: Cache Level = Bypass

URL: *example.com/checkout/*
Setting: Cache Level = Bypass

URL: *example.com/my-account/*
Setting: Cache Level = Bypass

# Rule 4: Cache everything else
URL: *example.com/*
Setting: Cache Level = Cache Everything, Edge Cache TTL = 1 month

This manual approach works, but it requires ongoing maintenance. When you install a new plugin that creates dynamic pages, you need to add bypass rules. When Cloudflare’s free plan changes its caching policies, you need to adapt. Autonomous handles all of this automatically through its integrated Cloudflare Enterprise layer.

The Flexible Server Stack: Varnish, Nginx, Apache, and How They Interact

One of Cloudways Flexible’s strengths is its well-tuned multi-layer caching stack. Understanding how these layers interact is essential for diagnosing performance issues and optimizing your setup.

Varnish: The Full-Page Cache Layer

Varnish sits at the front of the Flexible server stack, listening on port 80. It serves as a full-page cache, storing rendered HTML pages in memory and serving them without touching PHP or the database.

When a request arrives at a Flexible server:

1. Varnish checks if it has a cached copy of the requested URL.
2. If HIT: Varnish returns the cached page immediately. Response time: 1-5ms.
3. If MISS: Varnish forwards the request to Nginx, which proxies to Apache/PHP-FPM for processing. The response is then cached by Varnish for subsequent requests.

Varnish cache behavior is controlled by VCL (Varnish Configuration Language). Cloudways provides a default VCL configuration optimized for WordPress, but you can customize it:

# Key Varnish VCL behaviors on Cloudways Flexible
# Default Varnish cache TTL
sub vcl_backend_response {
    # Cache static assets for 30 days
    if (bereq.url ~ "\.(css|js|jpg|jpeg|png|gif|ico|svg|woff|woff2)$") {
        set beresp.ttl = 30d;
    }

    # Cache HTML pages for 24 hours by default
    if (beresp.http.Content-Type ~ "text/html") {
        set beresp.ttl = 24h;
    }

    # Do not cache responses with Set-Cookie
    if (beresp.http.Set-Cookie) {
        set beresp.uncacheable = true;
        return (deliver);
    }
}

sub vcl_recv {
    # Bypass cache for WordPress admin
    if (req.url ~ "^/wp-(admin|login)") {
        return (pass);
    }

    # Bypass cache for logged-in users
    if (req.http.Cookie ~ "wordpress_logged_in_") {
        return (pass);
    }

    # Bypass cache for WooCommerce cart/checkout
    if (req.url ~ "^/(cart|checkout|my-account)") {
        return (pass);
    }

    # Bypass cache for POST requests
    if (req.method == "POST") {
        return (pass);
    }
}

Varnish’s primary advantage is speed. Serving a page from Varnish’s in-memory cache takes 1-5 milliseconds, compared to 200-2000 milliseconds for a PHP-generated response. For a WordPress blog or marketing site where most visitors see the same content, Varnish delivers exceptional performance.

The primary limitation is that Varnish only caches on the server itself. A visitor in Singapore requesting a page cached on a server in New York still experiences the network latency of a transpacific round trip. This is where Autonomous’s Cloudflare Enterprise edge caching has a structural advantage.

Nginx: Reverse Proxy and Static File Serving

Behind Varnish, Nginx handles reverse proxying and static file serving. On Cloudways Flexible, Nginx listens on port 8080 and forwards dynamic requests to Apache on port 8090.

Nginx excels at serving static files (images, CSS, JavaScript, fonts) with minimal resource consumption. Its event-driven architecture can handle thousands of concurrent connections using a single thread, making it far more efficient than Apache for static content.

Key Nginx configurations you can adjust on Cloudways Flexible include:

# Nginx configuration options accessible via Cloudways
# Server Management → Nginx Configuration

# Gzip compression settings
gzip on;
gzip_comp_level 6;
gzip_types text/plain text/css application/json application/javascript
           text/xml application/xml application/xml+rss text/javascript
           image/svg+xml;

# Client body size (important for media uploads)
client_max_body_size 100M;

# Static file caching headers
location ~* \.(jpg|jpeg|png|gif|ico|css|js|svg|woff|woff2|ttf|eot)$ {
    expires 365d;
    add_header Cache-Control "public, immutable";
    access_log off;
}

# Security headers
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header Referrer-Policy "strict-origin-when-cross-origin" always;

Apache: PHP Processing

Apache handles the actual PHP processing on Cloudways Flexible. While some hosting providers have moved to Nginx-only stacks with PHP-FPM, Cloudways Flexible retains Apache because it provides better compatibility with WordPress plugins that rely on .htaccess rules.

Many WordPress plugins write rewrite rules directly to .htaccess. Plugins like Yoast SEO, Redirection, and various security plugins depend on Apache’s mod_rewrite. Running a pure Nginx stack would break these plugins unless you manually convert every .htaccess rule to Nginx configuration syntax.

Apache on Cloudways Flexible uses either the Event MPM or Worker MPM (depending on PHP version), which are significantly more efficient than the older Prefork MPM used by many legacy hosts.

How the Three Layers Work Together

The full request flow on Flexible looks like this for an uncached page:

1. Client sends GET /about-us HTTP/1.1
2. Varnish receives request on port 80
3. Varnish checks cache → MISS
4. Varnish forwards to Nginx on port 8080
5. Nginx determines this is a dynamic request (not a static file)
6. Nginx proxies to Apache on port 8090
7. Apache invokes PHP-FPM
8. PHP-FPM executes WordPress, which queries MariaDB
9. WordPress generates HTML response
10. Apache sends response to Nginx
11. Nginx sends response to Varnish
12. Varnish caches the response and sends it to the client
13. Next request for /about-us → Varnish HIT → 2ms response

For a cached page, only steps 1, 2, and 13 occur. This is why Varnish hit rates are so critical to Flexible performance. A 95% Varnish hit rate means only 5% of requests ever touch PHP, making your effective capacity much higher than your raw PHP worker count would suggest.
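That multiplier is simple arithmetic: only cache misses consume PHP workers, so effective capacity is PHP capacity divided by the miss rate. A sketch, with the capacity figure assumed for illustration:

```python
# Effective origin capacity as a function of the full-page cache hit rate.
# The PHP capacity figure is an assumption, not a Cloudways benchmark.

def effective_rps(php_capacity_rps: float, hit_rate: float) -> float:
    """Total requests/second the server can absorb at a given hit rate."""
    miss_rate = 1.0 - hit_rate
    return php_capacity_rps / miss_rate

# A server whose PHP stack can generate 20 pages/second:
print(round(effective_rps(20, 0.95)))  # 400  -> a 20x multiplier at 95% hits
print(round(effective_rps(20, 0.80)))  # 100
```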

Autonomous Edge Caching vs. Flexible Varnish: A Direct Comparison

Both plans cache full HTML pages, but the caching topology is fundamentally different.

| Aspect | Flexible (Varnish) | Autonomous (Cloudflare Enterprise Edge) |
|---|---|---|
| Cache location | On the origin server | 275+ global edge PoPs |
| TTFB for cached page (same region) | 5-20ms | 10-30ms |
| TTFB for cached page (cross-continent) | 150-300ms (network latency) | 10-50ms (served from local edge) |
| Cache invalidation | Immediate (local purge) | Near-instant (global purge via API, typically under 5 seconds) |
| Cache warmup | First request to origin caches it | Each edge PoP caches on first request (tiered caching reduces origin pulls) |
| DDoS protection | Server-level only | Network-level (155+ Tbps capacity) |
| Customization | VCL rules, very flexible | Limited (managed by Cloudways) |

For a site with a geographically concentrated audience (e.g., a local business serving one city), Varnish on Flexible can actually deliver lower TTFB than Cloudflare’s edge. The request never leaves the datacenter. But for any site with a distributed audience, edge caching wins dramatically. The physics of network latency make this unavoidable.
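The latency floor is easy to estimate from first principles. Distances here are rough great-circle figures, and real paths add routing and queuing delay on top:

```python
# Lower bound on round-trip time imposed by signal speed in fiber
# (~200,000 km/s, roughly 2/3 of c). Real-world RTTs are always higher.

def min_rtt_ms(distance_km: float, fiber_speed_km_s: float = 200_000) -> float:
    return 2 * distance_km / fiber_speed_km_s * 1000

# Singapore to New York is roughly 15,300 km as the crow flies:
print(round(min_rtt_ms(15_300), 1))  # 153.0 ms round trip, before any processing
# Singapore to a nearby Cloudflare edge ~50 km away:
print(round(min_rtt_ms(50), 2))      # 0.5 ms
```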

Auto-Scaling Mechanics on Autonomous

Kubernetes auto-scaling on Autonomous operates at multiple levels. Understanding these levels helps you predict how your application will respond to traffic changes.

Horizontal Pod Autoscaler (HPA)

The HPA monitors resource utilization metrics for your application’s pods and adjusts the number of pod replicas. Here is a simplified representation of how an HPA configuration works:

# Conceptual HPA configuration for a WordPress application
# (Cloudways manages this; shown for understanding)

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: wordpress-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: wordpress-app
  minReplicas: 2
  maxReplicas: 20
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80
  behavior:
    scaleUp:
      stabilizationWindowSeconds: 30
      policies:
      - type: Pods
        value: 4
        periodSeconds: 60
    scaleDown:
      stabilizationWindowSeconds: 300
      policies:
      - type: Pods
        value: 1
        periodSeconds: 120

Key behaviors to understand:

Scale-up is aggressive. When CPU utilization across existing pods exceeds 70%, the HPA can add up to 4 new pods per minute. The stabilization window is short (30 seconds), meaning the system reacts quickly to load increases. New pods typically become ready in 10-30 seconds, depending on the container image size and initialization time.

Scale-down is conservative. The system waits 5 minutes (300 seconds) after load decreases before removing pods, and removes only 1 pod every 2 minutes. This prevents flapping, where pods are rapidly added and removed during fluctuating traffic, which wastes resources and can cause request failures.
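Underlying both behaviors is the standard HPA scaling rule from the Kubernetes documentation: desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric), clamped to the min/max bounds. A sketch using the bounds from the example configuration:

```python
import math

# The core HPA scaling rule (Kubernetes docs):
# desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric)

def desired_replicas(current: int, current_util: float, target_util: float,
                     min_r: int, max_r: int) -> int:
    desired = math.ceil(current * current_util / target_util)
    return max(min_r, min(desired, max_r))

# 4 pods running at 140% CPU against a 70% target -> double to 8 pods:
print(desired_replicas(4, 140, 70, min_r=2, max_r=20))  # 8
# Load drops to 20% -> shrink toward minReplicas (after the cooldown window):
print(desired_replicas(8, 20, 70, min_r=2, max_r=20))   # 3
```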

What Triggers Scaling

Several events trigger auto-scaling on Autonomous:

  • Traffic spikes: A surge in HTTP requests that increases CPU utilization
  • Heavy WooCommerce operations: Bulk order processing, inventory updates, or complex product queries
  • Cron jobs: WordPress cron tasks that run heavy database operations
  • Search indexing: If you use plugins like SearchWP or ElasticPress that perform intensive indexing
  • Bulk content operations: Importing posts, updating ACF fields across hundreds of posts, or running WP-CLI bulk commands

Scaling Limitations and Edge Cases

Auto-scaling is not magic. Several scenarios can cause performance issues even with auto-scaling:

Cold starts during zero-to-spike traffic. If your site receives zero traffic for an extended period and then gets hit with a sudden spike (e.g., a site featured on Hacker News), the initial pods need time to spin up. The first 10-30 seconds may see degraded performance before additional pods come online.

Database bottlenecks. PHP pods can scale horizontally, but if your database becomes the bottleneck (too many concurrent write operations, unoptimized queries, missing indexes), adding more PHP pods can actually make things worse by increasing database connection pressure. Read replicas help with read-heavy workloads, but write scaling remains constrained.

Plugin compatibility. Some WordPress plugins write to local disk (file-based caching plugins, some logging plugins). In a Kubernetes environment where pods are ephemeral, local disk writes may not persist across pod restarts. Autonomous handles this through persistent volume claims for the wp-content/uploads directory and shared storage, but plugins that write to non-standard locations may behave unexpectedly.

Redis and Memcached: Object Caching Differences

Object caching sits between WordPress’s PHP layer and the database, storing the results of database queries in memory so they can be retrieved without hitting MariaDB on subsequent requests.

Flexible: Redis or Memcached, Your Choice

Cloudways Flexible supports both Redis and Memcached. You enable them through the server management dashboard:

Redis on Flexible runs as a standalone service on your server, typically allocated 64-256MB of memory depending on your server size. You pair it with the free Redis Object Cache plugin by Till Kruss:

# Enabling Redis on Cloudways Flexible
# 1. Enable Redis via Cloudways Dashboard → Server → Manage Services
# 2. Install Redis Object Cache plugin
# 3. Add to wp-config.php:

define('WP_REDIS_HOST', '127.0.0.1');
define('WP_REDIS_PORT', 6379);
define('WP_REDIS_DATABASE', 0);
define('WP_REDIS_TIMEOUT', 1);
define('WP_REDIS_READ_TIMEOUT', 1);

// Optional: Redis key prefix (important for multisite or shared Redis)
define('WP_REDIS_PREFIX', 'mysite_');

Memcached is the older option. It is simpler (key-value only, no data structures) and slightly faster for basic cache operations. However, Redis has effectively replaced Memcached for WordPress object caching because Redis supports data persistence (survives restarts), data structures (sorted sets, lists, hashes), and atomic operations.

Unless you have a specific reason to use Memcached, Redis is the better choice on Flexible.

Autonomous: Redis Object Cache Pro, Pre-Configured

As discussed in the database section, Autonomous includes Redis Object Cache Pro. The key technical differences from the free plugin are worth elaborating:

Relay Integration. Object Cache Pro supports Relay, a shared-memory caching layer that sits between PHP and Redis. Instead of making a network round-trip to Redis for every cache lookup, Relay stores frequently accessed cache keys in an in-process shared memory segment alongside the PHP runtime. This reduces Redis network calls by 80-90% and cuts object cache latency from ~0.5ms per call to ~0.01ms.
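The aggregate effect per page load is easy to model. The figures below mirror the estimates above and are not benchmarks; the lookup count and Relay hit fraction are assumptions:

```python
# Rough model of total object-cache time per page with and without Relay.
# Lookup counts and latencies are illustrative assumptions, not measurements.

def cache_time_ms(lookups: int, relay_fraction: float,
                  redis_ms: float = 0.5, relay_ms: float = 0.01) -> float:
    relay_hits = lookups * relay_fraction
    redis_hits = lookups - relay_hits
    return relay_hits * relay_ms + redis_hits * redis_ms

# 2,000 object-cache lookups per page, 85% served from shared memory:
print(round(cache_time_ms(2000, 0.85), 1))  # 167.0 ms
print(round(cache_time_ms(2000, 0.0), 1))   # 1000.0 ms if every lookup hits Redis
```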

// Object Cache Pro with Relay configuration (Autonomous)
// These settings are managed by Cloudways, shown for reference

define('WP_REDIS_CONFIG', [
    'client' => 'relay',       // Use Relay instead of PhpRedis
    'host' => 'redis.internal',
    'port' => 6379,
    'database' => 0,
    'timeout' => 0.5,
    'retry_interval' => 10,
    'read_timeout' => 0.5,
    'compression' => 'zstd',   // Compress cached data
    'serializer' => 'igbinary', // Binary serialization (faster than PHP serialize)
    'prefetch' => true,         // Prefetch related keys
    'split_alloptions' => true, // Split alloptions for granular invalidation
    'strict' => true,
    'debug' => false,
    'save_commands' => false,
]);

The split_alloptions setting deserves special attention. WordPress stores all autoloaded options in a single cache key called alloptions. On sites with many plugins, this key can grow to several megabytes. Every time any option is updated, the entire alloptions cache is invalidated, forcing a full database query to rebuild it. Object Cache Pro’s split_alloptions feature breaks this monolithic key into individual option keys, so updating one option only invalidates that specific option’s cache entry.

For WooCommerce sites with 50+ active plugins, this optimization alone can reduce database queries during option-heavy operations (like saving settings pages) by 40-60%.
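The effect of split_alloptions can be illustrated with a toy model. This is not Object Cache Pro's actual implementation, just a sketch of the invalidation arithmetic: in monolithic mode, every option write evicts the whole blob, so the next unrelated read triggers a full rebuild; in split mode, writes touch only their own key.

```python
# Toy model of monolithic vs. split alloptions caching (illustrative only).

class OptionCache:
    def __init__(self, options, split=False):
        self.split = split
        self.store = {}
        self.db_reads = 0          # full rebuilds from the database
        self.options = dict(options)
        self._warm()

    def _warm(self):
        if self.split:
            for k, v in self.options.items():
                self.store[f"option:{k}"] = v
        else:
            self.store["alloptions"] = dict(self.options)

    def update(self, key, value):
        self.options[key] = value
        if self.split:
            self.store[f"option:{key}"] = value  # only this key changes
        else:
            self.store.pop("alloptions", None)   # whole blob invalidated

    def get(self, key):
        if self.split:
            return self.store[f"option:{key}"]
        if "alloptions" not in self.store:
            self.db_reads += 1                   # rebuild the entire blob
            self.store["alloptions"] = dict(self.options)
        return self.store["alloptions"][key]

opts = {f"opt_{i}": i for i in range(1000)}
mono, split = OptionCache(opts), OptionCache(opts, split=True)

for cache in (mono, split):
    for i in range(100):
        cache.update(f"opt_{i}", i * 2)  # e.g. saving a settings page
        cache.get("opt_999")             # unrelated read after each write

print(mono.db_reads)   # monolithic: every write forced a full rebuild → 100
print(split.db_reads)  # split: unrelated reads never hit the database → 0
```

The 100-to-0 difference in database rebuilds is the mechanism behind the 40-60% query reduction described above.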

Breeze Plugin Integration on Flexible

Cloudways develops and maintains the Breeze caching plugin, which integrates tightly with the Flexible server stack. Understanding how Breeze interacts with Varnish is critical for avoiding common misconfiguration issues.

Breeze and Varnish: Complementary, Not Redundant

A common mistake is thinking Breeze and Varnish do the same thing. They do not.

Varnish caches full HTML pages at the server level, before the request reaches WordPress. It operates independently of WordPress and knows nothing about WordPress’s content structure.

Breeze is a WordPress plugin that:
1. Sets appropriate cache headers to tell Varnish what to cache and for how long
2. Purges Varnish cache when WordPress content changes (post updates, comment approvals, theme changes)
3. Provides browser-level caching via Cache-Control headers
4. Offers minification and file combination for CSS/JS
5. Implements database optimization (clearing transients, post revisions, etc.)
6. Manages CDN integration for static assets

The critical integration point is Varnish auto-purge. When you update a post in WordPress, Breeze sends PURGE requests to Varnish for the relevant URLs:

// Simplified Breeze Varnish purge logic
// When a post is updated, Breeze purges:

$urls_to_purge = array(
    get_permalink($post_id),           // The post itself
    get_home_url(),                     // Homepage
    get_post_type_archive_link('post'), // Blog archive
    // Category/tag archives the post belongs to
    // Author archive
    // Feed URLs
);

foreach ($urls_to_purge as $url) {
    $parsed = parse_url($url);
    // get_home_url() returns a URL with no path component, so default to '/'
    $path = isset($parsed['path']) ? $parsed['path'] : '/';
    $purge_request = new WP_Http();
    $purge_request->request(
        'http://127.0.0.1:6081' . $path,
        array(
            'method' => 'PURGE',
            'headers' => array(
                'Host' => $parsed['host'],
                'X-Purge-Method' => 'exact',
            ),
        )
    );
}

Without Breeze (or a similar Varnish integration plugin), updating a post would leave stale content in Varnish until the cache TTL expires. Visitors would see old content for hours or days after you publish changes.

Breeze Minification: When to Use It

Breeze offers CSS and JavaScript minification and combination. This can reduce the number of HTTP requests and total file sizes, but it can also break things.

Safe to enable:

  • HTML minification (removes whitespace, safe in virtually all cases)
  • CSS minification (removes whitespace from CSS files)
  • JS minification (removes whitespace from JavaScript files)

Use with caution:

  • CSS combination (merging multiple CSS files into one can break load order)
  • JS combination (combining JavaScript files can cause dependency conflicts)
  • Inline CSS/JS moving (can break scripts that depend on DOM position)

A practical testing workflow:

# Step 1: Enable HTML minification only, test site
# Step 2: Enable CSS minification, test all pages
# Step 3: Enable JS minification, test all interactive features
# Step 4: Try CSS combination, test thoroughly
# Step 5: Try JS combination, test every form, slider, popup, and AJAX feature

# If something breaks at any step, disable that specific option
# and move to the next step

For sites using modern WordPress block themes or page builders that already optimize their asset loading, Breeze minification provides diminishing returns. The biggest performance gains from Breeze come from its Varnish integration and browser caching headers, not from minification.

Migration Between Plans: What Changes and What Breaks

Moving between Autonomous and Flexible is not a simple toggle. The infrastructure differences mean a migration involves real technical considerations.

Flexible to Autonomous Migration

When migrating from Flexible to Autonomous, consider these factors:

What transitions smoothly:

  • WordPress files and database content
  • Media uploads and wp-content directory
  • Theme and plugin files
  • Basic WordPress configuration

What requires attention:

  • Server-level customizations are lost. Any custom Nginx configurations, Varnish VCL rules, Apache .htaccess overrides, or MariaDB tuning parameters you have applied on Flexible will not carry over to Autonomous. The Kubernetes environment uses a different configuration model.
  • Caching plugins may conflict. If you are running Breeze, WP Super Cache, W3 Total Cache, or WP Rocket on Flexible, you should deactivate server-side caching features before migrating. Autonomous has its own caching layer, and running two full-page caching systems simultaneously causes unpredictable behavior (stale content, double-compressed responses, header conflicts).
  • File-based caching breaks. Plugins that write cache files to the local filesystem (WP Super Cache in disk mode, for example) will not work correctly in a Kubernetes pod environment. Object caching through Redis Object Cache Pro is the correct approach on Autonomous.
  • Cron behavior changes. WordPress cron on Flexible runs via the standard wp-cron.php mechanism (triggered by page visits) or via a real system cron if configured. On Autonomous, cron execution is managed differently by the Kubernetes environment. Heavy cron jobs that worked fine on a dedicated VPS may need optimization for the containerized environment.
  • SSH/SFTP access works differently. You still get access, but the underlying filesystem and session behavior differ from a traditional VPS.

Recommended migration process:

# Pre-migration checklist for Flexible → Autonomous

1. Document all server-level customizations:
   - Custom Nginx rules
   - Varnish VCL modifications
   - PHP ini overrides
   - MariaDB tuning parameters

2. Audit plugins for compatibility:
   - Deactivate file-based caching plugins
   - Check for plugins writing to non-standard filesystem paths
   - Verify cron job plugins work in containerized environments

3. Test on staging:
   - Cloudways provides staging environments on both plans
   - Deploy to Autonomous staging first
   - Run full QA across all site functionality

4. Migrate during low-traffic window:
   - Lower DNS TTL to 300 seconds at least 48 hours beforehand
   - Use Cloudways migration tool or manual migration
   - Verify database integrity post-migration

5. Post-migration:
   - Configure Redis Object Cache Pro
   - Verify Cloudflare Enterprise CDN is caching correctly
   - Monitor error logs for 48 hours
   - Check cron execution

Autonomous to Flexible Migration

Moving from Autonomous back to Flexible is less common but sometimes necessary (for cost reasons or for workloads that need more server-level control).

The primary concern here is rebuilding the caching stack. On Flexible, you need to:

1. Install and configure Breeze for Varnish integration
2. Set up Redis or Memcached for object caching
3. Configure your own CDN (Cloudflare, KeyCDN, BunnyCDN, etc.)
4. Tune PHP-FPM worker pools for your traffic levels
5. Optimize MariaDB settings for your server resources

You lose auto-scaling, read replicas, Object Cache Pro, and Cloudflare Enterprise. For sites that benefited from these features, the performance regression will be noticeable.

Cost Modeling: When Each Plan Makes Financial Sense

Cost analysis is where most comparisons fail because they only look at list prices. The real cost of hosting includes not just the monthly fee but also the time spent on optimization, the plugins you need to add, and the cost of downtime during traffic spikes.

Cloudways Flexible Pricing Model

Flexible pricing is straightforward: you pay for a server of a specific size from a specific cloud provider. As of late 2022, entry-level pricing starts at roughly $14/month for a 1GB DigitalOcean server and goes up based on provider and resources.

A realistic Flexible setup for a mid-traffic WooCommerce store might look like this:

# Flexible cost model: Mid-traffic WooCommerce store
# ~50,000 monthly visitors, ~500 orders/month

Server: DigitalOcean 4GB RAM, 2 vCPU           $54/month
Cloudflare Pro plan (self-managed)               $20/month
Redis Object Cache Pro license (optional)        $95/month
Monitoring (New Relic, optional)                 $0-25/month
Backup add-on (off-server)                       $0-10/month
------------------------------------------------------------
Total range:                                     $74 - $204/month

# Developer time for optimization and maintenance:
# Initial setup and tuning: 4-8 hours
# Monthly maintenance: 1-2 hours
# Traffic spike debugging: 2-4 hours per incident

The “hidden” cost on Flexible is developer time. Tuning PHP-FPM, optimizing Varnish, configuring Cloudflare page rules, and debugging performance issues during traffic spikes all require expertise and hours.

Cloudways Autonomous Pricing Model

Autonomous uses consumption-based pricing. You pay based on the resources your application actually consumes (CPU, memory, bandwidth, storage), plus a base platform fee. Pricing starts at approximately $35/month for a small WordPress site.

# Autonomous cost model: Same mid-traffic WooCommerce store
# ~50,000 monthly visitors, ~500 orders/month

Base platform fee:                               ~$35/month
Resource consumption (typical):                  ~$25-60/month
Cloudflare Enterprise CDN:                       Included
Redis Object Cache Pro:                          Included
Read replicas (if enabled):                      Additional cost
Auto-scaling during spikes:                      Pay-per-use
------------------------------------------------------------
Total typical range:                             $60 - $95/month
During spike months (Black Friday):              $100 - $200/month

# Developer time for optimization:
# Initial setup: 1-2 hours
# Monthly maintenance: 0.5-1 hours
# Traffic spike response: None (auto-scaling handles it)

The Crossover Point

For steady, predictable traffic, Flexible is often cheaper. You provision exactly the resources you need, and the fixed cost is predictable month to month. A 4GB server handles 50,000 monthly visitors comfortably, and you are not paying for features you do not use.

For variable traffic, Autonomous typically wins. Here is why:

On Flexible, you need to provision for your peak traffic. If your WooCommerce store does 50,000 visitors normally but 200,000 during Black Friday week, you need a server capable of handling 200,000 visitors. That means paying for a larger server year-round to handle one week of peak traffic, or manually upgrading before the event and downgrading after (which involves downtime and DNS changes).

On Autonomous, you pay base rates during normal months and only pay for additional resources during the spike. The auto-scaling handles the traffic increase automatically, with no manual intervention. Over a full year, the total cost may be lower because you are not over-provisioned for 11 months to handle 1 month of peak demand.

Cost Comparison Table: Annual Totals

# Scenario: WooCommerce store
# Normal traffic: 50,000 visits/month
# Peak months (Nov, Dec): 200,000 visits/month
# Rest of year: 50,000 visits/month

FLEXIBLE (provisioned for peak):
  Server (8GB to handle peak):    $100/month × 12  = $1,200
  Cloudflare Pro:                  $20/month × 12  =   $240
  Redis Object Cache Pro:          $95/month × 12  = $1,140
  ----------------------------------------------------------
  Annual total:                                      $2,580

FLEXIBLE (manual scaling):
  Server (4GB normal):            $54/month × 10   =   $540
  Server (8GB peak months):       $100/month × 2   =   $200
  Cloudflare Pro:                  $20/month × 12  =   $240
  Downtime risk during scaling:    Unquantified
  Developer time for scaling:      ~4 hours/year
  ----------------------------------------------------------
  Annual total:                                      $980 + time

AUTONOMOUS:
  Base + consumption (normal):     $75/month × 10  =   $750
  Base + consumption (peak):      $150/month × 2   =   $300
  CDN, Object Cache Pro:           Included
  ----------------------------------------------------------
  Annual total:                                      $1,050

The numbers shift depending on your specific traffic patterns, but the pattern is consistent: Autonomous is most cost-effective when traffic variance is high.
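The annual totals above reduce to a small break-even model you can adapt to your own traffic pattern. All dollar figures are this article's illustrative estimates, not published Cloudways prices:

```python
# Reproduce the annual totals from the comparison table above.
# Each scenario is a list of (monthly_price, number_of_months) tuples.

def annual(monthly_costs):
    return sum(price * months for price, months in monthly_costs)

# Flexible, provisioned for peak: 8GB server + Cloudflare Pro + OCP license
flexible_peak_provisioned = annual([(100, 12), (20, 12), (95, 12)])

# Flexible, manual scaling: 4GB for 10 months, 8GB for 2, Cloudflare Pro
flexible_manual_scaling = annual([(54, 10), (100, 2), (20, 12)])

# Autonomous: base + consumption, higher during the 2 peak months
autonomous = annual([(75, 10), (150, 2)])

print(flexible_peak_provisioned)  # 2580
print(flexible_manual_scaling)    # 980 (plus developer time and downtime risk)
print(autonomous)                 # 1050
```

Swapping in your own monthly estimates makes the crossover visible immediately: the wider the gap between normal and peak consumption, the more the peak-provisioned Flexible number inflates relative to the other two.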

Server Stack Optimization on Flexible: A Practical Guide

If you choose Flexible, maximizing performance requires hands-on optimization across the entire stack. Here is a practical guide based on real-world configurations.

PHP Optimization

# PHP settings to optimize on Cloudways Flexible
# Dashboard → Application → PHP Settings

# OPcache (critical for PHP performance)
opcache.enable = 1
opcache.memory_consumption = 256       # MB allocated for opcode cache
opcache.interned_strings_buffer = 16   # MB for interned strings
opcache.max_accelerated_files = 10000  # Max cached scripts
opcache.revalidate_freq = 60           # Seconds between file change checks
opcache.validate_timestamps = 1         # Set to 0 in production for max speed
opcache.save_comments = 1              # Required by some frameworks

# Memory and execution limits
memory_limit = 256M                     # Per-worker memory limit
max_execution_time = 300                # Seconds (increase for imports/migrations)
max_input_vars = 5000                   # Increase for complex forms/WooCommerce

# Upload limits
upload_max_filesize = 100M
post_max_size = 100M

The opcache.memory_consumption setting is frequently under-provisioned. A WordPress site with WooCommerce and 20-30 active plugins easily comprises 5,000+ PHP files. If OPcache runs out of memory, it starts evicting cached scripts, forcing PHP to recompile them from disk on every request. Monitor OPcache usage with a tool like OPcache GUI or the opcache_get_status() function:

<?php
// Quick OPcache health check (create as a temporary PHP file, delete after checking)
if (!function_exists('opcache_get_status')) {
    exit("OPcache is not available in this PHP environment.\n");
}
$status = opcache_get_status(false);
$memory = $status['memory_usage'];
$stats = $status['opcache_statistics'];

echo "Used Memory: " . round($memory['used_memory'] / 1024 / 1024, 2) . " MB\n";
echo "Free Memory: " . round($memory['free_memory'] / 1024 / 1024, 2) . " MB\n";
echo "Wasted Memory: " . round($memory['wasted_memory'] / 1024 / 1024, 2) . " MB\n";
echo "Hit Rate: " . round($stats['opcache_hit_rate'], 2) . "%\n";
echo "Cached Scripts: " . $stats['num_cached_scripts'] . "\n";
echo "Cache Full: " . ($stats['oom_restarts'] > 0 ? "YES - INCREASE MEMORY" : "No") . "\n";

If oom_restarts is greater than zero, your OPcache memory is too low. Increase opcache.memory_consumption until restarts stop.

MariaDB Optimization

Beyond the InnoDB buffer pool size discussed earlier, several MariaDB settings significantly impact WordPress performance:

# MariaDB optimizations for WordPress on Cloudways Flexible

# Slow query log (enable temporarily for debugging)
slow_query_log = 1
slow_query_log_file = /var/log/mysql/slow-query.log
long_query_time = 1          # Log queries taking longer than 1 second

# Table open cache (WordPress opens many tables)
table_open_cache = 4000
table_definition_cache = 2000

# Thread handling
thread_cache_size = 50       # Cache threads to avoid thread creation overhead

# InnoDB settings
innodb_flush_method = O_DIRECT         # Avoid double-buffering with OS cache
innodb_io_capacity = 2000              # Increase for SSD storage
innodb_io_capacity_max = 4000
innodb_read_io_threads = 8
innodb_write_io_threads = 8
innodb_buffer_pool_instances = 4       # Split buffer pool for concurrency

Slow query identification is the highest-value database optimization activity. Enable the slow query log, let it run for 24-48 hours of normal traffic, then analyze:

# Analyze slow queries on Cloudways Flexible via SSH
mysqldumpslow -s t -t 20 /var/log/mysql/slow-query.log

# This shows the top 20 slowest queries, sorted by total time
# Look for:
# - Queries without indexes (add appropriate indexes)
# - Queries scanning full tables (optimize or add WHERE clauses)
# - Queries run by plugins you can replace or configure better

Common WordPress slow query culprits include: wp_options table queries where autoload is “yes” on too many rows, postmeta table queries lacking proper indexes, and WooCommerce order queries on large stores without HPOS (High-Performance Order Storage) enabled.

Varnish Monitoring and Tuning

Monitor your Varnish hit rate to understand how effectively the cache is working:

# Check Varnish statistics via SSH on Cloudways Flexible
varnishstat -1 | grep -E "cache_hit|cache_miss|client_req"

# Key metrics:
# MAIN.cache_hit    - Requests served from cache
# MAIN.cache_miss   - Requests that needed backend processing
# MAIN.client_req   - Total client requests

# Calculate hit rate:
# Hit Rate = cache_hit / (cache_hit + cache_miss) × 100

# Target: 85%+ for content sites, 60-75% for WooCommerce

If your Varnish hit rate is below 60%, investigate these common causes:

1. Too many plugins setting cookies: Any response with a Set-Cookie header bypasses Varnish. Plugins that set tracking cookies, A/B testing cookies, or session cookies on every page view kill Varnish performance.
2. Query strings: By default, URLs with different query strings are cached separately. UTM parameters, fbclid, and other tracking parameters create thousands of unique cache entries for the same content.
3. Cache TTL too short: If your Varnish TTL is set to 5 minutes, you are rebuilding the entire cache 288 times per day.
4. Not purging correctly: If Breeze is not properly connected to Varnish, stale content forces manual purges, which typically flush the entire cache rather than the specific URLs that changed.
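The fix for cause #2 is to normalize the cache key before Varnish (or any cache) looks it up. On Flexible this would live in Varnish VCL; the sketch below shows the idea in Python, and the parameter list is a common but non-exhaustive example, not an official one:

```python
# Strip tracking parameters so URLs that serve identical content share
# one cache entry. Illustrative; production setups do this in Varnish VCL.

from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign",
                   "utm_term", "utm_content", "fbclid", "gclid"}

def normalize_cache_key(url):
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if k not in TRACKING_PARAMS]
    # Drop the fragment too; it never reaches the server
    return urlunsplit((parts.scheme, parts.netloc, parts.path,
                       urlencode(kept), ""))

print(normalize_cache_key(
    "https://example.com/shop/?utm_source=newsletter&fbclid=abc123"))
# → https://example.com/shop/
print(normalize_cache_key("https://example.com/shop/?orderby=price"))
# → https://example.com/shop/?orderby=price
```

Note that functional query strings (like WooCommerce's orderby) survive normalization, so sorting and filtering still get distinct cache entries while thousands of UTM variants collapse into one.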

Practical Decision Framework: Choosing Between Autonomous and Flexible

After examining the technical details, here is a practical framework for choosing between the two plans. This is not a “one size fits all” recommendation. The right choice depends on your specific workload, traffic patterns, technical expertise, and budget.

Choose Autonomous When:

Your traffic is unpredictable or spiky. Media sites, viral content publishers, product launch pages, and seasonal e-commerce stores benefit most from auto-scaling. If you cannot predict next month’s traffic within a 2x range, Autonomous removes the guesswork.

You run WooCommerce with flash sales or promotions. The combination of unlimited PHP workers, database read replicas, and auto-scaling makes Autonomous ideal for WooCommerce stores where checkout concurrency spikes matter. A failed checkout is a lost sale; auto-scaling prevents that failure mode.

You want managed performance without deep server expertise. Autonomous requires less manual optimization. Cloudflare Enterprise, Object Cache Pro, and auto-scaling are configured out of the box. If your team is strong in WordPress development but not in server administration, Autonomous reduces the operational burden.

You manage multiple applications with different PHP requirements. Per-application PHP version configuration eliminates the server-grouping problem. If you are an agency managing 20+ client sites across PHP 8.0, 8.1, and 8.2, Autonomous simplifies management considerably.

Your audience is geographically distributed. If you serve visitors across multiple continents, Cloudflare Enterprise’s edge caching delivers consistently low TTFB worldwide. Varnish on a single-location server cannot match this for global audiences.

Choose Flexible When:

Your traffic is steady and predictable. If your site receives consistent traffic month after month (a corporate site, a membership site with stable membership, an internal application), the fixed-cost model of Flexible is more predictable and often cheaper.

You need deep server-level customization. Flexible gives you SSH access to a real server where you can modify Nginx configs, write custom VCL for Varnish, tune MariaDB, install custom PHP extensions, run background processes, and configure the server exactly to your specifications. Autonomous abstracts this layer away by design.

You have sysadmin expertise and want to optimize every layer. If you or your team can tune PHP-FPM pools, optimize MariaDB queries, configure Varnish VCL, and manage Cloudflare manually, Flexible gives you the control to squeeze maximum performance from every dollar spent. The performance ceiling on a well-optimized Flexible server is extremely high.

Budget is the primary constraint for small sites. For a small WordPress blog or brochure site with under 10,000 monthly visitors, the entry-level Flexible plan (starting around $14/month on DigitalOcean) is significantly cheaper than Autonomous. These sites do not need auto-scaling, read replicas, or Cloudflare Enterprise.

You host many low-traffic sites on shared servers. Flexible allows multiple applications per server. If you manage 15 client brochure sites that each get 5,000 monthly visitors, putting them all on a single 4GB Flexible server is far more cost-effective than paying per-application on Autonomous.

The Hybrid Approach

Many agencies and businesses find that the best approach is using both plans strategically:

  • Autonomous for revenue-critical sites (primary WooCommerce store, high-traffic marketing site, SaaS application front-end)
  • Flexible for supporting sites (staging environments, development sites, low-traffic client brochure sites, internal tools)

Cloudways supports running both plan types under the same account, which makes this hybrid approach practical.

Benchmarking Methodology: How to Test for Yourself

Numbers from benchmark articles (including this one) should inform your decision but not replace your own testing. Every WordPress site is unique in its plugin combination, theme complexity, database size, and traffic patterns. Here is a methodology for running your own comparison.

Tools You Need

# Load testing tools
# Option 1: k6 (recommended, scriptable, free)
brew install k6   # macOS
# On Ubuntu/Debian, k6 is not in the default repositories; install it
# from Grafana's official k6 apt repository or download a release binary

# Option 2: Apache Bench (ab), simple but limited
# Pre-installed on most systems

# Option 3: Loader.io (cloud-based, free tier available)
# https://loader.io

# Monitoring tools
# - Cloudways built-in monitoring (both plans)
# - Query Monitor plugin (database query analysis)
# - New Relic APM (detailed application performance, free tier)

Test Script Example (k6)

// k6 load test script for WordPress
// Save as load-test.js, run with: k6 run load-test.js

import http from 'k6/http';
import { check, sleep } from 'k6';

export let options = {
    stages: [
        { duration: '2m', target: 50 },    // Ramp up to 50 concurrent users
        { duration: '5m', target: 50 },    // Hold at 50 users
        { duration: '2m', target: 200 },   // Spike to 200 users
        { duration: '3m', target: 200 },   // Hold spike
        { duration: '2m', target: 50 },    // Back to normal
        { duration: '1m', target: 0 },     // Ramp down
    ],
    thresholds: {
        http_req_duration: ['p(95)<2000'],  // 95% of requests under 2s
        http_req_failed: ['rate<0.01'],     // Less than 1% failure rate
    },
};

const BASE_URL = 'https://your-site.com';

export default function() {
    // Mix of page types to simulate real traffic
    let pages = [
        '/',
        '/shop/',
        '/product/sample-product/',
        '/blog/',
        '/about/',
        '/contact/',
    ];

    let page = pages[Math.floor(Math.random() * pages.length)];
    let response = http.get(`${BASE_URL}${page}`);

    check(response, {
        'status is 200': (r) => r.status === 200,
        'response time OK': (r) => r.timings.duration < 2000,
    });

    sleep(Math.random() * 3 + 1); // Random think time: 1-4 seconds
}

What to Measure

Run identical tests against both Flexible and Autonomous deployments of the same site, then compare:

  • TTFB (Time to First Byte): How fast the server begins responding. Test from multiple geographic locations.
  • P95 response time: The response time at the 95th percentile. This captures the experience of your slowest 5% of visitors, which is often where the real problems hide.
  • Error rate under load: What percentage of requests fail (503, 504, timeout) at 50, 100, 200, and 500 concurrent users?
  • Recovery time: After a traffic spike, how quickly does response time return to baseline?
  • Database query time: Use Query Monitor to measure average query execution time under load.
  • Cache hit rate: Check Varnish stats (Flexible) or Cloudflare analytics (Autonomous) to see what percentage of requests are served from cache.
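To make the P95 metric concrete: it is a percentile over the raw response-time samples, and most load-testing tools report it using a nearest-rank calculation like the one sketched here. The sample latencies are invented for illustration:

```python
# Nearest-rank percentile: the smallest value covering at least p% of samples.

def percentile(samples, p):
    ordered = sorted(samples)
    # ceil(len * p / 100) without importing math
    rank = max(1, -(-len(ordered) * p // 100))
    return ordered[int(rank) - 1]

# 100 simulated response times in ms: 95 fast requests, 5 slow outliers
latencies = [120] * 95 + [2400] * 5

print(percentile(latencies, 50))  # median: 120
print(percentile(latencies, 95))  # p95: 120 — the outliers start just past it
print(percentile(latencies, 99))  # p99: 2400 — this is where the pain shows
```

This is also why comparing only averages is misleading: the mean of this sample (234ms) looks fine while 5% of visitors wait 2.4 seconds, which is exactly what the P95/P99 columns in your load-test output are designed to expose.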

Common Pitfalls in Testing

Testing from a single location. If your test machine is in the same region as your server, you are not measuring real-world latency for global visitors. Use a distributed testing tool or run tests from multiple cloud regions.

Testing with cache primed vs. cold. Run tests both with a warm cache (after browsing the site to populate caches) and with a cold cache (after purging all caches). The difference reveals how much work your server does on cache misses.

Not testing WooCommerce-specific flows. If you run WooCommerce, your load test must include add-to-cart and checkout flows, not just page views. These are the PHP-heavy, uncacheable operations that stress the server most.

Ignoring the "real" user experience. Server response time is only part of the story. Use Google Lighthouse, WebPageTest, or Core Web Vitals data to measure what users actually experience, including client-side rendering and layout shifts.

Final Technical Recommendations

After running WordPress sites on both Cloudways Flexible and Autonomous across various workloads, here are the concrete technical recommendations for different scenarios.

For a single WooCommerce store doing $10K-100K/month in revenue: Start with Autonomous. The auto-scaling and included Cloudflare Enterprise CDN justify the cost. Revenue-critical sites should not depend on manual scaling decisions. Set up Object Cache Pro with Relay, enable read replicas if your product catalog exceeds 5,000 SKUs, and monitor your resource consumption dashboard for cost optimization opportunities.

For an agency managing 20+ client sites: Use both. Put high-value, high-traffic client sites on Autonomous. Consolidate low-traffic brochure sites on Flexible servers grouped by PHP version. Use the Cloudways API to automate provisioning and monitoring across both plan types.

For a high-traffic content/media site: Autonomous's Cloudflare Enterprise edge caching is the clear winner for global audiences. The edge cache hit rate on a well-configured content site can exceed 98%, meaning less than 2% of requests ever reach your origin. This effectively makes your origin server size almost irrelevant for cached content.

For a developer building a custom WordPress application: Start with Flexible for development and staging (cheaper, more control for debugging). Deploy production to Autonomous if the application has variable load patterns, or Flexible if the load is predictable and you want granular control over every stack layer.

For a membership or LMS site: These sites have high login rates, which means most page views cannot be cached (logged-in users bypass Varnish and edge caching). PHP worker count becomes critical. Autonomous's dynamic PHP worker scaling handles this better than Flexible's fixed pools, especially if membership surges during course launches or enrollment periods.

The bottom line is this: Cloudways Flexible gives you a powerful, well-tuned traditional hosting stack with full control. Cloudways Autonomous gives you a modern, auto-scaling platform with less control but more resilience. Neither is universally better. The right choice depends on your traffic patterns, technical resources, and tolerance for infrastructure management.

Whichever you choose, invest time in understanding your site's specific performance profile. Install Query Monitor, enable slow query logging, monitor your cache hit rates, and test under realistic load conditions. The best hosting plan in the world cannot fix unoptimized database queries, bloated plugins, or misconfigured caching layers. Infrastructure choice is important, but application-level optimization is where the biggest performance gains live.

Sarah Kim

Systems administrator and WordPress hosting specialist. Has managed infrastructure at two managed WordPress hosting companies. Writes about server stacks, caching, and monitoring.