DevOps & Deployment

CI/CD for WordPress with GitLab CI and Bitbucket Pipelines: Beyond GitHub Actions

Tom Bradley
45 min read

Most WordPress CI/CD tutorials assume you are using GitHub Actions. That makes sense given GitHub’s market share, but it ignores a significant portion of development teams that run their repositories on GitLab or Bitbucket. These teams are not edge cases. GitLab dominates in enterprise environments where self-hosting is a requirement. Bitbucket is deeply embedded in Atlassian-centric shops running Jira and Confluence. If you are on one of these platforms, the GitHub Actions documentation floating around the WordPress community is only marginally useful to you.

This article covers CI/CD pipeline construction for WordPress projects using GitLab CI and Bitbucket Pipelines. We will build real pipelines that lint PHP, run PHPUnit test suites against a WordPress test database, compile front-end assets, and deploy to managed hosting providers like WP Engine, Pantheon, and Kinsta. Every YAML configuration is production-tested. Every deployment script handles the edge cases that tutorials typically skip.

We are not going to rehash the basics of what CI/CD means. You already know. Let us get into the engineering.

GitLab CI Fundamentals for WordPress Projects

GitLab CI uses a single file called .gitlab-ci.yml at the root of your repository. Unlike GitHub Actions, which scatters workflow files across a .github/workflows/ directory, GitLab keeps everything in one place. This has tradeoffs. A large pipeline can produce a sprawling YAML file, but it also means you can see your entire CI/CD configuration at a glance.

GitLab CI organizes work into stages. Jobs within the same stage run in parallel. Stages execute sequentially. A typical WordPress pipeline has four or five stages:

  • lint for code quality checks (PHPCS, PHPStan, ESLint)
  • test for PHPUnit and integration tests
  • build for asset compilation (Webpack, Vite, Tailwind)
  • deploy_staging for pushing to staging environments
  • deploy_production for pushing to production with a manual gate

Here is a foundational .gitlab-ci.yml that establishes this structure:

image: php:8.2-cli

stages:
  - lint
  - test
  - build
  - deploy_staging
  - deploy_production

variables:
  MYSQL_DATABASE: wordpress_test
  MYSQL_ROOT_PASSWORD: root_password
  WP_VERSION: latest
  PHP_VERSION: "8.2"

cache:
  key:
    files:
      - composer.lock
      - package-lock.json
  paths:
    - vendor/
    - node_modules/

before_script:
  - apt-get update -yqq
  - apt-get install -yqq git unzip libzip-dev libpng-dev libjpeg-dev subversion default-mysql-client
  - docker-php-ext-install zip gd mysqli pdo_mysql
  - curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer

The before_script block runs before every job unless a job overrides it. For WordPress, you almost always need the mysqli extension, gd for image processing, and zip for Composer operations.

The cache configuration uses composer.lock and package-lock.json as cache keys. When either lockfile changes, the cache invalidates and dependencies are reinstalled from scratch. This prevents the stale dependency bugs that plague loosely-configured caches.
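By default, every job both downloads and re-uploads the cache at the end of its run. Jobs that only consume dependencies can be marked pull-only, which trims the upload time from each run. A sketch of the lint job with this policy applied:

```yaml
phpcs:
  stage: lint
  cache:
    key:
      files:
        - composer.lock
    paths:
      - vendor/
    policy: pull   # download the cache, never re-upload it
```

Reserve the default pull-push policy for a job that actually installs dependencies, so the cache still gets refreshed when a lockfile changes.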

The Lint Stage: PHPCS and PHPStan as Quality Gates

Code quality tools should be the first thing that runs in your pipeline. They are fast, they catch obvious problems, and they fail early, saving you from waiting ten minutes for a PHPUnit run only to discover a coding standards violation.

phpcs:
  stage: lint
  script:
    - composer install --no-interaction --prefer-dist --no-progress
    - ./vendor/bin/phpcs --standard=WordPress --extensions=php --ignore=vendor/,node_modules/,tests/ .
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
    - if: $CI_COMMIT_BRANCH == "develop"
    - if: $CI_COMMIT_BRANCH == "main"
  allow_failure: false

phpstan:
  stage: lint
  script:
    - composer install --no-interaction --prefer-dist --no-progress
    - ./vendor/bin/phpstan analyse --level=6 --configuration=phpstan.neon
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
    - if: $CI_COMMIT_BRANCH == "develop"
    - if: $CI_COMMIT_BRANCH == "main"
  allow_failure: false

eslint:
  stage: lint
  image: node:18-alpine
  before_script:
    - npm ci --prefer-offline
  script:
    - npx eslint "wp-content/themes/your-theme/src/**/*.js"
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
      changes:
        - "**/*.js"
  allow_failure: true

Notice that the eslint job uses a different Docker image (node:18-alpine) and overrides the global before_script. GitLab CI lets you mix images freely across jobs, so each job runs in a container tailored to its toolchain. Bitbucket Pipelines supports per-step images too, but its anchor-based step reuse makes per-variant image changes more verbose, as we will see in the matrix section.

The rules section controls when each job runs. PHPCS and PHPStan run on merge requests and pushes to develop and main. ESLint only runs on merge requests where JavaScript files have changed. This keeps pipeline times reasonable for back-end-only changes.
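The identical rules blocks in phpcs and phpstan can be collapsed into a hidden job template and pulled in with extends. A sketch using the same conditions as above:

```yaml
.default-rules:
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
    - if: $CI_COMMIT_BRANCH == "develop"
    - if: $CI_COMMIT_BRANCH == "main"

phpcs:
  extends: .default-rules
  stage: lint
  script:
    - composer install --no-interaction --prefer-dist --no-progress
    - ./vendor/bin/phpcs --standard=WordPress --extensions=php --ignore=vendor/,node_modules/,tests/ .
```

Hidden jobs (names starting with a dot) are never run directly, which makes them safe containers for shared configuration.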

For the PHPStan configuration, you will need a phpstan.neon file that understands WordPress’s function signatures:

includes:
  - vendor/szepeviktor/phpstan-wordpress/extension.neon

parameters:
  level: 6
  paths:
    - wp-content/plugins/your-plugin/
    - wp-content/themes/your-theme/
  excludePaths:
    - vendor/
    - node_modules/
    - wp-content/themes/your-theme/dist/
  bootstrapFiles:
    - vendor/php-stubs/wordpress-stubs/wordpress-stubs.php
  ignoreErrors:
    - '#Function apply_filters invoked with \d+ parameters, 2 required#'

The szepeviktor/phpstan-wordpress package provides type stubs for WordPress core functions. Without it, PHPStan will flag nearly every WordPress function call as an error because it cannot find the type definitions.
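Assuming Composer-managed tooling, the relevant dev dependencies look something like this in composer.json (the version constraints are illustrative, not prescriptive):

```json
{
  "require-dev": {
    "phpstan/phpstan": "^1.10",
    "szepeviktor/phpstan-wordpress": "^1.3",
    "php-stubs/wordpress-stubs": "^6.4",
    "squizlabs/php_codesniffer": "^3.8",
    "wp-coding-standards/wpcs": "^3.0"
  }
}
```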

Running WordPress PHPUnit Tests in GitLab CI

The WordPress test suite requires a running MySQL or MariaDB instance and the WordPress test library (wp-tests-lib). Setting this up in CI is where most teams hit their first wall.

GitLab CI uses services to spin up auxiliary containers alongside your job container. For WordPress testing, you need a MySQL service:

phpunit:
  stage: test
  services:
    - name: mysql:8.0
      alias: mysql
      variables:
        MYSQL_DATABASE: wordpress_test
        MYSQL_ROOT_PASSWORD: root_password
  variables:
    DB_HOST: mysql
    DB_NAME: wordpress_test
    DB_USER: root
    DB_PASS: root_password
  script:
    - composer install --no-interaction --prefer-dist --no-progress
    - |
      bash bin/install-wp-tests.sh \
        $DB_NAME $DB_USER $DB_PASS $DB_HOST $WP_VERSION true
    - ./vendor/bin/phpunit --configuration phpunit.xml --coverage-text
  artifacts:
    when: always
    reports:
      junit: build/logs/junit.xml
    paths:
      - build/logs/
    expire_in: 30 days
  coverage: '/^\s*Lines:\s*(\d+\.\d+)%/'
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
    - if: $CI_COMMIT_BRANCH == "develop"
    - if: $CI_COMMIT_BRANCH == "main"
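One prerequisite the job above implies but does not show: PHPUnit only writes JUnit XML if its configuration asks for it. A minimal phpunit.xml matching the build/logs/junit.xml artifact path (the bootstrap filename and test directory are assumptions about your project layout):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<phpunit bootstrap="tests/bootstrap.php" colors="true">
  <testsuites>
    <testsuite name="plugin">
      <directory>tests/</directory>
    </testsuite>
  </testsuites>
  <!-- Write JUnit XML where the GitLab artifacts block expects it -->
  <logging>
    <junit outputFile="build/logs/junit.xml"/>
  </logging>
</phpunit>
```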

The install-wp-tests.sh script is the standard WordPress test bootstrapper. If you generated your plugin scaffold with WP-CLI’s wp scaffold plugin command, you already have it. If not, here is a version that works reliably in CI environments:

#!/usr/bin/env bash

DB_NAME=$1
DB_USER=$2
DB_PASS=$3
DB_HOST=$4
WP_VERSION=${5:-latest}
SKIP_DB_CREATE=${6:-false}

WP_TESTS_DIR=${WP_TESTS_DIR:-/tmp/wordpress-tests-lib}
WP_CORE_DIR=${WP_CORE_DIR:-/tmp/wordpress}

download() {
  if [ "$(which curl)" ]; then
    curl -s "$1" > "$2"
  elif [ "$(which wget)" ]; then
    wget -nv -O "$2" "$1"
  fi
}

if [[ $WP_VERSION =~ ^[0-9]+\.[0-9]+\-(beta|RC)[0-9]+$ ]]; then
  WP_BRANCH=${WP_VERSION%\-*}
  WP_TESTS_TAG="branches/$WP_BRANCH"
elif [[ $WP_VERSION =~ ^[0-9]+\.[0-9]+$ ]]; then
  WP_TESTS_TAG="branches/$WP_VERSION"
elif [[ "$WP_VERSION" == "latest" ]]; then
  TAG=$(download https://api.wordpress.org/core/version-check/1.7/ /dev/stdout | grep -o '"version":"[^"]*"' | head -1 | sed 's/"version":"//;s/"//')
  if [[ -z "$TAG" ]]; then
    echo "Could not determine latest WordPress version"
    exit 1
  fi
  WP_TESTS_TAG="tags/$TAG"
else
  WP_TESTS_TAG="tags/$WP_VERSION"
fi

# Set up WordPress core
mkdir -p "$WP_CORE_DIR"
if [[ "$WP_VERSION" == "latest" ]]; then
  download https://wordpress.org/latest.tar.gz /tmp/wordpress.tar.gz
else
  download "https://wordpress.org/wordpress-${WP_VERSION}.tar.gz" /tmp/wordpress.tar.gz
fi
tar --strip-components=1 -zxmf /tmp/wordpress.tar.gz -C "$WP_CORE_DIR"

# Set up test library
mkdir -p "$WP_TESTS_DIR"
svn co --quiet "https://develop.svn.wordpress.org/${WP_TESTS_TAG}/tests/phpunit/includes/" "$WP_TESTS_DIR/includes"
svn co --quiet "https://develop.svn.wordpress.org/${WP_TESTS_TAG}/tests/phpunit/data/" "$WP_TESTS_DIR/data"

# Create wp-tests-config.php
cat > "$WP_TESTS_DIR/wp-tests-config.php" << EOF
<?php
define( 'ABSPATH', '$WP_CORE_DIR/' );
define( 'DB_NAME', '$DB_NAME' );
define( 'DB_USER', '$DB_USER' );
define( 'DB_PASSWORD', '$DB_PASS' );
define( 'DB_HOST', '$DB_HOST' );
define( 'DB_CHARSET', 'utf8' );
define( 'DB_COLLATE', '' );
define( 'WP_TESTS_DOMAIN', 'example.org' );
define( 'WP_TESTS_EMAIL', 'admin@example.org' );
define( 'WP_TESTS_TITLE', 'Test Blog' );
define( 'WP_PHP_BINARY', 'php' );
\$table_prefix = 'wptests_';
EOF

# Create the test database unless the caller asked us to skip it
if [ "$SKIP_DB_CREATE" != "true" ]; then
  mysqladmin create "$DB_NAME" --user="$DB_USER" --password="$DB_PASS" --host="$DB_HOST" 2>/dev/null || true
fi

A few things to note about the GitLab CI job configuration. The DB_HOST variable is set to mysql, which matches the alias of the MySQL service. GitLab creates a Docker network where service containers are reachable by their alias as a hostname.

The artifacts section captures test results in JUnit format, which GitLab can parse and display directly in merge request views. The coverage regex extracts the line coverage percentage from PHPUnit’s text output, and GitLab displays it as a badge on the pipeline.
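A gotcha with the coverage regex: the stock php:8.2-cli image ships no coverage driver, so --coverage-text emits a warning instead of the Lines: percentage the regex matches. PCOV is the fastest driver for CI, and installing it is one extra line in before_script (the Debian-based official PHP images include the build tools pecl needs):

```yaml
before_script:
  - apt-get update -yqq && apt-get install -yqq git unzip libzip-dev libpng-dev subversion default-mysql-client
  - docker-php-ext-install zip gd mysqli
  # PCOV provides line coverage far faster than Xdebug in CI
  - pecl install pcov && docker-php-ext-enable pcov
  - curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
```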

Test Matrix: Multiple PHP and WordPress Versions

Testing against a single PHP version is not enough for plugins or themes distributed publicly. GitLab CI supports parallel job generation using the parallel:matrix keyword:

phpunit-matrix:
  stage: test
  image: php:${PHP_VERSION}-cli
  services:
    - name: mysql:8.0
      alias: mysql
      variables:
        MYSQL_DATABASE: wordpress_test
        MYSQL_ROOT_PASSWORD: root_password
  parallel:
    matrix:
      - PHP_VERSION: ["8.0", "8.1", "8.2", "8.3"]
        WP_VERSION: ["6.4", "6.5", "latest"]
  variables:
    DB_HOST: mysql
    DB_NAME: wordpress_test
    DB_USER: root
    DB_PASS: root_password
  before_script:
    - apt-get update -yqq
    - apt-get install -yqq git unzip libzip-dev libpng-dev libjpeg-dev subversion default-mysql-client
    - docker-php-ext-install zip gd mysqli
    - curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
  script:
    - composer install --no-interaction --prefer-dist --no-progress
    - bash bin/install-wp-tests.sh $DB_NAME $DB_USER $DB_PASS $DB_HOST $WP_VERSION true
    - ./vendor/bin/phpunit --configuration phpunit.xml
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
  allow_failure:
    exit_codes:
      - 137

This generates 12 parallel jobs (4 PHP versions times 3 WordPress versions). The allow_failure with exit code 137 handles OOM kills gracefully on shared GitLab runners, which can happen with memory-intensive test suites.

You will want to exclude certain combinations that are not compatible. PHP 8.3 with WordPress 6.2, for example, throws deprecation warnings that can break strict test configurations. GitLab's parallel:matrix has no exclude keyword (unlike GitHub Actions' strategy.matrix), so skip incompatible pairs with a rules condition on the matrix variables, or handle it in your test bootstrap by adjusting error reporting levels.
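GitLab has no native exclude for parallel:matrix, but on recent versions the matrix variables are visible when rules are evaluated, so a when: never rule can stand in for one. A sketch (the excluded version pair is illustrative):

```yaml
phpunit-matrix:
  # ... image, services, and parallel:matrix as above ...
  rules:
    # Skip a known-incompatible combination before the usual MR rule applies
    - if: '$PHP_VERSION == "8.3" && $WP_VERSION == "6.4"'
      when: never
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
```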

Bitbucket Pipelines for WordPress

Bitbucket Pipelines uses bitbucket-pipelines.yml and takes a fundamentally different approach from GitLab CI. Where GitLab gives you stages and arbitrary job names, Bitbucket organizes pipelines around steps within named pipelines. Each step runs in its own Docker container, and steps within a pipeline execute sequentially by default.

Here is a foundational bitbucket-pipelines.yml for a WordPress project:

image: php:8.2-cli

definitions:
  caches:
    composer:
      key:
        files:
          - composer.lock
      path: vendor
    npm:
      key:
        files:
          - package-lock.json
      path: node_modules

  services:
    mysql:
      image: mysql:8.0
      variables:
        MYSQL_DATABASE: wordpress_test
        MYSQL_ROOT_PASSWORD: root_password
      memory: 1024

  steps:
    - step: &lint-php
        name: PHP Linting
        caches:
          - composer
        script:
          - apt-get update && apt-get install -y git unzip libzip-dev
          - docker-php-ext-install zip
          - curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
          - composer install --no-interaction --prefer-dist
          - ./vendor/bin/phpcs --standard=WordPress --extensions=php --ignore=vendor/,node_modules/,tests/ .
          - ./vendor/bin/phpstan analyse --level=6 --configuration=phpstan.neon

    - step: &test-php
        name: PHPUnit Tests
        caches:
          - composer
        services:
          - mysql
        script:
          - apt-get update && apt-get install -y git unzip libzip-dev libpng-dev subversion default-mysql-client
          - docker-php-ext-install zip gd mysqli
          - curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
          - composer install --no-interaction --prefer-dist
          - bash bin/install-wp-tests.sh wordpress_test root root_password 127.0.0.1 latest true
          - ./vendor/bin/phpunit --configuration phpunit.xml

    - step: &build-assets
        name: Build Assets
        image: node:18
        caches:
          - npm
        script:
          - cd wp-content/themes/your-theme
          - npm ci
          - npm run build
        artifacts:
          - wp-content/themes/your-theme/dist/**

pipelines:
  pull-requests:
    '**':
      - step: *lint-php
      - step: *test-php
      - step: *build-assets

  branches:
    develop:
      - step: *lint-php
      - step: *test-php
      - step: *build-assets
      - step:
          name: Deploy to Staging
          deployment: staging
          script:
            - pipe: atlassian/rsync-deploy:0.12.0
              variables:
                USER: deploy
                SERVER: staging.example.com
                REMOTE_PATH: /var/www/wordpress/
                LOCAL_PATH: ./
                EXTRA_ARGS: >-
                  --exclude='.git'
                  --exclude='node_modules'
                  --exclude='.bitbucket'
                  --exclude='tests'

    main:
      - step: *lint-php
      - step: *test-php
      - step: *build-assets
      - step:
          name: Deploy to Production
          deployment: production
          trigger: manual
          script:
            - pipe: atlassian/rsync-deploy:0.12.0
              variables:
                USER: deploy
                SERVER: production.example.com
                REMOTE_PATH: /var/www/wordpress/
                LOCAL_PATH: ./
                EXTRA_ARGS: >-
                  --exclude='.git'
                  --exclude='node_modules'
                  --exclude='.bitbucket'
                  --exclude='tests'

One major difference from GitLab: Bitbucket Pipelines uses YAML anchors (&lint-php and *lint-php) for step reuse instead of GitLab’s extends keyword or include directives. The anchor approach is standard YAML and works fine, but overriding individual fields means reaching for YAML merge keys (<<: *anchor), which is clumsier than GitLab’s inheritance model, where extends merges and overrides job configuration cleanly.

Another critical difference: the MySQL service in Bitbucket is accessible at 127.0.0.1, not at a hostname alias. Bitbucket runs services in the same network namespace as the step container, so all services appear as localhost. This trips up developers migrating from GitLab or GitHub Actions where services have distinct hostnames.
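A related pitfall on both platforms: the service container may still be initializing when your script starts, and install-wp-tests.sh will fail on the first database call. Rather than a blind sleep, poll until MySQL answers. A sketch of the script section, shown for Bitbucket where the service lives on 127.0.0.1:

```yaml
script:
  - apt-get update && apt-get install -y default-mysql-client
  # Poll until the MySQL service container accepts connections
  - until mysqladmin ping -h 127.0.0.1 -u root -proot_password --silent; do sleep 1; done
  - bash bin/install-wp-tests.sh wordpress_test root root_password 127.0.0.1 latest true
```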

Bitbucket Parallel Steps

Bitbucket supports parallel step execution, which can cut your pipeline time significantly when lint, test, and build steps do not depend on each other:

pipelines:
  pull-requests:
    '**':
      - parallel:
          - step: *lint-php
          - step: *test-php
          - step: *build-assets

All three steps run simultaneously in separate containers. If any step fails, the overall pipeline fails. This is equivalent to GitLab’s same-stage parallelism but requires the explicit parallel keyword.

Bitbucket Test Matrix Workaround

Bitbucket Pipelines does not have a native matrix feature like GitLab’s parallel:matrix or GitHub Actions’ strategy.matrix. To test across multiple PHP versions, you must define separate steps:

definitions:
  steps:
    - step: &test-php80
        name: PHPUnit (PHP 8.0)
        image: php:8.0-cli
        services:
          - mysql
        script:
          - apt-get update && apt-get install -y git unzip libzip-dev libpng-dev subversion default-mysql-client
          - docker-php-ext-install zip gd mysqli
          - curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
          - composer install --no-interaction --prefer-dist
          - bash bin/install-wp-tests.sh wordpress_test root root_password 127.0.0.1 latest true
          - ./vendor/bin/phpunit

    - step: &test-php82
        name: PHPUnit (PHP 8.2)
        image: php:8.2-cli
        services:
          - mysql
        script:
          - apt-get update && apt-get install -y git unzip libzip-dev libpng-dev subversion default-mysql-client
          - docker-php-ext-install zip gd mysqli
          - curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
          - composer install --no-interaction --prefer-dist
          - bash bin/install-wp-tests.sh wordpress_test root root_password 127.0.0.1 latest true
          - ./vendor/bin/phpunit

    - step: &test-php83
        name: PHPUnit (PHP 8.3)
        image: php:8.3-cli
        services:
          - mysql
        script:
          - apt-get update && apt-get install -y git unzip libzip-dev libpng-dev subversion default-mysql-client
          - docker-php-ext-install zip gd mysqli
          - curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
          - composer install --no-interaction --prefer-dist
          - bash bin/install-wp-tests.sh wordpress_test root root_password 127.0.0.1 latest true
          - ./vendor/bin/phpunit

pipelines:
  pull-requests:
    '**':
      - parallel:
          - step: *test-php80
          - step: *test-php82
          - step: *test-php83

This is verbose, and there is no way around it. The duplication is the price you pay for Bitbucket’s simpler pipeline model. You can reduce it somewhat by moving the setup commands into a shared shell script that each step calls, but the step definitions themselves must be separate.
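The shared-script approach looks like this: commit the apt/Composer boilerplate to a script in the repo (bin/ci-setup.sh, name assumed), and each step definition shrinks to three script lines. A sketch adding a PHP 8.1 variant:

```yaml
definitions:
  steps:
    - step: &test-php81
        name: PHPUnit (PHP 8.1)
        image: php:8.1-cli
        services:
          - mysql
        script:
          # bin/ci-setup.sh holds the apt-get / docker-php-ext-install / composer boilerplate
          - bash bin/ci-setup.sh
          - bash bin/install-wp-tests.sh wordpress_test root root_password 127.0.0.1 latest true
          - ./vendor/bin/phpunit
```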

Docker-in-Docker for WordPress Integration Tests

Unit tests with mocked WordPress functions only get you so far. For integration tests that exercise real WordPress behavior, including database queries, hook execution, and HTTP requests, you need a running WordPress instance inside your CI pipeline.

Docker-in-Docker (DinD) lets you spin up a full WordPress stack within a CI job. GitLab CI has first-class DinD support:

integration-tests:
  stage: test
  image: docker:24
  services:
    - docker:24-dind
  variables:
    DOCKER_TLS_CERTDIR: "/certs"
    DOCKER_HOST: tcp://docker:2376
    DOCKER_TLS_VERIFY: 1
    DOCKER_CERT_PATH: "$DOCKER_TLS_CERTDIR/client"
  before_script:
    - apk add --no-cache docker-compose bash curl
  script:
    - |
      cat > docker-compose.ci.yml << 'COMPOSE'
      version: '3.8'
      services:
        wordpress:
          image: wordpress:php8.2-apache
          ports:
            - "8080:80"
          environment:
            WORDPRESS_DB_HOST: db
            WORDPRESS_DB_USER: wordpress
            WORDPRESS_DB_PASSWORD: wordpress
            WORDPRESS_DB_NAME: wordpress
          volumes:
            - ./wp-content/plugins/your-plugin:/var/www/html/wp-content/plugins/your-plugin
            - ./wp-content/themes/your-theme:/var/www/html/wp-content/themes/your-theme
          depends_on:
            db:
              condition: service_healthy
        db:
          image: mariadb:10.6
          environment:
            MYSQL_DATABASE: wordpress
            MYSQL_USER: wordpress
            MYSQL_PASSWORD: wordpress
            MYSQL_ROOT_PASSWORD: root
          healthcheck:
            test: ["CMD", "healthcheck.sh", "--connect", "--innodb_initialized"]
            interval: 5s
            timeout: 5s
            retries: 10
        test-runner:
          image: php:8.2-cli
          depends_on:
            - wordpress
          volumes:
            - ./tests:/tests
            - ./vendor:/vendor
          working_dir: /tests
          entrypoint: ["bash", "-c"]
          command:
            - |
              sleep 10
              php /vendor/bin/phpunit --configuration /tests/integration/phpunit.xml
      COMPOSE
    - docker-compose -f docker-compose.ci.yml up --exit-code-from test-runner --abort-on-container-exit
  after_script:
    - docker-compose -f docker-compose.ci.yml down -v 2>/dev/null || true
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
  timeout: 15m

The --exit-code-from test-runner flag tells Docker Compose to use the test runner container’s exit code as the overall command exit code. If tests pass, the job passes. If they fail, the job fails.

For Bitbucket Pipelines, DinD requires enabling Docker in your step configuration:

- step: &integration-test
    name: Integration Tests
    size: 2x
    services:
      - docker
    script:
      - apt-get update && apt-get install -y docker-compose
      - docker-compose -f docker-compose.ci.yml up -d db wordpress
      - sleep 15
      - docker-compose -f docker-compose.ci.yml run test-runner
      - docker-compose -f docker-compose.ci.yml down -v

The size: 2x is important. Integration tests with Docker Compose eat memory fast, and Bitbucket’s default 4GB allocation for a step is often not enough when you are running WordPress, MariaDB, and a test runner simultaneously. The 2x size doubles the available resources.
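If 2x is still not enough, Bitbucket also lets you enlarge the memory reserved for the Docker daemon itself, which otherwise defaults to 1024 MB of the step's allocation:

```yaml
definitions:
  services:
    docker:
      memory: 3072   # MB carved out of the step's total for the Docker daemon
```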

Deploying to WP Engine from GitLab CI and Bitbucket Pipelines

WP Engine supports Git-based deployments. You push to a WP Engine Git remote, and their infrastructure handles the deployment. The remote URL format is git@git.wpengine.com:production/your-install-name.git for production and git@git.wpengine.com:staging/your-install-name.git for staging.

The trick in CI is SSH key management. You need to add your CI pipeline’s SSH key to WP Engine’s authorized keys list, then configure the pipeline to use that key when pushing.

GitLab CI Deployment to WP Engine

First, generate an SSH key pair and add the public key to WP Engine. Store the private key as a masked GitLab CI/CD variable named WPE_SSH_KEY. (If you use a file-type variable instead, the variable expands to a file path, so pass it to ssh-add directly rather than echoing its contents as the script below does.)

deploy-staging:
  stage: deploy_staging
  image: alpine:latest
  before_script:
    - apk add --no-cache git openssh-client rsync
    - eval $(ssh-agent -s)
    - echo "$WPE_SSH_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - echo -e "Host git.wpengine.com\n\tStrictHostKeyChecking no\n" > ~/.ssh/config
  script:
    - git config user.email "ci@yourdomain.com"
    - git config user.name "GitLab CI"
    - |
      # Remove dev-only files before deployment
      rm -rf node_modules tests .gitlab-ci.yml phpunit.xml phpstan.neon
      rm -rf wp-content/themes/your-theme/src
      rm -rf wp-content/themes/your-theme/tailwind.config.js
      rm -rf wp-content/themes/your-theme/postcss.config.js
      rm -rf wp-content/themes/your-theme/package.json
      rm -rf wp-content/themes/your-theme/package-lock.json
    - git add -A
    - git commit -m "CI build ${CI_COMMIT_SHORT_SHA}" --allow-empty
    - git remote add wpe git@git.wpengine.com:staging/${WPE_INSTALL_NAME}.git
    - git push wpe HEAD:main --force
  environment:
    name: staging
    url: https://staging.yourdomain.com
  rules:
    - if: $CI_COMMIT_BRANCH == "develop"
  dependencies:
    - build-assets

deploy-production:
  stage: deploy_production
  image: alpine:latest
  before_script:
    - apk add --no-cache git openssh-client
    - eval $(ssh-agent -s)
    - echo "$WPE_SSH_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - echo -e "Host git.wpengine.com\n\tStrictHostKeyChecking no\n" > ~/.ssh/config
  script:
    - git config user.email "ci@yourdomain.com"
    - git config user.name "GitLab CI"
    - rm -rf node_modules tests .gitlab-ci.yml phpunit.xml phpstan.neon
    - rm -rf wp-content/themes/your-theme/src
    - git add -A
    - git commit -m "Production deploy ${CI_COMMIT_SHORT_SHA}" --allow-empty
    - git remote add wpe git@git.wpengine.com:production/${WPE_INSTALL_NAME}.git
    - git push wpe HEAD:main --force
  environment:
    name: production
    url: https://yourdomain.com
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
      when: manual
  dependencies:
    - build-assets

The when: manual on the production deployment is a safety gate. Someone must click a button in the GitLab UI to trigger the production push. This prevents accidental production deployments from a rogue merge.

The dependencies keyword ensures the build artifacts from the build-assets job are available in the deploy job. Without this, the compiled CSS and JavaScript would not be included in the deployment.

Bitbucket Pipelines Deployment to WP Engine

In Bitbucket, SSH keys are managed under Repository Settings > SSH Keys. Add your deployment key there, and it becomes available to all pipeline steps automatically. No need to manually configure ssh-agent.

- step:
    name: Deploy to WP Engine Staging
    deployment: staging
    image: alpine/git:latest
    script:
      - git config user.email "ci@yourdomain.com"
      - git config user.name "Bitbucket Pipelines"
      - rm -rf node_modules tests bitbucket-pipelines.yml phpunit.xml
      - git add -A
      - git commit -m "Build ${BITBUCKET_BUILD_NUMBER}" --allow-empty
      - git remote add wpe git@git.wpengine.com:staging/${WPE_INSTALL_NAME}.git
      - git push wpe HEAD:main --force

Bitbucket’s deployment environment tracking (deployment: staging) gives you a deployment history dashboard, which is useful for tracking what was deployed when and by whom.

Deploying to Pantheon

Pantheon also uses Git-based deployments, but with a twist: Pantheon repositories contain the full WordPress core, not just your custom code. The deployment workflow involves pushing to a Pantheon-hosted Git repository, and Terminus (Pantheon’s CLI) handles environment management.

deploy-pantheon:
  stage: deploy_staging
  image: php:8.2-cli
  before_script:
    - apt-get update && apt-get install -y git openssh-client curl
    - mkdir -p ~/.ssh
    - echo "$PANTHEON_SSH_KEY" | tr -d '\r' > ~/.ssh/id_rsa
    - chmod 600 ~/.ssh/id_rsa
    - echo -e "Host *.drush.in\n\tStrictHostKeyChecking no\n" > ~/.ssh/config
    - curl -sL https://github.com/pantheon-systems/terminus/releases/latest/download/terminus.phar -o /usr/local/bin/terminus
    - chmod +x /usr/local/bin/terminus
    - terminus auth:login --machine-token="$PANTHEON_MACHINE_TOKEN"
  script:
    - |
      SITE_NAME="your-site-name"
      PANTHEON_REPO=$(terminus connection:info ${SITE_NAME}.dev --field=git_url)
      
      git remote add pantheon "$PANTHEON_REPO"
      
      # Set Pantheon to Git mode
      terminus connection:set ${SITE_NAME}.dev git
      
      # Push to dev environment
      git push pantheon HEAD:master --force
      
      # Clear caches
      terminus env:clear-cache ${SITE_NAME}.dev
      
      # Run database updates if needed
      terminus wp ${SITE_NAME}.dev -- core update-db
      
      echo "Deployed to dev environment"
      echo "URL: https://dev-${SITE_NAME}.pantheonsite.io"
  environment:
    name: pantheon-dev
  rules:
    - if: $CI_COMMIT_BRANCH == "develop"

Promoting from dev to test (staging) and then to live (production) on Pantheon happens through Terminus, not through Git pushes:

promote-to-staging:
  stage: deploy_staging
  image: php:8.2-cli
  before_script:
    - curl -sL https://github.com/pantheon-systems/terminus/releases/latest/download/terminus.phar -o /usr/local/bin/terminus
    - chmod +x /usr/local/bin/terminus
    - terminus auth:login --machine-token="$PANTHEON_MACHINE_TOKEN"
  script:
    - terminus env:deploy your-site-name.test --updatedb --note="Pipeline deploy ${CI_COMMIT_SHORT_SHA}"
    - terminus env:clear-cache your-site-name.test
  rules:
    - if: $CI_COMMIT_BRANCH == "develop"
  needs:
    - deploy-pantheon

promote-to-production:
  stage: deploy_production
  image: php:8.2-cli
  before_script:
    - curl -sL https://github.com/pantheon-systems/terminus/releases/latest/download/terminus.phar -o /usr/local/bin/terminus
    - chmod +x /usr/local/bin/terminus
    - terminus auth:login --machine-token="$PANTHEON_MACHINE_TOKEN"
  script:
    - terminus env:deploy your-site-name.live --updatedb --note="Production deploy ${CI_COMMIT_SHORT_SHA}"
    - terminus env:clear-cache your-site-name.live
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
      when: manual

Deploying to Kinsta via SSH

Kinsta does not support Git-based deployments natively. You deploy via SSH/SFTP, which means using rsync or a similar tool. This is less elegant than WP Engine or Pantheon’s Git push model, but it gives you more control over exactly what gets transferred.

deploy-kinsta:
  stage: deploy_production
  image: alpine:latest
  before_script:
    - apk add --no-cache openssh-client rsync
    - eval $(ssh-agent -s)
    - echo "$KINSTA_SSH_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - echo -e "Host *.kinsta.cloud\n\tStrictHostKeyChecking no\n" > ~/.ssh/config
  script:
    - |
      KINSTA_HOST="${KINSTA_SSH_USER}@${KINSTA_SSH_HOST}"
      KINSTA_PORT="${KINSTA_SSH_PORT}"
      REMOTE_PATH="/www/your-site/public"
      
      rsync -avz --delete \
        --exclude='.git' \
        --exclude='node_modules' \
        --exclude='tests' \
        --exclude='.gitlab-ci.yml' \
        --exclude='phpunit.xml' \
        --exclude='phpstan.neon' \
        --exclude='composer.json' \
        --exclude='composer.lock' \
        --exclude='package.json' \
        --exclude='package-lock.json' \
        --exclude='wp-config.php' \
        --exclude='wp-content/uploads' \
        --exclude='wp-content/cache' \
        -e "ssh -p ${KINSTA_PORT}" \
        ./ "${KINSTA_HOST}:${REMOTE_PATH}/"
      
      # Clear object cache after deployment
      ssh -p ${KINSTA_PORT} ${KINSTA_HOST} "cd ${REMOTE_PATH} && wp cache flush"
      
      # Run database migrations if applicable
      ssh -p ${KINSTA_PORT} ${KINSTA_HOST} "cd ${REMOTE_PATH} && wp core update-db"
  environment:
    name: production
    url: https://yourdomain.com
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
      when: manual
  dependencies:
    - build-assets

Critical detail: the --exclude='wp-config.php' and --exclude='wp-content/uploads' lines. You never want to overwrite the production wp-config.php with your local or CI version. And you certainly do not want to wipe the uploads directory. The --delete flag removes files on the remote that do not exist locally, which is what you want for clean deployments, but without these excludes it would destroy your production configuration and media files.
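Because --delete is destructive, validate the exclude list before the first real deployment. Adding -n (dry run) to the same rsync invocation prints what would be transferred and deleted without touching the remote; run it once from a manual job or a local shell:

```yaml
script:
  # Identical flags plus -n: prints planned transfers and deletions, changes nothing
  - rsync -avzn --delete --exclude='wp-config.php' --exclude='wp-content/uploads' -e "ssh -p ${KINSTA_PORT}" ./ "${KINSTA_HOST}:${REMOTE_PATH}/"
```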

Environment-Specific Builds

Staging and production environments often need different build configurations. Staging might include source maps, debug logging, and development API keys. Production needs minified assets, production API endpoints, and no debug output.

The cleanest approach is environment variables that your build tools read at compile time.

GitLab CI Environment-Specific Variables

GitLab lets you scope CI/CD variables to specific environments:

build-assets:
  stage: build
  image: node:18-alpine
  before_script:
    - cd wp-content/themes/your-theme
    - npm ci --prefer-offline
  script:
    - |
      if [ "$CI_ENVIRONMENT_NAME" = "production" ]; then
        export NODE_ENV=production
        export API_BASE_URL="https://api.yourdomain.com"
        export ENABLE_SOURCE_MAPS=false
      else
        export NODE_ENV=development
        export API_BASE_URL="https://api-staging.yourdomain.com"
        export ENABLE_SOURCE_MAPS=true
      fi
    - npm run build
  artifacts:
    paths:
      - wp-content/themes/your-theme/dist/
    expire_in: 1 hour
  rules:
    - if: $CI_COMMIT_BRANCH == "develop"
      variables:
        CI_ENVIRONMENT_NAME: staging
    - if: $CI_COMMIT_BRANCH == "main"
      variables:
        CI_ENVIRONMENT_NAME: production

Your webpack.config.js or vite.config.js can then read these environment variables:

// vite.config.js
import { defineConfig } from 'vite';

export default defineConfig({
  build: {
    sourcemap: process.env.ENABLE_SOURCE_MAPS === 'true',
    minify: process.env.NODE_ENV === 'production' ? 'terser' : false,
    rollupOptions: {
      output: {
        entryFileNames: process.env.NODE_ENV === 'production'
          ? 'js/[name].[hash].js'
          : 'js/[name].js',
      },
    },
  },
  define: {
    __API_BASE__: JSON.stringify(process.env.API_BASE_URL || 'http://localhost:8080'),
  },
});

WordPress-Specific Environment Configuration

For WordPress plugins that need different behavior in staging versus production, a common pattern is to set a constant during deployment:

deploy-staging:
  stage: deploy_staging
  script:
    - |
      # Inject environment marker into wp-config.php on the remote
      ssh ${DEPLOY_USER}@${DEPLOY_HOST} "
        cd /var/www/wordpress
        if ! grep -q 'WP_ENVIRONMENT_TYPE' wp-config.php; then
          sed -i \"/\\/\\* That's all, stop editing/i define( 'WP_ENVIRONMENT_TYPE', 'staging' );\" wp-config.php
        fi
      "
    - # ... rest of deployment

WordPress 5.5 introduced the WP_ENVIRONMENT_TYPE constant and the wp_get_environment_type() function. Your plugin or theme code can branch based on this:

<?php
if ( wp_get_environment_type() === 'staging' ) {
    // Enable verbose logging
    define( 'WPKITE_DEBUG_LOG', true );
    
    // Use sandbox API credentials
    define( 'STRIPE_API_KEY', get_option( 'stripe_test_key' ) );
} else {
    define( 'WPKITE_DEBUG_LOG', false );
    define( 'STRIPE_API_KEY', get_option( 'stripe_live_key' ) );
}

Secrets Management Across Platforms

Handling secrets in CI/CD pipelines is a real concern. SSH keys, API tokens, database credentials, and Stripe keys all need to be available to your pipeline without being committed to your repository.

GitLab CI/CD Variables

GitLab stores secrets as CI/CD variables at the project, group, or instance level. Key features:

  • Masked variables are redacted from job logs. Any output line containing the variable’s value is replaced with [MASKED].
  • Protected variables are only available to jobs running on protected branches (like main or develop).
  • File-type variables are written to a temporary file, and the variable contains the file path. This is useful for SSH keys and certificates.
  • Environment-scoped variables are only available to jobs targeting a specific environment.

Best practices for GitLab secrets:

# In your .gitlab-ci.yml, reference variables set in the UI
deploy-production:
  stage: deploy_production
  script:
    # $DEPLOY_SSH_KEY is a file-type variable containing the SSH private key
    - eval $(ssh-agent -s)
    - ssh-add "$DEPLOY_SSH_KEY"
    
    # $STRIPE_LIVE_KEY is a masked, protected variable
    - |
      ssh ${DEPLOY_HOST} "
        wp option update stripe_live_key '${STRIPE_LIVE_KEY}'
      "
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
      when: manual

Never put secrets directly in your .gitlab-ci.yml file. That file is committed to your repository. Even if the repository is private, anyone with read access can see those values.

Bitbucket Repository Variables

Bitbucket’s variable system is simpler. You set variables under Repository Settings > Repository Variables or Deployments > Environment Variables. Variables can be marked as “secured,” which masks them in logs and prevents them from being read via the API after creation.

# Bitbucket variable references
- step:
    name: Deploy
    deployment: production
    script:
      # $WPE_SSH_KEY, $WPE_INSTALL_NAME set as secured repository variables
      - mkdir -p ~/.ssh && chmod 700 ~/.ssh
      - echo "$WPE_SSH_KEY" > ~/.ssh/deploy_key
      - chmod 600 ~/.ssh/deploy_key
      - git push wpe-${WPE_INSTALL_NAME} HEAD:main --force

Bitbucket also supports deployment-level variables, which are only available to steps tagged with a specific deployment value. Your staging Stripe key is only available to steps where deployment: staging, and your production Stripe key is only available to deployment: production.
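A sketch of that scoping in bitbucket-pipelines.yml (the deploy.sh script and the STRIPE_KEY variable name are illustrative; each environment's value is set under Deployments > Environment Variables):

```yaml
pipelines:
  branches:
    develop:
      - step:
          name: Deploy to staging
          deployment: staging        # receives the staging-scoped variables
          script:
            - ./deploy.sh "$STRIPE_KEY"   # resolves to the test key
    main:
      - step:
          name: Deploy to production
          deployment: production     # receives the production-scoped variables
          script:
            - ./deploy.sh "$STRIPE_KEY"   # resolves to the live key
```

The same variable name resolves to a different value per step, so your deploy script never has to branch on environment.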

External Secret Stores

For teams that need more control, both GitLab and Bitbucket can integrate with external secret management tools. HashiCorp Vault is the most common:

# GitLab CI with Vault integration
deploy-production:
  stage: deploy_production
  id_tokens:
    VAULT_ID_TOKEN:
      aud: https://vault.yourdomain.com
  secrets:
    DEPLOY_SSH_KEY:
      vault: production/deploy/ssh_key@secrets
      token: $VAULT_ID_TOKEN
      file: true
    STRIPE_LIVE_KEY:
      vault: production/stripe/live_key@secrets
      token: $VAULT_ID_TOKEN
  script:
    - eval $(ssh-agent -s)
    - ssh-add "$DEPLOY_SSH_KEY"
    - echo "Deploying with secrets from Vault"

GitLab’s native Vault integration uses JWT authentication. The CI job receives a short-lived token that can only access the specific secrets you have configured. This is significantly more secure than storing secrets as CI/CD variables, because the secrets are never persisted in GitLab’s database.

Artifact Caching Strategies

Caching is the single most impactful optimization for CI pipeline speed. Installing Composer dependencies and npm packages from scratch on every job can add two to five minutes per job. With proper caching, that drops to seconds.

GitLab CI Caching

GitLab supports two caching mechanisms: cache (between pipeline runs) and artifacts (between jobs in the same pipeline).

# Global cache for dependencies
cache:
  - key:
      files:
        - composer.lock
    paths:
      - vendor/
    policy: pull-push
  - key:
      files:
        - package-lock.json
    paths:
      - node_modules/
    policy: pull-push

# Job-specific cache for the WordPress test library. GitLab can only cache
# paths inside the project directory, so point install-wp-tests.sh at
# project-relative directories instead of its default /tmp locations.
phpunit:
  stage: test
  variables:
    WP_TESTS_DIR: ${CI_PROJECT_DIR}/.wp-tests/wordpress-tests-lib
    WP_CORE_DIR: ${CI_PROJECT_DIR}/.wp-tests/wordpress
  cache:
    - key: wp-test-lib-${WP_VERSION}
      paths:
        - .wp-tests/
      policy: pull-push
  script:
    - # Tests run faster when WP core is cached

The policy: pull-push is the default. Jobs pull the cache at the start and push updates at the end. For jobs that only read the cache (like deployment jobs), use policy: pull to avoid unnecessary uploads.

One gotcha: GitLab keeps separate caches for protected and unprotected branches, so a cache built on an unprotected feature branch is never reused on a protected main. If you additionally want branch-scoped caches, key them on $CI_COMMIT_REF_SLUG or add a key:prefix alongside key:files, but remember that sharing one cache across feature branches opens the door to cache poisoning.
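A branch-scoped dependency cache combining key:files with a prefix looks like this:

```yaml
cache:
  - key:
      files:
        - composer.lock
      prefix: ${CI_COMMIT_REF_SLUG}   # one cache per branch per lockfile hash
    paths:
      - vendor/
    policy: pull-push
```

The cache key changes whenever either the branch or composer.lock changes, so stale dependencies are never reused across branches.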

Bitbucket Caching

Bitbucket uses named caches defined in the definitions section:

definitions:
  caches:
    composer:
      key:
        files:
          - composer.lock
      path: vendor
    npm:
      key:
        files:
          - package-lock.json
      path: node_modules
    wordpress-test-lib:
      key:
        files:
          - bin/install-wp-tests.sh
      path: /tmp/wordpress-tests-lib

Bitbucket caches are shared across all branches and pipelines, and are evicted automatically after seven days. This is simpler than GitLab's branch-aware behavior but means a corrupted cache affects every pipeline until it expires or is manually cleared.

Artifacts for Cross-Job Data

When your build job compiles CSS and JavaScript, the deploy job needs those compiled files. Artifacts handle this:

# GitLab
build-assets:
  stage: build
  image: node:18-alpine
  script:
    - cd wp-content/themes/your-theme
    - npm ci
    - npm run build
  artifacts:
    paths:
      - wp-content/themes/your-theme/dist/
    expire_in: 1 hour

deploy-staging:
  stage: deploy_staging
  dependencies:
    - build-assets
  script:
    # The dist/ directory from build-assets is available here
    - ls wp-content/themes/your-theme/dist/

In Bitbucket, artifacts are passed automatically to subsequent steps in the same pipeline. You just need to declare which paths to preserve:

- step:
    name: Build
    script:
      - cd wp-content/themes/your-theme && npm ci && npm run build
    artifacts:
      - wp-content/themes/your-theme/dist/**

- step:
    name: Deploy
    script:
      # Artifacts from the Build step are available automatically
      - ls wp-content/themes/your-theme/dist/

Notification and Monitoring Integrations

A pipeline that runs silently is only half useful. You need notifications when deployments succeed, when they fail, and when someone needs to approve a production release.

Slack Notifications

GitLab CI can send Slack notifications using a simple curl call to a Slack webhook:

notify-slack-success:
  stage: .post
  image: alpine:latest
  before_script:
    - apk add --no-cache curl
  script:
    - |
      curl -X POST "$SLACK_WEBHOOK_URL" \
        -H 'Content-type: application/json' \
        --data "{
          \"blocks\": [
            {
              \"type\": \"section\",
              \"text\": {
                \"type\": \"mrkdwn\",
                \"text\": \":white_check_mark: *Deployment Successful*\n*Project:* ${CI_PROJECT_NAME}\n*Branch:* ${CI_COMMIT_BRANCH}\n*Commit:* ${CI_COMMIT_SHORT_SHA}\n*Author:* ${GITLAB_USER_NAME}\n*Pipeline:* <${CI_PIPELINE_URL}|View>\"
              }
            }
          ]
        }"
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
      when: on_success

notify-slack-failure:
  stage: .post
  image: alpine:latest
  before_script:
    - apk add --no-cache curl
  script:
    - |
      curl -X POST "$SLACK_WEBHOOK_URL" \
        -H 'Content-type: application/json' \
        --data "{
          \"blocks\": [
            {
              \"type\": \"section\",
              \"text\": {
                \"type\": \"mrkdwn\",
                \"text\": \":x: *Pipeline Failed*\n*Project:* ${CI_PROJECT_NAME}\n*Branch:* ${CI_COMMIT_BRANCH}\n*Commit:* ${CI_COMMIT_SHORT_SHA}\n*Author:* ${GITLAB_USER_NAME}\n*Pipeline:* <${CI_PIPELINE_URL}|View>\"
              }
            }
          ]
        }"
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
      when: on_failure

The .post stage is a special GitLab stage that always runs after all other stages. The when: on_success and when: on_failure conditions ensure you get the right notification based on the pipeline outcome.

For Bitbucket, the Slack Notify pipe simplifies this:

- step:
    name: Notify Slack
    script:
      - pipe: atlassian/slack-notify:2.0.0
        variables:
          WEBHOOK_URL: $SLACK_WEBHOOK_URL
          MESSAGE: "Deployed ${BITBUCKET_COMMIT} to production by ${BITBUCKET_BUILD_CREATOR}"

Post-Deployment Health Checks

Notifications are better when they include actual verification. Instead of just saying “deployment succeeded,” check that the site is actually responding:

verify-deployment:
  stage: .post
  image: alpine:latest
  before_script:
    - apk add --no-cache curl jq
  script:
    - |
      SITE_URL="https://yourdomain.com"
      MAX_RETRIES=5
      RETRY_DELAY=10
      
      for i in $(seq 1 $MAX_RETRIES); do
        HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" "${SITE_URL}")
        
        if [ "$HTTP_CODE" = "200" ]; then
          echo "Site is responding with HTTP 200"
          
          # Confirm the WordPress REST API index is responding
          REST_NAMESPACE=$(curl -s "${SITE_URL}/wp-json/" | jq -r '.namespaces[0]')
          
          if [ -n "$REST_NAMESPACE" ] && [ "$REST_NAMESPACE" != "null" ]; then
            echo "REST API is responding (namespace: ${REST_NAMESPACE})"
            exit 0
          fi
        fi
        
        echo "Attempt ${i}/${MAX_RETRIES}: HTTP ${HTTP_CODE}. Retrying in ${RETRY_DELAY}s..."
        sleep $RETRY_DELAY
      done
      
      echo "DEPLOYMENT VERIFICATION FAILED"
      exit 1
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
  allow_failure: false

If the health check fails, you know immediately that something went wrong. Pair this with a rollback mechanism (a Git revert and force push) and you have a safety net for bad deployments.
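A sketch of that rollback as a manual GitLab job. PROJECT_ACCESS_TOKEN is an assumed CI/CD variable holding a token with write_repository scope, since the default job token cannot push; reverting on main triggers a fresh pipeline that rebuilds and redeploys through the normal path:

```yaml
rollback-production:
  stage: .post
  image: alpine:latest
  before_script:
    - apk add --no-cache git
  script:
    - git config user.email "[email protected]"
    - git config user.name "GitLab CI"
    # Revert the commit this pipeline deployed and push the revert to main
    - git revert --no-edit "$CI_COMMIT_SHA"
    - git push "https://ci-push:${PROJECT_ACCESS_TOKEN}@${CI_SERVER_HOST}/${CI_PROJECT_PATH}.git" HEAD:main
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
      when: manual
```

Going back through the pipeline is slower than force-pushing an old artifact directly to the host, but it guarantees the rolled-back state was built, tested, and verified the same way as any other release.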

Monitoring Hooks with New Relic and Datadog

When deploying to production, it helps to mark the deployment in your monitoring tool so you can correlate performance changes with specific releases.

mark-deployment-newrelic:
  stage: .post
  image: alpine:latest
  before_script:
    - apk add --no-cache curl
  script:
    - |
      curl -X POST "https://api.newrelic.com/v2/applications/${NEWRELIC_APP_ID}/deployments.json" \
        -H "Api-Key: ${NEWRELIC_API_KEY}" \
        -H "Content-Type: application/json" \
        -d "{
          \"deployment\": {
            \"revision\": \"${CI_COMMIT_SHORT_SHA}\",
            \"user\": \"${GITLAB_USER_NAME}\",
            \"description\": \"Pipeline ${CI_PIPELINE_ID}\"
          }
        }"
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
      when: on_success

mark-deployment-datadog:
  stage: .post
  image: alpine:latest
  before_script:
    - apk add --no-cache curl
  script:
    - |
      CURRENT_TIME=$(date +%s)
      curl -X POST "https://api.datadoghq.com/api/v1/events" \
        -H "DD-API-KEY: ${DATADOG_API_KEY}" \
        -H "Content-Type: application/json" \
        -d "{
          \"title\": \"WordPress Deployment\",
          \"text\": \"Deployed commit ${CI_COMMIT_SHORT_SHA} to production\",
          \"date_happened\": ${CURRENT_TIME},
          \"tags\": [\"env:production\", \"service:wordpress\", \"commit:${CI_COMMIT_SHORT_SHA}\"],
          \"alert_type\": \"info\"
        }"
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
      when: on_success

Comparison: GitHub Actions vs GitLab CI vs Bitbucket Pipelines

Having worked with all three platforms for WordPress projects, here is a practical comparison based on real usage, not marketing material.

Pipeline Configuration

GitHub Actions uses multiple YAML files in .github/workflows/. Each file is an independent workflow with its own trigger. This is flexible but can become scattered across many files for large projects.

GitLab CI uses a single .gitlab-ci.yml with optional include directives to pull in external files. The single-file approach is easy to reason about but gets unwieldy past 500 lines. The include system supports remote URLs, local files, and template repositories.

Bitbucket Pipelines uses a single bitbucket-pipelines.yml with no include mechanism. Everything lives in one file. For simple projects, this is fine. For large ones, it becomes a maintenance headache.

Parallel Execution

GitHub Actions runs all jobs in a workflow in parallel by default, with explicit needs dependencies to create ordering.

GitLab CI runs jobs within the same stage in parallel, with stages executing sequentially. The needs keyword can create a DAG (directed acyclic graph) that breaks the stage ordering when needed.

Bitbucket Pipelines runs steps sequentially by default and requires the parallel keyword to run steps concurrently. Maximum 10 parallel steps on standard plans.

Matrix Testing

GitHub Actions has the most powerful matrix implementation with strategy.matrix, including include, exclude, and fail-fast options.

GitLab CI offers parallel:matrix which is functional but less flexible than GitHub’s version. It generates parallel jobs from variable combinations but lacks exclude patterns.

Bitbucket Pipelines has no native matrix support. You duplicate steps manually.
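In practice that means hand-writing one step per PHP version where GitHub or GitLab would generate them from a matrix; a sketch:

```yaml
pipelines:
  default:
    - parallel:
        - step:
            name: PHPUnit on PHP 8.1
            image: php:8.1-cli
            script:
              - composer install --no-interaction
              - ./vendor/bin/phpunit
        - step:
            name: PHPUnit on PHP 8.2
            image: php:8.2-cli
            script:
              - composer install --no-interaction
              - ./vendor/bin/phpunit
```

Every new PHP version means copying a step, which is workable for two or three versions and painful beyond that.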

Self-Hosted Runners

GitHub Actions supports self-hosted runners, which is important for teams that need to run tests against internal databases or services not accessible from public CI infrastructure.

GitLab CI has the most mature self-hosted runner ecosystem. GitLab Runner is a single binary that supports Docker, Kubernetes, VirtualBox, and bare-metal execution. For WordPress agencies managing client sites on internal infrastructure, this is a significant advantage.

Bitbucket Pipelines supports Bitbucket Runners for self-hosted execution, but the feature set is more limited than GitLab’s.

Pricing for WordPress Teams

GitHub Actions offers 2,000 free minutes per month on private repos, with additional minutes at $0.008/minute for Linux runners.

GitLab CI provides 400 free minutes per month on the free tier. The Premium tier ($29/user/month) includes 10,000 minutes. Self-hosted runners have no minute limits.

Bitbucket Pipelines includes 50 free minutes per month on the free tier, making it the stingiest. The Standard plan ($3/user/month) includes 2,500 minutes. The Premium plan ($6/user/month) includes 3,500 minutes.

For a small WordPress agency running pipelines across 10 client repositories, Bitbucket’s free tier runs out fast. GitLab’s self-hosted runner option makes it the cheapest at scale, since you only pay for the compute infrastructure you provide.
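To make "runs out fast" concrete, a back-of-the-envelope estimate (every number here is an illustrative assumption, not a measurement):

```shell
# Monthly CI-minute budget for a hypothetical 10-repo agency
repos=10
pipelines_per_repo=30     # pushes plus merge requests per month
minutes_per_pipeline=6    # lint + test + build with warm caches
total=$((repos * pipelines_per_repo * minutes_per_pipeline))
echo "${total} CI minutes per month"
```

Even with those modest assumptions, the total is 36 times Bitbucket's 50 free minutes and well past GitLab's 400.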

WordPress-Specific Ecosystem

GitHub Actions has the largest marketplace of WordPress-related actions. You can find pre-built actions for WP-CLI, PHPUnit with WordPress, PHPCS with WordPress standards, and deployment to every major managed host.

GitLab CI has fewer WordPress-specific templates, but its generic Docker support means you can run anything you could run on GitHub Actions with minor configuration changes.

Bitbucket Pipelines has the smallest ecosystem for WordPress. Atlassian’s Pipes marketplace has generic deployment tools but very few WordPress-specific integrations.

Building a Complete GitLab CI Pipeline: Full Example

Here is a complete, production-ready .gitlab-ci.yml for a WordPress project that includes a custom theme and a custom plugin:

image: php:8.2-cli

stages:
  - lint
  - test
  - build
  - deploy_staging
  - deploy_production

variables:
  MYSQL_DATABASE: wordpress_test
  MYSQL_ROOT_PASSWORD: root_password
  WP_VERSION: latest

cache:
  - key:
      files:
        - composer.lock
    paths:
      - vendor/
  - key:
      files:
        - wp-content/themes/your-theme/package-lock.json
    paths:
      - wp-content/themes/your-theme/node_modules/

before_script:
  - apt-get update -yqq
  - apt-get install -yqq git unzip libzip-dev libpng-dev libjpeg-dev subversion
  - docker-php-ext-install zip gd mysqli pdo_mysql
  - curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer

# ============ LINT STAGE ============

phpcs:
  stage: lint
  script:
    - composer install --no-interaction --prefer-dist --no-progress
    - ./vendor/bin/phpcs --standard=WordPress --extensions=php --ignore=vendor/,node_modules/,tests/ wp-content/
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
    - if: $CI_COMMIT_BRANCH =~ /^(develop|main)$/

phpstan:
  stage: lint
  script:
    - composer install --no-interaction --prefer-dist --no-progress
    - ./vendor/bin/phpstan analyse --level=6 --configuration=phpstan.neon --memory-limit=512M
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
    - if: $CI_COMMIT_BRANCH =~ /^(develop|main)$/

eslint:
  stage: lint
  image: node:18-alpine
  before_script:
    - cd wp-content/themes/your-theme && npm ci --prefer-offline
  script:
    - npx eslint "src/**/*.js"
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
      changes:
        - "wp-content/themes/your-theme/src/**/*.js"
  allow_failure: true

# ============ TEST STAGE ============

phpunit-plugin:
  stage: test
  services:
    - name: mysql:8.0
      alias: mysql
      variables:
        MYSQL_DATABASE: wordpress_test
        MYSQL_ROOT_PASSWORD: root_password
  variables:
    DB_HOST: mysql
  script:
    - apt-get install -yqq default-mysql-client
    - composer install --no-interaction --prefer-dist --no-progress
    - bash wp-content/plugins/your-plugin/bin/install-wp-tests.sh wordpress_test root root_password mysql latest true
    - ./vendor/bin/phpunit --configuration wp-content/plugins/your-plugin/phpunit.xml --coverage-text --colors=never
  artifacts:
    when: always
    reports:
      junit: build/logs/junit.xml
    expire_in: 7 days
  coverage: '/^\s*Lines:\s*(\d+\.\d+)%/'
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
    - if: $CI_COMMIT_BRANCH =~ /^(develop|main)$/

# ============ BUILD STAGE ============

build-theme:
  stage: build
  image: node:18-alpine
  before_script: []
  script:
    - cd wp-content/themes/your-theme
    - npm ci --prefer-offline
    - |
      if [ "$CI_COMMIT_BRANCH" = "main" ]; then
        NODE_ENV=production npm run build
      else
        npm run build
      fi
  artifacts:
    paths:
      - wp-content/themes/your-theme/dist/
    expire_in: 2 hours
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
    - if: $CI_COMMIT_BRANCH =~ /^(develop|main)$/

# ============ DEPLOY STAGING ============

deploy-staging:
  stage: deploy_staging
  image: alpine:latest
  before_script:
    - apk add --no-cache git openssh-client rsync
    - eval $(ssh-agent -s)
    - echo "$WPE_SSH_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh && chmod 700 ~/.ssh
    - echo -e "Host git.wpengine.com\n\tStrictHostKeyChecking no\n" > ~/.ssh/config
  script:
    - git config user.email "[email protected]"
    - git config user.name "GitLab CI"
    - |
      rm -rf node_modules tests .gitlab-ci.yml phpunit.xml phpstan.neon \
        wp-content/themes/your-theme/src \
        wp-content/themes/your-theme/package.json \
        wp-content/themes/your-theme/package-lock.json \
        wp-content/themes/your-theme/tailwind.config.js
    - git add -A
    - git commit -m "Staging build ${CI_COMMIT_SHORT_SHA}" --allow-empty
    - git remote add wpe [email protected]:staging/${WPE_INSTALL_NAME}.git
    - git push wpe HEAD:main --force
  environment:
    name: staging
    url: https://staging.yourdomain.com
  dependencies:
    - build-theme
  rules:
    - if: $CI_COMMIT_BRANCH == "develop"

# ============ DEPLOY PRODUCTION ============

deploy-production:
  stage: deploy_production
  image: alpine:latest
  before_script:
    - apk add --no-cache git openssh-client rsync
    - eval $(ssh-agent -s)
    - echo "$WPE_SSH_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh && chmod 700 ~/.ssh
    - echo -e "Host git.wpengine.com\n\tStrictHostKeyChecking no\n" > ~/.ssh/config
  script:
    - git config user.email "[email protected]"
    - git config user.name "GitLab CI"
    - |
      rm -rf node_modules tests .gitlab-ci.yml phpunit.xml phpstan.neon \
        wp-content/themes/your-theme/src \
        wp-content/themes/your-theme/package.json \
        wp-content/themes/your-theme/package-lock.json \
        wp-content/themes/your-theme/tailwind.config.js
    - git add -A
    - git commit -m "Production deploy ${CI_COMMIT_SHORT_SHA}" --allow-empty
    - git remote add wpe [email protected]:production/${WPE_INSTALL_NAME}.git
    - git push wpe HEAD:main --force
  environment:
    name: production
    url: https://yourdomain.com
  dependencies:
    - build-theme
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
      when: manual

# ============ POST-DEPLOY ============

verify-production:
  stage: .post
  image: alpine:latest
  before_script:
    - apk add --no-cache curl
  script:
    - |
      for i in 1 2 3 4 5; do
        HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" "https://yourdomain.com")
        if [ "$HTTP_CODE" = "200" ]; then
          echo "Production site is live and responding"
          exit 0
        fi
        echo "Attempt ${i}: HTTP ${HTTP_CODE}. Waiting..."
        sleep 15
      done
      echo "WARNING: Production site did not respond with 200"
      exit 1
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
      when: on_success

notify-slack-success:
  stage: .post
  image: alpine:latest
  before_script:
    - apk add --no-cache curl
  script:
    - |
      curl -X POST "$SLACK_WEBHOOK_URL" \
        -H 'Content-type: application/json' \
        --data "{\"text\": \":white_check_mark: WordPress deployment succeeded for ${CI_PROJECT_NAME} (${CI_COMMIT_SHORT_SHA}) by ${GITLAB_USER_NAME}. <${CI_PIPELINE_URL}|View Pipeline>\"}"
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
      when: on_success

notify-slack-failure:
  stage: .post
  image: alpine:latest
  before_script:
    - apk add --no-cache curl
  script:
    - |
      curl -X POST "$SLACK_WEBHOOK_URL" \
        -H 'Content-type: application/json' \
        --data "{\"text\": \":x: WordPress deployment failed for ${CI_PROJECT_NAME} (${CI_COMMIT_SHORT_SHA}) by ${GITLAB_USER_NAME}. <${CI_PIPELINE_URL}|View Pipeline>\"}"
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
      when: on_failure

This pipeline handles the full lifecycle: lint, test, build, deploy, verify, and notify. Each stage has clear responsibilities, and the manual gate on production deployment prevents accidents.

Common Pitfalls and How to Avoid Them

After building and maintaining dozens of WordPress CI/CD pipelines across these platforms, certain problems come up repeatedly.

The MySQL Connection Race Condition

CI services start asynchronously. Your test script might try to connect to MySQL before the database server has finished initializing. This produces intermittent “Connection refused” failures that are maddening to debug.

The fix is a wait loop in your test setup:

#!/bin/bash
# wait-for-db.sh
MAX_RETRIES=30
RETRY_INTERVAL=2

for i in $(seq 1 $MAX_RETRIES); do
  if mysqladmin ping -h "$DB_HOST" -u "$DB_USER" -p"$DB_PASS" --silent 2>/dev/null; then
    echo "Database is ready"
    exit 0
  fi
  echo "Waiting for database... (attempt $i/$MAX_RETRIES)"
  sleep $RETRY_INTERVAL
done

echo "Database did not become ready in time"
exit 1

Call this script before install-wp-tests.sh in your CI job.
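Wired into the phpunit-plugin job from the full example, the ordering looks like this (bin/wait-for-db.sh is wherever you commit the script above):

```yaml
phpunit-plugin:
  stage: test
  variables:
    DB_HOST: mysql
    DB_USER: root
    DB_PASS: root_password
  script:
    - bash bin/wait-for-db.sh   # block until MySQL accepts connections
    - bash wp-content/plugins/your-plugin/bin/install-wp-tests.sh wordpress_test root root_password mysql latest true
    - ./vendor/bin/phpunit --configuration wp-content/plugins/your-plugin/phpunit.xml
```

The wait loop adds a few seconds on a slow start and nothing at all once the service is warm, which is a cheap price for eliminating the flakiest class of CI failure.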

The Composer Platform Requirements Problem

Your local machine runs PHP 8.2, but your CI container runs PHP 8.1. Composer resolves dependencies based on the PHP version it is running under. If a package requires PHP 8.2+ and your CI container runs 8.1, Composer will refuse to install it.

Two solutions:

{
  "config": {
    "platform": {
      "php": "8.2"
    }
  }
}

This tells Composer to resolve dependencies as if it were running on PHP 8.2, regardless of the actual version. Pin it to your production PHP version and put it in your composer.json, so dependency resolution is identical on every developer machine and every CI container.

Or, match your CI image to your production PHP version. If production runs PHP 8.2, use php:8.2-cli in your CI pipeline.

The Git History Explosion

When deploying via Git push (WP Engine, Pantheon), each deployment creates a new commit. Over time, the deployment branch accumulates thousands of commits, each containing a full copy of your compiled assets. This bloats the repository.

The --force flag on git push helps by replacing the remote branch history each time. But for even cleaner deployment repositories, consider an orphan branch approach:

git checkout --orphan deploy-temp
git add -A
git commit -m "Deploy ${CI_COMMIT_SHORT_SHA}"
git push wpe deploy-temp:main --force
git checkout -

An orphan branch has no parent commits, so the deployment repository never accumulates history beyond the single latest commit.

The Asset Build Artifact Problem

Build artifacts need to flow from the build job to the deploy job. If your artifacts configuration is wrong, the deploy job runs but pushes source files instead of compiled files. The deployment “succeeds” but the site is broken because CSS and JavaScript are missing.

Always verify artifact presence in your deploy script:

deploy-staging:
  script:
    - |
      if [ ! -d "wp-content/themes/your-theme/dist" ]; then
        echo "ERROR: Build artifacts not found. Build step may have failed."
        exit 1
      fi
    - # ... proceed with deployment

The PHPCS Memory Limit Problem

PHPCS scanning large WordPress codebases can exceed PHP’s default memory limit. If your PHPCS job dies with a “memory exhausted” error, increase the limit:

phpcs:
  script:
    - php -d memory_limit=512M ./vendor/bin/phpcs --standard=WordPress .

Or set it permanently in your phpcs.xml ruleset with an ini directive: <ini name="memory_limit" value="512M"/>.

Practical Recommendations

If you are starting from scratch, pick the platform that matches your existing toolchain. If your team already uses Jira and Confluence, Bitbucket Pipelines will feel natural despite its limitations. If you need self-hosted runners for compliance reasons, GitLab CI is the clear choice. If you want the largest ecosystem of pre-built integrations, GitHub Actions wins.

For WordPress specifically, the key optimization is caching. WordPress test suites install WordPress core and the test library fresh on every run by default. Caching /tmp/wordpress and /tmp/wordpress-tests-lib directories cuts test setup time from 60 seconds to under 5.

Run PHPCS and PHPStan before PHPUnit. They are fast and catch the obvious problems. Do not waste CI minutes on a 10-minute test suite when a 30-second lint check would have caught the issue.

Use manual gates for production deployments. Automated deployment to staging is fine and encouraged. Automated deployment to production is a risk that only makes sense when you have high test coverage and a reliable rollback mechanism in place.

Tag your deployment commits with timestamps and pipeline IDs. When something goes wrong at 2 AM, you need to be able to trace exactly which commit is running in production and which pipeline built it.
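A sketch of that as a post-deploy GitLab job. As with the rollback example, PROJECT_ACCESS_TOKEN is an assumed CI/CD variable with write_repository scope, since the default job token cannot push tags:

```yaml
tag-deployment:
  stage: .post
  image: alpine:latest
  before_script:
    - apk add --no-cache git
  script:
    # Sortable tag encoding the deploy time and the pipeline that built it
    - TAG="deploy-$(date -u +%Y%m%d-%H%M%S)-p${CI_PIPELINE_ID}"
    - git tag "$TAG" "$CI_COMMIT_SHA"
    - git push "https://ci-push:${PROJECT_ACCESS_TOKEN}@${CI_SERVER_HOST}/${CI_PROJECT_PATH}.git" "$TAG"
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
      when: on_success
```

At 2 AM, git tag --list 'deploy-*' --sort=-refname tells you in one command what is live and which pipeline put it there.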

Finally, treat your CI configuration as production code. Review it in pull requests. Test changes on feature branches before merging. A broken pipeline blocks every developer on your team. A broken deployment pipeline can take down your production site. Give it the same care you give your application code.

Share this article

Tom Bradley

DevOps engineer focused on WordPress deployment automation. Builds CI/CD pipelines and infrastructure-as-code solutions for WordPress agencies.