Deploy your GCP Runner using our Terraform module. This guide walks through each configuration variable and deployment option so you can tailor the runner to your requirements.

Prerequisites

Before starting the deployment, ensure you have completed all requirements from the Overview guide, including:
  • GCP Project with billing enabled and sufficient quotas
  • VPC and Networking properly configured (including a proxy-only subnet if using an internal LB)
  • SSL Certificate prepared for your chosen load balancer type
  • Domain Name with DNS modification capabilities
  • Terraform >= 1.3 and the gcloud CLI installed and authenticated
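
Before proceeding, you can quickly confirm the tooling locally; a minimal check, assuming Terraform and the gcloud CLI are already on your PATH:
# Confirm Terraform is >= 1.3
terraform version

# Confirm the gcloud CLI is authenticated and pointed at the right project
gcloud auth list
gcloud config get-value project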

Create Runner in Ona

Start by creating a new runner in the Ona dashboard to obtain the required authentication credentials.

Access Runner Settings

Navigate to Settings → Runners in your Ona dashboard and click Setup a new runner.

Ona Runner Settings Page

Configure Runner Details

  1. Provider Selection: Choose Google Cloud Platform from the list of available providers
  2. Runner Information:
    • Name: Provide a descriptive name for your runner
    • Region: Select the GCP region where you’ll deploy the runner
  3. Configuration: Click Create to generate the runner configuration

Ona GCP Runner Create Page

The system will generate a unique Runner ID and Runner Token that you’ll need for the Terraform deployment.

Ona GCP Runner Credentials

Store the Runner Token securely. You’ll need it for the Terraform configuration and cannot retrieve it again from the dashboard.
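
To keep the token out of terraform.tfvars and version control, one option is Terraform's standard environment-variable mechanism, where TF_VAR_<name> populates the variable of the same name:
# Supply the runner token via an environment variable instead of terraform.tfvars
export TF_VAR_runner_token="eyJhbGciOiJSUzI1NiIs..."

# Terraform reads it automatically during plan and apply
terraform plan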

Terraform Module Setup

Download and Initialize

# Clone the Terraform module repository
git clone <repository-url>
cd gitpod-gcp-terraform

# Copy the example configuration
cp terraform.tfvars.example terraform.tfvars

# Initialize Terraform
terraform init

Required Configuration Variables

These variables must be configured for any GCP Runner deployment:

Core Authentication Variables

Configure the basic runner authentication and identification:
| Variable | Description | Example Value | Required |
| --- | --- | --- | --- |
| api_endpoint | Ona management plane API endpoint (from Ona dashboard) | "https://app.gitpod.io/api" | ✅ Yes |
| runner_id | Unique identifier for your runner (from Ona dashboard) | "runner-abc123def456" | ✅ Yes |
| runner_token | Authentication token for the runner (from Ona dashboard) | "eyJhbGciOiJSUzI1NiIs..." | ✅ Yes |
| runner_name | Display name for your runner | "my-company-gcp-runner" | ✅ Yes |
| runner_domain | Domain name for accessing development environments | "dev.yourcompany.com" | ✅ Yes |
# Required: Core runner authentication (copy from Ona dashboard)
api_endpoint  = "https://app.gitpod.io/api"   # Ona management plane API
runner_id     = "runner-abc123def456"         # From Ona dashboard
runner_token  = "eyJhbGciOiJSUzI1NiIs..."     # From Ona dashboard  
runner_name   = "my-company-gcp-runner"       # Descriptive name
runner_domain = "dev.yourcompany.com"         # Your domain

GCP Project and Location

Specify your GCP project and deployment region:
| Variable | Description | Example Value | Required |
| --- | --- | --- | --- |
| project_id | Your GCP project ID | "your-gcp-project-123" | ✅ Yes |
| region | GCP region for deployment | "us-central1" | ✅ Yes |
| zones | List of availability zones (2-3 recommended for HA) | ["us-central1-a", "us-central1-b"] | ✅ Yes |
# Required: GCP project and location
project_id = "your-gcp-project-123"
region     = "us-central1" 
zones      = ["us-central1-a", "us-central1-b", "us-central1-c"]

Network Configuration

Configure your existing VPC and subnet infrastructure:
| Variable | Description | Example Value | Required |
| --- | --- | --- | --- |
| vpc_name | Name of your existing VPC | "your-company-vpc" | ✅ Yes |
| runner_subnet_name | Subnet where runner and environments will be deployed | "dev-environments-subnet" | ✅ Yes |
# Required: Network configuration
vpc_name           = "your-company-vpc"         # Existing VPC name
runner_subnet_name = "dev-environments-subnet" # Subnet for runner and environments
Runner Subnet Requirements:
  • This subnet hosts both the runner service and the development environment VMs
  • It can use a routable CIDR range if environments need direct corporate network access
  • For heavy workloads with high IP usage, use a non-routable CIDR range (e.g., 10.0.0.0/16)
  • Recommended CIDR masks:
    • /16 for non-routable subnets (65,534 IPs) - recommended for large deployments
    • /24 minimum for routable subnets (254 IPs) - suitable for smaller deployments
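
If the runner subnet does not exist yet, the following sketch creates a non-routable /16 subnet with gcloud; the subnet name, network, region, and range are illustrative and should match your own VPC layout:
# Create a non-routable /16 subnet for the runner and environment VMs (example values)
gcloud compute networks subnets create dev-environments-subnet \
  --network=your-company-vpc \
  --region=us-central1 \
  --range=10.0.0.0/16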

Load Balancer Configuration

Choose your load balancer type and configure the required variables:

External Load Balancer (Default)

External load balancers provide internet-accessible environments with simplified setup:
| Variable | Description | Example Value | Required |
| --- | --- | --- | --- |
| loadbalancer_type | Load balancer type | "external" | ❌ Optional (default) |
| certificate_id | Certificate from Certificate Manager | "projects/.../certificates/cert" | ✅ Yes for external |
# External load balancer configuration (default)
loadbalancer_type = "external"  # Optional, this is the default
certificate_id    = "projects/your-project/locations/global/certificates/your-cert"
Certificate Requirements for External LB:
  • Certificate must be stored in Google Certificate Manager
  • Must include both root domain and wildcard as Subject Alternative Names
  • Format: projects/{project}/locations/global/certificates/{name}
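
For a certificate you already hold as PEM files, a sketch of uploading it to Certificate Manager; the certificate name and file paths are illustrative, and cert.pem must cover both the root domain and the wildcard:
# Upload a self-managed certificate to Certificate Manager (example values)
gcloud certificate-manager certificates create your-cert \
  --certificate-file=cert.pem \
  --private-key-file=key.pem \
  --location=global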

Internal Load Balancer

Internal load balancers provide VPC-only access with enhanced security:
| Variable | Description | Example Value | Required |
| --- | --- | --- | --- |
| loadbalancer_type | Load balancer type | "internal" | ✅ Yes |
| routable_subnet_name | Routable subnet for internal load balancer IP allocation | "internal-lb-subnet" | ✅ Yes for internal |
| certificate_secret_id | Secret Manager secret with certificate and private key | "projects/.../secrets/cert-secret" | ✅ Yes for internal |
# Internal load balancer configuration
loadbalancer_type     = "internal"
routable_subnet_name  = "internal-lb-subnet"    # Must be routable from your network
certificate_secret_id = "projects/your-project/secrets/ssl-cert-secret"
Certificate Requirements for Internal LB:
  • Certificate must be stored in Google Secret Manager
  • Must contain both certificate and private key in JSON format
  • Secret format:
    {
      "certificate": "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----",
      "privateKey": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----"
    }
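
One way to assemble and store a secret in this format, assuming cert.pem and key.pem on disk and jq 1.6 or newer:
# Build the JSON payload from PEM files (jq --rawfile escapes newlines as \n)
jq -n --rawfile cert cert.pem --rawfile key key.pem \
  '{certificate: $cert, privateKey: $key}' > cert-secret.json

# Store it in Secret Manager (example secret name)
gcloud secrets create ssl-cert-secret --data-file=cert-secret.json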
    
⚠️ Additional critical networking requirements for internal LB:
  • Routable Subnet: routable_subnet_name must be a subnet with routes from your internal/on-premises network (recommended /28 - 16 IPs)
  • Proxy-Only Subnet: Your VPC must have a separate subnet with purpose REGIONAL_MANAGED_PROXY (recommended /27 - 32 IPs). This subnet does not need to be routable from your corporate network.
  • Corporate network connectivity to your GCP VPC (VPN, Interconnect, etc.)
  • DNS resolution from your corporate network
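
If your VPC does not yet have a proxy-only subnet, a sketch of creating one with gcloud; the name and range are illustrative:
# Create the proxy-only subnet required for the internal load balancer (example values)
gcloud compute networks subnets create proxy-only-subnet \
  --purpose=REGIONAL_MANAGED_PROXY \
  --role=ACTIVE \
  --network=your-company-vpc \
  --region=us-central1 \
  --range=10.129.0.0/27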

Optional Configuration Variables

These variables provide additional customization and enterprise features:

Enterprise Security Features

HTTP Proxy Configuration

For environments behind corporate firewalls:
| Variable | Description | Example Value |
| --- | --- | --- |
| proxy_config.http_proxy | HTTP proxy server URL | "http://proxy.company.com:8080" |
| proxy_config.https_proxy | HTTPS proxy server URL | "https://proxy.company.com:8080" |
| proxy_config.no_proxy | Comma-separated list of hosts to bypass proxy | ".company.com,localhost,127.0.0.1" |
| proxy_config.all_proxy | All-protocol proxy server URL | "http://proxy.company.com:8080" |
# HTTP proxy configuration for corporate environments
proxy_config = {
  http_proxy  = "http://proxy.company.com:8080"
  https_proxy = "https://proxy.company.com:8080"
  no_proxy    = "localhost,127.0.0.1,metadata.google.internal,.company.com"
  all_proxy   = "http://proxy.company.com:8080"
}

Customer-Managed Encryption Keys (CMEK)

For compliance with organizational encryption policies:
| Variable | Description | Default Value |
| --- | --- | --- |
| create_cmek | Automatically create KMS keyring and key | false |
| kms_key_name | Existing KMS key (when create_cmek = false) | null |
# Option 1: Automatic CMEK setup (recommended)
create_cmek = true
# kms_key_name is ignored when create_cmek = true

# Option 2: Use existing KMS key
# create_cmek = false
# kms_key_name = "projects/your-project/locations/us-central1/keyRings/gitpod-keyring/cryptoKeys/gitpod-key"
For additional configurations when using pre-existing CMEK keys, refer to the IAM configuration guide in the Terraform module.
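
For option 2, a sketch of creating the keyring and key referenced by the example kms_key_name; names and location are illustrative:
# Create a keyring and a symmetric encryption key (example names)
gcloud kms keyrings create gitpod-keyring --location=us-central1
gcloud kms keys create gitpod-key \
  --keyring=gitpod-keyring \
  --location=us-central1 \
  --purpose=encryption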

Pre-Created Service Accounts

For organizations with strict IAM policies that require pre-created service accounts:
# Pre-created service accounts (all optional)
pre_created_service_accounts = {
  runner           = "gitpod-runner@your-project.iam.gserviceaccount.com"
  environment_vm   = "gitpod-env@your-project.iam.gserviceaccount.com"  
  build_cache      = "gitpod-cache@your-project.iam.gserviceaccount.com"
  secret_manager   = "gitpod-secrets@your-project.iam.gserviceaccount.com"
  pubsub_processor = "gitpod-pubsub@your-project.iam.gserviceaccount.com"
  proxy_vm         = "gitpod-proxy@your-project.iam.gserviceaccount.com"
}
For additional configurations when using pre-created service accounts, refer to the IAM configuration guide in the Terraform module.
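
A sketch of pre-creating one of these accounts with gcloud; the account ID and display name are illustrative, and you would repeat this for each account your IAM policy requires:
# Pre-create the runner service account (example values)
gcloud iam service-accounts create gitpod-runner \
  --project=your-project \
  --display-name="Gitpod GCP Runner"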

Custom Images

For enterprises using internal container registries:
| Variable | Description | Example Value |
| --- | --- | --- |
| custom_images.runner_image | Custom runner container image | "gcr.io/your-project/runner:v1.0" |
| custom_images.proxy_image | Custom proxy container image | "gcr.io/your-project/proxy:v1.0" |
| custom_images.prometheus_image | Custom Prometheus image | "gcr.io/your-project/prometheus:latest" |
| custom_images.docker_config_json | Docker registry credentials (JSON) | jsonencode({...}) |
# Custom images configuration
custom_images = {
  runner_image     = "gcr.io/your-project/custom-runner:v1.0"
  proxy_image      = "gcr.io/your-project/custom-proxy:v1.0"
  prometheus_image = "gcr.io/your-project/prometheus:latest"
  
  # Docker registry authentication (JSON format)
  docker_config_json = jsonencode({
    auths = {
      "gcr.io" = {
        auth = base64encode("_json_key:${file("service-account-key.json")}")
      }
    }
  })
  
  # Set to true for insecure registries (testing only)
  insecure = false
}
When using custom images, you need to set up pipelines that sync images from the stable channel to your internal registry (e.g., Artifactory). Contact Ona support for guidance on image synchronization when using this feature.
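
A minimal sketch of one such sync step using plain Docker; the stable channel registry placeholder and tags are illustrative:
# Pull from the stable channel, retag for the internal registry, and push (example values)
docker pull <stable-channel-registry>/runner:v1.0
docker tag <stable-channel-registry>/runner:v1.0 gcr.io/your-project/custom-runner:v1.0
docker push gcr.io/your-project/custom-runner:v1.0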

Deployment Process

Validate Configuration

Before deployment, validate your Terraform configuration:
# Validate Terraform syntax and configuration
terraform validate

# Plan the deployment and review all changes
terraform plan -out=tfplan

# Review the plan output carefully for:
# - Resources being created in correct project/region
# - Networking configuration matches requirements  
# - No unexpected deletions or modifications

Deploy Infrastructure

Execute the Terraform deployment:
# Apply the planned configuration
terraform apply tfplan

# Monitor deployment progress (typically 15-20 minutes)
# The Redis instance creation is usually the longest step

Post-Deployment Configuration

Retrieve Load Balancer Information

After successful deployment, get the load balancer details:
# Display all Terraform outputs
terraform output

# Key outputs:
# load_balancer_ip = "10.0.1.100" (internal) or "34.102.136.180" (external)

Configure DNS Records

Create DNS records pointing to your load balancer.

For External Load Balancer:
yourdomain.com.     A       <external-ip-address>
*.yourdomain.com.   A       <external-ip-address>
For Internal Load Balancer:
yourdomain.com.     A       <internal-ip-address>
*.yourdomain.com.   A       <internal-ip-address>
For internal load balancers, ensure your corporate DNS servers can resolve these records and that your network can reach the internal IP address via VPN, Interconnect, or another connectivity method.
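
If the zone happens to be hosted in Cloud DNS, a sketch of creating both records with gcloud; the managed zone name and TTL are illustrative:
# Create A records for the root domain and the wildcard (example values)
gcloud dns record-sets create yourdomain.com. \
  --zone=your-managed-zone --type=A --ttl=300 --rrdatas=<load-balancer-ip>
gcloud dns record-sets create "*.yourdomain.com." \
  --zone=your-managed-zone --type=A --ttl=300 --rrdatas=<load-balancer-ip>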

Verification

Test Runner Health

Verify the runner is accessible and functioning:
# Test health endpoint (use -k for self-signed certificates)
curl -k https://yourdomain.com/_health

# Expected response: {"status":"ok"} with HTTP 200

Verify Runner Status in Ona

Monitor runner status in the Ona dashboard:
  1. Navigate to Settings → Runners
  2. Verify your runner shows as Connected with a green status
  3. Check that the Last Seen timestamp is recent (within the last few minutes)
  4. Confirm runner region and configuration are correct

Ona GCP Runner Online

Next Steps

With your GCP Runner successfully deployed and verified: