Deploy your GCP Runner using the Terraform module. This guide walks through prerequisites, Terraform configuration, deployment, and verification.
## Prerequisites

Before starting, ensure you have:

- GCP project with billing enabled, sufficient quotas, and the required GCP APIs enabled
- VPC and subnet: a custom VPC with a runner subnet. The runner subnet hosts both the runner service and environment VMs. Internal load balancers require additional subnets.
  - Optional: Private Google Access. If your runner subnet does not have external internet access, enable Private Google Access on the subnet so VMs can reach GCP services through Google's internal network. See GCP services and APIs required for the full list of services that must be reachable.
- Domain name that you control, with the ability to modify DNS records
- SSL/TLS certificate with Subject Alternative Names (SANs) for both the root domain and the wildcard: yourdomain.com and *.yourdomain.com. Where the certificate is stored depends on your load balancer mode.
- Terraform >= 1.3 and the gcloud CLI, installed and authenticated
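If you need to turn on Private Google Access for an existing runner subnet, it can be enabled with a single gcloud command (a sketch; the subnet name and region below are placeholders for your own values):

```shell
# Enable Private Google Access on an existing subnet (names are examples).
gcloud compute networks subnets update dev-environments-subnet \
  --region=us-central1 \
  --enable-private-ip-google-access
```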
## Create runner in Ona

- In the Ona dashboard, go to Settings → Runners and click Set up a new runner
- Select Google Cloud Platform as the provider
- Enter a name and click Create

The dashboard generates a Terraform configuration example with your Runner ID, Runner Token, and API endpoint pre-filled. Copy these values; you'll use them in the Terraform setup below.

Store the Runner Token securely. You cannot retrieve it again from the dashboard.
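One way to keep the token out of files that might be committed (a sketch; the placeholder value is yours to replace) is to pass it through an environment variable instead of writing it into `terraform.tfvars` — Terraform reads any input variable from an environment variable named `TF_VAR_<variable_name>`:

```shell
# Terraform picks up var.runner_token from TF_VAR_runner_token,
# so the secret never needs to appear in terraform.tfvars.
export TF_VAR_runner_token="paste-your-runner-token-here"
echo "runner token is set"
```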
Create a directory for your runner configuration and set up the following files.

`main.tf` references the Ona Terraform module with a GCS backend for state storage:
```hcl
terraform {
  backend "gcs" {
    bucket = "your-terraform-state-bucket"
    prefix = "ona-runner"
  }
}

module "ona_runner" {
  source  = "gitpod-io/ona-runner/google"
  version = "~> 1.0"

  api_endpoint  = var.api_endpoint
  runner_id     = var.runner_id
  runner_token  = var.runner_token
  runner_name   = var.runner_name
  runner_domain = var.runner_domain

  project_id = var.project_id
  region     = var.region
  zones      = var.zones

  vpc_name           = var.vpc_name
  vpc_project_id     = var.vpc_project_id
  runner_subnet_name = var.runner_subnet_name
}
```
Using a GCS backend stores your Terraform state remotely, enabling team collaboration and protecting against local state loss. Create the bucket beforehand with versioning enabled.
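Creating the state bucket might look like the following (a sketch; the bucket name, project, and location are placeholders — bucket names must be globally unique):

```shell
# Create the state bucket (names and location are examples).
gcloud storage buckets create gs://your-terraform-state-bucket \
  --project=your-gcp-project-123 \
  --location=us-central1 \
  --uniform-bucket-level-access

# Enable object versioning so earlier state files are recoverable.
gcloud storage buckets update gs://your-terraform-state-bucket --versioning
```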
`variables.tf` declares input variables. Add optional variables from the advanced configuration section as needed:

```hcl
variable "api_endpoint" {
  type = string
}

variable "runner_id" {
  type = string
}

variable "runner_token" {
  type      = string
  sensitive = true
}

variable "runner_name" {
  type = string
}

variable "runner_domain" {
  type = string
}

variable "project_id" {
  type = string
}

variable "region" {
  type = string
}

variable "zones" {
  type = list(string)
}

variable "vpc_name" {
  type = string
}

variable "vpc_project_id" {
  type    = string
  default = ""
}

variable "runner_subnet_name" {
  type = string
}
```
`terraform.tfvars` contains the values from the Ona dashboard and your GCP project. See the sections below for the full variable reference.
## Required configuration variables

These variables are required for every GCP Runner deployment. Most of them are pre-filled in the Terraform configuration example shown in the Ona dashboard after you create the runner.

### Core Authentication Variables

These values are provided by the Ona dashboard when you create the runner.
| Variable | Description | Example Value | Required |
|---|---|---|---|
| `api_endpoint` | Ona management plane API endpoint (from Ona dashboard) | `"https://app.gitpod.io/api"` | ✅ Yes |
| `runner_id` | Unique identifier for your runner (from Ona dashboard) | `"runner-abc123def456"` | ✅ Yes |
| `runner_token` | Authentication token for the runner (from Ona dashboard) | `"eyJhbGciOiJSUzI1NiIs..."` | ✅ Yes |
| `runner_name` | Display name for your runner | `"my-company-gcp-runner"` | ✅ Yes |
| `runner_domain` | Domain name for accessing development environments | `"dev.yourcompany.com"` | ✅ Yes |
The `api_endpoint` value is provided in the Ona dashboard when you create the runner. If your organization uses a custom domain, the endpoint will reflect that domain instead.
```hcl
# terraform.tfvars - Core runner authentication (copy from Ona dashboard)
api_endpoint  = "https://app.gitpod.io/api" # Ona management plane API
runner_id     = "runner-abc123def456"       # From Ona dashboard
runner_token  = "eyJhbGciOiJSUzI1NiIs..."   # From Ona dashboard
runner_name   = "my-company-gcp-runner"     # Descriptive name
runner_domain = "dev.yourcompany.com"       # Your domain
```
### GCP Project and Location

Specify the GCP project and region where the runner infrastructure will be created.

| Variable | Description | Example Value | Required |
|---|---|---|---|
| `project_id` | Your GCP project ID | `"your-gcp-project-123"` | ✅ Yes |
| `region` | GCP region for deployment | `"us-central1"` | ✅ Yes |
| `zones` | List of availability zones (2-3 recommended for HA) | `["us-central1-a", "us-central1-b"]` | ✅ Yes |

```hcl
# terraform.tfvars - GCP project and location
project_id = "your-gcp-project-123"
region     = "us-central1"
zones      = ["us-central1-a", "us-central1-b", "us-central1-c"]
```
### Network and Ingress Configuration

Configure how inbound and outbound traffic reaches your runner by specifying your VPC, subnet, and load balancer settings.

| Variable | Description | Example Value | Required |
|---|---|---|---|
| `vpc_name` | Name of your existing VPC | `"your-company-vpc"` | ✅ Yes |
| `runner_subnet_name` | Subnet where the runner and environments will be deployed | `"dev-environments-subnet"` | ✅ Yes |
The runner subnet hosts both the runner service and environment VMs. Recommended CIDR: /16 for non-routable ranges (large deployments) or /24 minimum for routable ranges.
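If the runner subnet does not exist yet, creating one could look roughly like this (a sketch; the network name, region, and CIDR range are examples — pick a range that fits your addressing plan):

```shell
# Create the runner subnet with Private Google Access enabled (names are examples).
gcloud compute networks subnets create dev-environments-subnet \
  --network=your-company-vpc \
  --region=us-central1 \
  --range=10.0.0.0/16 \
  --enable-private-ip-google-access
```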
#### External Load Balancer (Default)

```hcl
# terraform.tfvars
vpc_name           = "your-company-vpc"
runner_subnet_name = "dev-environments-subnet"
loadbalancer_type  = "external" # Optional, this is the default
certificate_id     = "projects/your-project/locations/global/certificates/your-cert"
```
#### Internal Load Balancer

```hcl
# terraform.tfvars
vpc_name              = "your-company-vpc"
runner_subnet_name    = "dev-environments-subnet"
loadbalancer_type     = "internal"
routable_subnet_name  = "internal-lb-subnet"
certificate_secret_id = "projects/your-project/secrets/ssl-cert-secret"
```
#### Creating the certificate secret

The `certificate_secret_id` must point to a Google Secret Manager secret containing both the certificate and private key as a JSON object:

```json
{
  "certificate": "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----",
  "privateKey": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----"
}
```
Create the secret using gcloud, replacing the paths with your certificate and key files:

```shell
jq -n \
  --arg cert "$(cat your-cert.pem)" \
  --arg key "$(cat your-key.pem)" \
  '{"certificate": $cert, "privateKey": $key}' \
  | gcloud secrets create ssl-cert-secret \
      --project=your-project \
      --data-file=- \
      --replication-policy=automatic
```
The internal load balancer also requires a proxy-only subnet in your VPC with its purpose set to REGIONAL_MANAGED_PROXY. This subnet is not a Terraform module variable; it must exist in your VPC before deployment. GCP uses it internally to allocate proxy instances for the load balancer. Learn more about internal load balancer requirements →
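Creating a proxy-only subnet ahead of deployment might look like the following (a sketch; the subnet name, network, region, and CIDR are placeholders, and the range must not overlap your existing subnets):

```shell
# Proxy-only subnet required by regional internal Application Load Balancers.
gcloud compute networks subnets create proxy-only-subnet \
  --purpose=REGIONAL_MANAGED_PROXY \
  --role=ACTIVE \
  --network=your-company-vpc \
  --region=us-central1 \
  --range=10.129.0.0/23
```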
## Advanced configuration
The settings below are for organizations with additional network or security requirements, such as corporate proxies, private CAs, encryption key management, or restricted IAM policies. Most deployments do not need these.
### HTTP Proxy Configuration

For environments behind corporate firewalls:

| Variable | Description | Example Value |
|---|---|---|
| `proxy_config.http_proxy` | HTTP proxy server URL | `"http://proxy.company.com:8080"` |
| `proxy_config.https_proxy` | HTTPS proxy server URL | `"https://proxy.company.com:8080"` |
| `proxy_config.no_proxy` | Comma-separated list of hosts to bypass the proxy | `".company.com,localhost,127.0.0.1"` |
| `proxy_config.all_proxy` | All-protocol proxy server URL | `"http://proxy.company.com:8080"` |
```hcl
# terraform.tfvars - HTTP proxy configuration for corporate environments
proxy_config = {
  http_proxy  = "http://proxy.company.com:8080"
  https_proxy = "https://proxy.company.com:8080"
  no_proxy    = "localhost,127.0.0.1,metadata.google.internal,.company.com"
  all_proxy   = "http://proxy.company.com:8080"
}
```
### Custom CA Certificate

If your network uses a corporate proxy or internal services with certificates signed by a private Certificate Authority, configure the runner to trust your CA certificate:

| Variable | Description | Example Value |
|---|---|---|
| `ca_certificate.file_path` | Path to a CA certificate file | `"./certs/corporate-ca.pem"` |
| `ca_certificate.content` | CA certificate content (PEM-encoded string) | `"-----BEGIN CERTIFICATE-----\n..."` |

Provide either `file_path` or `content`, not both.
```hcl
# terraform.tfvars - Option 1: Reference a CA certificate file
ca_certificate = {
  file_path = "./certs/corporate-ca.pem"
}

# Option 2: Inline CA certificate content
# ca_certificate = {
#   content = "-----BEGIN CERTIFICATE-----\nMIID...your-ca-cert...\n-----END CERTIFICATE-----"
# }
```
### Customer-Managed Encryption Keys (CMEK)

For compliance with organizational encryption policies:

| Variable | Description | Default Value |
|---|---|---|
| `create_cmek` | Automatically create a KMS keyring and key | `false` |
| `kms_key_name` | Existing KMS key (when `create_cmek = false`) | `null` |
```hcl
# terraform.tfvars - Option 1: Automatic CMEK setup (recommended)
create_cmek = true

# Option 2: Use an existing KMS key
# create_cmek  = false
# kms_key_name = "projects/your-project/locations/us-central1/keyRings/gitpod-keyring/cryptoKeys/gitpod-key"
```
For additional configurations when using pre-existing CMEK keys, refer to the IAM configuration guide in the Terraform module.
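If the key is managed outside Terraform, creating the keyring and key manually could be sketched like this (the keyring, key name, and location are examples and must match the `kms_key_name` you pass to the module):

```shell
# Create a keyring and an encryption key (names and location are examples).
gcloud kms keyrings create gitpod-keyring --location=us-central1
gcloud kms keys create gitpod-key \
  --keyring=gitpod-keyring \
  --location=us-central1 \
  --purpose=encryption
```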
### Pre-Created Service Accounts

By default, the Terraform module creates and manages six service accounts for the runner components:

| Service Account | Purpose |
|---|---|
| runner | Runner orchestrator. Manages environment lifecycle, compute instances, networking, and secrets |
| environment_vm | Environment VMs. Reads container images, writes logs and metrics |
| build_cache | Build cache. Manages Cloud Storage objects for devcontainer layer caching |
| secret_manager | Secret manager. Creates and accesses secrets for environment injection |
| pubsub_processor | Pub/Sub processor. Consumes compute lifecycle events for environment reconciliation |
| proxy_vm | Proxy VMs. Routes traffic to environments, reads secrets and container images |
If your organization requires service accounts to be created externally (e.g., by a central IAM team), you can provide pre-created service accounts instead. When provided, the module skips creating that service account and uses the one you supply.
Each pre-created service account must have the correct IAM roles assigned before deployment. See the IAM configuration guide for the exact roles and permissions required per service account.
```hcl
# terraform.tfvars - Pre-created service accounts (all optional, provide only the ones you manage externally)
pre_created_service_accounts = {
  runner           = "gitpod-runner@your-project.iam.gserviceaccount.com"
  environment_vm   = "gitpod-env@your-project.iam.gserviceaccount.com"
  build_cache      = "gitpod-cache@your-project.iam.gserviceaccount.com"
  secret_manager   = "gitpod-secrets@your-project.iam.gserviceaccount.com"
  pubsub_processor = "gitpod-pubsub@your-project.iam.gserviceaccount.com"
  proxy_vm         = "gitpod-proxy@your-project.iam.gserviceaccount.com"
}
```
You can provide a subset. Any service account you omit will be created and managed by the module automatically. The Terraform deployer account also needs fewer IAM permissions when using pre-created service accounts, since it no longer needs `iam.serviceAccounts.create` or `resourcemanager.projects.setIamPolicy` for those accounts.
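For reference, an externally managed service account could be created along these lines (a sketch; the account ID and display name are examples, and the role bindings described in the IAM configuration guide still need to be added afterwards):

```shell
# Create one pre-managed service account (IAM roles must be granted separately).
gcloud iam service-accounts create gitpod-runner \
  --project=your-project \
  --display-name="Ona GCP runner"
```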
### Custom Images

Some enterprise networks do not allow pulling container images from external registries. In these cases, you can point the runner at images hosted in your own internal registry.

Discouraged unless your network policy strictly requires it: Ona maintains all runner images in a public registry with regular CVE scanning and patching. Custom images break automatic updates and delay security patches, and you must sync every release manually. If possible, allowlist Ona's registry instead.
| Variable | Description | Example Value |
|---|---|---|
| `custom_images.runner_image` | Custom runner container image | `"gcr.io/your-project/runner:v1.0"` |
| `custom_images.proxy_image` | Custom proxy container image | `"gcr.io/your-project/proxy:v1.0"` |
| `custom_images.prometheus_image` | Custom Prometheus image | `"gcr.io/your-project/prometheus:latest"` |
| `custom_images.docker_config_json` | Docker registry credentials (JSON) | `jsonencode({...})` |
```hcl
# terraform.tfvars - Custom images configuration
custom_images = {
  runner_image     = "gcr.io/your-project/custom-runner:v1.0"
  proxy_image      = "gcr.io/your-project/custom-proxy:v1.0"
  prometheus_image = "gcr.io/your-project/prometheus:latest"

  # Docker registry authentication (JSON format)
  docker_config_json = jsonencode({
    auths = {
      "gcr.io" = {
        auth = base64encode("_json_key:${file("service-account-key.json")}")
      }
    }
  })

  # Set to true for insecure registries (testing only)
  insecure = false
}
```
If you must use custom images, set up an automated pipeline to sync images from Ona’s stable channel to your internal registry (e.g., Artifactory). Contact Ona support for guidance on image synchronization and release notifications.
### Shared VPC

If your organization uses a GCP Shared VPC, where the VPC is hosted in a separate host project, set `vpc_project_id` to the project that owns the VPC. When omitted, the module assumes the VPC is in the same project as the runner (`project_id`).
| Variable | Description | Example Value |
|---|---|---|
| `vpc_project_id` | Project ID where the Shared VPC is located | `"shared-vpc-host-project"` |
```hcl
# terraform.tfvars - Shared VPC configuration
project_id         = "runner-service-project"   # Project where runner resources are created
vpc_project_id     = "shared-vpc-host-project"  # Project that hosts the Shared VPC
vpc_name           = "shared-company-vpc"       # VPC name in the host project
runner_subnet_name = "dev-environments-subnet"  # Subnet shared to the service project
```
When using Shared VPC, the runner's service project must be attached as a service project to the host project, and the subnets used by the runner must be shared with the service project. The service account running Terraform needs `roles/compute.networkUser` on the shared subnets in the host project.
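Granting the Terraform deployer access to a shared subnet might look like the following (a sketch; the subnet, projects, and service account email are placeholders):

```shell
# Grant the deployer networkUser on the shared subnet in the host project.
gcloud compute networks subnets add-iam-policy-binding dev-environments-subnet \
  --project=shared-vpc-host-project \
  --region=us-central1 \
  --member="serviceAccount:terraform-deployer@runner-service-project.iam.gserviceaccount.com" \
  --role="roles/compute.networkUser"
```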
### Labels

Apply GCP labels to all resources created by the module. Labels are useful for cost attribution, filtering in billing reports, and organizational policies.

```hcl
# terraform.tfvars
labels = {
  team        = "platform"
  environment = "production"
  cost-center = "engineering"
}
```
For details on using labels for cost tracking and budgeting, see Costs & Budgeting: Adding labels to Compute Engine instances.
## Deploy

Initialize Terraform to download the module, then validate, plan, and apply:

```shell
terraform init
terraform validate
terraform plan -out=tfplan
terraform apply tfplan
```
Deployment typically takes 15–20 minutes. The Redis instance creation is usually the longest step.
## Post-deployment

After `terraform apply` completes, retrieve the load balancer IP and configure DNS:

```shell
terraform output load_balancer_ip
```

Create A records for both the root domain and the wildcard pointing to this IP:

```
yourdomain.com.    A  <load-balancer-ip>
*.yourdomain.com.  A  <load-balancer-ip>
```
For internal load balancers, ensure your corporate DNS servers can resolve these records and your network can route to the internal IP.
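To spot-check the records once they are published (the domain is a placeholder; both queries should return the load balancer IP):

```shell
dig +short yourdomain.com
dig +short test.yourdomain.com   # any subdomain should resolve via the wildcard record
```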
Once DNS propagates, verify the runner is healthy:

```shell
curl -k https://yourdomain.com/_health
# Expected: {"status":"ok"} with HTTP 200
```
Then confirm the runner shows as Online in the Ona dashboard under Settings → Runners.
## Next steps
With your GCP Runner successfully deployed and verified: