The GCP runner updates itself automatically for application-level changes. For details on how automatic updates work and how to configure update windows, see Automated updates. This page covers GCP-specific update procedures: release notifications and Terraform infrastructure upgrades.

Release Notifications

Ona publishes Pub/Sub messages when new stable GCP runner releases are available. Subscribe to receive push notifications instead of polling for updates. This is useful for triggering CI pipelines, syncing custom images, or alerting your infrastructure team when a Terraform upgrade is needed.

Topic

  • Project: gitpod-next-production
  • Topic: projects/gitpod-next-production/topics/gcp-runner-releases
  • Message retention: 7 days
  • Access: any authenticated GCP user can subscribe

Subscribing with gcloud

# Create a pull subscription in your project
gcloud pubsub subscriptions create ona-runner-releases \
  --project=YOUR_PROJECT_ID \
  --topic=projects/gitpod-next-production/topics/gcp-runner-releases \
  --ack-deadline=60

# Pull messages
gcloud pubsub subscriptions pull ona-runner-releases \
  --project=YOUR_PROJECT_ID \
  --auto-ack \
  --limit=10

Subscribing with Terraform

resource "google_pubsub_subscription" "ona_runner_releases" {
  name    = "ona-runner-releases"
  project = var.project_id
  topic   = "projects/gitpod-next-production/topics/gcp-runner-releases"

  ack_deadline_seconds = 60

  # Optional: receive only CI-published messages (skip GCS notification duplicates)
  # filter = "attributes.source = \"ci_stable_promotion\""

  # Optional: retry policy
  retry_policy {
    minimum_backoff = "10s"
    maximum_backoff = "600s"
  }
}
For push subscriptions (HTTP webhook), set push_config.push_endpoint to your endpoint URL.
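As a sketch of what that looks like, the pull subscription above can be turned into a push subscription by adding a push_config block. The endpoint URL and the OIDC service account below are placeholders you would replace with your own:

```hcl
resource "google_pubsub_subscription" "ona_runner_releases_push" {
  name    = "ona-runner-releases-push"
  project = var.project_id
  topic   = "projects/gitpod-next-production/topics/gcp-runner-releases"

  ack_deadline_seconds = 60

  push_config {
    # Replace with your webhook endpoint (hypothetical URL)
    push_endpoint = "https://hooks.example.com/ona-releases"

    # Optional: attach an OIDC token so the endpoint can verify
    # that requests really come from Pub/Sub
    oidc_token {
      service_account_email = "push-auth@${var.project_id}.iam.gserviceaccount.com"
    }
  }
}
```

Your endpoint must return a 2xx response within the ack deadline, or Pub/Sub will redeliver the message.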

Message format

Each message includes attributes for filtering:
  • event_type: release.stable
  • version: release version string (e.g., 20250115.0)
  • source: ci_stable_promotion (release promoted to stable) or gcs_notification (stable manifest updated in GCS)
The payload contains the release manifest with image references, download URLs, and change context:
{
  "version": "20250115.0",
  "commit": "abc123def",
  "release_date": "2025-01-15T00:30:00Z",
  "infrastructure_version": "latest",
  "proxy_image": "us-docker.pkg.dev/gitpod-next-production/gitpod-next/gitpod-proxy:20250115.0",
  "runner_image": "us-docker.pkg.dev/gitpod-next-production/gitpod-next/gitpod-gcp-runner:20250115.0",
  "prometheus_image": "us-docker.pkg.dev/gitpod-next-production/gitpod-next/prometheus:v3.5.0",
  "terraform_changes": [
    "- Add network firewall rule for proxy health checks (a1b2c3d)",
    "- Update default machine type to n2d-standard-16 (e4f5g6h)"
  ],
  "iam_changes_detected": false
}
Key fields:
  • terraform_changes: Terraform module commits since the previous release (empty if none). When non-empty, a terraform apply may be needed.
  • iam_changes_detected: true if IAM-related files changed. Signals that IAM configuration or pre-created service accounts may need updating.
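A subscriber can use these attributes and fields to decide whether a release needs manual follow-up. The following is a minimal sketch, assuming the payload shape shown above; the function name and the priority given to IAM changes over Terraform changes are choices of this example, not part of the message contract:

```python
import json

def classify_release(message_data: bytes, attributes: dict) -> str:
    """Decide what action a release message calls for.

    Returns "iam" if IAM-related files changed, "terraform" if the
    manifest lists Terraform module changes, otherwise "auto" (the
    runner updates itself for application-level changes).
    """
    # Skip duplicate GCS notifications if you only act on CI-published events
    if attributes.get("source") != "ci_stable_promotion":
        return "skip"

    manifest = json.loads(message_data)
    if manifest.get("iam_changes_detected"):
        return "iam"
    if manifest.get("terraform_changes"):
        return "terraform"
    return "auto"

# Example using the manifest shown above:
sample = {
    "version": "20250115.0",
    "terraform_changes": [
        "- Add network firewall rule for proxy health checks (a1b2c3d)"
    ],
    "iam_changes_detected": False,
}
action = classify_release(json.dumps(sample).encode(),
                          {"source": "ci_stable_promotion"})
# action == "terraform": a `terraform apply` may be needed
```

In a push-subscription handler, message_data would be the base64-decoded message body and attributes the message attributes from the Pub/Sub envelope.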

Updating Infrastructure

Some updates, particularly those involving significant infrastructure changes, cannot be applied automatically. Follow these steps to apply them:
  1. Navigate to the directory containing your runner Terraform configuration.
  2. Update the version constraint in your main.tf module block to allow the new version. Check the Terraform registry for available versions.
    module "ona_runner" {
      source  = "gitpod-io/ona-runner/google"
      version = "~> 1.0"  # Update this constraint as needed
      # ...
    }
    
  3. Re-initialize Terraform to fetch the new module version:
    terraform init -upgrade
    
  4. Review the planned changes before applying:
    terraform plan -out=tfplan
    
    Carefully review the output for:
    • New resources being created
    • Resources being modified or replaced
    • Any unexpected deletions
  5. Apply the updates:
    terraform apply tfplan
    
  6. Verify the runner status in the Ona dashboard under Settings → Runners to confirm the update was successful.

Delete runner

Deleting a GCP runner is a two-step process: disconnect the runner from Ona, then destroy the GCP infrastructure with Terraform.
Deletion is permanent. Active environments are stopped and discarded, and any uncommitted work or ephemeral data on environment VMs is lost. Push anything you need to keep before starting.

1. Disconnect the runner in Ona

  1. Go to Settings → Runners and select Delete from the runner’s menu.
  2. The runner enters Pending deletion and stops all environments attached to it.
  3. Once environments are fully deleted, the runner record is removed from Ona (this can take a few minutes).
This step stops Ona from scheduling new work on the runner but does not remove any GCP resources. Those remain in your project until you run terraform destroy.

2. Destroy the GCP infrastructure

From the directory containing your runner Terraform configuration:
terraform destroy
Review the plan carefully before confirming. terraform destroy removes everything the module created, including:
  • Compute Engine instance templates, managed instance groups, and autoscalers for the runner and proxy VMs
  • Load balancer components (forwarding rules, backend services, health checks, URL maps)
  • Memorystore for Redis cluster and its Private Service Connect policy
  • Cloud Storage buckets (<runner-id>-runner-assets and, if agents are enabled, <runner-id>-agent-storage)
  • Artifact Registry repository used by the runner
  • Secret Manager secrets (runner token, Redis credentials, metrics config)
  • Pub/Sub topics and subscriptions used for compute lifecycle events
  • Firewall rules and module-created service accounts
  • KMS keyring and key, if create_cmek = true
Deployment typically takes 15–20 minutes to provision; destruction is usually faster but can take several minutes, mostly waiting for the Redis cluster and load balancer to tear down.

3. Resources Terraform does not delete

A clean terraform destroy removes everything the module owns, but several categories of resources are deliberately not managed by the module and must be cleaned up manually if you no longer need them:
  • VPC, subnets, and Cloud NAT. You created these before deployment (see Setup prerequisites). They are not part of the module.
  • Proxy-only subnet (internal load balancer deployments). The REGIONAL_MANAGED_PROXY subnet you created for the internal LB.
  • DNS records for yourdomain.com and *.yourdomain.com pointing at the load balancer IP (see Post-deployment).
  • SSL/TLS certificate. The Certificate Manager certificate (external LB) or the Secret Manager secret (internal LB) you created outside the module.
  • GCS bucket holding Terraform state. If you used a remote backend, the state bucket persists after terraform destroy.
  • Pre-created service accounts. If you supplied any via pre_created_service_accounts, those are owned by your IAM team and are not destroyed.
  • Pre-existing KMS key. If you used create_cmek = false and supplied a kms_key_name, the key stays. Only keys created by the module (create_cmek = true) are destroyed.
  • Release Pub/Sub subscriptions. Any subscription you created to Ona’s release topic (see Release Notifications) lives in your project and is independent of the module.

4. Check for leftovers

After terraform destroy completes, verify that no runner resources remain before closing out the project; leftover resources are the most common source of unexpected costs after deletion. Filter by the module's managed-by=terraform and gitpod-runner-id=<your-runner-id> labels, which are applied to all runner-owned resources:
# Compute Engine: instances, disks, instance templates, instance groups
gcloud compute instances list \
  --project=YOUR_PROJECT_ID \
  --filter="labels.gitpod-runner-id=YOUR_RUNNER_ID"

gcloud compute disks list \
  --project=YOUR_PROJECT_ID \
  --filter="labels.gitpod-runner-id=YOUR_RUNNER_ID"

# Environment snapshots (persistent disk snapshots and machine images used for
# environment persistence and restore, safe to delete once the runner is gone)
gcloud compute snapshots list \
  --project=YOUR_PROJECT_ID \
  --filter="labels.gitpod-runner-id=YOUR_RUNNER_ID"

gcloud compute images list \
  --project=YOUR_PROJECT_ID \
  --filter="labels.gitpod-runner-id=YOUR_RUNNER_ID"

# Storage buckets
gcloud storage buckets list \
  --project=YOUR_PROJECT_ID \
  --filter="labels.gitpod-runner-id=YOUR_RUNNER_ID"

# Secret Manager secrets
gcloud secrets list \
  --project=YOUR_PROJECT_ID \
  --filter="labels.gitpod-runner-id=YOUR_RUNNER_ID"

# Load balancer forwarding rules and reserved static IPs
gcloud compute forwarding-rules list \
  --project=YOUR_PROJECT_ID \
  --filter="description~YOUR_RUNNER_ID OR name~YOUR_RUNNER_ID"

gcloud compute addresses list \
  --project=YOUR_PROJECT_ID \
  --filter="name~YOUR_RUNNER_ID"
Environment VMs and their persistent disks are tagged with gitpod-type=environment rather than the runner labels above. If you see leftover disks, instances, or snapshots with gitpod-type=environment or names starting with env-, these are per-environment resources. The runner normally cleans them up when environments are deleted. If the runner was force-deleted before draining, you may need to remove them manually.
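The labeling rules above can be summarized as a small triage helper. This is an illustrative sketch only; the function name is hypothetical, and the label keys (gitpod-runner-id, gitpod-type) and the env- name prefix come from this page:

```python
def leftover_kind(labels: dict, name: str, runner_id: str) -> str:
    """Classify a resource found after `terraform destroy`.

    Runner-owned resources carry the gitpod-runner-id label;
    per-environment resources carry gitpod-type=environment or an
    env- name prefix; anything else is unrelated to the runner.
    """
    if labels.get("gitpod-runner-id") == runner_id:
        # Should have been destroyed by Terraform; delete manually
        return "runner-owned"
    if labels.get("gitpod-type") == "environment" or name.startswith("env-"):
        # Normally cleaned up when environments are deleted; may need
        # manual removal if the runner was force-deleted before draining
        return "environment"
    return "unrelated"
```

Feeding it the labels and names returned by the gcloud list commands above gives a quick split between resources to delete and resources to leave alone.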
Delete any remaining resources through the GCP Console or gcloud. Pay particular attention to:
  • Persistent disks from environments that were not cleanly terminated. These continue to incur storage costs.
  • Reserved static IP addresses that were not released.
  • Forwarding rules and backend services from the load balancer.
  • Storage buckets. The runner assets bucket has uniform_bucket_level_access and public_access_prevention enforced, but it may still contain objects you need to delete first with gcloud storage rm --recursive gs://BUCKET_NAME.

5. Final verification

Confirm deletion in two places:
  1. Ona dashboard. Settings → Runners no longer lists the runner.
  2. GCP Billing console. After 24 hours, check that no billable resources are being reported against the runner’s labels.
If billing persists, re-run the leftover checks above and inspect the Billing Reports grouped by label to identify the resource driving the cost.