The GCP Runner supports two load balancer modes. Choose the one that matches your network architecture and security requirements.
| | External Load Balancer | Internal Load Balancer |
|---|---|---|
| Access | Internet-accessible | VPC and corporate network only |
| Certificate storage | Google Certificate Manager | Google Secret Manager |
| Additional subnets | None | Proxy subnet + routable subnet |
| Corporate network | Not required | VPN or Interconnect required |
| Best for | Teams accessing from various locations | Enterprises with strict network controls |

External Load Balancer (Default)

Environments are accessible over the internet through Google Cloud’s global load balancer. This is the simpler configuration: no additional networking infrastructure is required beyond a standard VPC subnet.

*Architecture diagram: developers access environments through the public internet via an internet-facing network load balancer, with Cloud NAT for outbound traffic.*

Subnet requirements

| Subnet | Purpose | CIDR recommendation |
|---|---|---|
| Runner subnet | Hosts runner service and environment VMs | /16 for non-routable ranges, /24 minimum for routable ranges |
If the runner subnet does not have external internet access, enable Private Google Access so VMs can reach GCP services through Google’s internal network.
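A runner subnet with Private Google Access enabled could be declared as follows. This is a minimal sketch: the resource names, region, network reference, and CIDR range are illustrative, not values from the runner’s Terraform module.

```hcl
# Illustrative subnet definition; the runner's own Terraform module
# manages this via its input variables.
resource "google_compute_subnetwork" "runner" {
  name                     = "ona-runner-subnet" # example name
  network                  = google_compute_network.vpc.id
  region                   = "us-central1"       # example region
  ip_cidr_range            = "10.10.0.0/16"      # non-routable /16
  private_ip_google_access = true                # reach GCP APIs without external IPs
}
```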

SSL/TLS certificate

  • Store your certificate in Google Certificate Manager
  • The certificate must include both the root domain and wildcard as Subject Alternative Names (SANs):
    • yourdomain.com
    • *.yourdomain.com
  • Managed certificates or uploaded certificates are both supported
  • Reference format: `projects/{project}/locations/global/certificates/{name}`
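A Google-managed certificate covering both required SANs could be sketched in Terraform as below. Names and domains are examples; note that wildcard managed certificates also require a DNS authorization, included here for completeness.

```hcl
# Illustrative managed certificate; names and domains are examples.
resource "google_certificate_manager_dns_authorization" "root" {
  name   = "yourdomain-dnsauth"
  domain = "yourdomain.com"
}

resource "google_certificate_manager_certificate" "runner" {
  name = "ona-runner-cert" # example name
  managed {
    # Both required SANs: root domain and wildcard
    domains            = ["yourdomain.com", "*.yourdomain.com"]
    dns_authorizations = [google_certificate_manager_dns_authorization.root.id]
  }
}
```

With these example names, the resulting reference would have the form `projects/{project}/locations/global/certificates/ona-runner-cert`.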
For the Terraform variables to configure this mode, see Setup: External Load Balancer.

Internal Load Balancer

All traffic stays within your VPC and corporate network; environments are not accessible from the internet. This mode requires corporate network connectivity (VPN, Interconnect, etc.) to your GCP VPC.

The architecture differs from the external mode because GCP’s internal passthrough Network Load Balancer does not support TLS termination. TLS is instead terminated inside the Ona proxy component, which is why the certificate must be stored in Secret Manager (accessible to the proxy) rather than Certificate Manager. This mode also requires two additional subnets:
  • Proxy subnet: GCP provisions its own Envoy-based proxy infrastructure in this subnet to route traffic to the internal load balancer. This is a GCP requirement for regional managed proxies.
  • Routable subnet: provides a routable IP address for the internal load balancer endpoint, reachable from your corporate network via VPN or Interconnect.

*Architecture diagram: on-prem developers connect via VPN/Interconnect through an internal proxy network load balancer, with proxy-only and routable subnets, and a web proxy for outbound internet traffic.*

Subnet requirements

The internal load balancer requires three subnets:
| Subnet | Purpose | CIDR recommendation | Routable from corporate network? |
|---|---|---|---|
| Runner subnet | Hosts runner service and environment VMs | /16 for non-routable ranges, /24 minimum for routable ranges | Not required |
| Proxy subnet | Reserved for internal LB proxy instances | /26 minimum (64 IPs); GCP recommends /23 | No |
| Routable subnet | Allocates the internal LB IP address | /28 (16 IPs) | Yes, must be reachable from your network |
Additional subnet requirements:
  • The proxy subnet must have its purpose set to `REGIONAL_MANAGED_PROXY`
  • The routable subnet must have routes from your internal/on-premises network
  • If the runner subnet lacks external internet access, enable Private Google Access
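The two additional subnets could be declared as follows. This is a sketch only: names, region, network reference, and CIDR ranges are illustrative.

```hcl
# Illustrative proxy-only subnet for GCP's Envoy-based internal LB proxies.
resource "google_compute_subnetwork" "proxy" {
  name          = "ona-proxy-subnet"       # example name
  network       = google_compute_network.vpc.id
  region        = "us-central1"            # example region
  ip_cidr_range = "10.20.0.0/23"           # GCP-recommended /23
  purpose       = "REGIONAL_MANAGED_PROXY" # required for regional managed proxies
  role          = "ACTIVE"
}

# Illustrative routable subnet that allocates the internal LB IP address;
# its range must be reachable from your corporate network.
resource "google_compute_subnetwork" "routable" {
  name          = "ona-routable-subnet"
  network       = google_compute_network.vpc.id
  region        = "us-central1"
  ip_cidr_range = "192.168.100.0/28" # /28, advertised via VPN/Interconnect
}
```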

SSL/TLS certificate

  • Store your certificate in Google Secret Manager as a JSON object containing both the certificate and private key
  • The certificate must include both SANs: yourdomain.com and *.yourdomain.com
  • Reference format: `projects/{project}/secrets/{secret-name}`
  • See Setup: Internal Load Balancer for the expected secret JSON format
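As a rough illustration, the secret could be created in Terraform as below. The secret name and JSON keys here are hypothetical; consult Setup: Internal Load Balancer for the exact JSON structure the runner expects.

```hcl
# Hypothetical payload shape; see Setup: Internal Load Balancer
# for the authoritative JSON keys.
resource "google_secret_manager_secret" "runner_cert" {
  secret_id = "ona-runner-cert" # example name
  replication {
    auto {}
  }
}

resource "google_secret_manager_secret_version" "runner_cert" {
  secret = google_secret_manager_secret.runner_cert.id
  secret_data = jsonencode({
    certificate = file("cert.pem") # hypothetical key name
    private_key = file("key.pem")  # hypothetical key name
  })
}
```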
For the Terraform variables to configure this mode, see Setup: Internal Load Balancer.

Additional requirements

  • Corporate network connectivity to your GCP VPC (VPN, Interconnect, or similar)
  • DNS resolution from your corporate network for the runner domain
  • The proxy subnet and routable subnet must exist before running `terraform apply`

Outbound connectivity

The runner and environment VMs need outbound internet access to reach the Ona management plane, container registries, IDE downloads, and other external services. See Access Requirements: Network Connectivity for the full list of required endpoints. How you provide this access depends on your network architecture:
| Strategy | When to use |
|---|---|
| External IP addresses | Simplest option. Each VM gets a public IP and routes directly to the internet. Suitable when your security policy allows direct egress. |
| Cloud NAT | VMs use private IPs only. A Cloud NAT gateway provides outbound internet access without exposing VMs publicly. Common for production deployments. |
| HTTP/HTTPS proxy | Route outbound traffic through a corporate proxy. Configure via the `proxy_config` Terraform variable. See Setup: HTTP Proxy Configuration. |
| Private Service Connect | Access specific Google APIs over private endpoints without internet egress. See GCP Private Service Connect. |
If your VPC has no default internet route and you’re not using Cloud NAT or a proxy, VMs will fail to pull container images, download IDE components, or connect to the Ona management plane. Ensure at least one egress path is configured before deployment.
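As one example of an egress path, a minimal Cloud NAT setup might look like the following sketch (resource names, region, and network reference are illustrative):

```hcl
# Illustrative Cloud NAT gateway giving private VMs outbound internet access.
resource "google_compute_router" "nat" {
  name    = "ona-nat-router" # example name
  network = google_compute_network.vpc.id
  region  = "us-central1"    # example region
}

resource "google_compute_router_nat" "nat" {
  name                               = "ona-nat"
  router                             = google_compute_router.nat.name
  region                             = google_compute_router.nat.region
  nat_ip_allocate_option             = "AUTO_ONLY"
  source_subnetwork_ip_ranges_to_nat = "ALL_SUBNETWORKS_ALL_IP_RANGES"
}
```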
For more on GCP egress options, see Google Cloud network connectivity overview.

Next steps