Requires Enterprise plan. Currently available on AWS runners only. GCP support is coming soon.
Warm pools keep pre-initialized EC2 instances ready to claim. When a user creates an environment, Ona assigns a warm instance instead of launching a new one, reducing startup time from minutes to seconds. The pool scales dynamically between a minimum and maximum size based on demand. It scales up during peak hours and back down (optionally to zero) when idle.

When to use warm pools

Warm pools are most effective for:
  • Large or monorepo projects where EBS snapshot restoration adds noticeable startup latency. EC2 instances lazy-load data from the prebuild snapshot on first boot, and larger volumes take longer to fully hydrate. For these projects, warm pools can cut startup time from minutes to around 10 seconds.
  • Smaller projects with many users where prebuilds already bring startup to 30-50 seconds. Warm pools can reduce this further to around 10 seconds. Whether the cost is justified depends on how many engineers use the project and how often they create new environments versus reusing existing ones.
  • Latency-sensitive workflows where every second of startup time matters (e.g. PR review environments, demo environments).
Without warm pools, each environment launch provisions a new EC2 instance and restores the prebuild snapshot. With warm pools, the instance is already running and the snapshot is already loaded, skipping the most time-consuming parts of startup.
Actual startup time depends on your devcontainer configuration. Dotfiles installation and post-prebuild lifecycle hooks (postCreateCommand, postStartCommand, postAttachCommand) still run when the environment starts. Optimize these commands for the fastest experience.
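For example, the lifecycle hooks in a project's `.devcontainer/devcontainer.json` might look like this (the commands are illustrative, not from a real project):

```json
{
  "postCreateCommand": "npm ci",
  "postStartCommand": "npm run dev:prepare",
  "postAttachCommand": "echo 'Environment ready'"
}
```

Keeping these commands fast, or moving heavy work into the prebuild itself, preserves the startup gains from warm pools.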

How it works

  1. You enable a warm pool for a project and environment class, specifying a minimum and maximum pool size.
  2. The runner launches EC2 instances from the latest prebuild snapshot, scaling between the minimum and maximum pool size.
  3. The pool scales dynamically between your configured minimum and maximum based on demand (see Dynamic scaling).
  4. When a user creates an environment, Ona claims a warm instance from the pool instead of launching a new one. The pool immediately begins replenishing.
  5. When a new prebuild completes, the pool rotates instances to use the new snapshot.
Instances can be claimed even while still initializing. A partially warmed instance is still faster than a cold start because the EC2 instance is already running and the snapshot is partially loaded.

Dynamic scaling

Warm pools scale automatically based on demand. The runner monitors how frequently instances are claimed and adjusts the number of running instances accordingly. It scales up when environments are being created (by engineers, automations, or agents) and scales back down when demand drops.
  • Scale-out happens when sustained demand exceeds what the current running instances can serve. Stopped instances are started to meet demand (see Stopped instances), so scaling out takes roughly 1–2 minutes rather than the 5+ minutes of a cold boot.
  • Scale-in happens when demand drops. The pool waits for demand to stay low before removing instances, so brief idle periods don’t cause unnecessary churn.
  • The pool never scales below min-size or above max-size.
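As a rough sketch (not the runner's actual algorithm), the target number of running instances can be thought of as recent demand clamped to the configured bounds:

```shell
#!/bin/sh
# Illustrative only: clamp a demand-derived target to [min_size, max_size].
min_size=1
max_size=5
demand=7   # e.g. environments created in the recent demand window

target=$demand
if [ "$target" -lt "$min_size" ]; then target=$min_size; fi
if [ "$target" -gt "$max_size" ]; then target=$max_size; fi

echo "target running instances: $target"   # prints 5 for this example
```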

Stopped instances

In addition to running instances, the pool maintains stopped instances that are pre-provisioned but not actively running. These act as a buffer for fast scale-out: when demand increases, a stopped instance can be started in roughly 1–2 minutes instead of provisioning a new one from scratch (~5 minutes). The number of stopped instances equals max-size minus the current number of running instances. For example, with max-size = 5 and 3 running instances, there are 2 stopped instances ready to start. Stopped instances cost little because you pay only for their EBS storage, not compute, so the pool can maintain headroom for burst demand without paying for idle running instances.
Stopped instances only help when the pool scales out (adding more running instances). They do not reduce startup time for the first environment when the pool is scaled to zero, because there are no running instances to claim. In that case, the first environment is a cold start.
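The stopped-instance count described above is simple arithmetic; a quick sketch:

```shell
#!/bin/sh
# Stopped instances fill the gap between running instances and max-size.
max_size=5
running=3
stopped=$((max_size - running))
echo "stopped instances ready to start: $stopped"   # prints 2 for this example
```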

Scale to zero

Setting min-size to 0 allows the pool to scale down to zero running instances when there is no demand. This is useful for reducing costs during off-hours, weekends, or for projects with intermittent usage. Tradeoffs:
| | min-size = 0 | min-size ≥ 1 |
| --- | --- | --- |
| Off-hours cost | No running instances, no compute cost | At least one instance running 24/7 |
| First environment after idle | Cold start (similar to not having a warm pool) | Near-instant (~10 seconds) |
| Subsequent environments | Near-instant once the pool has scaled up | Near-instant |
How quickly does the pool scale down to zero? The pool requires at least 30 minutes of inactivity (no environments created) before it begins scaling down to zero. This idle guard prevents aggressive scale-down during brief lulls, such as a lunch break. Once the idle guard expires, AWS completes the scale-in shortly afterward.
Use min-size = 0 for projects where occasional slower startups are acceptable in exchange for lower cost. Use min-size = 1 (or higher) for projects where instant startup is always expected.

Prerequisites

  • An Enterprise plan with an AWS runner (GCP support is coming soon)
  • Prebuilds enabled for the project and environment class

Enable warm pools

Only project admins can configure warm pools.

Via the dashboard

  1. Navigate to your project settings
  2. In the Prebuilds section, find the environment class list
  3. Expand an environment class row. A Warm Pool toggle appears below each class that has prebuilds enabled
  4. Toggle Warm Pool on
  5. Set the Min Size and Max Size to define the scaling range
The warm pool toggle only appears for environment classes on runners that support warm pools and have prebuilds enabled.

Via the CLI

# Create a warm pool with scaling bounds
ona prebuild warm-pool create <project-id> \
  --environment-class-id <class-id> \
  --min-size 1 \
  --max-size 3

# Create a warm pool that can scale to zero
ona prebuild warm-pool create <project-id> \
  --environment-class-id <class-id> \
  --min-size 0 \
  --max-size 3

# List warm pools for a project
ona prebuild warm-pool list --project-id <project-id>

# Check warm pool status
ona prebuild warm-pool get <warm-pool-id>

# Update scaling bounds
ona prebuild warm-pool update <warm-pool-id> --min-size 0 --max-size 5

# Delete a warm pool
ona prebuild warm-pool delete <warm-pool-id>
All commands support --format json, --format yaml, and --format table output.

Pool lifecycle

Warm pools go through these phases:
| Phase | Description |
| --- | --- |
| Pending | Pool created, waiting for a prebuild snapshot to be assigned |
| Ready | Instances are available for claiming (running count may vary based on current demand) |
| Degraded | The runner reported a problem (e.g. failed to launch instances). Check the failure message for details |
| Deleting | Pool is being deleted, instances are draining |
| Deleted | All instances terminated, cleanup complete |
When you disable prebuilds or delete the warm pool, instances drain gracefully. The pool enters the Deleting phase and transitions to Deleted once all instances are terminated.

Cost

Warm pool instances are regular EC2 instances in your AWS account; no Ona credits are consumed, and you pay only for the underlying AWS infrastructure. With dynamic scaling, you pay for compute only while instances are actually running, and the pool scales down during low-demand periods. Running instances (actively serving or waiting to be claimed) are billed at standard EC2 on-demand rates. Stopped instances (pre-provisioned for fast scale-out) incur no compute charges, only the cost of their EBS volumes.

Cost estimates

Costs depend on the environment class (instance type) and AWS region. Estimates below use us-east-1 on-demand pricing:
| Environment class | Instance type | Approx. cost per running instance/month |
| --- | --- | --- |
| Small | m6i.large (2 vCPU, 8 GB) | ~$70 |
| Regular | m6i.xlarge (4 vCPU, 16 GB) | ~$140 |
| Large | m6i.2xlarge (8 vCPU, 32 GB) | ~$280 |
Example: A warm pool configured with min-size = 0 and max-size = 3 using m6i.xlarge (Regular) instances might average 1 running instance during business hours and 0 outside of them. That costs roughly $70–100/month instead of $280/month for a fixed pool of 2. EBS volume costs are additional but typically small relative to compute, roughly $0.08/GB/month for gp3 volumes.
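The estimate above can be reproduced with back-of-the-envelope arithmetic. The figures below are illustrative assumptions (average utilization and EBS volume size are made up for the example), not quoted prices:

```shell
#!/bin/sh
# Rough monthly cost: average running instances x full-month instance cost,
# plus EBS storage. All figures are illustrative assumptions.
monthly_per_instance=140   # ~m6i.xlarge on-demand, USD/month
avg_running="0.6"          # pool averages ~1 instance in business hours, 0 off-hours
ebs_gb=100                 # assumed total EBS volume size
ebs_rate="0.08"            # gp3, USD/GB/month

awk -v c="$monthly_per_instance" -v a="$avg_running" -v g="$ebs_gb" -v r="$ebs_rate" \
  'BEGIN { printf "approx. monthly cost: $%.0f\n", c * a + g * r }'   # prints $92
```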
Start with min-size = 1 and max-size = 2. Increase max-size if your team frequently hits cold starts during peak hours. Switch to min-size = 0 once you’re comfortable with the startup tradeoff during off-hours.

Viewing warm pool costs in AWS

Filter AWS Cost Explorer by the tag gitpod.dev/warm-pool to isolate warm pool instance costs from regular environment costs. See Costs & budgeting for general cost tracking.

Sizing guidance

| Team size | Recommended min | Recommended max | Rationale |
| --- | --- | --- | --- |
| 1–10 engineers | 0–1 | 2 | Low concurrency; scale-to-zero saves cost for small teams |
| 10–30 engineers | 1 | 3 | Keeps one instance always ready; scales for burst patterns |
| 30+ engineers | 1–2 | 3–5 | Higher baseline for peak-hour concurrency |
These are starting points. The right configuration depends on how large the project is (larger projects take longer to replenish), how often engineers create new environments versus reusing existing ones, and whether off-hours cost savings matter. Monitor your pool’s claim hit rate and adjust.

Viewing warm pool usage

You can view warm pools configured in your organization from the dashboard or CLI.
Organization admins see all warm pools. Other members only see warm pools for projects they have access to.

Via the dashboard

Navigate to a runner’s details page and select the Warm Pools tab. This shows all warm pools on that runner, including the associated project, environment class, pool size, and current phase.

Via the CLI

# List all warm pools in the organization
ona prebuild warm-pool list

# Filter by a specific project
ona prebuild warm-pool list --project-id <project-id>

# Filter by environment class
ona prebuild warm-pool list --environment-class-id <class-id>

# Output as JSON for scripting
ona prebuild warm-pool list --format json

# Get details for a specific warm pool
ona prebuild warm-pool get <warm-pool-id>

Monitoring

Warm pools expose Prometheus metrics through the runner’s metrics endpoint. Use these to track pool utilization, claim hit rate, and scaling behavior. See Custom metrics pipeline for setup instructions and the full list of available metrics. Key metrics to watch:
  • warm_pool_claims_total: Track the instance_not_found result to see how often users hit cold starts. If this happens frequently, increase max-size or min-size (the pool may be scaling down too aggressively).
  • warm_pool_claim_instance_age_seconds: Shows how long instances waited before being claimed. Very short ages may indicate the pool is undersized.
  • warm_pool_instances_by_state: Compare running vs stopped instance counts to verify scaling behavior.
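To gauge how often users hit cold starts, compare instance_not_found claims against total claims over a window. A minimal sketch using two hypothetical counter deltas (in practice you would query warm_pool_claims_total from your Prometheus instance):

```shell
#!/bin/sh
# Hypothetical 24h deltas from warm_pool_claims_total; the values are
# made up for illustration.
claims_total=200
claims_not_found=14

awk -v t="$claims_total" -v m="$claims_not_found" \
  'BEGIN { printf "cold-start (miss) rate: %.1f%%\n", 100 * m / t }'   # prints 7.0%
```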

Limitations

  • AWS only. GCP support is coming soon.
  • No spot instances. Warm pools require on-demand environment classes. If you enable a warm pool on a spot environment class, the pool enters the Degraded phase. Use a non-spot class instead.