This feature is only available on the Enterprise tier
Prerequisites
- You must have an AWS account with Bedrock enabled and model access approved.
- You must have an Ona enterprise license.
- Your enterprise runner must be deployed in the same AWS account and have IAM permissions to invoke Bedrock.
Set up AWS permissions and model access
- Go to the AWS Bedrock console
- Open Model access and request access to the Anthropic Claude model(s) you plan to use. We recommend using Claude Sonnet 4.
- Wait for approval
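Alongside model access, the runner's IAM role needs the invoke permissions named in the prerequisites. A minimal policy sketch; the wildcard `Resource` is illustrative only and should be scoped to specific model or inference-profile ARNs in production:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "InvokeBedrockModels",
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream"
      ],
      "Resource": "*"
    }
  ]
}
```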
Configure the Bedrock endpoint
Endpoint format
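The endpoint is expressed as a model ID with the `bedrock://` scheme rather than an HTTPS URL:

```text
bedrock://<model-id>
```

For example, `bedrock://eu.anthropic.claude-sonnet-4-20250514-v1:0`. Include the full version suffix (for example, `-v1:0`).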
Add the configuration to Ona
You can configure a Bedrock integration in two ways:
- Gitpod CLI
- Through the UI
Option 1: Gitpod CLI
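As a sketch, assemble the endpoint value from your region's model ID and pass it to the `gitpod runner config llm-integration` command group. The subcommand and flags shown in the comment are assumptions; confirm them with `gitpod runner config llm-integration --help`.

```shell
# Assemble the Bedrock endpoint from a region-specific model ID.
MODEL_ID="eu.anthropic.claude-sonnet-4-20250514-v1:0"  # example; adjust for your region
ENDPOINT="bedrock://${MODEL_ID}"
echo "$ENDPOINT"

# Hypothetical invocation; confirm the real subcommand and flags with --help:
# gitpod runner config llm-integration create --runner-id <runner-id> --endpoint "$ENDPOINT"
```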
Option 2: Through the UI
- Go to the Runners settings page
- Select an enterprise runner where you want to enable LLM integration
- Scroll down to the “LLM Providers” section
- Click the Configure button

LLM Provider configuration screen
- In “Bedrock Model ID”, enter the model ID for your region (for example, `eu.anthropic.claude-sonnet-4-20250514-v1:0`). Tip: Verify the identifier for your region in the AWS Bedrock model IDs documentation.
- “API Key”: Not required for Bedrock (AWS credentials are used).
- If a model preset selector is shown, choose “Claude 4 Sonnet”.
- Click Create Integration

LLM Provider configuration screen
Verify the integration
- Create a new environment with the configured runner
- Open Ona Agent and start a conversation
- If Ona Agent responds, the integration is working; test it with a simple code generation request
Supported models
AWS Bedrock supports Anthropic Claude models:
- Claude Sonnet 4 (highly recommended)
- Claude Sonnet 3.7 (supported but not recommended; only use when Sonnet 4 is not available in your region)
Identifying available models
There are two ways to identify models: foundation models and inference profiles. Inference profiles are resources that define a model and one or more regions for routing requests, enabling cross-region inference, usage tracking, and cost monitoring. Availability varies by region — some regions only support foundation models, while others support both foundation models and inference profiles. Check model availability per region in the AWS documentation.

You can check availability using AWS CLI commands from your environment, assuming you have proper authentication and region environment variables set.
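As a sketch, assuming the AWS CLI v2 and valid credentials, the following lists the Anthropic foundation models and the inference profiles visible in the current region (the `--query` filters are illustrative):

```shell
# Geographic prefix used by cross-region inference profiles (e.g. "eu" or "us").
REGION="${AWS_REGION:-eu-west-1}"   # example default; use your runner's region
GEO_PREFIX="${REGION%%-*}"
echo "inference profile prefix: ${GEO_PREFIX}"

# Both calls need the AWS CLI v2 and working credentials; skip otherwise.
if command -v aws >/dev/null 2>&1 && aws sts get-caller-identity >/dev/null 2>&1; then
  # Foundation models from Anthropic available in this region
  aws bedrock list-foundation-models --region "$REGION" \
    --by-provider anthropic --query 'modelSummaries[].modelId'

  # Inference profiles (cross-region routing), where the region supports them
  aws bedrock list-inference-profiles --region "$REGION" \
    --query 'inferenceProfileSummaries[].inferenceProfileId'
fi
```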
Testing model connectivity
To verify that a model works, run a simple smoke test from your environment.
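A minimal smoke test using the Bedrock `converse` API via the AWS CLI v2; the model ID is an example, so substitute one available in your region:

```shell
# Requires the AWS CLI v2, valid credentials, and approved model access.
MODEL_ID="eu.anthropic.claude-sonnet-4-20250514-v1:0"  # example; adjust for your region
echo "smoke-testing ${MODEL_ID}"

if command -v aws >/dev/null 2>&1 && aws sts get-caller-identity >/dev/null 2>&1; then
  # One round-trip through the model; prints the assistant's reply text.
  aws bedrock-runtime converse \
    --model-id "$MODEL_ID" \
    --messages '[{"role":"user","content":[{"text":"Reply with one word: ok"}]}]' \
    --query 'output.message.content[0].text' \
    --output text
fi
```

If the call returns text, the runner's credentials, model access, and model ID are all correct.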
Troubleshooting
Common issues
Model not available in region
- Use a model ID that is available in your runner’s region
- For regionalized models, use the correct prefix (for example, `us.anthropic.*`)

Access denied errors
- Ensure the runner role has `bedrock:InvokeModel` and `bedrock:InvokeModelWithResponseStream`
- Confirm model access is approved in the Bedrock console

Invalid model identifier
- Use `bedrock://<model-id>` and include the full version (for example, `...-v1:0`)

Token limits and quotas
- Lower the maximum tokens with `gitpod runner config llm-integration set-max-tokens ...`
- Review Bedrock service quotas and request increases if needed
Getting help
If you encounter issues:
- Check Ona Agent logs for detailed error messages
- Verify AWS quotas and Bedrock model access
- Contact your account manager for additional support
Next steps
- Review the AWS Bedrock User Guide
- See Anthropic Claude on Bedrock
- Monitor usage and performance in CloudWatch