Ona Agent supports integration with Portkey, an AI gateway that provides unified access to multiple LLM providers with advanced features like load balancing, fallbacks and observability. This guide shows you how to configure Portkey as your LLM gateway.

Prerequisites

Create API credentials in Portkey

Step 1: Get your Portkey API Key

  1. Go to the Portkey Dashboard
  2. Navigate to API Keys in the left sidebar
  3. Click Create API Key
  4. Enter a name for your API key (e.g., “ona-agent-integration”)
  5. Click Create
  6. Copy the generated API key and store it safely in your vault
Step 2: Create a Virtual Key

Virtual Keys in Portkey allow you to configure specific LLM providers, models, and settings:
  1. In the Portkey Dashboard, go to Virtual Keys
  2. Click Create Virtual Key
  3. Configure your preferred settings:
    • Provider: Select your LLM provider (e.g., Anthropic, Vertex)
    • Model: Choose the specific model you want to use
    • Additional settings: Configure any provider-specific parameters
  4. Click Create Virtual Key
  5. Copy the Virtual Key ID
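Before wiring the Virtual Key into Ona, you can sanity-check it directly against Portkey's REST API. The snippet below is a minimal sketch: the model name and message are illustrative, and it assumes PORTKEY_API_KEY and PORTKEY_VIRTUAL_KEY hold the values you just created.

```shell
# Smoke-test the Virtual Key against Portkey's chat completions endpoint.
# The model name is illustrative; use one enabled on your Virtual Key.
curl -s https://api.portkey.ai/v1/chat/completions \
  -H "x-portkey-api-key: $PORTKEY_API_KEY" \
  -H "x-portkey-virtual-key: $PORTKEY_VIRTUAL_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "claude-sonnet-4-20250514",
        "max_tokens": 64,
        "messages": [{"role": "user", "content": "Reply with OK"}]
      }'
```

A successful JSON response confirms the Virtual Key resolves to a working provider credential; an authentication error at this stage points at the Virtual Key configuration rather than at Ona.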

Step 3: Configure the LLM Integration on Ona

  1. Log in to your Gitpod account:
$ gitpod login
  2. Switch organizations if needed, selecting the one on which you want to configure the integration:
$ gitpod organization switch
  3. Find the runner on which you want to configure the integration:
$ gitpod runner list
ID             NAME             KIND      STATUS    VERSION
<runner-id>    <runner-name>    remote    active    20250826.48
  4. Create the LLM integration:
$ gitpod runner config llm-integration create \
	<runner-id> SUPPORTED_MODEL_SONNET_4 $PORTKEY_GATEWAY_URL "$PLACEHOLDER"
  • PORTKEY_GATEWAY_URL should be https://api.portkey.ai/v1/messages or the URL of a self-hosted gateway.
  • PLACEHOLDER can be any non-empty value (such as gitpod); it is not actually used by the integration, since the API key is either managed by Portkey or passed as a header. The command simply requires a non-empty value to be valid.
  5. Check that the integration was created as expected:
$ gitpod runner config llm-integration list
ID                  RUNNER_ID      MODEL                       ENDPOINT                              MAX_TOKENS       PHASE        PHASE_REASON
<integration-id>    <runner-id>    SUPPORTED_MODEL_SONNET_4    https://api.portkey.ai/v1/messages    model default    available
  6. Configure Portkey headers: Portkey uses headers and configurations to decide how it accesses different LLM providers and to customise its behaviour. Ona lets you set headers on an LLM integration, in this case passing them through to Portkey. Choose one of the two approaches below.
6a. Use a Portkey configuration (recommended): With this option you centralise your API keys and LLM integration customisations in Portkey. First create a new configuration in Portkey and note down its ID. Then configure the necessary headers on your integration:
$ gitpod runner config llm-integration set-header <integration-id> x-portkey-config $PORTKEY_CONFIG_ID

$ gitpod runner config llm-integration set-header <integration-id> x-portkey-api-key $PORTKEY_API_KEY
Where:
  • PORTKEY_CONFIG_ID is the ID of the configuration you set up earlier (it looks like pc-xxx).
  • PORTKEY_API_KEY is your Portkey API key.
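For reference, a Portkey configuration is a JSON document managed under Configs in the Portkey dashboard. A minimal sketch that routes requests through a Virtual Key might look like the following; the field names follow Portkey's config schema, and all values are placeholders:

```json
{
  "virtual_key": "anthropic-vk-xxx",
  "override_params": {
    "model": "claude-sonnet-4-20250514",
    "max_tokens": 8192
  }
}
```

More elaborate configs can add a strategy block for fallbacks or load balancing across several targets; see Portkey's documentation for the full schema.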
6b. Passthrough: In this mode Portkey acts as a simple gateway, but it cannot be customised to the same extent as with a configuration. Configure the integration as follows:
$ gitpod runner config llm-integration set-header <integration-id> Authorization "Bearer $LLM_API_KEY"

$ gitpod runner config llm-integration set-header <integration-id> x-portkey-api-key $PORTKEY_API_KEY

$ gitpod runner config llm-integration set-header <integration-id> x-portkey-provider $PROVIDER
Where:
  • LLM_API_KEY is the API key of the provider that you want Portkey to talk to, for example an Anthropic API key.
  • PORTKEY_API_KEY is your Portkey API key.
  • PROVIDER is the name of the LLM provider to use, for example anthropic.
  7. Remove or update headers (optional): You might want to customise the configuration further or remove headers added by mistake.
  • You can use the set-header command to create or overwrite headers.
  • You can use the remove-header command to delete a header.
  • You can use the list-headers command to list the headers enabled on the integration.
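For example, assuming the integration ID from the earlier list command and a hypothetical replacement config ID:

```shell
# Inspect which headers are currently set on the integration
$ gitpod runner config llm-integration list-headers <integration-id>

# set-header creates a header or overwrites it if it already exists
$ gitpod runner config llm-integration set-header <integration-id> x-portkey-config $NEW_PORTKEY_CONFIG_ID

# Remove a header that is no longer needed
$ gitpod runner config llm-integration remove-header <integration-id> x-portkey-config
```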

Monitoring and Observability

Portkey provides comprehensive observability for your LLM requests:
  • Request Logs: View all requests made through Portkey
  • Analytics: Monitor usage, costs, and performance metrics
  • Alerts: Set up notifications for errors or usage thresholds
Access these features in your Portkey Dashboard under the Observability section.

Verify the integration

  1. Create a new environment on the runner where you configured the integration
  2. Open Ona Agent and confirm it can access your configured LLM models through Portkey
  3. Test with a simple code generation request
  4. Check your Portkey Dashboard to verify requests are being logged
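If nothing shows up in the dashboard, you can send a request manually through the same endpoint the integration uses. This sketch assumes the config-based setup from step 6a and uses the Anthropic Messages API request shape, matching the /v1/messages endpoint configured above; the model name is illustrative:

```shell
# Manual request through the gateway endpoint used by the integration;
# it should appear in Portkey's request logs like any Ona request.
curl -s https://api.portkey.ai/v1/messages \
  -H "x-portkey-api-key: $PORTKEY_API_KEY" \
  -H "x-portkey-config: $PORTKEY_CONFIG_ID" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "claude-sonnet-4-20250514",
        "max_tokens": 64,
        "messages": [{"role": "user", "content": "ping"}]
      }'
```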

Troubleshooting

Common Issues

Authentication Errors
  • Verify your Portkey API key is correct and active
  • Ensure your Virtual Key (if used) is properly configured
  • Check that your underlying LLM provider credentials in Portkey are valid
Model Access Issues
  • Confirm the model specified in your Virtual Key or Config is available
  • Verify your underlying LLM provider has sufficient quota/credits
  • Check Portkey’s status page for any service disruptions
Request Failures
  • Review request logs in your Portkey Dashboard for detailed error messages
  • Ensure your Portkey Config (if used) has valid fallback providers configured
  • Verify network connectivity between your Gitpod environment and Portkey

Getting Help

  • Check the Portkey Documentation for detailed configuration guides
  • Contact Portkey support through their dashboard for gateway-specific issues
  • Reach out to your Gitpod account manager for Ona Agent integration support

Next Steps

Your Portkey LLM gateway is now configured and ready to use with Ona Agent. You can:
  • Configure additional LLM providers in Portkey for redundancy
  • Set up advanced routing rules and fallbacks
  • Monitor your AI usage and costs through Portkey’s analytics
  • Explore Portkey’s prompt management and caching features
For more advanced Portkey features, refer to the Portkey Documentation.