Anshul Jain

How to Use Portkey.ai with HARPA AI to Access ANY LLM Provider

So I recently discovered Portkey.ai, thanks to a HARPA user (@Mist) on Discord. I’d long been searching for a way to use LLMs from providers like Gemini, Azure, AWS, etc. with HARPA AI and its powerful automation capabilities, and this turned out to be the perfect solution.


Portkey.ai is an AI gateway that routes almost any LLM provider into an OpenAI format, so it’s natively compatible with most apps and services that use this format, including HARPA AI. 


Setting it up was a bit tougher than I expected though, so I figured I’d write a quick guide for anyone else looking to do the same. While they have some brilliant docs and guides, it’s a lot to navigate just to set up a simple thing like this. 


Note that there are other AI gateway options you can consider: Cloudflare AI Gateway and LiteLLM. I checked them out too, but I had already set up Portkey and it works great for me. 


Why Set Up Portkey in HARPA AI? 


To state the benefits quickly:


  • Use HARPA AI with any LLM provider that Portkey supports, which is almost everything. 

  • Log and analyze your API performance, cost, usage, etc. easily. 

  • Load balance and cache your requests if needed (not covered in this guide). 


Steps to Integrate Portkey.ai With HARPA AI


1) Sign up at Portkey


Just go to the Portkey website and sign up for a free account. You may have to use a work email address or one on your own domain, as it didn’t let me sign up with my main Gmail address.


2) Add a Virtual Key With Your API Provider


Go to Virtual Keys in the left sidebar and add a new one.


Choose your LLM provider: this could be anything that Portkey supports (and they support nearly everything!) 


Screenshot of Portkey Virtual Keys Menu

For example, Gemini, Azure, Mistral, Jina, Vertex, etc. Full list here.


You just need a working API key from any of these LLM providers, and Portkey will route your requests accordingly. 


You can even run local AI models using Ollama, but that’s a bit of an extra step and a little more difficult to do. 


3) Copy These Settings in HARPA AI


Next, you just have to set up your Portkey settings in HARPA like this: 


Screenshot of HARPA AI showing Portkey Settings

API Key: Enter your Portkey.ai API key here (you can get it from your Settings page).


Model: This is the name of the model from the LLM provider. 


API Proxy URL: https://api.portkey.ai (note that there should be no /v1 or other path here; HARPA just needs the base URL)


Headers:


{
  "x-portkey-provider": "<provider slug>",
  "x-portkey-api-key": "<Portkey API Key>",
  "x-portkey-virtual-key": "<Portkey Virtual Key>"
}

And voila! You should be able to use your configured provider inside HARPA. 
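If you want to sanity-check the setup outside HARPA first, you can hit Portkey’s OpenAI-compatible endpoint directly with the same headers. A minimal sketch (swap in your own keys and a model name your provider actually recognizes):

curl https://api.portkey.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "x-portkey-provider: <provider slug>" \
  -H "x-portkey-api-key: <Portkey API Key>" \
  -H "x-portkey-virtual-key: <Portkey Virtual Key>" \
  -d '{ "model": "<model name>", "messages": [{ "role": "user", "content": "Say hello" }] }'

The /v1/chat/completions path here is Portkey’s OpenAI-compatible endpoint; in HARPA you leave the Proxy URL bare, presumably because it appends the path itself. If this curl returns a normal chat completion, the same values will work in HARPA.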


There’s a lot more you can set up if you want to, like configs for load balancing, caching, etc. I won’t get into that here since I don’t use them, but the Portkey Docs are a good place for that. 


How It Works


As far as I understand, and for anyone curious, here’s what happens: 


HARPA sends the request to Portkey in OpenAI format as usual > Portkey translates it into the LLM provider’s format and forwards the request > Portkey receives the response from the provider, logs it for analytics, and sends it back to HARPA.
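To make that concrete, here’s roughly what the translation looks like when routing to Gemini (purely illustrative, assuming a Gemini virtual key and the gemini-1.5-flash model; the exact payloads differ per provider):

What HARPA sends to Portkey (OpenAI chat-completions format):

POST https://api.portkey.ai/v1/chat/completions
{ "model": "gemini-1.5-flash", "messages": [{ "role": "user", "content": "Hello" }] }

What Portkey forwards to Google (Gemini generateContent format):

POST https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent
{ "contents": [{ "role": "user", "parts": [{ "text": "Hello" }] }] }

The response then makes the same trip in reverse, converted back into the OpenAI format HARPA expects.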


Bonus: Ollama in HARPA AI using Portkey and Ngrok


In case you want to set up Ollama inside HARPA AI to run local models like Llama3, Qwen2, Phi3, Gemma, or any others, here’s what you have to do: 


  1. Set up Portkey as before; no virtual key is needed here.

  2. Install and set up Ollama, and make sure it’s running and accessible. I did it on the same machine for simplicity.

  3. Set up Ngrok. You can use a free account. They generally give a static domain with free accounts too. Get your static domain address. 


Then use this command to run Ngrok and connect it to Ollama:


ngrok http 11434 --host-header="localhost:11434" --domain=<input Ngrok static domain here>
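Before pointing HARPA at it, it’s worth checking that Ollama is actually reachable through the tunnel. Assuming Ollama is on its default port, this just lists the models you’ve pulled:

curl https://<your Ngrok static domain>/api/tags

If you get a JSON list of your local models back, the tunnel is working.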

Next, configure your HARPA AI settings like this:


Screenshot of HARPA AI showing Ollama Settings

API Key, Model, and Proxy URL are the same as before. For the model, make sure it’s the exact name that Ollama recognizes.


Under headers, the ‘Provider’ changes to ‘ollama’, and you have to add a custom host, which is the Ngrok static domain you set up earlier.
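For reference, the headers end up looking something like this (a sketch based on Portkey’s Headers guide; double-check the exact header names there):

{
  "x-portkey-provider": "ollama",
  "x-portkey-api-key": "<Portkey API Key>",
  "x-portkey-custom-host": "https://<your Ngrok static domain>"
}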


And now you should be able to use any Ollama model inside HARPA AI. 


Portkey has an Ollama guide too, but it’s focused on the SDK and that was a bit confusing to me initially. The Headers guide is also a must if you want to customize your Portkey setup inside HARPA. 


FAQs


Why Use Portkey and not OpenRouter?


HARPA does support OpenRouter natively, which covers nearly all popular and even lesser-known LLM models. However, a lot of people would prefer to use the API directly from the original provider. While that isn’t possible in HARPA yet, Portkey is the next best thing: your requests still go through the original provider’s own API.


Can I use Portkey.ai for free?


Portkey.ai does have a generous free plan that should be enough for testing and seeing whether it’s the right fit for you. It might even be enough for some small companies or personal use. After that, its monthly plan is decently priced if you need it. And if you’ve got the tech skills, you could self-host its AI gateway since it’s open source. Though I don’t know why you’d be here in that case ;)


Will Portkey affect the quality and speed of the LLM? 


There’s no reason for any drop in quality, since the request still goes to the original LLM provider. And in my experience, I haven’t seen any degradation in speed either. There might be a bit of added latency, but it’s probably not noticeable in normal usage.


Conclusion

So if you followed this guide and set up Portkey in HARPA, you can use practically any LLM available online. Combine that with HARPA AI’s powerful automation capabilities, and you can run a lot more automations and custom commands.


P.S. No, I'm not sponsored by HARPA, Portkey, or anybody else really. Just wrote this at 6am after doing all this myself, as I figured it'd help somebody :)
