
Connect Multiple Ollama GPUs to OpenWebUI with NetBird

Here we will run a large AI model (such as DeepSeek R1) using Ollama on a powerful remote GPU instance. We will then connect OpenWebUI on a local machine directly to Ollama to interact with it, without exposing the Ollama API to the public internet.

 

Whether you’re a developer, AI researcher, or hobbyist, your workflow is often fragmented. You might have a powerful, expensive GPU in a cloud VPC, on a bare-metal server, or on a machine in the office, while your development environment, data, and apps live on your laptop or a different server. The biggest hurdle isn't the code; it's the network.

How do you securely connect your local machine or lower-resource servers to a remote GPU to run AI workloads?

Traditionally, this would involve complex firewall rules, public IP whitelisting, or configuring a legacy VPN. What if you could make that remote GPU—no matter where it is—feel like it's on your local network?

This is where NetBird comes in. It creates a secure, zero-trust mesh network using WireGuard that makes your scattered AI infrastructure feel like a single, unified system. Today we will show you how to use NetBird to get simple, secure access to your remote GPUs, enabling faster development and more flexible AI workflows.

  1. It's Secure by Default: No public IPs are needed. No ports need to be opened on your firewall. NetBird bypasses the need for public exposure entirely. Your remote GPU server can be completely "dark" to the public internet but still fully accessible to you and your other authorized machines.
  2. It's Simple: You install the client, log in with SSO, and it just works. Both machines get a stable, private IP address on your NetBird network (e.g., 100.64.10.1). You use this IP to connect once you've defined access policies.
  3. It's Location-Independent: It doesn't matter if your laptop is at home, on an airplane, or using a cellular network. As long as it can access the internet, it can maintain its secure, private connection to your remote GPU.

Prerequisites and NetBird Setup


  • The Setup:
    • Machine A (Remote GPU): A KVM/VM instance on Vultr, running Linux and Ollama.
    • Machine B (Local): A local VM running OpenWebUI in Docker. If you don’t have OpenWebUI set up already, check out this video.
  1. Create Your Network: Sign up for a free NetBird account.
  2. Install NetBird: Install the NetBird client on Machine A (the GPU server) and Machine B (the local VM). The easiest way to do this is by creating a setup key and allowing the same key to be used twice. If you click "Install NetBird" after generating a key, it will show the commands to install and connect to NetBird.
  3. Create access policies so that the two machines can communicate with each other. In my example, I added both machines to a group called llm-mesh and gave it bidirectional access.
  4. Get Your Private IP: Your remote GPU server (Machine A) will now have a stable private IP, like 100.64.10.1. You can see this in your admin panel. Alternatively, you can use the NetBird DNS names.
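On both machines, the install-and-connect flow generated by the dashboard boils down to something like the following sketch (the setup key is a placeholder for the one you generate in the admin panel):

```shell
# Install the NetBird client via the official one-line installer
curl -fsSL https://pkgs.netbird.io/install.sh | sh

# Enroll this machine in your network using the setup key
# (replace the placeholder with your actual key)
sudo netbird up --setup-key <YOUR-SETUP-KEY>

# Show this peer's NetBird IP and connection status
netbird status
```

Run the same two commands on Machine A and Machine B; `netbird status` on each peer is the quickest way to find the private IP you'll use later.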

Ollama Setup

  1. Install Ollama (Machine A): Ollama should be installed on the same machine as the GPU. This is just as easy as installing NetBird! Check out their installation page here; the installer should detect a GPU and note it in the log. Also, pick a model and run it. We will see the model later in OpenWebUI.
  2. Configure Ollama (Machine A): By default, Ollama only listens on localhost (127.0.0.1). You need to tell it to listen for requests from your private NetBird network.
  • The easiest way is to set an environment variable for the Ollama service: OLLAMA_HOST=0.0.0.0.

  • On Linux, you can do this by editing the systemd service:

    • Add the following line under the [Service] section, then save and restart the service: Environment="OLLAMA_HOST=0.0.0.0"
  3. After editing a service file, you must reload the systemd daemon and restart the service for the change to take effect.
  • On the Ollama VPS, reload systemd and restart Ollama.
  • Verify Ollama is listening correctly. This is the most important step: check which address is bound on port 11434.
    • Bad output: 127.0.0.1:11434 (this means it's still only listening on localhost)
    • Good output: 0.0.0.0:11434 or :::11434 (this means it's listening on all interfaces, including the NetBird one)
  • If you still see 127.0.0.1:11434, your edit didn't apply correctly.
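Concretely, on Machine A the edit-and-verify sequence looks roughly like this (assuming Ollama's standard systemd install; the exact `ss` output formatting can vary by distro):

```shell
# Open a drop-in override for the Ollama service
sudo systemctl edit ollama.service
# In the editor, add:
#   [Service]
#   Environment="OLLAMA_HOST=0.0.0.0"

# Reload systemd and restart Ollama so the change takes effect
sudo systemctl daemon-reload
sudo systemctl restart ollama

# Check which address is bound on Ollama's port
ss -tlnp | grep 11434
# 127.0.0.1:11434  -> still localhost only (bad)
# 0.0.0.0:11434    -> all interfaces, including NetBird (good)
```

Using `systemctl edit` keeps your change in an override file, so package upgrades won't clobber it.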

Connect with Ollama API

  1. Connect OpenWebUI (Machine B):
  • Open OpenWebUI on your laptop.
  • In the OpenWebUI settings (Admin Settings > Connections), click "Add Connection" under Ollama API. It will ask for the "Ollama API Base URL."
  • Instead of http://localhost:11434, you simply use the NetBird IP or DNS name of your remote server, e.g. http://100.64.10.1:11434
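Before plugging the URL into OpenWebUI, you can sanity-check reachability from Machine B over the mesh (the IP below is the example NetBird address used throughout; substitute your own):

```shell
# Liveness check: Ollama answers "Ollama is running" at the root path
curl http://100.64.10.1:11434

# List the models the remote Ollama instance is serving
curl http://100.64.10.1:11434/api/tags
```

If these return successfully, OpenWebUI will connect with the same base URL; if they hang, revisit your NetBird access policy and the OLLAMA_HOST setting.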

That's it. Your local OpenWebUI is now securely communicating with your high-powered remote GPU over an encrypted, peer-to-peer connection. You've achieved a local-like experience with zero public exposure and minimal configuration.

Creating a Flexible AI Workflow

This concept of secure, simple access extends to your entire AI development lifecycle:

  • Remote Development: Use VS Code Remote to connect directly to your GPU instance using its private NetBird IP. No SSH tunneling or bastion host required.
  • Accessing Internal Tools: Run a Jupyter Notebook or a TensorBoard instance on your remote machine and access it securely from your laptop's browser (e.g., http://100.64.10.1:8888 for Jupyter).
  • Secure Data Transfer: Move massive datasets to and from your machine with rsync or scp over the stable, encrypted NetBird IP.
  • Connecting Hybrid Workloads: This same model works for connecting a cloud VM (running your model) to an on-premise database (holding your data).
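As an illustration of the data-transfer case, a dataset sync over the mesh could look like this (the username, paths, and NetBird IP are placeholders):

```shell
# Push a local dataset to the remote GPU machine over its NetBird IP
rsync -avz --progress ./datasets/ user@100.64.10.1:/data/datasets/

# Pull results back the same way
scp user@100.64.10.1:/data/results.tar.gz ./
```

Because the WireGuard tunnel already encrypts the traffic and the IP never changes, these commands work identically whether the GPU lives in a cloud VPC or under a desk in your office.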

Easy Migrations and Cost Savings

Once you've simplified network access, other benefits naturally follow.

  • Effortless Migration: Found a new, more powerful GPU on a different cloud? Just spin up the new machine, install NetBird and your LLM of choice, and get its new private IP. Update your OpenWebUI config to point to the new IP. You're done in minutes. There are no firewall rules, VPCs, or DNS records to migrate.
  • Freedom to Choose: This simplicity enables you to be cost-effective. You are no longer locked into one provider's network. You can "shop around" for the cheapest or most available GPUs—whether it's a spot instance on AWS, a bare-metal server from a niche provider, or even a machine in your office—and add it to your secure network just as easily.

By removing the network as a barrier, NetBird lets you stop worrying about how to connect and focus on what you're trying to build.

Need help? Refer to the official NetBird and Ollama guides.
