Configure secure ingress for MCP servers on Kubernetes
In this tutorial, you'll configure secure ingress for MCP servers running on a Kubernetes cluster using the ToolHive Operator. By the end, you'll have a working MCP server accessible via a secure HTTPS endpoint that your team can use.
You'll use the ngrok Kubernetes Operator to create secure tunnels to your MCP servers, making them accessible over HTTPS so your team can interact with them securely. While this tutorial uses ngrok for simplicity, you can apply the same principles with other gateway solutions that implement the Kubernetes Gateway API, such as Traefik and Istio, among others.
What you'll learn
This tutorial demonstrates how to make MCP servers available centrally for teams and enterprises. By deploying MCP servers on Kubernetes with secure ingress, you create a shared pool of capabilities that any team member can access. This approach is valuable for organizations that want to provide standardized tools and resources to developers while maintaining control over security, access policies, and resource usage.
Once your MCP servers are accessible via HTTPS, users can quickly connect their
AI clients using the ToolHive CLI (thv run) or UI without needing to install
or configure individual MCP servers locally. This centralized model simplifies
deployment, improves consistency, and makes it easier to manage updates and
security patches.
In this tutorial, you'll learn:
- How to install and configure the ngrok Kubernetes Operator.
- How to create secure tunnels for MCP servers.
- How to access MCP servers via HTTPS.
Prerequisites
Before you begin, ensure you have the following:
- A Kubernetes cluster with the ToolHive Operator installed. See the Kubernetes quickstart guide for instructions.
- The kubectl command-line tool configured to interact with your cluster.
- An ngrok account to obtain an authentication token. A free account is sufficient for this tutorial.
- The ToolHive CLI to interact with the remote MCP server and connect it to your AI clients.
Step 1: Install the ngrok Kubernetes Operator
These steps are a simplified version of the instructions found in the ngrok documentation.
First, you'll need to install the ngrok Kubernetes Operator in your cluster using Helm:
helm repo add ngrok https://charts.ngrok.com
helm repo update
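If you want to confirm the repository was added correctly, you can optionally search it for the operator chart:
helm search repo ngrok/ngrok-operator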
Obtain your authentication token and create an API key from the ngrok dashboard, then set them as environment variables:
export NGROK_AUTHTOKEN="your_ngrok_auth_token"
export NGROK_API_KEY="your_ngrok_api_key"
Install the Kubernetes Gateway API CRDs and a GatewayClass resource:
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.3.0/standard-install.yaml
kubectl apply -f - <<EOF
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: ngrok
spec:
  controllerName: ngrok.com/gateway-controller
EOF
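You can verify that the GatewayClass was created. Note that its ACCEPTED condition typically only becomes True after you install the ngrok Operator in the next step:
kubectl get gatewayclass ngrok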
Then, install the ngrok Operator using Helm:
helm install ngrok-operator ngrok/ngrok-operator \
  --namespace ngrok-operator \
  --create-namespace \
  --set defaultDomainReclaimPolicy=Retain \
  --set credentials.apiKey="$NGROK_API_KEY" \
  --set credentials.authtoken="$NGROK_AUTHTOKEN"
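Before moving on, check that the operator pods are running:
kubectl get pods -n ngrok-operator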
Step 2: Deploy an MCP server
Next, deploy an MCP server using the ToolHive Operator. This example deploys the "MKP" MCP server to manage the Kubernetes cluster:
kubectl apply -f https://raw.githubusercontent.com/stacklok/toolhive/refs/heads/main/examples/operator/mcp-servers/mcpserver_mkp.yaml
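To watch the deployment come up, you can list the MCPServer resource and its pods (the mcpserver resource name is assumed here; it's defined by the ToolHive Operator's CRDs):
kubectl get mcpserver -n toolhive-system
kubectl get pods -n toolhive-system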
Get the MCP server's service details to identify the port to expose. The service
name follows the format mcp-<MCP_SERVER_NAME>-proxy. For the MKP example, the
service is mcp-mkp-proxy in the toolhive-system namespace on port 8080:
kubectl get service mcp-mkp-proxy -n toolhive-system
The output should look similar to this:
NAME            TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
mcp-mkp-proxy   ClusterIP   10.96.106.88   <none>        8080/TCP   2m19s
Note the service name and port number for the next step.
Step 3: Create an ngrok Gateway resource
Now, create a Gateway and HTTPRoute resource to expose the MCP server securely via ngrok.
For this step, you'll need to obtain your dev domain or custom domain from the
ngrok dashboard. If you have a free
account, it will be in the format <RANDOM_NAME>.ngrok-free.app. Replace
<YOUR_NGROK_DOMAIN> with your actual domain in the YAML below.
Create a file named ngrok-mcp-gateway.yaml with the following content:
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: ngrok-gateway
  namespace: toolhive-system
spec:
  gatewayClassName: ngrok
  listeners:
    - name: https
      protocol: HTTPS
      port: 443
      hostname: <YOUR_NGROK_DOMAIN>
      allowedRoutes:
        namespaces:
          from: All
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: mkp-mcp-route
  namespace: toolhive-system
spec:
  parentRefs:
    - group: gateway.networking.k8s.io
      kind: Gateway
      name: ngrok-gateway
      namespace: toolhive-system
  hostnames:
    - <YOUR_NGROK_DOMAIN>
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: mcp-mkp-proxy # Replace with your service name from Step 2
          port: 8080 # Replace with your port number from Step 2
Apply the configuration to your cluster:
kubectl apply -f ngrok-mcp-gateway.yaml
Step 4: Access the MCP server
After a few moments, the ngrok Operator will create a secure tunnel to your MCP server. You can access it using the domain you specified in the Gateway resource.
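To check the status from the cluster side, you can inspect the Gateway and HTTPRoute resources and review their status conditions:
kubectl get gateway,httproute -n toolhive-system
kubectl describe gateway ngrok-gateway -n toolhive-system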
Use the ToolHive CLI to verify connectivity to the MCP server:
thv mcp list tools --server https://<YOUR_NGROK_DOMAIN>/mcp
The output should display a list of tools managed by the MCP server, confirming that you have successfully set up secure ingress using ngrok. For the "MKP" MCP server, you should see output similar to this:
TOOLS:
NAME             DESCRIPTION
get_resource     Get a Kubernetes resource or its subresource
list_resources   List Kubernetes resources
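If you prefer a raw HTTP check without the ToolHive CLI, you can send an MCP initialize request with curl. This is only a rough sketch that assumes the endpoint speaks the MCP Streamable HTTP transport; the headers and JSON-RPC body follow the MCP specification:
curl -sS -X POST "https://<YOUR_NGROK_DOMAIN>/mcp" \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2025-03-26","capabilities":{},"clientInfo":{"name":"curl-check","version":"0.0.1"}}}'
A successful response confirms the tunnel and proxy are working end to end.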
Use the ToolHive CLI or UI to connect your AI clients to the MCP server:
thv run --name mkp https://<YOUR_NGROK_DOMAIN>/mcp
The MKP MCP server is now available via a secure HTTPS endpoint to AI clients configured with thv client setup.
Optional: Tunnel multiple MCP servers with URL rewrites
If you have multiple MCP servers and want to expose them via the same ngrok
Gateway, you can use URL rewrites in the HTTPRoute resource. This allows you
to route requests to different MCP servers based on path prefixes.
This option uses ngrok's traffic policies feature, which either consumes $1 of the $5 credit included with the free ngrok account or requires a paid ngrok plan.
Run a second MCP server, for example:
kubectl apply -f https://raw.githubusercontent.com/stacklok/toolhive/refs/heads/main/examples/operator/mcp-servers/mcpserver_fetch.yaml
Then, update the ngrok-mcp-gateway.yaml file. In the rules section of the existing HTTPRoute resource, give the MKP MCP server a path prefix of /mkp and add a URLRewrite filter that strips the prefix before requests are forwarded to the backend service.
# ... existing HTTPRoute resource ...
spec:
  # ... existing spec ...
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /mkp
      backendRefs:
        - name: mcp-mkp-proxy
          port: 8080
      filters:
        - type: URLRewrite
          urlRewrite:
            path:
              type: ReplacePrefixMatch
              replacePrefixMatch: ""
Then, add a second HTTPRoute resource for the Fetch MCP server with a path prefix of /fetch, again replacing <YOUR_NGROK_DOMAIN> with your actual domain:
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: fetch-mcp-route
  namespace: toolhive-system
spec:
  parentRefs:
    - group: gateway.networking.k8s.io
      kind: Gateway
      name: ngrok-gateway
      namespace: toolhive-system
  hostnames:
    - <YOUR_NGROK_DOMAIN>
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /fetch
      backendRefs:
        - name: mcp-fetch-proxy
          port: 8080
      filters:
        - type: URLRewrite
          urlRewrite:
            path:
              type: ReplacePrefixMatch
              replacePrefixMatch: ""
At this point, your ngrok-mcp-gateway.yaml file should contain one Gateway
resource and two HTTPRoute resources.
Full example of updated ngrok-mcp-gateway.yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: ngrok-gateway
  namespace: toolhive-system
spec:
  gatewayClassName: ngrok
  listeners:
    - name: https
      protocol: HTTPS
      port: 443
      hostname: <YOUR_NGROK_DOMAIN>
      allowedRoutes:
        namespaces:
          from: All
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: mkp-mcp-route
  namespace: toolhive-system
spec:
  parentRefs:
    - group: gateway.networking.k8s.io
      kind: Gateway
      name: ngrok-gateway
      namespace: toolhive-system
  hostnames:
    - <YOUR_NGROK_DOMAIN>
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /mkp
      backendRefs:
        - name: mcp-mkp-proxy
          port: 8080
      filters:
        - type: URLRewrite
          urlRewrite:
            path:
              type: ReplacePrefixMatch
              replacePrefixMatch: ""
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: fetch-mcp-route
  namespace: toolhive-system
spec:
  parentRefs:
    - group: gateway.networking.k8s.io
      kind: Gateway
      name: ngrok-gateway
      namespace: toolhive-system
  hostnames:
    - <YOUR_NGROK_DOMAIN>
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /fetch
      backendRefs:
        - name: mcp-fetch-proxy
          port: 8080
      filters:
        - type: URLRewrite
          urlRewrite:
            path:
              type: ReplacePrefixMatch
              replacePrefixMatch: ""
Apply the updated configuration to your cluster:
kubectl apply -f ngrok-mcp-gateway.yaml
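You can confirm that both routes exist and are attached to the Gateway:
kubectl get httproute -n toolhive-system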
You can now access both MCP servers using the same ngrok domain with different path prefixes:
thv mcp list tools --server https://<YOUR_NGROK_DOMAIN>/mkp/mcp
thv mcp list tools --server https://<YOUR_NGROK_DOMAIN>/fetch/mcp
Use the ToolHive CLI or UI to connect your AI clients to either MCP server:
thv run --name mkp https://<YOUR_NGROK_DOMAIN>/mkp/mcp
thv run --name fetch https://<YOUR_NGROK_DOMAIN>/fetch/mcp
Clean up
To remove the ngrok resources from your cluster and ngrok account, run the following:
# Delete the Gateway and HTTPRoute resources
kubectl delete -f ngrok-mcp-gateway.yaml
# Delete the ngrok CRDs
kubectl delete $(kubectl get crd -o name | grep "ngrok")
# Uninstall the ngrok Operator
helm uninstall ngrok-operator -n ngrok-operator
kubectl delete namespace ngrok-operator
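If you also want to remove the MCP servers deployed in this tutorial, delete them using the same manifests you applied earlier (the second command applies only if you completed the optional section):
kubectl delete -f https://raw.githubusercontent.com/stacklok/toolhive/refs/heads/main/examples/operator/mcp-servers/mcpserver_mkp.yaml
kubectl delete -f https://raw.githubusercontent.com/stacklok/toolhive/refs/heads/main/examples/operator/mcp-servers/mcpserver_fetch.yaml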
What's next?
Now that you have secure ingress configured for your MCP servers, consider these next steps:
- Explore authentication and authorization to control access to your MCP servers.
- Learn about observability to monitor your MCP server usage and performance.
- Try other gateway solutions like Traefik or Istio if they're already part of your infrastructure.