Declare remote MCP server entries

Overview

MCPServerEntry is a zero-infrastructure catalog entry that declares a remote MCP server endpoint for Virtual MCP Server (vMCP) discovery and routing. Unlike MCPRemoteProxy, it creates no pods, services, or deployments. Use MCPServerEntry when you want to include a remote server in vMCP routing without the overhead of running a proxy.

MCPServerEntry is part of an MCPGroup, which groups related backend MCP servers together for vMCP discovery. When vMCP starts in discovered mode, it queries all MCPServer, MCPRemoteProxy, and MCPServerEntry resources in the referenced group and connects to them directly.
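For example, a VirtualMCPServer in discovered mode might reference the group like this. This is a minimal sketch: the config.groupRef field is the one described in this guide's troubleshooting section, but other fields and defaults may vary by operator version.

```yaml
apiVersion: toolhive.stacklok.dev/v1alpha1
kind: VirtualMCPServer
metadata:
  name: my-vmcp
  namespace: toolhive-system
spec:
  config:
    # Must match the MCPGroup that the MCPServer, MCPRemoteProxy,
    # and MCPServerEntry resources belong to
    groupRef: my-group
```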

Prerequisites

  • A Kubernetes cluster (current and two previous minor versions are supported)
  • Permissions to create resources in the cluster
  • kubectl configured to communicate with your cluster
  • The ToolHive operator installed in your cluster (see Deploy the operator)
  • A remote MCP server that supports HTTP transport (SSE or Streamable HTTP)

When to use MCPServerEntry vs. MCPRemoteProxy

|  | MCPServerEntry | MCPRemoteProxy |
| --- | --- | --- |
| Infrastructure | No pods, services, or deployments | Creates a proxy pod and service |
| Use case | Lightweight catalog entries for well-known remote servers | Proxied connections requiring request transformation, caching, or the full middleware chain |
| Discovery | Discovered by VirtualMCPServer through MCPGroup membership | Discovered by VirtualMCPServer through MCPGroup membership |
| Authentication | Token exchange via externalAuthConfigRef | Full OIDC validation of incoming client requests |
| Authorization | Not applicable (no proxy layer) | Cedar policy enforcement on every request |
| Audit logging | Not applicable (no proxy layer) | Structured audit logs with user identity |
| Telemetry | Not applicable (no proxy layer) | OpenTelemetry tracing and Prometheus metrics |
| SSRF protection | Built-in URL validation blocks internal and metadata endpoints | N/A (proxy runs inside the cluster) |

Choose MCPServerEntry when:

  • You trust the remote server and don't need per-request policy enforcement
  • You want the simplest possible configuration with no workload resources (pods, services, deployments)
  • The remote server handles its own authentication

Choose MCPRemoteProxy when:

  • You need to validate incoming client tokens with OIDC
  • You need Cedar authorization policies on tool calls
  • You need audit logging with user identity
  • You need tool filtering or renaming at the proxy layer

Create an MCPServerEntry

MCPServerEntry resources must be part of an MCPGroup. Create the group first if it doesn't exist:

my-group.yaml
apiVersion: toolhive.stacklok.dev/v1alpha1
kind: MCPGroup
metadata:
  name: my-group
  namespace: toolhive-system
spec:
  description: Group of backend MCP servers for vMCP aggregation

Then create a basic MCPServerEntry:

my-entry.yaml
apiVersion: toolhive.stacklok.dev/v1alpha1
kind: MCPServerEntry
metadata:
  name: my-remote-tool
  namespace: toolhive-system
spec:
  groupRef: my-group
  remoteURL: https://mcp.example.com/mcp
  transport: streamable-http

Apply both resources:

kubectl apply -f my-group.yaml -f my-entry.yaml

What's happening?

When you apply an MCPServerEntry resource:

  1. The ToolHive operator detects the new resource
  2. The operator validates the spec: checks that the referenced MCPGroup exists, validates the remote URL against SSRF patterns, and verifies any referenced auth or TLS resources
  3. The operator sets the entry's phase to Valid if all checks pass, or Failed with a descriptive condition if something is wrong
  4. When a VirtualMCPServer in discovered mode starts, it discovers the entry through its MCPGroup membership and connects directly to the remote URL

Required fields

| Field | Description | Validation |
| --- | --- | --- |
| remoteURL | URL of the remote MCP server | Must match ^https?:// |
| transport | Transport protocol for the remote server | sse or streamable-http |
| groupRef | Name of the MCPGroup this entry belongs to | Required, minimum length 1 |

Configure authentication

When the remote MCP server requires authentication, reference an MCPExternalAuthConfig resource to configure token exchange. The MCPExternalAuthConfig must exist in the same namespace as the MCPServerEntry.

auth-entry.yaml
apiVersion: toolhive.stacklok.dev/v1alpha1
kind: MCPExternalAuthConfig
metadata:
  name: my-auth-config
  namespace: toolhive-system
spec:
  type: tokenExchange
  tokenExchange:
    tokenUrl: https://auth.company.com/protocol/openid-connect/token
    clientId: remote-mcp-client
    clientSecretRef:
      name: remote-mcp-secret
      key: client-secret
    audience: https://mcp.example.com
---
apiVersion: toolhive.stacklok.dev/v1alpha1
kind: MCPServerEntry
metadata:
  name: internal-tool
  namespace: toolhive-system
spec:
  groupRef: my-group
  remoteURL: https://internal-mcp.corp.example.com/mcp
  transport: streamable-http
  externalAuthConfigRef:
    name: my-auth-config

When vMCP discovers this entry, it uses the referenced MCPExternalAuthConfig to perform token exchange before forwarding requests to the remote server.

Configure custom TLS certificates

If the remote server uses a certificate signed by an internal CA, provide a custom CA bundle so that vMCP can verify the TLS connection.

First, create a ConfigMap containing the CA certificate:

kubectl create configmap internal-ca-bundle \
  --from-file=ca.crt=/path/to/ca-certificate.pem \
  -n toolhive-system
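If you manage resources declaratively, the same ConfigMap can be written as a standard Kubernetes manifest instead. The certificate body below is a placeholder:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: internal-ca-bundle
  namespace: toolhive-system
data:
  ca.crt: |
    -----BEGIN CERTIFICATE-----
    ...PEM-encoded certificate data...
    -----END CERTIFICATE-----
```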

Then reference it in the MCPServerEntry:

tls-entry.yaml
apiVersion: toolhive.stacklok.dev/v1alpha1
kind: MCPServerEntry
metadata:
  name: internal-tool
  namespace: toolhive-system
spec:
  groupRef: my-group
  remoteURL: https://internal-mcp.corp.example.com/mcp
  transport: streamable-http
  caBundleRef:
    configMapRef:
      name: internal-ca-bundle
      key: ca.crt

Inject custom headers

Some remote MCP servers require custom headers for tenant identification, API keys, or other purposes. Use the headerForward field to inject headers into requests forwarded to the remote server.

header-entry.yaml
apiVersion: toolhive.stacklok.dev/v1alpha1
kind: MCPServerEntry
metadata:
  name: my-remote-tool
  namespace: toolhive-system
spec:
  groupRef: my-group
  remoteURL: https://mcp.example.com/mcp
  transport: streamable-http
  headerForward:
    addPlaintextHeaders:
      X-Custom-Header: my-value

For sensitive values like API keys, use addHeadersFromSecret instead. See the Inject custom headers section of the MCPRemoteProxy guide for the full syntax, which MCPServerEntry shares.
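As an illustration only, a secret-backed header might look like the following sketch. The field shape shown here is hypothetical — confirm the exact syntax in the MCPRemoteProxy guide before using it:

```yaml
headerForward:
  addHeadersFromSecret:
    # Hypothetical shape: header name mapped to a Secret name/key pair
    X-API-Key:
      name: remote-api-secret
      key: api-key
```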

Complete example

This example combines the pieces above: an MCPGroup, an MCPExternalAuthConfig for token exchange, and an MCPServerEntry that uses authentication, a custom CA bundle, and header injection. The referenced CA bundle ConfigMap (partner-ca-bundle) must already exist or be created separately:

complete-entry.yaml
---
# 1. Create the MCPGroup
apiVersion: toolhive.stacklok.dev/v1alpha1
kind: MCPGroup
metadata:
  name: engineering-tools
  namespace: toolhive-system
spec:
  description: Engineering team MCP servers

---
# 2. Create authentication config for token exchange
apiVersion: toolhive.stacklok.dev/v1alpha1
kind: MCPExternalAuthConfig
metadata:
  name: remote-auth
  namespace: toolhive-system
spec:
  type: tokenExchange
  tokenExchange:
    tokenUrl: https://auth.company.com/protocol/openid-connect/token
    clientId: remote-mcp-client
    clientSecretRef:
      name: remote-mcp-secret
      key: client-secret
    audience: https://mcp.partner.example.com

---
# 3. Create the MCPServerEntry
apiVersion: toolhive.stacklok.dev/v1alpha1
kind: MCPServerEntry
metadata:
  name: partner-tools
  namespace: toolhive-system
spec:
  groupRef: engineering-tools
  remoteURL: https://mcp.partner.example.com/mcp
  transport: streamable-http
  externalAuthConfigRef:
    name: remote-auth
  caBundleRef:
    configMapRef:
      name: partner-ca-bundle
      key: ca.crt
  headerForward:
    addPlaintextHeaders:
      X-Tenant-ID: engineering

Apply all resources:

kubectl apply -f complete-entry.yaml

Check MCPServerEntry status

To check the status of your entries:

kubectl get mcpserverentries -n toolhive-system

The status shows the current phase of each entry:

| Phase | Description |
| --- | --- |
| Valid | All validations passed and the entry is usable |
| Pending | Initial state before the first reconciliation |
| Failed | One or more referenced resources are missing or the URL is invalid |

For more details about a specific entry:

kubectl describe mcpserverentry partner-tools -n toolhive-system

Check the Conditions section for specific validation results:

kubectl get mcpserverentry partner-tools -n toolhive-system -o yaml

SSRF protection

MCPServerEntry URLs are validated against Server-Side Request Forgery (SSRF) patterns. The operator rejects URLs that target:

  • Loopback addresses: 127.0.0.0/8, ::1
  • Link-local addresses: 169.254.0.0/16, fe80::/10
  • Cloud metadata endpoints: 169.254.169.254 (AWS, GCP, Azure)
  • Private network ranges: 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16

If a URL fails SSRF validation, the entry's phase is set to Failed with a condition describing the rejection reason.
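For example, an entry like the following sketch would be rejected, because its URL targets the cloud metadata endpoint listed above (resource names here are hypothetical):

```yaml
apiVersion: toolhive.stacklok.dev/v1alpha1
kind: MCPServerEntry
metadata:
  name: bad-entry
  namespace: toolhive-system
spec:
  groupRef: my-group
  # Blocked by SSRF validation: cloud metadata endpoint
  remoteURL: http://169.254.169.254/latest/meta-data
  transport: streamable-http
```

After applying this manifest, the entry's phase would be set to Failed rather than Valid.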

Troubleshooting

MCPServerEntry stuck in Pending phase

If an MCPServerEntry remains in Pending phase after creation:

# Check the entry status
kubectl describe mcpserverentry <NAME> -n toolhive-system

# Check operator logs
kubectl logs -n toolhive-system -l app.kubernetes.io/name=toolhive-operator

Common causes:

  • Operator not running: Verify the ToolHive operator pod is healthy
  • RBAC issues: The operator may not have permission to reconcile MCPServerEntry resources

MCPServerEntry in Failed phase

If the entry's phase is Failed, check the conditions for the specific reason:

kubectl get mcpserverentry <NAME> -n toolhive-system \
  -o jsonpath='{.status.conditions}' | jq

Common causes:

  • SSRF validation failure: The remoteURL targets a blocked address range (loopback, link-local, private network, or cloud metadata). Use an externally routable URL
  • Missing MCPGroup: The group referenced in groupRef doesn't exist. Create the MCPGroup first
  • Missing MCPExternalAuthConfig: The auth config referenced in externalAuthConfigRef doesn't exist in the same namespace
  • Missing CA ConfigMap: The ConfigMap referenced in caBundleRef doesn't exist or the specified key is missing

MCPServerEntry not appearing in vMCP backends

If a Valid MCPServerEntry doesn't appear in the VirtualMCPServer's discovered backends:

# Verify the entry is Valid
kubectl get mcpserverentry -n toolhive-system

# Check the VirtualMCPServer status
kubectl get virtualmcpserver <NAME> -n toolhive-system \
  -o jsonpath='{.status.discoveredBackends}' | jq

# Check vMCP pod logs
kubectl logs -n toolhive-system deployment/vmcp-<NAME>

Common causes:

  • Group mismatch: The entry's groupRef doesn't match the VirtualMCPServer's config.groupRef
  • vMCP not restarted: Backend changes require a pod restart to be discovered. Restart the vMCP deployment:
    kubectl rollout restart deployment vmcp-<NAME> -n toolhive-system
  • Inline mode: The VirtualMCPServer uses outgoingAuth.source: inline, which doesn't discover backends at runtime. Switch to discovered mode or add the backend explicitly to config.backends

Remote server connection failures

If vMCP discovers the entry but can't connect to the remote server:

# Check vMCP logs for connection errors
kubectl logs -n toolhive-system deployment/vmcp-<NAME> | grep -i error

Common causes:

  • TLS certificate errors: If the remote server uses an internal CA, add a caBundleRef pointing to the CA certificate
  • Authentication failures: Verify the MCPExternalAuthConfig references valid credentials and the token exchange endpoint is reachable
  • Network policies: Ensure egress from the vMCP pod to the remote server is allowed
  • Transport mismatch: Verify the transport field matches the remote server's actual transport protocol