RFC 0002: Multi-Port and Protocol Routing

Status: Draft

Problem

Currently, all traffic enters through port 8080 and is routed by URL path prefix. This works for HTTP apps but doesn't cover non-HTTP TCP services, UDP traffic, or projects that need a dedicated port of their own.

Proposal

Port-based routing

dstack CVMs expose ports as subdomains: <app-id>-<port>.dstack-pha-prod7.phala.network

The daemon should support binding projects to specific host ports, not just path prefixes. A project manifest could look like:

{
  "name": "my-api",
  "runtime": "deno",
  "entry": "server.ts",
  "listen": {
    "port": 3000,
    "protocol": "http"
  }
}

The daemon would expose port 3000 and route all traffic on that port to the project’s runtime container. Multiple projects can coexist, each on their own port.
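A minimal sketch of how the daemon could resolve a manifest into a binding. This assumes a hypothetical `resolve_binding` helper and `DEFAULT_BINDING` constant, not existing daemon code:

```python
# Default binding when a manifest omits "listen": path-based routing on :8080.
DEFAULT_BINDING = (8080, "http")

def resolve_binding(manifest):
    """Return the (port, protocol) pair a project should be served on.

    Hypothetical helper: the field names follow the manifest examples in
    this RFC, but the function itself is illustrative.
    """
    listen = manifest.get("listen")
    if not listen:
        return DEFAULT_BINDING
    # "protocol" defaults to "http" when only a port is given.
    return (listen["port"], listen.get("protocol", "http"))
```

With the example manifest above, `resolve_binding({"name": "my-api", "listen": {"port": 3000, "protocol": "http"}})` would yield `(3000, "http")`, while a manifest with no listen field falls back to the :8080 ingress.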

TCP routing

For non-HTTP TCP services:

{
  "name": "my-socket",
  "runtime": "dockerfile",
  "listen": {
    "port": 5000,
    "protocol": "tcp"
  }
}

The daemon proxies raw TCP connections to the container. No HTTP parsing, just bidirectional byte streaming.
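The bidirectional byte streaming can be sketched with asyncio streams. This is a sketch, not the daemon's actual implementation; the host/port wiring is illustrative:

```python
import asyncio

async def pipe(reader, writer):
    # Copy bytes one direction until EOF, then close the destination side.
    try:
        while data := await reader.read(65536):
            writer.write(data)
            await writer.drain()
    finally:
        writer.close()

async def handle(client_reader, client_writer, backend_host, backend_port):
    # Open a connection to the container and splice the two sockets together:
    # no HTTP parsing, just two concurrent one-way copies.
    backend_reader, backend_writer = await asyncio.open_connection(
        backend_host, backend_port
    )
    await asyncio.gather(
        pipe(client_reader, backend_writer),
        pipe(backend_reader, client_writer),
    )

async def serve(listen_port, backend_host, backend_port):
    # Bind the project's host port and proxy every connection to its container.
    server = await asyncio.start_server(
        lambda r, w: handle(r, w, backend_host, backend_port),
        "0.0.0.0", listen_port,
    )
    async with server:
        await server.serve_forever()
```

For the `my-socket` example above, the daemon would run `serve(5000, container_host, 5000)` per the manifest's listen block.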

UDP routing (future)

UDP is harder because there's no connection to track. Possible approaches include a pseudo-session proxy that keys datagrams on client address with an idle timeout, or a VPN adapter (such as the wireguard entry in the routing table below) that tunnels traffic into the CVM.

The VPN approach is more complex but solves the general case. Worth exploring once the TCP/HTTP routing is solid.
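One common way to approximate connections over UDP is a pseudo-session table: each distinct client address is treated as a session and dropped after an idle timeout. A hypothetical sketch; `UdpSessionTable` and its API are illustrative, not existing daemon code:

```python
import time

class UdpSessionTable:
    """Map client (host, port) -> backend socket, expiring idle entries.

    Since UDP has no connections, each distinct client address is treated
    as a 'session' and forgotten after idle_timeout seconds of silence.
    The clock is injectable so the expiry logic can be tested.
    """

    def __init__(self, idle_timeout=30.0, clock=time.monotonic):
        self.idle_timeout = idle_timeout
        self.clock = clock
        self._sessions = {}  # client addr -> (backend_socket, last_seen)

    def touch(self, client_addr, backend_sock):
        # Record traffic: create or refresh the session for this client.
        self._sessions[client_addr] = (backend_sock, self.clock())

    def lookup(self, client_addr):
        # Return the backend socket for a live session, or None if expired.
        entry = self._sessions.get(client_addr)
        if entry is None:
            return None
        sock, last_seen = entry
        if self.clock() - last_seen > self.idle_timeout:
            del self._sessions[client_addr]
            return None
        return sock

    def expire(self):
        # Sweep all stale sessions; returns how many were dropped.
        now = self.clock()
        stale = [a for a, (_, t) in self._sessions.items()
                 if now - t > self.idle_timeout]
        for a in stale:
            del self._sessions[a]
        return len(stale)
```

The datagram forwarding itself (an `asyncio.DatagramProtocol` per port) would sit on top of this table; the table is the part that fakes connection state.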

Routing table

The daemon maintains a routing table:

Host port   Protocol   Project     Backend
8080        http       (ingress)   path-based routing
3000        http       my-api      deno runtime
5000        tcp        my-socket   container:5000
51820       udp        vpn         wireguard adapter

Port 8080 remains the default ingress with path-based routing. Additional ports are opt-in per project.
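The table above, plus the port-conflict check from the Next Steps, could be sketched as follows. `RoutingTable` is a hypothetical name and the payload shape is illustrative:

```python
class RoutingTable:
    """In-memory routing table keyed by host port; rejects double-binds.

    Port 8080 is pre-seeded as the path-based ingress and can never be
    claimed by a project.
    """

    def __init__(self):
        self._routes = {
            8080: {"protocol": "http", "project": "(ingress)",
                   "backend": "path-based routing"},
        }

    def bind(self, port, protocol, project, backend):
        # Conflict detection: fail loudly at deploy time, not at runtime.
        if port in self._routes:
            owner = self._routes[port]["project"]
            raise ValueError(f"port {port} already bound by {owner}")
        self._routes[port] = {"protocol": protocol, "project": project,
                              "backend": backend}

    def routes(self):
        # The kind of payload a GET /_api/routes endpoint could return.
        return [{"port": p, **r} for p, r in sorted(self._routes.items())]
```

Binding `my-api` on 3000 and then attempting to bind another project on 3000 would raise, which is the deploy-time error the Next Steps call for.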

Manifest changes

The listen field is optional. If omitted, the project is only accessible via path-based routing on port 8080 (current behavior).

"listen": null                                 // path-only on :8080 (default)
"listen": {"port": 3000}                       // HTTP on :3000
"listen": {"port": 5000, "protocol": "tcp"}    // raw TCP on :5000
"listen": {"port": 5353, "protocol": "udp"}    // UDP on :5353 (future)

Implementation Notes

WebSocket support

The current ingress proxy in proxy/ingress.py does NOT support WebSocket upgrades. It eagerly reads the full request/response body and returns a web.Response, which is fundamentally incompatible with the 101 upgrade handshake and persistent bidirectional connection that WebSocket requires.

To fix this, _proxy() needs to:

  1. Check for Upgrade: websocket headers on the incoming request
  2. Return a web.WebSocketResponse (aiohttp’s WS type) instead of web.Response
  3. Relay frames bidirectionally between the visitor and the backend

This is needed for apps like the tunnel (RFC 0003), which currently uses long-polling as a workaround. See also apps/tunnel/ for the long-poll implementation that works without daemon changes.
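Step 1 above can be sketched as a small header check. This is a simplification: aiohttp's real request object exposes case-insensitive headers (a CIMultiDict), so the plain dict here is illustrative only, and steps 2 and 3 would build on it with web.WebSocketResponse and a frame-relay loop:

```python
def is_websocket_upgrade(headers):
    """True if the request headers signal an RFC 6455 upgrade handshake.

    Connection is a comma-separated token list (e.g. "keep-alive, Upgrade"),
    so we check for the "upgrade" token rather than an exact match.
    """
    connection = headers.get("Connection", "")
    upgrade = headers.get("Upgrade", "")
    has_upgrade_token = any(
        token.strip().lower() == "upgrade" for token in connection.split(",")
    )
    return has_upgrade_token and upgrade.lower() == "websocket"
```

When this returns True, _proxy() would prepare a web.WebSocketResponse toward the visitor, open a client WebSocket to the backend, and relay frames in both directions instead of buffering a body.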

Out of Scope

Next Steps

  1. Port binding in project manifest: Add listen.port and listen.protocol fields to project config, default to 8080/http
  2. Port-based routing: When a project specifies a port, bind that port and route all traffic directly to the container (no path-based routing)
  3. TCP proxy mode: Add raw TCP stream proxy for non-HTTP services, bidirectional byte forwarding
  4. Routing table endpoint: GET /_api/routes returns the current routing table with port, protocol, project mapping
  5. Port conflict detection: Prevent two projects from binding the same port, return clear error on deploy