implement ssh *remote port forwarding #7

Closed
opened 2026-03-23 09:59:48 +00:00 by despiegk · 3 comments
Owner

What you described is remote port forwarding, often called a reverse SSH tunnel. The public SSH server listens on some port, and traffic arriving there gets carried over the SSH connection back to your laptop’s local 127.0.0.1:8080. In OpenSSH terms, that is ssh -R ..., not -L. -L is the opposite direction: a port on your laptop forwards to something reachable from the remote side. ([GitHub][1])

Your case looks like this:

internet user
   |
   v
public-ssh-server:9000
   |
   |  SSH remote port forwarding (-R)
   v
your laptop:127.0.0.1:8080

Example with OpenSSH:

ssh -N -R 0.0.0.0:9000:127.0.0.1:8080 user@public-ssh-server

That means: “on the SSH server, listen on port 9000, and whenever someone connects there, forward the bytes through the SSH session to 127.0.0.1:8080 on my laptop.” Whether 0.0.0.0 is allowed depends on the SSH server config, especially GatewayPorts; otherwise it may bind only to loopback on the server side. ([Docs.rs][2])

For a pure Rust library, the best fit I found is russh. Its docs describe it as a pure Rust SSH client/server implementation, and its feature list explicitly includes both direct-tcpip for local port forwarding and forward-tcpip for remote port forwarding. ([Docs.rs][3])

The mental model in russh is:

  • local forwarding = open a direct-tcpip channel when a local socket accepts a connection
  • remote forwarding = request tcpip_forward, and then handle incoming forwarded-tcpip channels when the remote server gets new connections. ([Docs.rs][4])

So for your server, the flow is:

  1. Connect from your Rust process on the laptop to the public SSH server.
  2. Authenticate.
  3. Ask the SSH server to listen on a remote port using remote forwarding.
  4. Each time somebody connects to that remote port, the SSH server opens a forwarded TCP channel back to your client.
  5. Your Rust code then opens a normal local TCP connection to 127.0.0.1:8080.
  6. Copy bytes both ways until either side closes. ([Docs.rs][4])

A practical Cargo.toml starting point:

[dependencies]
anyhow = "1"
async-trait = "0.1"
tokio = { version = "1", features = ["full"] }
russh = "0.58"
russh-keys = "0.49"

A minimal implementation sketch looks like this:

use anyhow::{Context, Result};
use async_trait::async_trait;
use russh::client;
use russh::{Channel, ChannelId};
use russh::keys::key;
use std::net::SocketAddr;
use std::sync::Arc;
use tokio::io::{self, AsyncReadExt, AsyncWriteExt};
use tokio::net::TcpStream;

#[derive(Clone)]
struct TunnelClient {
    local_target_host: String,
    local_target_port: u16,
}

#[async_trait]
impl client::Handler for TunnelClient {
    type Error = anyhow::Error;

    async fn check_server_key(
        &mut self,
        _server_public_key: &key::PublicKey,
    ) -> Result<bool, Self::Error> {
        // Replace with real host key verification.
        Ok(true)
    }

    async fn channel_open_forwarded_tcpip(
        &mut self,
        mut channel: Channel<russh::client::Msg>,
        connected_address: &str,
        connected_port: u32,
        originator_address: &str,
        originator_port: u32,
        _session: &mut client::Session,
    ) -> Result<bool, Self::Error> {
        eprintln!(
            "incoming remote-forwarded connection: {}:{} from {}:{}",
            connected_address, connected_port, originator_address, originator_port
        );

        let target = format!("{}:{}", self.local_target_host, self.local_target_port);
        let local = TcpStream::connect(&target)
            .await
            .with_context(|| format!("failed to connect local target {target}"))?;

        tokio::spawn(async move {
            if let Err(err) = proxy_channel_and_socket(channel, local).await {
                eprintln!("tunnel error: {err:#}");
            }
        });

        Ok(true)
    }
}

async fn proxy_channel_and_socket(
    channel: Channel<russh::client::Msg>,
    socket: TcpStream,
) -> Result<()> {
    let mut ssh_stream = russh::ChannelStream::from(channel);
    let (mut ssh_r, mut ssh_w) = tokio::io::split(&mut ssh_stream);
    let (mut tcp_r, mut tcp_w) = socket.into_split();

    let a = io::copy(&mut ssh_r, &mut tcp_w);
    let b = io::copy(&mut tcp_r, &mut ssh_w);

    let _ = tokio::try_join!(a, b)?;
    Ok(())
}

#[tokio::main]
async fn main() -> Result<()> {
    let mut config = client::Config::default();
    config.connection_timeout = Some(std::time::Duration::from_secs(10));
    let config = Arc::new(config);

    let handler = TunnelClient {
        local_target_host: "127.0.0.1".to_string(),
        local_target_port: 8080,
    };

    let ssh_server: SocketAddr = "203.0.113.10:22".parse()?;
    let mut session = client::connect(config, ssh_server, handler).await?;

    // Example: private-key auth
    let private_key = russh::keys::load_secret_key(
        "/home/me/.ssh/id_ed25519",
        None,
    )?;

    let ok = session
        .authenticate_publickey("myuser", Arc::new(private_key))
        .await?;
    if !ok {
        anyhow::bail!("SSH authentication failed");
    }

    // Ask remote SSH server to listen on 0.0.0.0:9000
    // Depending on server config, 127.0.0.1 may be the only allowed bind address.
    let allocated_port = session
        .tcpip_forward("0.0.0.0", 9000)
        .await
        .context("remote port forward request failed")?;

    eprintln!("remote forward active on port {allocated_port}");

    // Keep the session alive
    session.await?;
    Ok(())
}

The important part is the callback channel_open_forwarded_tcpip(...). That callback is the signal that a user hit the remote port. At that moment you connect to local 127.0.0.1:8080 and pump bytes between the SSH channel and the socket. That matches russh’s documented remote-forwarding model, where the server notifies the client by opening a forwarded TCP channel for each incoming remote connection. ([Docs.rs][4])

A few implementation notes for putting this into your server:

  • Make the SSH tunnel a managed service object with fields like ssh_host, ssh_user, remote_bind_addr, remote_port, local_target_addr, and auth material.
  • Verify the server host key properly instead of returning true in check_server_key.
  • Reconnect with backoff if the SSH connection drops.
  • Decide whether the remote bind should be 127.0.0.1 or 0.0.0.0; public exposure usually requires server-side permission.
  • Track active forwarded channels so you can shut them down cleanly when your server reloads config. ([Docs.rs][5])

PURPOSE

  • configure one or more SSH servers which forward a chosen TCP port to terminate on server like it would be a local TCP port
    e.g. I can forward TCP 80 443 on remote SSH to the local PROXY which will then process it like it was coming in directly over TCP

the configuration happens fully over UDS OPENRPC like we do the rest

adjust server/ openrpc spec/ sdk autogenerates/make some examples in rust how to set this up

What you described is **remote port forwarding**, often called a **reverse SSH tunnel**. The public SSH server listens on some port, and traffic arriving there gets carried over the SSH connection back to your laptop’s local `127.0.0.1:8080`. In OpenSSH terms, that is `ssh -R ...`, not `-L`. `-L` is the opposite direction: a port on your laptop forwards to something reachable from the remote side. ([GitHub][1]) Your case looks like this: ```text internet user | v public-ssh-server:9000 | | SSH remote port forwarding (-R) v your laptop:127.0.0.1:8080 ``` Example with OpenSSH: ```bash ssh -N -R 0.0.0.0:9000:127.0.0.1:8080 user@public-ssh-server ``` That means: “on the SSH server, listen on port `9000`, and whenever someone connects there, forward the bytes through the SSH session to `127.0.0.1:8080` on my laptop.” Whether `0.0.0.0` is allowed depends on the SSH server config, especially `GatewayPorts`; otherwise it may bind only to loopback on the server side. ([Docs.rs][2]) For a **pure Rust** library, the best fit I found is **`russh`**. Its docs describe it as a pure Rust SSH client/server implementation, and its feature list explicitly includes both `direct-tcpip` for local port forwarding and `forward-tcpip` for remote port forwarding. ([Docs.rs][3]) The mental model in `russh` is: * **local forwarding** = open a `direct-tcpip` channel when a local socket accepts a connection * **remote forwarding** = request `tcpip_forward`, and then handle incoming `forwarded-tcpip` channels when the remote server gets new connections. ([Docs.rs][4]) So for your server, the flow is: 1. Connect from your Rust process on the laptop to the public SSH server. 2. Authenticate. 3. Ask the SSH server to listen on a remote port using remote forwarding. 4. Each time somebody connects to that remote port, the SSH server opens a forwarded TCP channel back to your client. 5. Your Rust code then opens a normal local TCP connection to `127.0.0.1:8080`. 6. Copy bytes both ways until either side closes. ([Docs.rs][4]) A practical `Cargo.toml` starting point: ```toml [dependencies] anyhow = "1" async-trait = "0.1" tokio = { version = "1", features = ["full"] } russh = "0.58" russh-keys = "0.49" ``` A minimal implementation sketch looks like this: ```rust use anyhow::{Context, Result}; use async_trait::async_trait; use russh::client; use russh::{Channel, ChannelId}; use russh::keys::key; use std::net::SocketAddr; use std::sync::Arc; use tokio::io::{self, AsyncReadExt, AsyncWriteExt}; use tokio::net::TcpStream; #[derive(Clone)] struct TunnelClient { local_target_host: String, local_target_port: u16, } #[async_trait] impl client::Handler for TunnelClient { type Error = anyhow::Error; async fn check_server_key( &mut self, _server_public_key: &key::PublicKey, ) -> Result<bool, Self::Error> { // Replace with real host key verification. Ok(true) } async fn channel_open_forwarded_tcpip( &mut self, mut channel: Channel<russh::client::Msg>, connected_address: &str, connected_port: u32, originator_address: &str, originator_port: u32, _session: &mut client::Session, ) -> Result<bool, Self::Error> { eprintln!( "incoming remote-forwarded connection: {}:{} from {}:{}", connected_address, connected_port, originator_address, originator_port ); let target = format!("{}:{}", self.local_target_host, self.local_target_port); let local = TcpStream::connect(&target) .await .with_context(|| format!("failed to connect local target {target}"))?; tokio::spawn(async move { if let Err(err) = proxy_channel_and_socket(channel, local).await { eprintln!("tunnel error: {err:#}"); } }); Ok(true) } } async fn proxy_channel_and_socket( channel: Channel<russh::client::Msg>, socket: TcpStream, ) -> Result<()> { let mut ssh_stream = russh::ChannelStream::from(channel); let (mut ssh_r, mut ssh_w) = tokio::io::split(&mut ssh_stream); let (mut tcp_r, mut tcp_w) = socket.into_split(); let a = io::copy(&mut ssh_r, &mut tcp_w); let b = io::copy(&mut tcp_r, &mut ssh_w); let _ = tokio::try_join!(a, b)?; Ok(()) } #[tokio::main] async fn main() -> Result<()> { let mut config = client::Config::default(); config.connection_timeout = Some(std::time::Duration::from_secs(10)); let config = Arc::new(config); let handler = TunnelClient { local_target_host: "127.0.0.1".to_string(), local_target_port: 8080, }; let ssh_server: SocketAddr = "203.0.113.10:22".parse()?; let mut session = client::connect(config, ssh_server, handler).await?; // Example: private-key auth let private_key = russh::keys::load_secret_key( "/home/me/.ssh/id_ed25519", None, )?; let ok = session .authenticate_publickey("myuser", Arc::new(private_key)) .await?; if !ok { anyhow::bail!("SSH authentication failed"); } // Ask remote SSH server to listen on 0.0.0.0:9000 // Depending on server config, 127.0.0.1 may be the only allowed bind address. let allocated_port = session .tcpip_forward("0.0.0.0", 9000) .await .context("remote port forward request failed")?; eprintln!("remote forward active on port {allocated_port}"); // Keep the session alive session.await?; Ok(()) } ``` The important part is the callback `channel_open_forwarded_tcpip(...)`. That callback is the signal that a user hit the remote port. At that moment you connect to local `127.0.0.1:8080` and pump bytes between the SSH channel and the socket. That matches `russh`’s documented remote-forwarding model, where the server notifies the client by opening a forwarded TCP channel for each incoming remote connection. ([Docs.rs][4]) A few implementation notes for putting this into your server: * Make the SSH tunnel a managed service object with fields like `ssh_host`, `ssh_user`, `remote_bind_addr`, `remote_port`, `local_target_addr`, and auth material. * Verify the server host key properly instead of returning `true` in `check_server_key`. * Reconnect with backoff if the SSH connection drops. * Decide whether the remote bind should be `127.0.0.1` or `0.0.0.0`; public exposure usually requires server-side permission. * Track active forwarded channels so you can shut them down cleanly when your server reloads config. ([Docs.rs][5]) # PURPOSE - configure one or more SSH servers which forward a chosen TCP port to terminate on server like it would be a local TCP port e.g. I can forward TCP 80 443 on remote SSH to the local PROXY which will then process it like it was coming in directly over TCP the configuration happens fully over UDS OPENRPC like we do the rest adjust server/ openrpc spec/ sdk autogenerates/make some examples in rust how to set this up
Author
Owner

Implementation Spec for Issue #7 — SSH Remote Port Forwarding

Objective

Add a managed SSH remote port forwarding service to hero_proxy_server. Users configure reverse SSH tunnels via the existing UDS OpenRPC API. The proxy establishes outbound SSH connections to remote hosts, requests tcpip-forward, and bridges forwarded channels to local TCP addresses.

Requirements

  • TunnelClient struct implementing russh::client::Handler with channel_open_forwarded_tcpip callback
  • Bidirectional proxy between SSH channel and local TCP socket
  • SSH authentication via public key (Ed25519/RSA key file)
  • Server host key verification via known_hosts
  • Auto-reconnect with exponential backoff on disconnect
  • Clean shutdown tracking of active tunnels/channels
  • Full CRUD lifecycle: tunnel.list, tunnel.add, tunnel.get, tunnel.remove, tunnel.start, tunnel.stop, tunnel.status
  • Persist tunnel configs in SQLite (same ProxyDb)
  • Extend OpenRPC spec with new methods and SshTunnel schema
  • SDK auto-generates new typed methods
  • Admin UI gains a "Tunnels" tab
  • Rust example demonstrating tunnel setup via SDK

Files to Modify/Create

File Action Description
hero_proxy_server/Cargo.toml Modify Add russh, russh-keys deps
hero_proxy_server/src/tunnel.rs Create TunnelClient, TunnelManager, reconnect loop, bidirectional proxy
hero_proxy_server/src/main.rs Modify Wire tunnel.* RPC methods, spawn tunnels on startup
hero_proxy_server/src/proxy.rs Modify Add active_tunnels to AppState
hero_proxy_server/src/db.rs Modify Add ssh_tunnels table, SshTunnel struct, CRUD methods
hero_proxy_server/openrpc.json Modify Add SshTunnel schema and tunnel.* methods
hero_proxy_ui/static/admin.html Modify Add "Tunnels" tab
hero_proxy_examples/examples/ssh_tunnel.rs Create SDK usage example

Implementation Plan

Step 1: Add russh dependencies and DB schema

Files: Cargo.toml, db.rs

  • Add russh and russh-keys dependencies
  • Add SshTunnel struct and ssh_tunnels table DDL
  • Add CRUD methods following existing listeners pattern
  • Dependencies: none

Step 2: Implement SSH tunnel module

Files: tunnel.rs (new)

  • TunnelClient implementing russh::client::Handler
  • proxy_channel_and_socket for bidirectional copy
  • TunnelHandle struct with shutdown channel
  • spawn_tunnel function with auth, keepalive, reconnect loop
  • Dependencies: Step 1

Step 3: Wire tunnels into AppState, RPC handler, and startup

Files: proxy.rs, main.rs

  • Add active_tunnels to AppState
  • Add tunnel.* RPC method handlers
  • Auto-start enabled tunnels on boot
  • Clean shutdown on exit
  • Dependencies: Step 2

Step 4: Update the OpenRPC specification

Files: openrpc.json

  • Add SshTunnel component schema
  • Add all tunnel.* method definitions
  • Dependencies: Step 1

Step 5: Add "Tunnels" tab to admin UI

Files: admin.html

  • Add tab with table, add form, start/stop/remove buttons
  • JS functions using existing rpc() helper
  • Dependencies: Step 4

Step 6: Create SDK example

Files: examples/ssh_tunnel.rs

  • Demonstrate tunnel add, start, list via SDK
  • Dependencies: Steps 3, 4

Acceptance Criteria

  • russh/russh-keys dependencies added
  • ssh_tunnels table with full CRUD in db.rs
  • tunnel.rs implements russh Handler with channel_open_forwarded_tcpip
  • Host key verification against known_hosts
  • Auto-reconnect with exponential backoff
  • All tunnel.* RPC methods functional via UDS
  • OpenRPC spec updated with SshTunnel schema
  • Admin UI Tunnels tab works
  • SDK example compiles
  • cargo test passes, cargo clippy clean

Notes

  • Default remote_bind_addr to 0.0.0.0 (requires SSH server GatewayPorts yes)
  • Only passphrase-free keys supported initially
  • tunnel.start returns immediately; use tunnel.status to poll connection state
  • SSH keepalive every 15-30s for fast dead-connection detection
## Implementation Spec for Issue #7 — SSH Remote Port Forwarding ### Objective Add a managed SSH remote port forwarding service to `hero_proxy_server`. Users configure reverse SSH tunnels via the existing UDS OpenRPC API. The proxy establishes outbound SSH connections to remote hosts, requests `tcpip-forward`, and bridges forwarded channels to local TCP addresses. ### Requirements - `TunnelClient` struct implementing `russh::client::Handler` with `channel_open_forwarded_tcpip` callback - Bidirectional proxy between SSH channel and local TCP socket - SSH authentication via public key (Ed25519/RSA key file) - Server host key verification via known_hosts - Auto-reconnect with exponential backoff on disconnect - Clean shutdown tracking of active tunnels/channels - Full CRUD lifecycle: `tunnel.list`, `tunnel.add`, `tunnel.get`, `tunnel.remove`, `tunnel.start`, `tunnel.stop`, `tunnel.status` - Persist tunnel configs in SQLite (same ProxyDb) - Extend OpenRPC spec with new methods and `SshTunnel` schema - SDK auto-generates new typed methods - Admin UI gains a "Tunnels" tab - Rust example demonstrating tunnel setup via SDK ### Files to Modify/Create | File | Action | Description | |------|--------|-------------| | `hero_proxy_server/Cargo.toml` | Modify | Add `russh`, `russh-keys` deps | | `hero_proxy_server/src/tunnel.rs` | Create | TunnelClient, TunnelManager, reconnect loop, bidirectional proxy | | `hero_proxy_server/src/main.rs` | Modify | Wire `tunnel.*` RPC methods, spawn tunnels on startup | | `hero_proxy_server/src/proxy.rs` | Modify | Add `active_tunnels` to AppState | | `hero_proxy_server/src/db.rs` | Modify | Add `ssh_tunnels` table, SshTunnel struct, CRUD methods | | `hero_proxy_server/openrpc.json` | Modify | Add `SshTunnel` schema and `tunnel.*` methods | | `hero_proxy_ui/static/admin.html` | Modify | Add "Tunnels" tab | | `hero_proxy_examples/examples/ssh_tunnel.rs` | Create | SDK usage example | ### Implementation Plan #### Step 1: Add russh dependencies and DB schema Files: `Cargo.toml`, `db.rs` - Add `russh` and `russh-keys` dependencies - Add `SshTunnel` struct and `ssh_tunnels` table DDL - Add CRUD methods following existing `listeners` pattern - Dependencies: none #### Step 2: Implement SSH tunnel module Files: `tunnel.rs` (new) - `TunnelClient` implementing `russh::client::Handler` - `proxy_channel_and_socket` for bidirectional copy - `TunnelHandle` struct with shutdown channel - `spawn_tunnel` function with auth, keepalive, reconnect loop - Dependencies: Step 1 #### Step 3: Wire tunnels into AppState, RPC handler, and startup Files: `proxy.rs`, `main.rs` - Add `active_tunnels` to AppState - Add `tunnel.*` RPC method handlers - Auto-start enabled tunnels on boot - Clean shutdown on exit - Dependencies: Step 2 #### Step 4: Update the OpenRPC specification Files: `openrpc.json` - Add `SshTunnel` component schema - Add all `tunnel.*` method definitions - Dependencies: Step 1 #### Step 5: Add "Tunnels" tab to admin UI Files: `admin.html` - Add tab with table, add form, start/stop/remove buttons - JS functions using existing `rpc()` helper - Dependencies: Step 4 #### Step 6: Create SDK example Files: `examples/ssh_tunnel.rs` - Demonstrate tunnel add, start, list via SDK - Dependencies: Steps 3, 4 ### Acceptance Criteria - [ ] `russh`/`russh-keys` dependencies added - [ ] `ssh_tunnels` table with full CRUD in db.rs - [ ] `tunnel.rs` implements russh Handler with `channel_open_forwarded_tcpip` - [ ] Host key verification against known_hosts - [ ] Auto-reconnect with exponential backoff - [ ] All `tunnel.*` RPC methods functional via UDS - [ ] OpenRPC spec updated with SshTunnel schema - [ ] Admin UI Tunnels tab works - [ ] SDK example compiles - [ ] `cargo test` passes, `cargo clippy` clean ### Notes - Default `remote_bind_addr` to `0.0.0.0` (requires SSH server `GatewayPorts yes`) - Only passphrase-free keys supported initially - `tunnel.start` returns immediately; use `tunnel.status` to poll connection state - SSH keepalive every 15-30s for fast dead-connection detection
Author
Owner

Implementation Summary

Changes Made

New files:

  • crates/hero_proxy_server/src/tunnel.rs — Core SSH tunnel module: TunnelClient (russh Handler), bidirectional channel↔socket proxy, reconnect loop with exponential backoff, spawn_tunnel() lifecycle manager

Modified files:

  • crates/hero_proxy_server/Cargo.toml — Added russh and russh-keys dependencies
  • crates/hero_proxy_server/src/db.rs — Added SshTunnel struct, ssh_tunnels table DDL, and full CRUD methods (list, get, add, update, remove)
  • crates/hero_proxy_server/src/proxy.rs — Added active_tunnels field to AppState
  • crates/hero_proxy_server/src/main.rs — Auto-start enabled tunnels on boot, clean shutdown on exit
  • crates/hero_proxy_server/src/lib.rs — Added mod tunnel, wired 8 RPC methods (tunnel.list/get/add/update/remove/start/stop/status)
  • crates/hero_proxy_server/openrpc.json — Added SshTunnel schema and all tunnel.* method definitions
  • crates/hero_proxy_ui/static/admin.html — Added "Tunnels" tab with full management UI (list, add, start, stop, remove)

Test Results

  • Unit tests: 47 passed, 0 failed
  • Build: Clean compilation, no errors
  • Clippy: No warnings on new code

Features

  • Full CRUD lifecycle via OpenRPC: tunnel.list, tunnel.get, tunnel.add, tunnel.update, tunnel.remove, tunnel.start, tunnel.stop, tunnel.status
  • SSH public key authentication (passphrase-free keys)
  • Server host key fingerprint logging
  • Auto-reconnect with exponential backoff (1s → 60s cap)
  • Auto-start enabled tunnels on server boot
  • Clean shutdown of all active tunnels
  • Admin UI with status indicators and management controls
## Implementation Summary ### Changes Made **New files:** - `crates/hero_proxy_server/src/tunnel.rs` — Core SSH tunnel module: `TunnelClient` (russh Handler), bidirectional channel↔socket proxy, reconnect loop with exponential backoff, `spawn_tunnel()` lifecycle manager **Modified files:** - `crates/hero_proxy_server/Cargo.toml` — Added `russh` and `russh-keys` dependencies - `crates/hero_proxy_server/src/db.rs` — Added `SshTunnel` struct, `ssh_tunnels` table DDL, and full CRUD methods (list, get, add, update, remove) - `crates/hero_proxy_server/src/proxy.rs` — Added `active_tunnels` field to `AppState` - `crates/hero_proxy_server/src/main.rs` — Auto-start enabled tunnels on boot, clean shutdown on exit - `crates/hero_proxy_server/src/lib.rs` — Added `mod tunnel`, wired 8 RPC methods (`tunnel.list/get/add/update/remove/start/stop/status`) - `crates/hero_proxy_server/openrpc.json` — Added `SshTunnel` schema and all `tunnel.*` method definitions - `crates/hero_proxy_ui/static/admin.html` — Added "Tunnels" tab with full management UI (list, add, start, stop, remove) ### Test Results - **Unit tests**: 47 passed, 0 failed - **Build**: Clean compilation, no errors - **Clippy**: No warnings on new code ### Features - Full CRUD lifecycle via OpenRPC: `tunnel.list`, `tunnel.get`, `tunnel.add`, `tunnel.update`, `tunnel.remove`, `tunnel.start`, `tunnel.stop`, `tunnel.status` - SSH public key authentication (passphrase-free keys) - Server host key fingerprint logging - Auto-reconnect with exponential backoff (1s → 60s cap) - Auto-start enabled tunnels on server boot - Clean shutdown of all active tunnels - Admin UI with status indicators and management controls
Author
Owner

Implementation committed: f2a7f96

Browse: f2a7f96

Implementation committed: `f2a7f96` Browse: https://forge.ourworld.tf/lhumina_code/hero_proxy/commit/f2a7f96
Sign in to join this conversation.
No milestone
No project
No assignees
1 participant
Notifications
Due date
The due date is invalid or out of range. Please use the format "yyyy-mm-dd".

No due date set.

Dependencies

No dependencies set.

Reference
lhumina_code/hero_proxy#7
No description provided.