Step-by-step instructions for setting up ChainLightning on a client router and VPS server.
```
                 Your LAN
                    │
           ┌────────┴────────┐
           │  Client Router  │
           │   (Linux box)   │
           └──┬──┬──┬──┬──┬──┘
              │  │  │  │  │
            WAN0 WAN1 WAN2 WAN3 WAN4
           (ADSL)(Star)(ADSL)(Star)(ADSL)
              │  │  │  │  │
              └──┴──┴──┴──┘
                    │
                Internet
                    │
           ┌────────┴────────┐
           │   VPS Server    │
           │  (Hetzner, DO,  │
           │  Vultr, etc.)   │
           └─────────────────┘
```
The client router has multiple WAN interfaces. ChainLightning bonds them into a single tunnel interface (tun-bond) that appears as a regular network interface to your LAN.
- Linux (Debian/Ubuntu recommended; any distro works)
- Rust toolchain 1.75+ (install: `curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh`)
- TUN kernel module: `sudo modprobe tun`
- Build tools: `sudo apt install build-essential pkg-config`
On the server (VPS):
- Public IP address
- UDP ports 9001-9005 open in the firewall
- IP forwarding enabled: `sysctl -w net.ipv4.ip_forward=1`

On the client router:
- Multiple WAN interfaces (physical or VLAN)
- Root access or the CAP_NET_ADMIN capability
- IP forwarding enabled for LAN routing
On both machines:
```
git clone https://github.com/cronos3k/chainlightning.git
cd chainlightning
cargo build --release
```

Open the UDP ports used for bonding (default: 9001-9005):
```
# UFW
sudo ufw allow 9001:9005/udp

# iptables
sudo iptables -A INPUT -p udp --dport 9001:9005 -j ACCEPT

# nftables
sudo nft add rule inet filter input udp dport 9001-9005 accept
```

Enable forwarding and masquerading for tunnel traffic:
```
# Enable IP forwarding
echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.d/99-chainlightning.conf
sudo sysctl -p /etc/sysctl.d/99-chainlightning.conf

# NAT for tunnel traffic (replace eth0 with your server's internet interface)
sudo iptables -t nat -A POSTROUTING -s 10.99.0.0/24 -o eth0 -j MASQUERADE
```

Copy config.example.yaml to config.yaml and adjust link capacities to match your client's WAN links.
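As a rough guide to what the per-link settings might look like, here is a hypothetical sketch built from the field names this guide mentions (`link_tiers`, `capacity_down_bps`, `capacity_up_bps`, `link_type`, `realtime_eligible`). The real schema lives in config.example.yaml and may differ, so always start from that file:

```yaml
# Hypothetical sketch, not the shipped schema -- start from config.example.yaml.
link_tiers:
  - link_type: "adsl"             # one entry per WAN link, in L0, L1, ... order
    capacity_down_bps: 60000000   # 60 Mbps down
    capacity_up_bps: 12000000     # 12 Mbps up
    realtime_eligible: true       # low, stable latency
  - link_type: "starlink"
    capacity_down_bps: 220000000  # 220 Mbps down
    capacity_up_bps: 20000000
    realtime_eligible: false      # latency too variable for realtime traffic
```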
```
sudo ./target/release/server
```

The server creates a tun-bond interface with IP 10.99.0.1/24 and listens on UDP ports 9001-9005.
Edit config.yaml:

- Set `SERVER_IP` in the client binary configuration (or environment variable) to your VPS public IP
- Adjust `link_tiers` to match your WAN interfaces:
  - Set the correct `capacity_down_bps` and `capacity_up_bps` for each link
  - Set `link_type` ("adsl", "starlink", "fiber", "lte")
  - Set `realtime_eligible` based on latency characteristics
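Assuming the `_bps` suffix means raw bits per second (which the field names suggest), convert the Mbps figures your ISP quotes before entering them:

```shell
# Convert an ISP-quoted Mbps figure to the raw bps value the config expects.
# The 60 Mbps example value is an assumption; substitute your own link speed.
mbps=60
bps=$((mbps * 1000 * 1000))
echo "capacity_down_bps: $bps"
```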
List your network interfaces:
```
ip link show
ip route show table all
```

Note which interface connects to which ISP. The client binds to each WAN interface in order (L0, L1, L2, ...).
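Because the client binds links in listed order, a quick loop makes the L0/L1/... assignment explicit before you fill in the config. The interface names here are placeholder assumptions; substitute your own:

```shell
# Print the link index the client will assign to each WAN interface.
# Interface names are placeholders -- replace with your actual WAN interfaces.
wan_ifaces="ppp0 wwan0 ppp1 wwan1 ppp2"
i=0
map=""
for iface in $wan_ifaces; do
    echo "L$i -> $iface"
    map="$map L$i:$iface"
    i=$((i + 1))
done
```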
```
sudo ./target/release/client
```

The client creates a tun-bond interface with IP 10.99.0.2/24, connects to the server on each WAN link, and starts bonding.
```
# Check tunnel interface exists
ip addr show tun-bond

# Ping through tunnel
ping 10.99.0.1

# Check link status in logs
tail -f /tmp/bonding.log | grep RateCtrl
```
```
# Default route via tunnel
sudo ip route add default via 10.99.0.1 dev tun-bond table 100
sudo ip rule add from 10.99.0.0/24 table 100

# Make sure WAN links still use their own gateways for tunnel traffic
# (Add routes for each WAN's gateway so tunnel UDP packets don't loop)
sudo ip route add <SERVER_PUBLIC_IP>/32 via <WAN0_GATEWAY> dev <WAN0_IFACE>

# Only route LAN traffic through tunnel
sudo ip route add default via 10.99.0.1 dev tun-bond table 100
sudo ip rule add from 192.168.1.0/24 table 100   # Your LAN subnet
```

For selective routing, use iptables marks:
```
# Mark traffic from specific hosts
sudo iptables -t mangle -A PREROUTING -s 192.168.1.100 -j MARK --set-mark 1
sudo ip rule add fwmark 1 table 100
sudo ip route add default via 10.99.0.1 dev tun-bond table 100
```

Create /etc/systemd/system/chainlightning-server.service:
```
[Unit]
Description=ChainLightning Bonding Server
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
ExecStart=/opt/chainlightning/server
WorkingDirectory=/opt/chainlightning
Restart=always
RestartSec=5
LimitNOFILE=65536

# NAT setup
ExecStartPre=/sbin/iptables -t nat -A POSTROUTING -s 10.99.0.0/24 -o eth0 -j MASQUERADE
ExecStopPost=/sbin/iptables -t nat -D POSTROUTING -s 10.99.0.0/24 -o eth0 -j MASQUERADE

[Install]
WantedBy=multi-user.target
```

Create /etc/systemd/system/chainlightning-client.service:
```
[Unit]
Description=ChainLightning Bonding Client
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
ExecStart=/opt/chainlightning/client
WorkingDirectory=/opt/chainlightning
Restart=always
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
```

```
sudo systemctl daemon-reload
sudo systemctl enable chainlightning-server   # or chainlightning-client
sudo systemctl start chainlightning-server
sudo systemctl status chainlightning-server
```

```
# From client, ping server through tunnel
ping -c 10 10.99.0.1

# Check all links are active
grep "RateCtrl" /tmp/bonding.log | tail -1
# Expected: all links show "RUN" state
```

Install iperf3 on both machines:
```
sudo apt install iperf3
```

Run tests:
```
# On server
iperf3 -s -B 10.99.0.1

# On client - download test
iperf3 -c 10.99.0.1 -R -P 4 -t 30

# On client - upload test
iperf3 -c 10.99.0.1 -P 4 -t 30

# On client - bidirectional test
iperf3 -c 10.99.0.1 --bidir -t 30
```

Watch the rate controller status:
```
# Real-time monitoring
tail -f /tmp/bonding.log | grep RateCtrl

# Example output:
# RateCtrl: L0[60/60Mbps|30ms|SL0.0%|RL0.0%|c1.00|RUN|w:60]
#           L1[198/220Mbps|36ms|SL0.0%|RL1.6%|c1.00|RUN|w:198]
#
# Fields: Link[currentRate/maxRate|RTT|SendLoss|RecvLoss|confidence|state|weight]
```
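The fields above can be pulled apart with standard awk for scripted monitoring. This is a sketch against the example log line (the field layout is taken from the format shown above, not from a shipped tool):

```shell
# Extract the state and current/max rate from a RateCtrl log line.
# The line is copied from the example output above.
line='RateCtrl: L0[60/60Mbps|30ms|SL0.0%|RL0.0%|c1.00|RUN|w:60]'

# Field 6 when splitting on '|' is the link state (e.g. RUN).
state=$(printf '%s\n' "$line" | awk -F'|' '{print $6}')

# The text between '[' and the first '|' is currentRate/maxRate.
rate=$(printf '%s\n' "$line" | awk -F'[][]' '{print $2}' | awk -F'|' '{print $1}')

echo "state=$state rate=$rate"
```

The same awk filters can be fed from `tail -f /tmp/bonding.log` for a live per-link view.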
```
# Check TUN module
lsmod | grep tun
sudo modprobe tun

# Check permissions
ls -la /dev/net/tun
# Should be: crw-rw-rw- 1 root root 10, 200 ...
```

```
# Test UDP connectivity on each port
for port in 9001 9002 9003 9004 9005; do
    echo "test" | nc -u -w1 <SERVER_IP> $port && echo "Port $port: OK"
done

# Check server is listening
ss -ulnp | grep 900
```

- Check if all links show "RUN" state in logs
- Verify link capacities in config match actual ISP speeds
- Run
iperf3directly on each WAN interface (without tunnel) to baseline - Check for packet loss:
ping -c 100 10.99.0.1should show 0% loss - Increase
reorder_timeout_msif links have very different latencies
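To judge whether bonded throughput is reasonable, compare the bonded iperf3 result against the sum of the per-link baselines; some loss to tunnel overhead and reordering is expected. The numbers below are placeholders, not measured results:

```shell
# Per-link iperf3 baselines in Mbps, measured without the tunnel (placeholders).
baseline_mbps="58 195 57 190 55"
# Bonded throughput through tun-bond in Mbps (placeholder).
bonded_mbps=510

sum=0
for b in $baseline_mbps; do
    sum=$((sum + b))
done
# Integer percentage of the aggregate baseline actually achieved.
eff=$((bonded_mbps * 100 / sum))
echo "aggregate baseline: ${sum} Mbps, bonding efficiency: ${eff}%"
```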
If links frequently go DOWN and recover:
- Check physical connectivity on that WAN interface
- Increase `down_timeout_ms` in the rate control config
- Check if the ISP has intermittent connectivity issues
- Look at RTT values: sudden RTT spikes indicate link problems
Some retransmits are normal with multi-path bonding due to reordering. If excessive:
- Enable sync:
enable_sync: true - Increase
reorder_timeout_ms - Try
strategy: "tiered_fill"which prefers low-latency links first