Add 25/04/25: Managed to sync the (archive) chain on a node from scratch to a little over 20 million blocks using the method below (and wrote the article as it seemed to be working fine). Peers have since seemed to drop off again, however. I'm unsure if this is due to the nature of the local hardware setup being used. If anyone else has used this method, please feel free to comment with your experience.
Just a quick note for the XDC Knowledge Base.
Perhaps as a developer you want multiple RPC endpoints, so you run multiple nodes on a LAN. Running multiple nodes on an existing LAN with a single public IP had some challenges. A solution is shown below: a remote public IP is used for P2P, while the public IP of the current LAN (along with the relevant port) provides RPC access.
Attempt 1 (unsuccessful)
In start-node.sh, changed "--port 30303" to a different port and set the LAN router's NAT to forward that port.
Also changed the port bindings in docker-compose.yml to match.
The outcome was only 1 peer at best, and that dropped off as well.
Noted that the bootnodes.list on the new node was smaller than on older ones, so tried using the bootnodes.list from the older nodes instead. No improvement; same issue.
Solution in the end
Use 2 servers
- Continue running the Node on the existing LAN; but
- Also get an external VPS with its own public IP
- Make sure ssh is installed on both
- Set up ssh key on the LAN machine and ssh-copy-id to the VPS
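The key setup can be sketched as below. The commands are built into variables and printed for review rather than executed; the VPS user and address are placeholders, not from the article.

```shell
# Placeholder VPS login details (assumptions for this sketch).
VPS_USER="ubuntu"
VPS_IP="203.0.113.10"

# On the LAN machine: generate a key pair (no passphrase, so the tunnels
# can start unattended) and copy the public key to the VPS.
KEYGEN_CMD="ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -N ''"
COPY_CMD="ssh-copy-id -i ~/.ssh/id_ed25519.pub ${VPS_USER}@${VPS_IP}"

echo "$KEYGEN_CMD"
echo "$COPY_CMD"
```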
Set up 2 reverse SSH tunnels from the LAN machine to the VPS (one for TCP, and one for UDP).
Ensure remote command execution is disallowed on the tunnels.
For TCP
- Use port 30303 for the TCP tunnel to keep it simple.
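A minimal sketch of the TCP tunnel, run from the LAN machine (autossh, used here for auto-reconnect, is an assumption; the VPS address is a placeholder). Note that sshd on the VPS needs "GatewayPorts yes" in sshd_config for the remote bind to be reachable from the internet. The command is printed rather than executed:

```shell
VPS_IP="203.0.113.10"   # placeholder address

# Reverse TCP tunnel: connections to port 30303 on the VPS are forwarded
# back to port 30303 on this LAN machine. -N disallows remote command
# execution; autossh restarts the tunnel if it drops.
TCP_TUNNEL="autossh -M 0 -N -R 0.0.0.0:30303:localhost:30303 root@${VPS_IP}"

echo "$TCP_TUNNEL"
```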
For UDP
- SSH tunnels don't carry UDP natively, so we manage this by using socat to temporarily convert UDP to TCP for transmission through the 2nd reverse SSH tunnel.
- First we install socat on the VPS to convert incoming UDP on port 30303 at the VPS to TCP on the VPS end of the second reverse SSH tunnel.
- Then we also set up socat on the LAN machine to convert incoming TCP on the 2nd reverse SSH tunnel's port back to UDP directed at port 30303 on the local machine.
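The UDP leg might look like the sketch below, assuming the second reverse tunnel runs on port 30304 (that port number and the VPS address are my placeholders, not from the article). Again the commands are printed for review, not executed:

```shell
VPS_IP="203.0.113.10"   # placeholder address

# Second reverse tunnel, carrying the TCP-wrapped UDP traffic.
UDP_TUNNEL="autossh -M 0 -N -R 30304:localhost:30304 root@${VPS_IP}"

# On the VPS: wrap incoming UDP discovery traffic on 30303 into TCP and
# push it into the VPS end of the second tunnel.
VPS_SOCAT="socat UDP4-RECVFROM:30303,reuseaddr,fork TCP4:localhost:30304"

# On the LAN machine: unwrap TCP arriving from the tunnel back into UDP
# aimed at the node's discovery port 30303.
LAN_SOCAT="socat TCP4-LISTEN:30304,reuseaddr,fork UDP4:localhost:30303"

echo "$UDP_TUNNEL"
echo "$VPS_SOCAT"
echo "$LAN_SOCAT"
```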
Automation for reboot
- Set up autossh and also socat as systemd services on the LAN machine.
- Set up socat as a systemd service on the VPS.
- (And make sure to have the XDC client auto start on reboot via an entry in the root crontab on the LAN machine).
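One way to persist the TCP tunnel across reboots is a systemd unit along these lines (the unit name, login user, and ServerAlive settings are assumptions; companion units would wrap the second tunnel and the two socat relays in the same pattern):

```ini
# /etc/systemd/system/xdc-tunnel-tcp.service  (hypothetical name)
[Unit]
Description=Reverse SSH tunnel for XDC P2P (TCP)
After=network-online.target
Wants=network-online.target

[Service]
# -M 0 disables autossh's monitor port; the ServerAlive options let
# ssh itself detect a dead tunnel so autossh can restart it.
ExecStart=/usr/bin/autossh -M 0 -N \
    -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
    -R 0.0.0.0:30303:localhost:30303 root@VPS_IP_ADDRESS
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable --now xdc-tunnel-tcp`.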
Firewalls
- Make sure ufw is set correctly to allow required ports on the VPS and the local LAN machine.
- Make sure the router firewall is not interfering.
- Make sure the VPS provider isn't blocking outgoing SSH.
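The ufw rules might come out along these lines (a sketch, printed rather than run since ufw needs root; 8545 as the RPC port is my assumption, so adjust to whatever port your node actually exposes):

```shell
# On the VPS: allow SSH (for the tunnels) and the advertised P2P port.
VPS_RULES="ufw allow 22/tcp; ufw allow 30303/tcp; ufw allow 30303/udp"

# On the LAN machine: allow the node's P2P port and the RPC port.
LAN_RULES="ufw allow 30303; ufw allow 8545/tcp"

echo "$VPS_RULES"
echo "$LAN_RULES"
```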
Then in the XDC node start-node.sh
- Continue to use "--port 30303"
- Add "--nat extip:VPS_IP_ADDRESS" so the node will advertise its P2P availability at the VPS public IP address on port 30303
- No need to modify docker-compose.yml from the default, as all ports on the local machine remain the same and don't need any modification
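The relevant flags in start-node.sh end up along these lines (a fragment, not the full start command; the IP is a placeholder for the VPS's actual public address):

```shell
VPS_IP_ADDRESS="203.0.113.10"   # placeholder: your VPS's public IP

# Keep the default P2P port, but advertise the VPS's public IP to peers.
NODE_ARGS="--port 30303 --nat extip:${VPS_IP_ADDRESS}"

echo "$NODE_ARGS"
```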
In effect, with the instructions above, the end result is an XDC node running on the LAN behind a local NAT, but for all intents and purposes the rest of the XDC network sees its public IP as that of the VPS we are tunnelling to. This allows the node's P2P networking to be done through the VPS IP address.
Notes
- To access the RPC, just forward a port via the NAT and then access the RPC using the LAN's public IP address (along with the port you're forwarding via your NAT).
- Oracle Cloud free-tier has free VPSs with public IP addresses.
- Hands are a bit full at present so I'll only look at publishing the further details for the system services setup later if there's demand.
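A quick sanity check of the RPC access described in the notes above, assuming you forwarded port 8545 (a common default, and my assumption) via your NAT. The curl command is built and printed rather than executed, since it needs a live node:

```shell
LAN_PUBLIC_IP="198.51.100.7"   # placeholder: your LAN's public IP
RPC_PORT=8545                  # assumed RPC port

# Standard JSON-RPC query for the current block number.
RPC_CHECK="curl -s -X POST -H 'Content-Type: application/json' \
  --data '{\"jsonrpc\":\"2.0\",\"method\":\"eth_blockNumber\",\"params\":[],\"id\":1}' \
  http://${LAN_PUBLIC_IP}:${RPC_PORT}"

echo "$RPC_CHECK"
```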