<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Developers Forum for XinFin XDC Network: s4njk4n</title>
    <description>The latest articles on Developers Forum for XinFin XDC Network by s4njk4n (@s4njk4n).</description>
    <link>https://www.xdc.dev/s4njk4n</link>
    <image>
      <url>https://www.xdc.dev/images/6dSDFM-L_dw2b4if6fvmqkTkUjlEvKoFg8G2riZA9eU/rs:fill:90:90/mb:500000/ar:1/aHR0cHM6Ly93d3cu/eGRjLmRldi91cGxv/YWRzL3VzZXIvcHJv/ZmlsZV9pbWFnZS8y/Ny85YzlkYTUyMC1k/Y2ExLTRmNjYtYWJh/OS01ODliZGRlZGVl/OWEuanBlZw</url>
      <title>Developers Forum for XinFin XDC Network: s4njk4n</title>
      <link>https://www.xdc.dev/s4njk4n</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://www.xdc.dev/feed/s4njk4n"/>
    <language>en</language>
    <item>
      <title>Hardened XDC Full Node 2026</title>
      <dc:creator>s4njk4n</dc:creator>
      <pubDate>Sat, 28 Feb 2026 07:22:51 +0000</pubDate>
      <link>https://www.xdc.dev/s4njk4n/hardened-xdc-full-node-2026-notes-to-self-3g2d</link>
      <guid>https://www.xdc.dev/s4njk4n/hardened-xdc-full-node-2026-notes-to-self-3g2d</guid>
      <description>&lt;p&gt;&lt;em&gt;This is just a note to myself. Posting here in case the thought bubbles and code snippets are useful for anyone else. I removed the article about the updated bootstrap script as I believe the version I created may no longer be current and don't want anyone to get stuck. If anyone really wants it, its still in the XDC Library on &lt;a href="https://xdcoutpost.xyz/"&gt;https://xdcoutpost.xyz/&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  Lay and Secure the Server Foundations
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Update the OS
&lt;/h4&gt;

&lt;p&gt;To install a new node, we first update and secure the OS:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt update
sudo apt upgrade
sudo apt autoremove
sudo apt clean
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h4&gt;
  
  
  Prerequisites
&lt;/h4&gt;

&lt;p&gt;For prerequisites, there is an appendix at the bottom of this article, but if you're thinking of using anything in there, please read the appendix first: I have not rechecked it since seeing slow peer pickup on another node installed using those instructions. For the moment, if unsure, just check the XinFin GitHub repo for the official prerequisites installation.&lt;/p&gt;




&lt;h4&gt;
  
  
  Create the Client's User
&lt;/h4&gt;

&lt;p&gt;After updating the OS, we'll add a specific user to install the node under.&lt;/p&gt;

&lt;p&gt;For the new user's username, use up to 32 characters: a mix of numbers and lower-case letters, with the first character a lower-case letter.&lt;/p&gt;

&lt;p&gt;For the user's password, use up to 40 characters: a mix of numbers, upper- and lower-case letters, and symbols. Be careful using $ as a character in the password in the useradd command below, as it can be interpreted as a shell variable even if it's in the middle of the password.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo groupadd my_new_user
sudo useradd -p $(openssl passwd -6 my_new_password) my_new_user -m -s /bin/bash -g my_new_user  -G sudo

sudo reboot
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
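
&lt;p&gt;As a quick sanity check (my own habit, not an official step), you can confirm the user, home directory and group memberships came out as intended:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;id my_new_user
# expect the output to list the my_new_user group and sudo
ls -ld /home/my_new_user
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;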






&lt;h4&gt;
  
  
  SSH-Key Authentication
&lt;/h4&gt;

&lt;p&gt;The benefit of using SSH key authentication as a sole means of access is that the VAST majority of port-scan to brute-force attempts are no longer even possible as the bots just move on if there's not even an ability to enter a password.&lt;/p&gt;

&lt;p&gt;If you've not already done so and plan on using SSH key authentication to log in to the server, remember to run these from the local terminal you'll be connecting to the VPS from:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh-keygen
# and then
ssh-copy-id -p&amp;lt;yourcustomSSHport&amp;gt; login@serverip
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h4&gt;
  
  
  Lock Down SSH Access
&lt;/h4&gt;

&lt;p&gt;Then secure the ssh access to the server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano /etc/ssh/sshd_config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You'll want to uncomment the line "#Port 22", and change the port number to something custom.&lt;br&gt;
Also set:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;PermitRootLogin no
PasswordAuthentication yes
PubkeyAuthentication yes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The "root" user is a weakness on the server as it is an easy username for hackers to "guess". If they have the username, they only need to guess the password. However if we take away the easy "root" username as an option, we create another whole level of pain for hackers. That's why the "PermitRootLogin no" is set.&lt;/p&gt;

&lt;p&gt;You also need to determine whether password logins are even required for any user (or if you'll just manage with SSH key authentication). If using SSH key authentication only (MUCH MUCH MUCH safer), set "PasswordAuthentication no" instead and just use the SSH keys to connect. If your local machine is ever damaged or you lose the keys, you can access your VPS provider's hypervisor console in their dashboard, log in to your node via that, and either add a new key directly/manually, or temporarily open up SSH to copy a new key in, before locking it down to SSH key authentication only again.&lt;/p&gt;
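
&lt;p&gt;Before restarting the SSH service, it's also worth validating the edited config. &lt;code&gt;sshd -t&lt;/code&gt; prints nothing if the file parses cleanly, so a typo can't lock you out:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo sshd -t
# no output = config parses OK; errors are reported with line numbers
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;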

&lt;p&gt;Then restart the SSH service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo service ssh restart
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If SSH changes are failing, check whether your VPS provider has additional override settings and where they are. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo grep -rE '^\s*PermitRootLogin' /etc/ssh/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And then modify what you need to.&lt;/p&gt;




&lt;h4&gt;
  
  
  Firewall
&lt;/h4&gt;

&lt;p&gt;Then we establish the firewall:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt install ufw
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 30303
sudo ufw allow &amp;lt;yourSSHport&amp;gt;
sudo ufw enable
sudo reboot
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Make sure you allow your SSH port before you reboot, otherwise you won't be able to connect to your VPS by SSH after rebooting. You may still be able to get in through a virtual terminal in your VPS provider's dashboard in that case and can then hopefully allow the port that way so you can get back in via SSH.&lt;/p&gt;
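
&lt;p&gt;To double-check the rules (including your custom SSH port) before rebooting:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo ufw status numbered
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;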




&lt;h4&gt;
  
  
  Fail2Ban Intrusion Protection System
&lt;/h4&gt;

&lt;p&gt;Then to protect against repetitive automated brute-force intrusion attempts we set up fail2ban:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt install fail2ban
sudo cp -p /etc/fail2ban/jail.conf /etc/fail2ban/jail.local
sudo nano /etc/fail2ban/jail.local
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then put these lines in the sshd section:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;enabled = true
filter = sshd
port = &amp;lt;yourcustomSSHport&amp;gt;
# use "ssh" only if you kept the default port 22
banaction = iptables-multiport
findtime = 86400
# 86400 seconds = 1 day
bantime = -1
# -1 = ban forever
maxretry = 3
# 3 attempts in 1 day = ban
logpath = %(sshd_log)s
backend = %(sshd_backend)s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then complete the fail2ban process:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl restart fail2ban
sudo systemctl enable fail2ban
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To check who is banned:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo fail2ban-client status sshd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To unban an IP address:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo fail2ban-client set sshd unban &amp;lt;ip address&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
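
&lt;p&gt;To see all active jails at a glance, and tail the log if you're curious what's being caught:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo fail2ban-client status
sudo tail -n 20 /var/log/fail2ban.log
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;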






&lt;h4&gt;
  
  
  Download the Chain Tarball
&lt;/h4&gt;

&lt;p&gt;Now we download the chain tarball to avoid the slow sync from genesis. Make sure you're SSH'd in as the specific user, not root. We'll use a "screen" session to avoid broken pipes and will use "aria2c" to optimise a multi-connection download and ensure we can "resume" the download if it is broken for whatever reason.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt install screen
sudo apt install aria2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run a screen session with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;screen
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To detach a screen session but keep it running in the background, use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Ctrl+A
and then press
D
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To see a list of available screen sessions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;screen -ls
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To reattach a screen session:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;screen -r &amp;lt;sessionname&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To permanently close/exit a screen session just use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;exit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So, now that we're in our screen session, let's download the XDC chain tarball:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir ~/chaindl
cd ~/chaindl
aria2c -x 8 -s 8 -k 1M https://download.xinfin.network/xdcchain.tar
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If for whatever reason it is interrupted and needs to resume, we can add the --continue flag to the aria2c command.&lt;/p&gt;
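
&lt;p&gt;For example, to resume (run from the same ~/chaindl directory so aria2c finds its .aria2 control file):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ~/chaindl
aria2c --continue -x 8 -s 8 -k 1M https://download.xinfin.network/xdcchain.tar
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;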

&lt;p&gt;Detach the screen session and come back later. You can peek in on it every now and then with the commands above. When it is finished, we need to decompress it. Once again, we can do this via a screen session and then detach it to protect the whole process from interruption.&lt;/p&gt;

&lt;p&gt;To decompress it in the screen session:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo tar -xvf xdcchain.tar  # Creates XDC directory
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then clean it up:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd XDC
sudo rm -rf nodekey  # Remove old node key
sudo rm -rf transactions.rlp  # Clean up pending transactions
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h4&gt;
  
  
  Install the XDC Node Client
&lt;/h4&gt;

&lt;p&gt;Now to install the node using method 3 (from here &lt;a href="https://github.com/XinFinOrg/XinFin-Node"&gt;https://github.com/XinFinOrg/XinFin-Node&lt;/a&gt;) and customise it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ~
git clone https://github.com/XinFinOrg/XinFin-Node.git
cd XinFin-Node/mainnet
sudo nano .env
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the .env file, set your node name, email and set gcmode to "full" instead of "archive".&lt;br&gt;
Then save and exit.&lt;br&gt;
Note: there is also a gcmode setting in start_node.sh but this is a backup/default if the environment variable isn't already set. It defaults to "archive" if not set.&lt;/p&gt;
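
&lt;p&gt;As a rough sketch, the relevant .env lines look something like the below. The variable names here are from memory and may differ between versions of the repo, so go by what's actually in your clone:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;INSTANCE_NAME=my-xdc-node
CONTACT_DETAILS=me@example.com
GCMODE=full
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;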

&lt;p&gt;Then start the node briefly for 10 seconds or so and then shut it down again. This creates some of the directories we need:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo bash docker-up.sh
sudo bash docker-down.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now clean up the new directories by removing the chain files we don't need:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ~/XinFin-Node/mainnet/xdcchain
sudo rm -rf XDC
sudo rm -rf *.log
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we move the earlier decompressed chain files into the new node:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mv ~/chaindl/XDC .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we restart the node:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ~/XinFin-Node/mainnet
sudo bash docker-up.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h4&gt;
  
  
  Troubleshoot Peer Issues
&lt;/h4&gt;

&lt;p&gt;Let the node sync.&lt;br&gt;
If you're not picking up peers, then you can run the peer.sh script (while the node is running):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ~/XinFin-Node/mainnet
sudo bash peer.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you get an error, then check the container name. I have noticed that the Docker containers have had different names at different stages, or perhaps with different install methods; I'm not sure which. To check the container name use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo docker ps
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then edit the peer.sh script and update the hard-coded container name in there:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano peer.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you had to do the container-name update in the script then run the peer.sh script again:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo bash peer.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The node should now pick up peers relatively quickly (i.e. it should be up to perhaps 15-20 peers within 10-15 minutes).&lt;/p&gt;
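
&lt;p&gt;You can keep an eye on peer pickup and sync progress from the container logs (substitute the container name you find with &lt;code&gt;sudo docker ps&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo docker logs --tail 20 -f &amp;lt;containername&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;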




&lt;h4&gt;
  
  
  Residual "Open" Ports
&lt;/h4&gt;

&lt;p&gt;The Docker port bindings will not be affected by ufw. This means that you will still see ports 8888 and 8989 showing as open if you scan the VPS ports with nmap from a terminal external to the VPS:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nmap -Pn -p 8888 Your.VPS.IP.ADDRESS
nmap -Pn -p 8989 Your.VPS.IP.ADDRESS
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;However, by default in the .env file, the environment variable "ENABLE_RPC=false" is set and this makes the start_node.sh script not expose the RPC/WS in the docker container. The ports 8888 and 8989 still show as open however as the docker proxy answers the handshake. So, even if someone tries to access those "open" ports, sorry nobody is home.&lt;/p&gt;

&lt;p&gt;To decrease the attack surface even further, we can simply disable the Docker port bindings by editing the docker-compose.yml file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ~/XinFin-Node/mainnet
sudo nano docker-compose.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once you've opened the file, just use # to comment out the 2 port lines that handle the container port bindings for 8888 and 8989, as shown here:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    ports:
      - "30303:30303"
      # - "8989:8545"
      # - "8888:8546"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now just stop and restart the node. Then try the nmap scans again and those ports will now show as closed.&lt;/p&gt;




&lt;h4&gt;
  
  
  Node Migrations
&lt;/h4&gt;

&lt;p&gt;If this is a node migration, once syncing has completed, remember to docker-down.sh the old and new nodes. Then delete the keystore file on the new node. Then scp the keystore file across from the old node to put it in the keystore directory on the new node.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo scp -P &amp;lt;OldNodePortNumber&amp;gt; username@IPaddress:~/XinFin-Node/xdcchain/keystore/UTC* xdcchain/keystore/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then just docker-up.sh the new node.&lt;/p&gt;

&lt;p&gt;Voila. New node!&lt;/p&gt;




&lt;h4&gt;
  
  
  Android/iOS Push Notifications
&lt;/h4&gt;

&lt;p&gt;To receive Android/iOS push notifications if your client/node goes offline, set up the free open-source &lt;a href="https://xdcoutpost.xyz/"&gt;XDC Sentinel&lt;/a&gt; tool.&lt;/p&gt;

&lt;p&gt;To receive Android/iOS push notifications of changes in the Governance status of staked nodes, plus arrival alerts for Masternode Educational Rewards, set up the free open-source &lt;a href="https://xdcoutpost.xyz/"&gt;XDC Tycoon&lt;/a&gt; tool.&lt;/p&gt;




&lt;h2&gt;
  
  
  Appendix details just for me
&lt;/h2&gt;

&lt;p&gt;When installing the XDC client on machines that have Intel CPUs, there are some considerations with one of the newer Docker packages (docker-ce-rootless-extras), which must be removed.&lt;/p&gt;

&lt;p&gt;The following code is not to be used. Something in the new customised bootstrap script I wrote (not the official XinFin version) resulted in a node I just installed not picking up peers, so all the steps below for installing prerequisites etc. need to be double-checked to see whether they may have been related. This section is a mental note to me until I've finished playing next time I install a new node:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    sudo apt-get update
    sudo apt-get install \
            apt-transport-https ca-certificates curl git jq \
            software-properties-common -y

    echo "Setting up Docker repository and installing Docker"

    # Remove any old Docker installations
    sudo apt remove docker docker-engine docker.io containerd runc docker-compose -y
    sudo rm -f /usr/local/bin/docker-compose

    # Add Docker's official GPG key and repository
    sudo install -m 0755 -d /etc/apt/keyrings
    sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
    sudo chmod a+r /etc/apt/keyrings/docker.asc
    echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu $(. /etc/os-release &amp;amp;&amp;amp; echo "$VERSION_CODENAME") stable" | sudo tee /etc/apt/sources.list.d/docker.list &amp;gt; /dev/null

    sudo apt-get update
    sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y

    # Handle Intel compatibility issue by removing and holding the problematic package
    sudo apt remove docker-ce-rootless-extras -y
    sudo apt-mark hold docker-ce-rootless-extras
    sudo systemctl restart docker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
    </item>
    <item>
      <title>XDC Node Peering Guide</title>
      <dc:creator>s4njk4n</dc:creator>
      <pubDate>Fri, 02 Jan 2026 12:11:10 +0000</pubDate>
      <link>https://www.xdc.dev/s4njk4n/xdc-node-peering-a-comprehensive-guide-1ibc</link>
      <guid>https://www.xdc.dev/s4njk4n/xdc-node-peering-a-comprehensive-guide-1ibc</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Deploying a node on the XDC Network is a straightforward process, but users occasionally encounter challenges where the node fails to connect to peers effectively. This can manifest as the blockchain downloading slowly or halting, with peer counts remaining stuck at 0–1 for extended periods. This prevents the node from fully syncing with the network and from participating in consensus (and earning rewards).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.xdc.dev/images/xaAD2npUnHBd3j56GtvBzL3clIsEBUInDU0qjcs72og/w:880/mb:500000/ar:1/aHR0cHM6Ly9taXJv/Lm1lZGl1bS5jb20v/djIvcmVzaXplOmZp/dDo5NjAvZm9ybWF0/OndlYnAvMSp6cGVi/cXZIQzUtSG1IM1c2/Wm1NMEZ3LnBuZw" class="article-body-image-wrapper"&gt;&lt;img src="https://www.xdc.dev/images/xaAD2npUnHBd3j56GtvBzL3clIsEBUInDU0qjcs72og/w:880/mb:500000/ar:1/aHR0cHM6Ly9taXJv/Lm1lZGl1bS5jb20v/djIvcmVzaXplOmZp/dDo5NjAvZm9ybWF0/OndlYnAvMSp6cGVi/cXZIQzUtSG1IM1c2/Wm1NMEZ3LnBuZw" alt="XDC Network Stats page showing no peers" width="480" height="128"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The first time we encountered this, we tested further by deploying nodes across multiple data centers from a specific VPS provider in different geographic locations, and the problem persisted. Initial checks confirmed that essential configurations were generally ok, such as disabling firewalls (e.g., ufw not enabled on mainnet nodes). The issue for us turned out to be an externally blocked port 30303 but there are several other potential causes that need to be considered whenever this issue arises.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.xdc.dev/images/jVD4RPiaCqI0FDmphlaobp3XKuxAngvvBd1AdLdeC6s/w:880/mb:500000/ar:1/aHR0cHM6Ly9taXJv/Lm1lZGl1bS5jb20v/djIvcmVzaXplOmZp/dDo1MjQvZm9ybWF0/OndlYnAvMSpMaHBE/SGY2clpwU2xpQ1BF/M0VnaWlRLnBuZw" class="article-body-image-wrapper"&gt;&lt;img src="https://www.xdc.dev/images/jVD4RPiaCqI0FDmphlaobp3XKuxAngvvBd1AdLdeC6s/w:880/mb:500000/ar:1/aHR0cHM6Ly9taXJv/Lm1lZGl1bS5jb20v/djIvcmVzaXplOmZp/dDo1MjQvZm9ybWF0/OndlYnAvMSpMaHBE/SGY2clpwU2xpQ1BF/M0VnaWlRLnBuZw" alt="nmap showing port 30303 is open" width="262" height="112"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This guide expands on our original troubleshooting steps by incorporating additional diagnostic methods and updated best practices as of 2026. We’ll cover how XDC nodes acquire peers, some common root causes of issues and their detailed solutions as well as some preventive measures you can take. While this focuses on mainnet XDC nodes, similar principles apply to Apothem testnet nodes (which use port 30304 instead of 30303).&lt;/p&gt;




&lt;h2&gt;
  
  
  How XDC Nodes Acquire Peers
&lt;/h2&gt;

&lt;p&gt;Understanding peer acquisition is key to diagnosing issues. When an XDC node starts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Initial Bootstrapping:&lt;/strong&gt; The node connects to a predefined set of “bootnodes” listed in the &lt;code&gt;bootnodes.list&lt;/code&gt; file. These are trusted, always-online nodes that serve as entry points to the network.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://www.xdc.dev/images/CMYKn9YjoZ9C6zJtorzWkk-9IR7ZV7hmenYaAt--E3g/w:880/mb:500000/ar:1/aHR0cHM6Ly9taXJv/Lm1lZGl1bS5jb20v/djIvcmVzaXplOmZp/dDoxMjE0L2Zvcm1h/dDp3ZWJwLzEqUEFu/RWdyejVjS3lNbnlH/eFJDWXlVUS5wbmc" class="article-body-image-wrapper"&gt;&lt;img src="https://www.xdc.dev/images/CMYKn9YjoZ9C6zJtorzWkk-9IR7ZV7hmenYaAt--E3g/w:880/mb:500000/ar:1/aHR0cHM6Ly9taXJv/Lm1lZGl1bS5jb20v/djIvcmVzaXplOmZp/dDoxMjE0L2Zvcm1h/dDp3ZWJwLzEqUEFu/RWdyejVjS3lNbnlH/eFJDWXlVUS5wbmc" alt="Initial peers are from the bootnodes.list file" width="607" height="331"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Peer Discovery and Propagation:&lt;/strong&gt; Once connected to the bootnodes, the node then queries them for additional peers. Incoming connections are also accepted if the node is discoverable.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Networking Protocols:&lt;/strong&gt; Peering occurs over TCP/UDP on port 30303 for mainnet (or 30304 for testnet). The node advertises itself via UDP for discovery and uses TCP (RLPx) for data exchange. The default maximum peer count (&lt;code&gt;maxpeers&lt;/code&gt;) is set to 25 in XDC configurations, so counts above this are rare unless customized.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://www.xdc.dev/images/kQdIovKGfgPNeE5q6tEh3tTdPiCEzg19gI8_yun9tf4/w:880/mb:500000/ar:1/aHR0cHM6Ly9taXJv/Lm1lZGl1bS5jb20v/djIvcmVzaXplOmZp/dDoxNDAwL2Zvcm1h/dDp3ZWJwLzEqcVhp/azRhdlFPQi04SkFT/STdzQ0JUQS5wbmc" class="article-body-image-wrapper"&gt;&lt;img src="https://www.xdc.dev/images/kQdIovKGfgPNeE5q6tEh3tTdPiCEzg19gI8_yun9tf4/w:880/mb:500000/ar:1/aHR0cHM6Ly9taXJv/Lm1lZGl1bS5jb20v/djIvcmVzaXplOmZp/dDoxNDAwL2Zvcm1h/dDp3ZWJwLzEqcVhp/azRhdlFPQi04SkFT/STdzQ0JUQS5wbmc" alt="Mainnet node using TCP(RLPx transport)/UDP on port 30303" width="720" height="267"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Ongoing Maintenance:&lt;/strong&gt; Peers are dynamically added and dropped based on factors like latency, reliability, and network health. The node also supports UPnP (Universal Plug and Play) for automatic port mapping on compatible routers.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If any step fails due to network blocks, configuration errors or faulty bootnodes, then the peer count remains low, leading to isolation from the network.&lt;/p&gt;




&lt;h2&gt;
  
  
  Common Causes of Low or Zero Peers
&lt;/h2&gt;

&lt;p&gt;In our experience, the most frequent culprits of low or zero peers are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Port 30303 Blocked or Not Forwarded:&lt;/strong&gt; This was the culprit for us, as mentioned above. Even if local checks show the port as open, external factors like firewalls, VPS provider restrictions, router NAT settings, or ISP blocks can prevent incoming connections.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Firewall or Security Group Misconfigurations:&lt;/strong&gt; Tools like &lt;code&gt;ufw&lt;/code&gt;, &lt;code&gt;iptables&lt;/code&gt;, or cloud provider security groups (e.g., AWS EC2, Google Cloud) can block traffic despite appearing inactive.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Resource Constraints:&lt;/strong&gt; High memory usage from peering surges can cause nodes to slow or crash, indirectly affecting peer connections. Older hardware or unoptimized setups exacerbate this.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Outdated or Faulty Bootnodes List:&lt;/strong&gt; The &lt;code&gt;bootnodes.list&lt;/code&gt; file may contain inactive, unreachable, or deprecated nodes.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Network or Sync Issues:&lt;/strong&gt; Slow internet, geographic latency, or incomplete blockchain sync can deter peers. Testnet/mainnet port mismatches can also contribute to this.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Nodekey or Cache Problems:&lt;/strong&gt; Corrupted &lt;code&gt;nodekey&lt;/code&gt; files or stale peer databases can hinder fresh connections.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For completeness, we always recommend ruling out port issues first.&lt;/p&gt;




&lt;h2&gt;
  
  
  Comprehensive Solutions
&lt;/h2&gt;

&lt;p&gt;Follow these steps in order. We’ll start first with port verification, then address bootnodes, followed by using an official script &lt;code&gt;peer.sh&lt;/code&gt; included in the XDC nodes that allows forcing addition of peers. Then we'll finish off with restarts and monitoring.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Verify and Open Port 30303
&lt;/h3&gt;

&lt;p&gt;Even if you believe the port is open, double-check externally.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tools Needed&lt;/strong&gt;&lt;br&gt;
Install &lt;code&gt;nmap&lt;/code&gt; on a separate machine on a network external to the node (not the VPS or server the XDC node is running on):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt install nmap
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Check Port Status&lt;/strong&gt;&lt;br&gt;
Run &lt;code&gt;nmap&lt;/code&gt; from that separate external machine, replacing &lt;code&gt;YOUR.NODE.IP&lt;/code&gt; with your XDC node’s public IP:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nmap -p 30303 YOUR.NODE.IP
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If &lt;code&gt;STATE&lt;/code&gt; shows “closed” or “filtered,” the port is blocked.&lt;br&gt;
Alternative: If your node is running on a LAN that is local to you, for example in your office, you can instead use online tools like &lt;a href="https://canyouseeme.org/"&gt;https://canyouseeme.org/&lt;/a&gt; (enter port 30303), but prefer external &lt;code&gt;nmap&lt;/code&gt; for security.&lt;/p&gt;
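
&lt;p&gt;Note that a plain &lt;code&gt;nmap&lt;/code&gt; scan like the above only tests TCP. Since peer discovery also uses UDP, you can probe the UDP side as well (requires root, and "open|filtered" is a common, inconclusive result for UDP scans):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nmap -sU -p 30303 YOUR.NODE.IP
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;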

&lt;p&gt;&lt;a href="https://www.xdc.dev/images/gFdT9htdrcBydeQuWS4fe9viJtmdbw3rboLoFbP_U-k/w:880/mb:500000/ar:1/aHR0cHM6Ly9taXJv/Lm1lZGl1bS5jb20v/djIvcmVzaXplOmZp/dDoxNDAwL2Zvcm1h/dDp3ZWJwLzEqaW1S/VjhOYmNjVVMtU3Fu/R0lKS3VzQS5wbmc" class="article-body-image-wrapper"&gt;&lt;img src="https://www.xdc.dev/images/gFdT9htdrcBydeQuWS4fe9viJtmdbw3rboLoFbP_U-k/w:880/mb:500000/ar:1/aHR0cHM6Ly9taXJv/Lm1lZGl1bS5jb20v/djIvcmVzaXplOmZp/dDoxNDAwL2Zvcm1h/dDp3ZWJwLzEqaW1S/VjhOYmNjVVMtU3Fu/R0lKS3VzQS5wbmc" alt="Using canyouseeme.org to check port 30303" width="720" height="606"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Address Firewalls (e.g., ufw)&lt;/strong&gt;&lt;br&gt;
SSH into your node:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh -l root -p 22 YOUR.NODE.IP # Adjust username/port/IP
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check firewall:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo ufw status verbose
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If active and blocking 30303, allow it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo ufw allow 30303/tcp
sudo ufw allow 30303/udp
sudo ufw reload
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For &lt;code&gt;iptables&lt;/code&gt; or other firewalls, consult your OS docs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For VPS Providers (e.g., AWS, DigitalOcean)&lt;/strong&gt;&lt;br&gt;
Log into your provider’s console and check security groups/inbound rules. Add rules for TCP/UDP on 30303 from any IP (0.0.0.0/0).&lt;br&gt;
If blocked at the provider level, submit a support ticket to unblock it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For Local Servers/Routers&lt;/strong&gt;&lt;br&gt;
Enable port forwarding on your router: Forward TCP/UDP 30303 to your node’s LAN IP. You can find your node’s LAN IP by running either of the following commands in the terminal of your node:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ifconfig
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;or&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ip addr
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
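If you prefer a one-liner, the LAN address can be pulled straight out of the `ip` output. This is a sketch assuming the iproute2 `ip` tool; it prints the machine's first global (non-loopback) IPv4 address:

```shell
# Print the machine's first non-loopback IPv4 address (sketch; assumes iproute2)
if command -v ip >/dev/null; then
  ip -4 addr show scope global | awk '/inet /{sub(/\/.*/, "", $2); print $2; exit}'
else
  echo "iproute2 'ip' not found; read the address from ifconfig output instead"
fi
```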

&lt;p&gt;&lt;br&gt;
&lt;a href="https://www.xdc.dev/images/Ekg3txX4ywPQnebTe4oK-E5Dg04SBkEG5JoYhAdmTfg/w:880/mb:500000/ar:1/aHR0cHM6Ly9taXJv/Lm1lZGl1bS5jb20v/djIvcmVzaXplOmZp/dDoxNDAwL2Zvcm1h/dDp3ZWJwLzEqSHNK/Rldia1FualJIdm9D/My1WUDFkZy5wbmc" class="article-body-image-wrapper"&gt;&lt;img src="https://www.xdc.dev/images/Ekg3txX4ywPQnebTe4oK-E5Dg04SBkEG5JoYhAdmTfg/w:880/mb:500000/ar:1/aHR0cHM6Ly9taXJv/Lm1lZGl1bS5jb20v/djIvcmVzaXplOmZp/dDoxNDAwL2Zvcm1h/dDp3ZWJwLzEqSHNK/Rldia1FualJIdm9D/My1WUDFkZy5wbmc" alt="Setup your NAT port forwarding" width="720" height="484"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Power cycle the router after changes.&lt;br&gt;
Check with ISP for port blocks; request unblocking if needed.&lt;/p&gt;

&lt;p&gt;After changes, retest with &lt;code&gt;nmap&lt;/code&gt; as above and then reboot the server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo reboot
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
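If `nmap` isn't handy, bash's built-in `/dev/tcp` gives a rough TCP-only reachability check. This is a sketch: the HOST value is a placeholder (run it from a machine outside your LAN against your public IP), and UDP 30303 still needs a tool such as `nmap -sU`:

```shell
# Rough TCP reachability check using bash's /dev/tcp (TCP only; sketch)
HOST=127.0.0.1   # placeholder: use your node's public IP, tested from elsewhere
PORT=30303
if timeout 2 bash -c "exec 3<>/dev/tcp/$HOST/$PORT" 2>/dev/null; then
  echo "TCP $PORT reachable on $HOST"
else
  echo "TCP $PORT closed or filtered on $HOST"
fi
```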



&lt;h3&gt;
  
  
  Step 2: Update the Bootnodes List
&lt;/h3&gt;

&lt;p&gt;If ports are open but peers remain low, replace &lt;code&gt;bootnodes.list&lt;/code&gt;.&lt;br&gt;
SSH into your node:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh -l root -p 22 YOUR.NODE.IP
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Navigate to the mainnet directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ~/XinFin-Node/mainnet
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Stop the node:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo bash ./docker-down.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Backup the old file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mv bootnodes.list bootnodes.list.old
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Download the latest official list (as of 2026):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo wget -O bootnodes.list https://raw.githubusercontent.com/XinFinOrg/XinFin-Node/master/mainnet/bootnodes.list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note: This is the official XDC Network GitHub version.&lt;/p&gt;
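A quick sanity check on the downloaded file can save a restart cycle; each non-empty line should be an `enode://` URI (sketch):

```shell
# Count enode entries in the freshly downloaded list (sketch)
if [ -f bootnodes.list ]; then
  echo "$(grep -c '^enode://' bootnodes.list) enode entries found"
else
  echo "bootnodes.list not found - run this from ~/XinFin-Node/mainnet"
fi
```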

&lt;p&gt;For quicker peer pickup, delete the &lt;code&gt;nodekey&lt;/code&gt; (forces fresh identity):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo rm -rf ../xdcchain/XDC/nodekey
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Restart the node:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo bash ./docker-up.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 3: Force Peer Connections with peer.sh
&lt;/h3&gt;

&lt;p&gt;Before resorting to restarts or resyncs, we can try the &lt;code&gt;peer.sh&lt;/code&gt; script. This utility manually adds peers from your &lt;code&gt;bootnodes.list&lt;/code&gt; to kickstart connections, often resolving issues without data loss or extended downtime. It’s especially useful for temporary glitches or outdated discovery processes.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Ensure your XDC node is running (start it with &lt;code&gt;sudo bash ./docker-up.sh&lt;/code&gt; if necessary).&lt;/li&gt;
&lt;li&gt;  Navigate to the mainnet directory (or testnet for Apothem nodes):
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ~/XinFin-Node/mainnet 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;  For best results, always update &lt;code&gt;bootnodes.list&lt;/code&gt; before running &lt;code&gt;peer.sh&lt;/code&gt; (as outlined in Step 2).&lt;/li&gt;
&lt;li&gt;  Execute the &lt;code&gt;peer.sh&lt;/code&gt; script:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo bash ./peer.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;  Monitor the output for confirmation of added peers. You can then check the peer count in the node console by using &lt;code&gt;sudo bash ./xdc-attach.sh&lt;/code&gt; and running &lt;code&gt;net.peerCount&lt;/code&gt;, or &lt;code&gt;admin.peers&lt;/code&gt;, or by viewing the &lt;a href="https://xinfin.network"&gt;XDC Network Stats&lt;/a&gt; page. You can use the &lt;code&gt;exit&lt;/code&gt; command to exit the console when needed.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This step can avoid more invasive actions like container restarts or full resyncs. If successful, proceed to verification. For more details on how &lt;code&gt;peer.sh&lt;/code&gt; works, see the dedicated Appendix at the end of this article.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4: Handle Resource or Sync Issues
&lt;/h3&gt;

&lt;p&gt;Monitor memory and CPU with &lt;code&gt;htop&lt;/code&gt; or &lt;code&gt;top&lt;/code&gt;. If usage is consistently high, upgrade the hardware or optimize the node (e.g., limit &lt;code&gt;maxpeers&lt;/code&gt; in the config).&lt;/p&gt;
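For a quick one-shot look without an interactive viewer (Linux-specific paths; disk headroom matters too, since chain data grows continuously):

```shell
# One-shot resource snapshot (sketch)
cat /proc/loadavg   # load averages (1/5/15 min); sustained load near core count = CPU pressure
df -h /             # root filesystem usage; a full disk will stall syncing
```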

&lt;p&gt;If your XDC node (mainnet or testnet) is stuck during synchronization, you can either restart the Docker container (a quick fix that preserves existing data) or resync from scratch. Note that a thorough reset can also start from the latest chain snapshot, which is much quicker than manually syncing from genesis. Always back up critical data first to avoid losing access to your wallet, staked tokens, or node configuration. Step-by-step instructions follow. For testnet issues, repeat the steps using port 30304 and the Apothem configs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Backup Data (Mandatory Before Any Restart or Resync)&lt;/strong&gt;&lt;br&gt;
Before proceeding, back up your node’s key files and data to prevent irreversible loss. This is crucial because subsequent steps can involve deleting/overwriting directories.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Locate Your Node’s Directory:&lt;/strong&gt; Navigate to your XinFin-Node installation folder (e.g., &lt;code&gt;~/XinFin-Node/mainnet&lt;/code&gt; for mainnet or &lt;code&gt;~/XinFin-Node/testnet&lt;/code&gt; for testnet).&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Backup the Keystore:&lt;/strong&gt; The keystore contains your private keys. Copy the entire &lt;code&gt;keystore&lt;/code&gt; folder (or specific files like &lt;code&gt;UTC--[DateTime]--[Address]&lt;/code&gt;) to a backup location and preferably also keep a copy on a secure external device.
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir ~/backup_XDCnode
sudo cp -r xdcchain/XDC/keystore ~/backup_XDCnode
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Backup your Coinbase address&lt;/strong&gt;: Stored in &lt;code&gt;coinbase.txt&lt;/code&gt;.
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo cp xdcchain/XDC/coinbase.txt ~/backup_XDCnode
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Backup Chain Data (Optional but Recommended for Quick Restoration)&lt;/strong&gt;: If you have enough drive space and want to preserve your current blockchain state (e.g., for debugging later), tar and copy the entire &lt;code&gt;xdcchain/XDC&lt;/code&gt; folder.
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo tar -czvf xdc_chain_backup.tar.gz ~/XinFin-Node/mainnet/xdcchain/XDC
sudo cp xdc_chain_backup.tar.gz ~/backup_XDCnode
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Backup Configuration Files:&lt;/strong&gt; Copy &lt;code&gt;.env&lt;/code&gt; (environment variables) and any custom configs.
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cp .env ~/backup_XDCnode
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;For Testnet (Apothem):&lt;/strong&gt; The process is similar, but files will be in the &lt;code&gt;testnet&lt;/code&gt; subfolder instead of &lt;code&gt;mainnet&lt;/code&gt;. Use port 30304 and Apothem-specific configs.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Store backups offline and securely. Test restoring them in a safe environment to ensure they’re valid.&lt;/p&gt;
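One cheap restore test is simply confirming the archive is readable before you depend on it. A sketch, using the backup path from the steps above:

```shell
# Verify the chain backup tarball can be read without extracting it (sketch)
BACKUP=~/backup_XDCnode/xdc_chain_backup.tar.gz
if [ -f "$BACKUP" ]; then
  tar -tzf "$BACKUP" > /dev/null && echo "backup archive is readable"
else
  echo "no backup archive at $BACKUP"
fi
```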

&lt;p&gt;&lt;strong&gt;Option 1: Restart the Container (Quick Fix for Stuck Sync)&lt;/strong&gt;&lt;br&gt;
This preserves your existing data and is ideal if the issue is temporary (e.g., high resource usage). It restarts the Docker container without deleting chain data.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Stop the Container:&lt;/strong&gt; Gracefully stop it to avoid data corruption.
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo bash ./docker-down.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Optionally inspect Logs for Issues:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo tail -50 xdcchain/&amp;lt;logfile_name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Common fixes:&lt;/strong&gt; Optimize config (e.g., set maxpeers to limit connections) or free up resources (close other processes).&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Restart the Container:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo bash ./docker-up.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Monitor Sync Progress:&lt;/strong&gt; Attach the node console.
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo bash ./xdc-attach.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;In the console, use the &lt;code&gt;eth.syncing&lt;/code&gt; command to check block height sync status, &lt;code&gt;admin.peers&lt;/code&gt; to list connected peers, &lt;code&gt;net.peerCount&lt;/code&gt; to see how many peers are connected, and &lt;code&gt;exit&lt;/code&gt; to leave the console. You can also view overall network stats at &lt;a href="https://xinfin.network/"&gt;https://xinfin.network/&lt;/a&gt; or &lt;a href="https://apothem.network/"&gt;https://apothem.network/&lt;/a&gt; for testnet.&lt;/p&gt;
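If RPC is enabled, the same sync status is available over JSON-RPC without attaching a console. A sketch assuming the default RPC port 8545; &lt;code&gt;eth_syncing&lt;/code&gt; returns &lt;code&gt;false&lt;/code&gt; once fully synced:

```shell
# Query sync status over JSON-RPC (sketch; assumes ENABLE_RPC=true on port 8545)
curl -s -m 2 -X POST -H 'Content-Type: application/json' \
  --data '{"jsonrpc":"2.0","method":"eth_syncing","params":[],"id":1}' \
  http://127.0.0.1:8545 || echo "RPC not reachable - is the node running with RPC enabled?"
```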

&lt;p&gt;&lt;strong&gt;For Testnet:&lt;/strong&gt; Repeat using port 30304 and Apothem configs.&lt;/p&gt;

&lt;p&gt;This should resolve minor sync hangs without full data loss. If unresolved, proceed to resync.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Option 2: Resync from Scratch (Thorough Reset for Persistent Issues)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If restarting doesn’t help (e.g., corrupted data), resync by wiping the chain data and starting over. Use a snapshot for faster sync (days instead of weeks); otherwise, it syncs from genesis (very slow). Backups as described above are essential here.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Stop the Node/Container:&lt;/strong&gt; As in Option 1 above.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Delete Existing Chain Data:&lt;/strong&gt; Remove the corrupted or stuck data (but ensure backups are done first as described above in Option 1!).
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo rm -rf xdcchain/XDC  # DANGER: MAKE SURE YOU HAVE BACKUPS FIRST
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;For full resync without snapshot, skip the snapshot step below. This will sync from block 0 (expect 1–2 weeks on decent hardware).&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Download a Snapshot (Recommended for Speed):&lt;/strong&gt;&lt;br&gt;
&lt;em&gt;Snapshots update ~every 20 days.&lt;/em&gt;&lt;br&gt;
&lt;strong&gt;Mainnet:&lt;/strong&gt;&lt;br&gt;
Full node: &lt;a href="https://download.xinfin.network/xdcchain.tar"&gt;https://download.xinfin.network/xdcchain.tar&lt;/a&gt;&lt;br&gt;
Archive node: &lt;a href="http://downloads.xinfin.network/xdcchain_archive.tar"&gt;http://downloads.xinfin.network/xdcchain_archive.tar&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Testnet (Apothem):&lt;/strong&gt; &lt;br&gt;
Snapshots may not always be available; check &lt;a href="https://download.xinfin.network/apothem.tar"&gt;https://download.xinfin.network/apothem.tar&lt;/a&gt; first, then ask on XinFin channels or community forums. If no snapshot is available, resync from genesis.&lt;br&gt;
&lt;/p&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wget https://download.xinfin.network/xdcchain.tar
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Extract and Clean Up the Snapshot (and move it into the correct directory):&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo tar -xvf xdcchain.tar  # Creates XDC directory
cd XDC
sudo rm -rf nodekey  # Remove old node key
sudo rm -rf transactions.rlp  # Clean up pending transactions
cd ..
sudo mv XDC ~/XinFin-Node/mainnet/xdcchain/ # Use testnet/devnet if needed
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For archive nodes, edit &lt;code&gt;start-node.sh&lt;/code&gt; or &lt;code&gt;.env&lt;/code&gt; to add &lt;code&gt;gcmode=archive&lt;/code&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Configure and Start the Node:&lt;/strong&gt; Edit &lt;code&gt;.env&lt;/code&gt; if needed (e.g., &lt;code&gt;ENABLE_RPC=true&lt;/code&gt; etc). Copy back keystore and coinbase.txt from backups if wiped. Start:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo bash ./docker-up.sh.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Monitor and Verify:&lt;/strong&gt; As in Option 1 above, the node should now sync from the snapshot’s block height. If peers are low:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo bash ./peer.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;For Testnet:&lt;/strong&gt; Use Apothem snapshot if available, port 30304, and configs (e.g., &lt;code&gt;NETWORK=apothem&lt;/code&gt; in &lt;code&gt;.env&lt;/code&gt;). Repeat steps but with testnet folder/scripts.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Additional Tips&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Resource Monitoring:&lt;/strong&gt; As mentioned, use &lt;code&gt;htop&lt;/code&gt; or &lt;code&gt;top&lt;/code&gt; to watch CPU/memory. If high during resync, limit peers or upgrade the server.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Security:&lt;/strong&gt; After resync, verify your node on the stats page. Enable firewalls (e.g., open only ports 30303 for mainnet, 30304 for testnet).&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;If Issues Persist:&lt;/strong&gt; Check &lt;a href="https://xdc.dev"&gt;XDC community forums&lt;/a&gt;, &lt;a href="https://github.com/XinFinorg"&gt;GitHub repos&lt;/a&gt;, or &lt;a href="https://xinfin.network"&gt;stats page&lt;/a&gt; for known network problems.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Time Estimates:&lt;/strong&gt; Restart: Minutes. Resync with snapshot: Hours to days. From scratch: Weeks.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Verification and Monitoring
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;  Wait 1–2 hours post-restart.&lt;/li&gt;
&lt;li&gt;  Check peer count on: &lt;a href="https://xinfin.network"&gt;XDC Network Stats&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;  If peers climb to 10–25, the issue is resolved. Congratulations! If problems continue, check XDC forums for further information/help.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://www.xdc.dev/images/v9M4t2n_6F3gOfiZV5yeQsf8dip7KEArf2m2ivMCXCQ/w:880/mb:500000/ar:1/aHR0cHM6Ly9taXJv/Lm1lZGl1bS5jb20v/djIvcmVzaXplOmZp/dDoxMDkwL2Zvcm1h/dDp3ZWJwLzEqOTNv/Q3VKRTJHQ2dLbmVG/aWtSWGdkZy5wbmc" class="article-body-image-wrapper"&gt;&lt;img src="https://www.xdc.dev/images/v9M4t2n_6F3gOfiZV5yeQsf8dip7KEArf2m2ivMCXCQ/w:880/mb:500000/ar:1/aHR0cHM6Ly9taXJv/Lm1lZGl1bS5jb20v/djIvcmVzaXplOmZp/dDoxMDkwL2Zvcm1h/dDp3ZWJwLzEqOTNv/Q3VKRTJHQ2dLbmVG/aWtSWGdkZy5wbmc" alt="Congratulations your peer count has been fixed!" width="545" height="540"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Preventive Measures
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;  Regularly update your node software via official repos.&lt;/li&gt;
&lt;li&gt;  Monitor XDC announcements for updates (e.g., bootnode changes).&lt;/li&gt;
&lt;li&gt;  Join community channels: &lt;a href="https://xdc.dev/"&gt;XDC.Dev Forum&lt;/a&gt;, &lt;a href="https://github.com/XinFinOrg"&gt;GitHub&lt;/a&gt;, &lt;a href="https://x.com/XDCNetwork"&gt;X.com&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;  Consider running multiple nodes for redundancy.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Appendix: The peer.sh Script
&lt;/h2&gt;

&lt;h3&gt;
  
  
  What it is Used For
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;peer.sh&lt;/code&gt; script is a helper utility included in the XDC node installation (found in the &lt;code&gt;~/XinFin-Node/mainnet&lt;/code&gt; directory). It is designed to assist in troubleshooting peer connection issues by manually forcing the node to connect to known peers. This is particularly useful when automatic peer discovery fails due to network configuration problems, outdated bootnodes, or temporary network glitches, helping to kickstart the syncing process and increase the peer count.&lt;/p&gt;

&lt;h3&gt;
  
  
  What it Does
&lt;/h3&gt;

&lt;p&gt;The script automates the process of adding peers to your XDC node. Since the current XDC client is based on Go-Ethereum (geth), it leverages the node’s JavaScript console (via &lt;code&gt;xdc-attach.sh&lt;/code&gt;) to execute the &lt;code&gt;admin.addPeer()&lt;/code&gt; command for each enode listed in the &lt;code&gt;bootnodes.list&lt;/code&gt; file. This manually establishes connections to trusted bootnodes, bypassing potential discovery failures. Once connected, these bootnodes can propagate additional peers, improving overall network participation and sync speed.&lt;/p&gt;
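The core idea can be sketched in a few lines. This is illustrative only, not the actual &lt;code&gt;peer.sh&lt;/code&gt;; in the real script the generated calls are executed via the node console rather than printed:

```shell
# Illustrative sketch of peer.sh's core loop: turn each enode URI into an
# admin.addPeer() console call (NOT the real script)
add_peers() {
  while IFS= read -r enode; do
    case "$enode" in
      enode://*) printf 'admin.addPeer("%s")\n' "$enode" ;;
    esac
  done < "$1"
}
# usage: add_peers bootnodes.list   # then feed the output to the attached console
```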

&lt;h3&gt;
  
  
  How to Use It
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  Ensure your XDC node is running (start it with &lt;code&gt;sudo bash ./docker-up.sh&lt;/code&gt; if necessary).&lt;/li&gt;
&lt;li&gt;  Navigate to the mainnet directory (or testnet for Apothem nodes)
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ~/XinFin-Node/mainnet
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;  For best results, always update &lt;code&gt;bootnodes.list&lt;/code&gt; before running &lt;code&gt;peer.sh&lt;/code&gt; (see Step 2 above for instructions).&lt;/li&gt;
&lt;li&gt;  Then execute the &lt;code&gt;peer.sh&lt;/code&gt; script:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo bash ./peer.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;  Monitor the output for confirmation of added peers. You can then check the peer count in the node console by using &lt;code&gt;sudo bash ./xdc-attach.sh&lt;/code&gt; and running &lt;code&gt;net.peerCount&lt;/code&gt;, or &lt;code&gt;admin.peers&lt;/code&gt;, or by viewing the &lt;a href="https://xinfin.network"&gt;XDC Network Stats&lt;/a&gt; page. You can use the &lt;code&gt;exit&lt;/code&gt; command to exit the console when needed.&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;In case of any technical queries on XDC Network, feel free to drop your queries on &lt;a href="https://xdc.dev/"&gt;XDC.Dev forum&lt;/a&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  Quick links
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://xinfin.org/"&gt;XinFin.org&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.xdc.network/"&gt;XDC Chain Network Tools and Documents&lt;/a&gt;&lt;br&gt;
&lt;a href="https://xdcscan.com/"&gt;XDC Network Explorer&lt;/a&gt;&lt;br&gt;
&lt;a href="https://xdc.dev/"&gt;XDC Dev Forum&lt;/a&gt;&lt;br&gt;
&lt;a href="https://faucet.blocksscan.io/"&gt;XDC Testnet/Devnet Faucet — Blocksscan&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  XDC Social Links
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://x.com/XDCNetwork"&gt;X.com&lt;/a&gt;&lt;br&gt;
&lt;a href="https://github.com/XinFinorg"&gt;GitHub&lt;/a&gt;&lt;br&gt;
&lt;a href="https://t.me/XDC_Network_Updates"&gt;Telegram&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.facebook.com/XDCNetworkBlockchain"&gt;Facebook&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.linkedin.com/company/xdcnetwork/"&gt;LinkedIn&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.youtube.com/channel/UCQaL6FixEQ80RJC0B2egX6g"&gt;YouTube&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How to run a 2nd XDC Network node on a LAN that has only a single existing public IP address</title>
      <dc:creator>s4njk4n</dc:creator>
      <pubDate>Sat, 12 Apr 2025 23:28:41 +0000</pubDate>
      <link>https://www.xdc.dev/s4njk4n/how-to-run-a-2nd-xdc-network-node-on-a-lan-with-a-single-existing-public-ip-address-3ib7</link>
      <guid>https://www.xdc.dev/s4njk4n/how-to-run-a-2nd-xdc-network-node-on-a-lan-with-a-single-existing-public-ip-address-3ib7</guid>
      <description>&lt;p&gt;&lt;em&gt;2025-12-29 Update: This article is no longer current. To run a second xdc node on a single IP at present, all that is required is:&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;em&gt;Set start-node.sh --port setting to whatever P2P port you want to use (other than 30303 which is presumably already taken on your LAN's NAT)&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Set P2P port mappings in the docker-compose.yml for whatever P2P port you want to use&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Opening the P2P port in the firewall/UFW may not be required, since the Docker port bindings in the docker-compose.yml appear to supersede firewall settings. If unsure, or if the node isn't picking up peers, open the port in the firewall.&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Internet-facing router NAT settings should forward your chosen P2P port to the LAN IP address of the machine running the second node&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;That's about it! Good luck!&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;You can safely ignore the article below.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;Just a quick note for the XDC Knowledge Base.&lt;/p&gt;

&lt;p&gt;Perhaps as a developer you want multiple RPCs, so you want to run multiple nodes on a LAN. Running multiple nodes on an existing LAN with a single public IP presents some challenges. A solution is shown below (using a remote public IP for P2P, and the public IP of the current LAN along with the relevant port to provide RPC access).&lt;/p&gt;

&lt;h2&gt;
  
  
  Attempt 1 (Unsuccessful)
&lt;/h2&gt;

&lt;p&gt;In start-node.sh, changed "--port 30303" to a different port and set it to forward that port through the LAN router's NAT.&lt;br&gt;
Also changed port bindings in docker-compose.yml to match.&lt;br&gt;
Outcome was only 1 peer at best and that dropped off as well.&lt;br&gt;
Noted the bootnodes.list in the new node was smaller than older ones so tried using the bootnodes.list from the older nodes instead. No improvement. Same issue.&lt;/p&gt;

&lt;h2&gt;
  
  
  Attempt 2 (Worked!)
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Use 2 servers
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Continue running the Node on the existing LAN; but&lt;/li&gt;
&lt;li&gt;Also get an external VPS with its own public IP&lt;/li&gt;
&lt;li&gt;Make sure ssh is installed on both&lt;/li&gt;
&lt;li&gt;Set up ssh key on the LAN machine and ssh-copy-id to the VPS&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Set up 2 reverse SSH tunnels from the LAN machine to the VPS (one for TCP, and one for UDP).
&lt;/h3&gt;

&lt;p&gt;Ensure to disallow remote command execution on the tunnels.&lt;/p&gt;

&lt;h4&gt;
  
  
  For TCP
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Use port 30303 for the TCP tunnel to keep it simple.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  For UDP
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;SSH tunnels don't normally carry UDP, so we manage this by using socat to temporarily convert UDP to TCP for transmission through the 2nd reverse SSH tunnel.&lt;/li&gt;
&lt;li&gt;First we install socat on the VPS to convert the incoming UDP on 30303 at the VPS -&amp;gt; to TCP on the VPS end of the second reverse SSH tunnel.&lt;/li&gt;
&lt;li&gt;Then we also set up socat on the LAN machine to convert incoming TCP on the 2nd reverse SSH tunnel port -&amp;gt; back to UDP directed at 30303 on the local machine.&lt;/li&gt;
&lt;/ul&gt;
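The plumbing above can be sketched as follows. This is a sketch only: 31303 is an arbitrary intermediate port, &lt;code&gt;user@VPS_IP&lt;/code&gt; is a placeholder, &lt;code&gt;-N&lt;/code&gt; on the ssh commands disallows remote command execution, and in practice these run under autossh/systemd as described in the next section:

```shell
# On the LAN machine: two reverse tunnels to the VPS (sketch)
ssh -N -R 30303:localhost:30303 user@VPS_IP &   # TCP P2P traffic
ssh -N -R 31303:localhost:31303 user@VPS_IP &   # carrier tunnel for UDP-over-TCP

# On the VPS: wrap incoming UDP 30303 into the carrier tunnel
socat UDP4-LISTEN:30303,fork TCP4:127.0.0.1:31303 &

# On the LAN machine: unwrap back to UDP aimed at the local node
socat TCP4-LISTEN:31303,fork UDP4:127.0.0.1:30303 &
```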

&lt;h3&gt;
  
  
  Automation for reboot
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Set up autossh and also socat as systemd services on the LAN machine.&lt;/li&gt;
&lt;li&gt;Set up socat as a systemd service on the VPS.&lt;/li&gt;
&lt;li&gt;(And make sure to have the XDC client auto start on reboot via an entry in the root crontab on the LAN machine).&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Firewalls
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Make sure ufw is set correctly to allow required ports on the VPS and the local LAN machine.&lt;/li&gt;
&lt;li&gt;Make sure the router firewall is not interfering.&lt;/li&gt;
&lt;li&gt;Make sure the VPS provider isn't blocking outgoing SSH.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Then in the XDC node start-node.sh
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Continue to use "--port 30303"&lt;/li&gt;
&lt;li&gt;Add "--nat extip:VPS_IP_ADDRESS" so the node will advertise its P2P availability at the VPS public IP address on port 30303&lt;/li&gt;
&lt;li&gt;No need to modify the docker-compose.yml from default, as all ports on the local machine remain the same and don't need any modification&lt;/li&gt;
&lt;/ul&gt;
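Put together, the relevant flags look like this (a sketch only; VPS_IP_ADDRESS is a placeholder and the rest of start-node.sh is elided):

```shell
# Relevant P2P flags from start-node.sh (sketch; not the full script)
NODE_P2P_FLAGS="--port 30303 --nat extip:VPS_IP_ADDRESS"
echo "$NODE_P2P_FLAGS"   # the node advertises the VPS public IP for P2P on 30303
```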

&lt;p&gt;In effect with the instructions above, the end result is an XDC node running on the LAN behind a local NAT but for all intents and purposes, the rest of the XDC network sees its public IP as that of the VPS we are tunnelling to. This allows the node's P2P networking component to be done through the VPS IP address.&lt;/p&gt;

&lt;h2&gt;
  
  
  Notes
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;To access the RPC, just forward a port via the NAT and then access the RPC using the LAN’s public IP address (along with the port you're forwarding via your NAT).&lt;/li&gt;
&lt;li&gt;Oracle Cloud free-tier has free VPSs with public IP addresses.&lt;/li&gt;
&lt;li&gt;Hands are a bit full at present so I'll only look at publishing the further details for the system services setup later if there's demand.&lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
    <item>
      <title>[Informative] New Send-Offline Helper for XDC Network Deployed</title>
      <dc:creator>s4njk4n</dc:creator>
      <pubDate>Wed, 05 Mar 2025 23:24:30 +0000</pubDate>
      <link>https://www.xdc.dev/s4njk4n/informative-new-send-offline-helper-for-xdc-network-deployed-4m35</link>
      <guid>https://www.xdc.dev/s4njk4n/informative-new-send-offline-helper-for-xdc-network-deployed-4m35</guid>
<description>&lt;p&gt;Can be accessed for live usage here: &lt;a href="https://s4njk4n.github.io/XDC_Send_Offline_Helper/"&gt;https://s4njk4n.github.io/XDC_Send_Offline_Helper/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;General information on XDC Network Send-Offline-Functionality can be found here: &lt;a href="https://medium.com/xinfin/send-offline-functionality-sof-a-secure-way-to-transact-xdc-coins-3734eaa81365"&gt;https://medium.com/xinfin/send-offline-functionality-sof-a-secure-way-to-transact-xdc-coins-3734eaa81365&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Designed to work as the ONLINE half of the system for offline transactions (but with the additional flexibility of being able to choose a specific RPC if you want). Simple to use via a mobile phone browser. You can save the bookmark on your mobile web browser or even "Save to Homescreen" on iPhones to save it as a webapp with its own desktop icon on your phone. It is set to use the XDC logo for the icon on your phone in this case.&lt;/p&gt;

&lt;h2&gt;
  
  
  To use this send offline helper:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Specify an RPC (if you want to use one different from the default Ankr RPC). You can find a list of current public XDC RPCs at &lt;a href="https://chainlist.org/chain/50"&gt;https://chainlist.org/chain/50&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Enter the XDC address you are sending from. Page accepts both 0x and xdc prefix formats. Then press "Get Transaction Parameters".&lt;/li&gt;
&lt;li&gt;You will then be presented with the "Gas Limit Suggestion", "Nonce", and "Suggested Gas Price". These are the numbers to use in your offline transaction builder (the OFFLINE half of the Send Offline Functionality).&lt;/li&gt;
&lt;li&gt;Build the transaction with your offline transaction builder.&lt;/li&gt;
&lt;li&gt;Once completed/signed on the OFFLINE device, you just need to get the signed transaction data into the "Broadcast Signed Transaction" box at the bottom of the Send Offline Helper page we created.&lt;/li&gt;
&lt;li&gt;One way to do this is to manually enter it.&lt;/li&gt;
&lt;li&gt;A simpler way, if your offline transaction builder presents a QR code, is to just point your camera at the QR code, THEN press the "Scan QR Code" button. If it has been scanned, you will see your same transaction populate the box in the Send Offline Helper.&lt;/li&gt;
&lt;li&gt;Then you just press the "Broadcast Transaction" button and you're done.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Notes:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;If your QR scan did not work, you can manually turn off the scanner by pressing the red "Stop Scanner" button that has appeared at the bottom of the page.&lt;/li&gt;
&lt;li&gt;GitHub Repo is at &lt;a href="https://github.com/s4njk4n/XDC_Send_Offline_Helper"&gt;https://github.com/s4njk4n/XDC_Send_Offline_Helper&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;As always, please make sure to read and fully understand what this does and how it works before deploying your own for your usage&lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
    <item>
      <title>[Informative] Backing up XDC Geth Client Chain Database (Chain Snapshot)</title>
      <dc:creator>s4njk4n</dc:creator>
      <pubDate>Sat, 29 Jun 2024 06:07:29 +0000</pubDate>
      <link>https://www.xdc.dev/s4njk4n/backing-up-xdc-geth-client-chain-database-58oh</link>
      <guid>https://www.xdc.dev/s4njk4n/backing-up-xdc-geth-client-chain-database-58oh</guid>
      <description>&lt;p&gt;&lt;strong&gt;&lt;a href="https://github.com/s4njk4n/Chain_Backup_xdcchain.xyz"&gt;Our GitHub repo here&lt;/a&gt;&lt;/strong&gt; shows how to regularly take an automated snapshot of the chain database from the XDC Geth Client and move it to a webserver with (semi)-dynamic access gating.&lt;/p&gt;

&lt;p&gt;We had this running as a Rapid-Snapshot-as-a-Service but will shortly decommission the server, so we are releasing this information into the wild :) We've already done the legwork, so if you want to learn how to back up your own node's chain database, or how to create a semi-dynamically access-gated webserver, please read on!&lt;/p&gt;

&lt;h2&gt;
  
  
  General Points:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;It is based on there being two servers&lt;/li&gt;
&lt;li&gt;We assume you have set up SSH-key authentication so that Server 1 can access Server 2&lt;/li&gt;
&lt;li&gt;For access gating in this instance there are NO personal or private details on the webserver at all, so we aren't too concerned about storing the user_credentials.csv file in plain text. Each username/password combination can almost be considered a unique access token that has simply been split into two pieces&lt;/li&gt;
&lt;/ul&gt;
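
&lt;p&gt;&lt;em&gt;One common way to set up the Server 1 to Server 2 SSH-key authentication mentioned above is sketched below. The key path and Server 2 address are placeholders; in practice you would keep the key under ~/.ssh/:&lt;/em&gt;&lt;/p&gt;

```shell
# Generate a passphrase-less key so cron jobs on Server 1 can use it
# non-interactively. A temp path is used here purely for illustration;
# in practice use something like ~/.ssh/chain_backup_key.
KEY=$(mktemp -u)
ssh-keygen -t ed25519 -N "" -q -f "$KEY"
ls "$KEY" "$KEY.pub"
# Then authorise the public key on Server 2 (placeholder address):
# ssh-copy-id -i "$KEY.pub" root@SERVER2_IP
```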

&lt;h2&gt;
  
  
  Server 1 - XDC_Client_Server
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Server 1 is running the XDC Client (Contents are in the XDC_Client_Server folder in this repo.&lt;/li&gt;
&lt;li&gt;On Server 1, the XDC Client is located as normal at /root/XinFin-Node/ (We had this one installed as root)&lt;/li&gt;
&lt;li&gt;On Server 1, the Chain_Backup directory is located at /root/Chain_Backup&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  In the Chain_Backup directory:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;chain_backup_bot.sh is the script that performs all the steps: shutting down the XDC node, creating a timestamp, creating a tarball containing the current chain database from the client, cleaning up, and securely copying the snapshot to the download server (Server 2)&lt;/li&gt;
&lt;li&gt;events.log is the log file that the bash scripts log events to. Useful to know where it was up to in case it fails for some reason. Also useful for determining how long it takes your server to perform certain tasks: How long does it take to create the tarball? How long does it take to copy the tarball to Server 2 given your available bandwidth?&lt;/li&gt;
&lt;li&gt;generate_credentials.sh is used to generate new sets of download credentials, along with an expiry date and how many IP addresses each will be valid for. Each time it runs, it copies the current version of user_credentials.csv from the download server to use as its base (so the version on Server 2 is your source of truth). After generating credentials, it copies the file back to Server 2.&lt;/li&gt;
&lt;li&gt;user_credentials.csv is the generated file containing user credentials&lt;/li&gt;
&lt;li&gt;"snapshot" subdirectory - this is where your snapshot tar files are created&lt;/li&gt;
&lt;li&gt;the chain_backup_bot.sh execution is controlled by a root crontab entry of:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;0 23 * * * /root/Chain_Backup/chain_backup_bot.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
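
&lt;p&gt;&lt;em&gt;The snapshot-creation step itself can be sketched as follows. This is a simplified, runnable illustration using temporary directories, not the repo's actual chain_backup_bot.sh (which operates on /root/XinFin-Node/ and /root/Chain_Backup/snapshot):&lt;/em&gt;&lt;/p&gt;

```shell
# Simplified sketch of the timestamp + tarball steps. Temp dirs stand in for
# the real chaindata and snapshot directories.
DATA_DIR=$(mktemp -d)    # stands in for the client's chain database directory
SNAP_DIR=$(mktemp -d)    # stands in for /root/Chain_Backup/snapshot
echo "dummy chain data" > "$DATA_DIR/blocks.db"

STAMP=$(date -u +%Y%m%d-%H%M%S)    # timestamp used in the snapshot filename
tar -czf "$SNAP_DIR/chaindata-$STAMP.tar.gz" -C "$DATA_DIR" .
ls "$SNAP_DIR"
# The real script would stop the node first, then scp the tarball to Server 2
# and restart the node afterwards.
```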



&lt;h2&gt;
  
  
  Server 2 - Snapshot_Download_Server
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Server 2 has Nginx installed to gate access to our snapshot files&lt;/li&gt;
&lt;li&gt;LetsEncrypt has been used to provide SSL/TLS encryption. Some information on SSL/TLS for Nginx (but using OpenSSL) can be found in this article, which still shows where the relevant SSL certificate and private-key information sits within Nginx: &lt;a href="https://www.xdc.dev/s4njk4n/ssltls-encryption-for-xdc-node-rpcs-k15"&gt;https://www.xdc.dev/s4njk4n/ssltls-encryption-for-xdc-node-rpcs-k15&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Htpasswd has been used to provide basic authentication credentials to Nginx. Some information on using htpasswd with Nginx can be found here: &lt;a href="https://www.xdc.dev/s4njk4n/controlling-access-to-xdc-node-rpc-endpoints-3en3"&gt;https://www.xdc.dev/s4njk4n/controlling-access-to-xdc-node-rpc-endpoints-3en3&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Nginx IP ACLs have also been used. Further information about Nginx IP ACLs can be found via a link in this article: &lt;a href="https://www.xdc.dev/s4njk4n/controlling-access-to-xdc-node-rpc-endpoints-3en3"&gt;https://www.xdc.dev/s4njk4n/controlling-access-to-xdc-node-rpc-endpoints-3en3&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;/var/log/nginx/ contains access.log, error.log and custom.log. These log files are defined in /etc/nginx/nginx.conf&lt;/li&gt;
&lt;li&gt;/etc/nginx/nginx.conf contains nginx configuration including the log files mentioned above&lt;/li&gt;
&lt;li&gt;/etc/nginx/sites-available/default contains Nginx server blocks for default server, SSL server, and http redirect to https server&lt;/li&gt;
&lt;li&gt;SSL server block just mentioned contains SSL information and location blocks. The location block for the webserver’s /snapshot location is restricted by Nginx IP address ACL &amp;amp; also gated by basic authentication. The "#Add-new-IPs-here" line is used by our scripts to locate whereabouts in the file to insert new IP addresses that should be authorised to access the /snapshot location.&lt;/li&gt;
&lt;/ul&gt;
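
&lt;p&gt;&lt;em&gt;As a sketch of how the "#Add-new-IPs-here" marker can be used (the sed one-liner is illustrative, not necessarily the repo's exact code, and the config below is a minimal stand-in):&lt;/em&gt;&lt;/p&gt;

```shell
# Insert an "allow" directive at the marker line in a stand-in config file.
CONF=$(mktemp)    # stands in for /etc/nginx/sites-available/default
printf 'location /snapshot {\n#Add-new-IPs-here\ndeny all;\n}\n' > "$CONF"

NEW_IP=203.0.113.7    # example address to authorise
sed -i "/#Add-new-IPs-here/a allow $NEW_IP;" "$CONF"
cat "$CONF"
# After a real edit you would reload Nginx, e.g.: nginx -s reload
```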

&lt;h3&gt;
  
  
  In the Chain_Backup_DL_Server_Control directory:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;The "users" directory contains a plain text file for each username. The filename is the username. Within the file, the first line is the username, the second line shows how many IP addresses that username is authorised to access the snapshot from, and the lines below that show any IP addresses that have been used and authorised so far by that user.&lt;/li&gt;
&lt;li&gt;The "users/expired" directory is where expired user files are move to once the associated username's credentials have expired.&lt;/li&gt;
&lt;li&gt;access_manager_bot-pause.sh sets a flag that pauses the actions of access_manager_bot.sh&lt;/li&gt;
&lt;li&gt;access_manager_bot-restart.sh deletes the pause flag created by the script above. This effectively resumes the looping and actions of access_manager_bot.sh&lt;/li&gt;
&lt;li&gt;access_manager_bot-start_looping.sh starts a loop that keeps running the access_manager_bot.sh script every few seconds&lt;/li&gt;
&lt;li&gt;access_manager_bot.sh is where some of the magic happens. It is our user access management script that controls user management/expiry and dynamic access gating to the snapshot&lt;/li&gt;
&lt;li&gt;events.log is the log file our scripts log events to. Useful for troubleshooting: if any issues occur, you can figure out where things were up to&lt;/li&gt;
&lt;li&gt;last_expiry_check is a script-generated plain text file containing the date that the last expiry check was run on users&lt;/li&gt;
&lt;li&gt;snapshot_rotation_bot.sh is the script that switches out the old snapshot for the new one (if one exists) when it is run&lt;/li&gt;
&lt;li&gt;user_credentials.csv is our source of truth for user credentials. Even the script on our other server that generates user credentials uses this file as its base to work from when adding more user credentials&lt;/li&gt;
&lt;li&gt;working_custom.log is one of our script’s working files when processing access logs and implementing dynamic user access gating&lt;/li&gt;
&lt;li&gt;the looping execution of our access-management script and timing of snapshot-switching are both initiated by the following entries in the root crontab:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@reboot /bin/bash /root/Chain_Backup_DL_Server_Control/access_manager_bot-start_looping.sh
0 23 * * * /bin/bash /root/Chain_Backup_DL_Server_Control/snapshot_rotation_bot.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
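
&lt;p&gt;&lt;em&gt;The pause/restart mechanism is a simple flag-file pattern, sketched below. The flag path here is a placeholder; see the repo's scripts for the real one:&lt;/em&gt;&lt;/p&gt;

```shell
# Flag-file pattern: the pause script creates the flag, the restart script
# removes it, and the bot checks for it at the top of each loop iteration.
FLAG=$(mktemp -u)    # placeholder for the real pause-flag path

touch "$FLAG"        # what access_manager_bot-pause.sh effectively does
if [ -f "$FLAG" ]; then STATE1=paused; else STATE1=running; fi

rm -f "$FLAG"        # what access_manager_bot-restart.sh effectively does
if [ -f "$FLAG" ]; then STATE2=paused; else STATE2=running; fi

echo "$STATE1 then $STATE2"
```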






&lt;p&gt;&lt;strong&gt;Access our &lt;a href="https://github.com/s4njk4n/Chain_Backup_xdcchain.xyz"&gt;GitHub repo here&lt;/a&gt; for the scripts etc&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Good luck, and I hope this server scripting comes in useful for somebody in the ecosystem, whether that be how to identify and manage the steps to back up an XDC Geth node's chain database, or alternatively how to use log-based (semi-)dynamic access gating for webserver resources.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a class="mentioned-user" href="https://www.xdc.dev/s4njk4n"&gt;@s4njk4n&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
    <item>
      <title>[Informative] PublicNexus: Service Update</title>
      <dc:creator>s4njk4n</dc:creator>
      <pubDate>Sat, 29 Jun 2024 00:18:07 +0000</pubDate>
      <link>https://www.xdc.dev/s4njk4n/publicnexus-service-update-5cin</link>
      <guid>https://www.xdc.dev/s4njk4n/publicnexus-service-update-5cin</guid>
      <description>&lt;p&gt;&lt;em&gt;This information is being posted for historical record information sake only in case it comes in handy for a future developer who needs something similar for a project. It only relates to &lt;u&gt;our initial Alpha/Proof-Of-Concept&lt;/u&gt;. Enterprise development work that was done isn’t solely ours and will remain private.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://github.com/s4njk4n/publicnexus_updated_endpoints"&gt;Updated Github content here&lt;/a&gt;.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;We had actually already onboarded a new team member to manage enterprise RPC deployments (currently employed as a senior solutions architect by a large US multinational tech provider, whose day-to-day job at present involves enterprise deployments with Cloudflare, Kubernetes, load balancers, redundant ingress etc., and whose clients include the largest bank in our country, several government departments, and multi-billion dollar corporates in Asia Pacific). Even better is that he's already familiar with the XDC ecosystem :) As of yesterday, our expected delivery date for a fully-redundant native-XDC Enterprise-grade service WAS 3 weeks.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We will have further discussions with other developers in the ecosystem before making a final decision on what to do with our actively running servers, however it is looking very much like we will discontinue the service and pause further development as it isn't really required to add ourselves as another service provider in the space (and resources may deliver more ecosystem benefit if allocated elsewhere).&lt;/p&gt;

&lt;h2&gt;
  
  
  Progress Update
&lt;/h2&gt;

&lt;p&gt;The initial intent of PublicNexus was to fill a perceived need by the general community and basic developers however on further discussions and examining the publicly available RPCs on the XDC Network, it has become apparent that this need has already been filled by Ankr (and so PublicNexus isn't really required).&lt;/p&gt;

&lt;p&gt;This is based on the following as of today (29/06/2024):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The Ankr site offers public RPC access allowing up to 20 requests/sec. This is more than the average community member or basic developer would need for general transactions via their Web3 wallet.&lt;/li&gt;
&lt;li&gt;If you actually sign up for their "Freemium" service, this rate limit is increased to 30 requests/sec. Again, more than the average community member or basic developer would need for general transactions via their Web3 wallet.&lt;/li&gt;
&lt;li&gt;Their Premium service offers a rate limit of 1500 requests/sec at a cost (at present, for Node API) of USD $0.02 per 1000 requests.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's assume that an XDC node providing an RPC is running on a VPS costing $100/month (an arbitrary figure plucked out of the air; there are better, more expensive VPSs available, and similarly the XDC client can also run on cheaper VPSs if expecting less load).&lt;/p&gt;

&lt;p&gt;&lt;em&gt;$100/month = 5M requests/month via Ankr.&lt;/em&gt;&lt;/p&gt;
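
&lt;p&gt;&lt;em&gt;That figure follows directly from the Premium pricing above: $100 divided by $0.02 per 1000 requests. A quick check, writing 0.02 as 2/100 to stay in integer arithmetic:&lt;/em&gt;&lt;/p&gt;

```shell
# 100 USD / (0.02 USD per 1000 requests) = 100 * (100/2) * 1000 requests
REQUESTS=$(( 100 * 100 / 2 * 1000 ))
echo "$REQUESTS requests/month"
```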

&lt;p&gt;The generous limit shown above means that developers working with an active commercial product may be better off using the Ankr paid service.&lt;/p&gt;

&lt;p&gt;A load balancer covering all public RPCs doesn't seem to be required.&lt;/p&gt;

&lt;p&gt;As one developer from a prominent project on the XDC network indicated in a public forum regarding how they operate:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Their project runs its own Archive node (as others don't need it)&lt;/li&gt;
&lt;li&gt;For all Full node RPC requests they send them to Ankr&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For general community and basic developers, it seems that the best option is Ankr as a first point of contact.&lt;/p&gt;

&lt;p&gt;As mentioned above, the content in this repo is provided more as a record in case of future need and only consists of &lt;u&gt;early Alpha content&lt;/u&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  Things to remember if deploying
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;WSS support will require enabling the Apache proxy_wstunnel module:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;a2enmod mod_proxy_wstunnel
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
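
&lt;p&gt;&lt;em&gt;Once the module is enabled, the WSS path also needs proxying through to the node's websocket port inside the SSL VirtualHost. A minimal illustrative fragment — the path and port are placeholders for your own setup, not values from our deployment:&lt;/em&gt;&lt;/p&gt;

```
# Inside the SSL VirtualHost (path and port are placeholders)
ProxyPass        /wss ws://127.0.0.1:8546/
ProxyPassReverse /wss ws://127.0.0.1:8546/
```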



&lt;ul&gt;
&lt;li&gt;The "Origin_Check" directory we used was located at /root/Origin_Check.&lt;/li&gt;
&lt;li&gt;The "tmp" directory is literally the /tmp location to hold the temp file for calculations and the file lock to prevent concurrency of instances.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://github.com/s4njk4n/publicnexus_updated_endpoints"&gt;Updated Github content here&lt;/a&gt;.&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;Wishing everyone well in the ecosystem!&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a class="mentioned-user" href="https://www.xdc.dev/s4njk4n"&gt;@s4njk4n&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
    <item>
      <title>[Informative] PublicNexus RPC/WSS Now Available at Chainlist.org</title>
      <dc:creator>s4njk4n</dc:creator>
      <pubDate>Tue, 25 Jun 2024 06:06:22 +0000</pubDate>
      <link>https://www.xdc.dev/s4njk4n/publicnexus-rpcwss-now-available-at-chainlistorg-hb5</link>
      <guid>https://www.xdc.dev/s4njk4n/publicnexus-rpcwss-now-available-at-chainlistorg-hb5</guid>
      <description>&lt;p&gt;Hands-full and full-steam-ahead so just a quick community update. Life continues to get simpler and the landscape stronger for XDC Network users and devs :)&lt;/p&gt;

&lt;p&gt;&lt;a href="//xdcchain.xyz"&gt;Xdcchain.xyz&lt;/a&gt; PublicNexus RPC/WSS stable access points are now listed and available on Chainlist.org!&lt;/p&gt;

&lt;p&gt;Chainlist.org makes it easy to connect your wallet to PublicNexus stable access points&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;XDC Mainnet:&lt;/strong&gt; &lt;a href="https://chainlist.org/chain/50"&gt;https://chainlist.org/chain/50&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Apothem Testnet:&lt;/strong&gt; &lt;a href="https://chainlist.org/chain/51"&gt;https://chainlist.org/chain/51&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Helpful hint: if you ever see a PublicNexus access point showing as “down” on Chainlist.org, just refresh the page and Kaboom 💥 PublicNexus has been “magically” fixed! 😉&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note for existing users: Please note that we have removed the trailing slash from the addresses of our access points&lt;/em&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>[Informative] PublicNexus RPC/WSS Upgrade Completed - Mainnet/Apothem, RPC/WSS, xdc/0x prefixes all now supported</title>
      <dc:creator>s4njk4n</dc:creator>
      <pubDate>Wed, 19 Jun 2024 04:21:04 +0000</pubDate>
      <link>https://www.xdc.dev/s4njk4n/publicnexus-upgraded-mainnetapothem-rpcwss-xdc0x-prefixes-supported-ge1</link>
      <guid>https://www.xdc.dev/s4njk4n/publicnexus-upgraded-mainnetapothem-rpcwss-xdc0x-prefixes-supported-ge1</guid>
      <description>&lt;p&gt;Quick update to the community that we have completed several days of dev work and deployed our planned upgrades to PublicNexus.&lt;/p&gt;

&lt;h3&gt;
  
  
  New PublicNexus Features
&lt;/h3&gt;

&lt;p&gt;PublicNexus is now live with access points for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Mainnet Standard RPC&lt;/li&gt;
&lt;li&gt;Mainnet 0x-enabled RPC&lt;/li&gt;
&lt;li&gt;Mainnet Standard WSS&lt;/li&gt;
&lt;li&gt;Mainnet 0x-enabled WSS&lt;/li&gt;
&lt;li&gt;Apothem Standard RPC&lt;/li&gt;
&lt;li&gt;Apothem 0x-enabled RPC&lt;/li&gt;
&lt;li&gt;Apothem Standard WSS&lt;/li&gt;
&lt;li&gt;Apothem 0x-enabled WSS&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://xdcchain.xyz/xdc-public-nexus.html"&gt;Full details for each access point are available here&lt;/a&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;We have tightened the timing of the automated tests that identify public RPC/WSS endpoints that fail or start to lag, such that the maximum exposure is now 15 sec before PublicNexus fixes itself by removing the problematic endpoint (instead of 1 min as previously).&lt;/p&gt;

&lt;p&gt;We will be adding support for Mainnet and Apothem ARCHIVE nodes shortly.&lt;/p&gt;

&lt;h3&gt;
  
  
  PublicNexus and xdcchain.xyz
&lt;/h3&gt;

&lt;p&gt;Since first becoming involved with the XDC Project in July 2017, the operators of xdcchain.xyz and PublicNexus have been committed to providing quality information, services, infrastructure and other contributions to the decentralised XDC Network.&lt;/p&gt;

&lt;p&gt;Our approach has been to privately foster growth by providing appropriate solutions where necessary and possible. This has led to the birth of xdcchain.xyz and PublicNexus (amongst other initiatives).&lt;/p&gt;

&lt;p&gt;As part of our commitment to providing quality offerings to the XDC Network, PublicNexus servers have been colocated in a data centre that services both government and industry clients, and offers 1Gb/10Gb fibre pair connectivity.&lt;/p&gt;

&lt;p&gt;The highest user-reported data-throughput from our servers (when downloading our &lt;strong&gt;&lt;a href="https://xdcchain.xyz"&gt;XDC Blockchain Daily Snapshot for Rapid Node Deployment&lt;/a&gt;&lt;/strong&gt;) has been 2.4Gbit/sec (works out to ~17min to download the entire XDC chain snapshot).&lt;/p&gt;

&lt;h3&gt;
  
  
  What we have planned next
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;We have already deployed several XDC Geth Clients and their associated RPC/WSS endpoints over the last few months (incl multicore CPUs, 32GB RAM) and we will look at integrating these gradually into the backend of PublicNexus&lt;/li&gt;
&lt;li&gt;We will be adding ARCHIVE NODE access point URLs for both Mainnet and Apothem&lt;/li&gt;
&lt;li&gt;We will consider tightening further the RPC/WSS endpoint testing times during future update deployments&lt;/li&gt;
&lt;li&gt;Lots more... (but some has to remain under wraps for now until closer to deployment)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  What YOU can do
&lt;/h3&gt;

&lt;p&gt;PublicNexus as a stable access point for the community is more effective when there are more working RPC/WSS endpoints on the backend. The more access points available, the less the impact of any one endpoint going down.&lt;/p&gt;

&lt;p&gt;The number of requests that reach a faulty endpoint before it is excised is diluted by the total number of working endpoints available at that point in time.&lt;/p&gt;

&lt;p&gt;An RPC can be simple to run and can even be set up to run at your home/office. It also doesn't need to run 24/7 and can always catch up with the rest of the chain when it is switched on. We will be working on publishing information that will make it simpler for the average community member to run and secure their own RPC endpoint on hardware they may already have. If after doing so, you would like to help the community by donating some access from your own RPC or WSS endpoint to the backend of PublicNexus, please reach out to us at &lt;a href="mailto:xdcchain.xyz@outlook.com"&gt;xdcchain.xyz@outlook.com&lt;/a&gt; and we can look at possibly integrating it as well.&lt;/p&gt;

&lt;p&gt;Other than that, just enjoy the simplicity and reliability of using the RPC/WSS access points we've provided and let us know if you encounter any issues!&lt;/p&gt;

&lt;p&gt;——-&lt;/p&gt;

&lt;p&gt;If anyone requires commercial access to a private reliable fault-tolerant high-speed RPC cluster, let us know and we may be able to help facilitate that separately.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- s4njk4n&lt;/strong&gt;&lt;/p&gt;




</description>
    </item>
    <item>
      <title>[Informative] Layer 2 RPC Stable Access Point Deployed - https://publicnexus.xdcchain.xyz</title>
      <dc:creator>s4njk4n</dc:creator>
      <pubDate>Thu, 13 Jun 2024 09:09:47 +0000</pubDate>
      <link>https://www.xdc.dev/s4njk4n/layer-2-rpc-stable-access-point-deployed-httpspublicnexusxdcchainxyz-6f3</link>
      <guid>https://www.xdc.dev/s4njk4n/layer-2-rpc-stable-access-point-deployed-httpspublicnexusxdcchainxyz-6f3</guid>
      <description>&lt;p&gt;&lt;em&gt;&lt;strong&gt;EDIT&lt;/strong&gt;: The latest updates since this article below was published can be found &lt;a href="https://www.xdc.dev/s4njk4n/publicnexus-upgraded-mainnetapothem-rpcwss-xdc0x-prefixes-supported-ge1"&gt;HERE&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Full (open source) information on content and deployment is available on our Github &lt;a href="https://github.com/s4njk4n/publicnexus.xdcchain.xyz"&gt;here&lt;/a&gt;. Deploy your own or use ours for free :)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;You can also see some of our other current and planned dev projects at &lt;a href="https://xdcchain.xyz"&gt;https://xdcchain.xyz&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;To solve public/community RPC availability issues we've built a type of &lt;strong&gt;"Layer 2 RPC Stable Access Point"&lt;/strong&gt; with integrated health-checks.&lt;/p&gt;

&lt;p&gt;Publicnexus is a load-balancer with all known public RPCs on XDC network set as its origin/backend servers.&lt;/p&gt;

&lt;p&gt;It checks each RPC's block height in parallel once per minute. If there is no response, a wrong response, or a block height more than 4 blocks behind the highest block height returned in that cycle, then that RPC is removed from the list of origin/backend servers and no further RPC traffic is directed to it. (So the maximum exposure time to any problematic RPC should be about 1 minute before Publicnexus fixes itself.)&lt;/p&gt;

&lt;p&gt;Conversely, if an RPC improves to meet criteria again, then it gets re-added to the list of origin/backend servers and will once again commence receiving RPC traffic.&lt;/p&gt;
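
&lt;p&gt;&lt;em&gt;The keep/remove decision can be sketched as below. This is an illustration of the rule just described, not the production script; the JSON-RPC call shown in the comment is the standard way to read a block height:&lt;/em&gt;&lt;/p&gt;

```shell
# Decide whether an RPC keeps "Active" status: it must have answered at all,
# and be no more than MAX_LAG blocks behind the best height seen this cycle.
MAX_LAG=4
is_active() {
  # $1 = this RPC's reported height (decimal; empty if it failed to respond)
  # $2 = highest height seen across all RPCs this cycle
  if [ -z "$1" ]; then return 1; fi
  [ $(( $2 - $1 )) -le "$MAX_LAG" ]
}

# A height would come from eth_blockNumber, e.g.:
#   curl -s -m 10 -X POST -H 'Content-Type: application/json' \
#     --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' "$RPC_URL"
# with the returned hex (say "0x4b7a9c") converted via $(( 16#4b7a9c )).

if is_active 1000 1003; then OK=kept; else OK=removed; fi       # 3 blocks behind
if is_active 990 1003; then LAGGY=kept; else LAGGY=removed; fi  # 13 blocks behind
echo "$OK $LAGGY"
```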

&lt;p&gt;To mitigate potential load on public RPCs from unexpectedly high volume single-party usage we've added in a throttling mechanism for each IP address. The allowed-rate-per-IP is set to be adequate for general public users. (If anyone requires private reliable high-speed RPC access, let us know and we may be able to help facilitate that separately).&lt;/p&gt;

&lt;p&gt;The various rate-limit / throttling settings will also prevent its use for DOS and other malicious activity.&lt;/p&gt;

&lt;p&gt;Project is in alpha. Current RPC settings if wanting to test:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://xdcchain.xyz/xdc-public-nexus.html"&gt;RPC details available here.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Because Publicnexus uses all known public RPCs, this access point only supports the xdc prefix at the moment. A secondary access point can be added later specifically to support the 0x prefix if needed.&lt;/p&gt;




&lt;h3&gt;
  
  
  Server setup
&lt;/h3&gt;

&lt;p&gt;Server running Ubuntu 22.04&lt;/p&gt;

&lt;h4&gt;
  
  
  Deployed on server:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;apache2 (&lt;em&gt;modules enabled: proxy, proxy_http, proxy_balancer, lbmethod_byrequests, ssl, ratelimit&lt;/em&gt;)&lt;/li&gt;
&lt;li&gt;certbot (&lt;em&gt;for LetsEncrypt CA cert/key&lt;/em&gt;)&lt;/li&gt;
&lt;li&gt;python3&lt;/li&gt;
&lt;li&gt;ufw&lt;/li&gt;
&lt;li&gt;fail2ban&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  apache2
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Should already be deployed on your server&lt;/li&gt;
&lt;li&gt;Enable modules with:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;a2enmod proxy
a2enmod proxy_http
a2enmod proxy_balancer
a2enmod lbmethod_byrequests
a2enmod ssl
a2enmod ratelimit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  certbot and python3
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Install, get certificate, and set up apache SSL configs with:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apt install certbot python3 python3-certbot-apache -y
certbot --apache
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  ufw
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Install and setup firewall:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apt install ufw
ufw default deny incoming
ufw default allow outgoing
ufw allow 443
ufw allow 22
ufw enable
reboot
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Port 22 is the default SSH port. You can see instructions on how to change it by modifying /etc/ssh/sshd_config as described in &lt;a href="https://www.xdc.dev/s4njk4n/securing-your-xdc-masternode-running-on-ubuntu-2004lts-57k8"&gt;this article&lt;/a&gt;.&lt;br&gt;
Port 443 is for SSL/HTTPS.&lt;/p&gt;
&lt;h4&gt;
  
  
  fail2ban
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Follow instructions in &lt;a href="https://www.xdc.dev/s4njk4n/securing-your-xdc-masternode-running-on-ubuntu-2004lts-57k8"&gt;this article&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;h4&gt;
  
  
  Further security
&lt;/h4&gt;

&lt;p&gt;Also recommend:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Setting up ssh-key authentication to access the server&lt;/li&gt;
&lt;li&gt;Consider disabling password authentication entirely; if you keep passwords, make them VERY long and complex, consisting of upper/lower-case letters, numbers and symbols. Disabling password login for the root account is also helpful, since root is an easy-to-guess username and therefore an easy brute-force target.&lt;/li&gt;
&lt;/ul&gt;
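
&lt;p&gt;&lt;em&gt;The password-related advice above maps to two standard OpenSSH directives in /etc/ssh/sshd_config (restart the SSH service after editing):&lt;/em&gt;&lt;/p&gt;

```
# /etc/ssh/sshd_config
PasswordAuthentication no          # key-based logins only
PermitRootLogin prohibit-password  # root may log in with a key, never a password
```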


&lt;h4&gt;
  
  
  Apache config
&lt;/h4&gt;

&lt;p&gt;This is located in:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;/etc/apache2/sites-enabled/000-default.conf &amp;lt;-- Certbot will write http to https redirects in here&lt;/li&gt;
&lt;li&gt;/etc/apache2/sites-enabled/000-default-le-ssl.conf   &amp;lt;-- Certbot will add your SSL setup in here.
You will of course need to modify these files to replace the domain name with your own if you are establishing your own system. You also need to add the load-balancer configuration as shown.&lt;/li&gt;
&lt;/ul&gt;
&lt;h4&gt;
  
  
  Scripts/Files
&lt;/h4&gt;

&lt;p&gt;Our scripts are located at ~/RPC_Check/&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;rpc_check.sh&lt;/strong&gt; - This performs all the functions required to check RPCs, interpret responses, modify the load-balancer's origin servers, and (gracefully) apply the new origin server addresses. &lt;em&gt;Note: curl is set to allow a maximum of 10 sec for an RPC to respond; no response in this time = broken RPC. Remember to set the variables at the top of this file with absolute path locations etc., as we are going to run this script as a cron job. At present, the permitted block-height lag for an RPC to retain its "Active" status is arbitrarily set to 4 blocks, which is about 8 seconds given the network's average block time of 2 sec. After further testing, a more appropriate lag tolerance may be determined.&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;rpc_check-pause.sh&lt;/strong&gt; - In the event that you need to modify files manually and don't want rpc_check.sh running, this script will create a pause flag that inhibits the actions of rpc_check.sh.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;rpc_check-restart.sh&lt;/strong&gt; - This script deletes the pause flag so rpc_check.sh will then kick off where it left off.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;rpc_pool.csv&lt;/strong&gt; - This is a csv file containing 3 fields about each RPC it can potentially send traffic to: The RPC address/URL, that RPC's health-status, that RPC's last recorded block height&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;events.log&lt;/strong&gt; - Our log file. By default, rpc_check.sh will limit this file to the last 5000 lines of log history. You can allow a longer history by just modifying rpc_check.sh.&lt;/li&gt;
&lt;/ul&gt;
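
&lt;p&gt;&lt;em&gt;As an illustration of the rpc_pool.csv layout just described — one line per RPC with address, health status, and last recorded block height. The addresses and heights below are made up; see the repo for the real file:&lt;/em&gt;&lt;/p&gt;

```
https://erpc.example-provider.com,Active,61234567
https://rpc.example-other.io,Inactive,61234100
```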

&lt;p&gt;rpc_check.sh is set to run every minute as a cron job:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;crontab -e
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then in the crontab file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;* * * * * /bin/bash /root/RPC_Check/rpc_check.sh &amp;gt;/dev/null 2&amp;gt;&amp;amp;1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h3&gt;
  
  
  To do
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Further determine the optimal maximum permissible block height lag. This will become apparent with further usage.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Please see &lt;a href="https://github.com/s4njk4n/publicnexus.xdcchain.xyz"&gt;our GitHub&lt;/a&gt; for full information and actual script/configuration files. (Or better yet, head on over to &lt;a href="https://xdcchain.xyz"&gt;https://xdcchain.xyz&lt;/a&gt; to check out details of Publicnexus and our other projects!)&lt;/em&gt;&lt;/p&gt;




</description>
    </item>
    <item>
      <title>[Informative] Upgrade XDC-Client v1.4 to v1.6 by MIGRATION</title>
      <dc:creator>s4njk4n</dc:creator>
      <pubDate>Thu, 06 Jun 2024 16:00:48 +0000</pubDate>
      <link>https://www.xdc.dev/s4njk4n/upgrade-xdc-client-v14-to-v16-by-migration-2ifh</link>
      <guid>https://www.xdc.dev/s4njk4n/upgrade-xdc-client-v14-to-v16-by-migration-2ifh</guid>
      <description>&lt;p&gt;&lt;em&gt;Please read this article and ensure you understand it completely before implementing anything described. If you're not sure about anything at all, please clarify it with someone who can help before you implement anything. If you post on xdc.dev someone will respond and clarify for you. (And most of all if you are a masternode operator, please make sure you have all appropriate backups of the keystore file from your node before doing anything!)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This article describes the process to upgrade your XDC Client from v1.4 to v1.6 by MIGRATING it to a new VPS. This is a safer option than deleting/reinstalling as all your files on the existing node remain intact until you are happy that your new node is up and running fine. If you encounter any immediate issues on the new node, it gives you the option of just going back to using the old/existing node until you have sorted out whatever is the issue you're experiencing with the new node.&lt;/p&gt;




&lt;h2&gt;
  
  
  Definitions
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;VPS1&lt;/em&gt; = The VPS your current v1.4 XDC Client is running on.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;username1&lt;/em&gt; = Username of &lt;em&gt;VPS1&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;IPaddress1&lt;/em&gt; = IP address of &lt;em&gt;VPS1&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;XDC-Client1&lt;/em&gt; = XDC Client running on &lt;em&gt;VPS1&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;and:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;VPS2&lt;/em&gt; = The VPS you will be setting up your new v1.6 XDC Client on.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;username2&lt;/em&gt; = Username of &lt;em&gt;VPS2&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;IPaddress2&lt;/em&gt; = IP address of &lt;em&gt;VPS2&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;XDC-Client2&lt;/em&gt; = XDC Client running on &lt;em&gt;VPS2&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Now we begin...
&lt;/h2&gt;

&lt;p&gt;First go and arrange a new VPS to use as &lt;em&gt;VPS2&lt;/em&gt;. You'll find information on system requirements for your VPS here:&lt;br&gt;
&lt;a href="https://xinfin.org/docker-setup"&gt;https://xinfin.org/docker-setup&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;&lt;strong&gt;SSH to &lt;em&gt;VPS2&lt;/em&gt; in your Terminal&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh username2@IPaddress2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;&lt;strong&gt;Once logged into &lt;em&gt;VPS2&lt;/em&gt;, update the OS:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt update
sudo apt upgrade
sudo apt autoremove
sudo apt clean
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;&lt;strong&gt;Install &lt;em&gt;XDC-Client2&lt;/em&gt; via Bootstrap script:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo su -c "bash &amp;lt;(wget -qO- https://raw.githubusercontent.com/XinFinOrg/XinFin-Node/master/setup/bootstrap.sh)" root
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You'll be asked a few questions during installation of your node with the bootstrap script above.&lt;br&gt;
Answer "mainnet" when asked which network to join.&lt;br&gt;
Select "y" regarding the private key etc. (We're going to replace the keystore later anyway.)&lt;/p&gt;



&lt;p&gt;&lt;strong&gt;Shutdown &lt;em&gt;XDC-Client2&lt;/em&gt;:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ~/XinFin-Node/mainnet
sudo bash ./docker-down.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;&lt;strong&gt;Download the current chain snapshot using either:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The Rapid Download snapshot from &lt;a href="https://xdcchain.xyz"&gt;https://xdcchain.xyz&lt;/a&gt; if you have a download command from there. It will be the one that looks something like:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Example: sudo wget -c -O xdcchain.xyz_snapshot.tar "https://nzYS66oEzxybde:CHiJzMnJ354Y3x@xdcchain.xyz/snapshot/xdcchain.xyz_snapshot.tar?authorisedip=nzYS66oEzxybde"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;OR&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The official XinFin snapshot:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo wget https://download.xinfin.network/xdcchain.tar
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;&lt;strong&gt;Then logout of &lt;em&gt;VPS2&lt;/em&gt;:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;logout
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;&lt;strong&gt;SSH to &lt;em&gt;VPS1&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh username1@IPaddress1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;&lt;strong&gt;Navigate to the &lt;em&gt;XDC-Client1&lt;/em&gt; directory:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ~/XinFin-Node
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;&lt;strong&gt;Shutdown &lt;em&gt;XDC-Client1:&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;(It is very important that you do this now to avoid a conflict later on the network with &lt;em&gt;XDC-Client2&lt;/em&gt;)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Depending on the version of your &lt;em&gt;XDC-Client1&lt;/em&gt; installation, you may have to use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo docker-compose -f docker-services.yml down
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;OR&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo docker-compose -f docker-compose.yml down
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If neither of those commands has worked to shut down &lt;em&gt;XDC-Client1&lt;/em&gt;, please reply in the comments on this article. &lt;strong&gt;DO NOT PROCEED WITH FURTHER STEPS ON EITHER VPS UNTIL &lt;em&gt;XDC-Client1&lt;/em&gt; HAS BEEN SUCCESSFULLY SHUT DOWN (or it may interfere with &lt;em&gt;XDC-Client2&lt;/em&gt; receiving rewards or operating correctly)&lt;/strong&gt;&lt;/p&gt;
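&lt;p&gt;As an extra sanity check (not part of the original instructions), you can list any containers that are still running; if a XinFin-related name appears, the client is not fully down. Container names vary by installation, so treat this as a sketch:&lt;/p&gt;

```shell
# List running container names so you can confirm nothing XinFin-related
# is still up. Falls back gracefully if docker is unavailable or sudo
# cannot run without a password prompt.
if command -v docker >/dev/null; then
  status=$(sudo -n docker ps --format '{{.Names}}' 2>/dev/null || echo "could not query docker")
else
  status="docker not installed on this machine"
fi
status=${status:-"no containers running"}
echo "$status"
```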




&lt;p&gt;&lt;strong&gt;After shutting down &lt;em&gt;XDC-Client1&lt;/em&gt;, we then logout:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;logout
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;&lt;strong&gt;SSH to &lt;em&gt;VPS2&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh username2@IPaddress2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;&lt;strong&gt;Navigate to the mainnet directory:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ~/XinFin-Node/mainnet
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;&lt;strong&gt;Next we need to decompress the snapshot&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you've used the xdcchain.xyz snapshot file, then you'll use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo tar -xvzf xdcchain.xyz_snapshot.tar
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;OR&lt;/strong&gt;&lt;br&gt;
If you've used the official XinFin snapshot, then you'll use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo tar -xvzf xdcchain.tar
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;&lt;strong&gt;Delete any pre-existing chain database files on &lt;em&gt;XDC-Client2&lt;/em&gt;:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo rm -rf xdcchain/XDC
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;&lt;strong&gt;Move the new chain snapshot files to the correct location:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mv XDC xdcchain/XDC
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;&lt;strong&gt;Delete the new coinbase.txt and keystore files in &lt;em&gt;XDC-Client2&lt;/em&gt;:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo rm -rf xdcchain/coinbase.txt
sudo rm -rf xdcchain/keystore/UTC*
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;&lt;strong&gt;Copy your coinbase.txt from &lt;em&gt;VPS1&lt;/em&gt; to &lt;em&gt;VPS2&lt;/em&gt;:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo scp username1@IPaddress1:~/XinFin-Node/xdcchain/coinbase.txt xdcchain/coinbase.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;&lt;strong&gt;Copy your Keystore file from &lt;em&gt;VPS1&lt;/em&gt; to &lt;em&gt;VPS2&lt;/em&gt;:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo scp username1@IPaddress1:~/XinFin-Node/xdcchain/keystore/UTC* xdcchain/keystore/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
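&lt;p&gt;After the copy, it's worth confirming that exactly one keystore file arrived. This is a hypothetical check, assuming the same xdcchain/keystore path used in the commands above:&lt;/p&gt;

```shell
# Count UTC* keystore files in the new client's keystore directory.
# Exactly one is expected after the scp above; 0 means the copy failed,
# more than one means stray keystore files came across too.
count=$(ls xdcchain/keystore/UTC* 2>/dev/null | wc -l)
echo "keystore files found: $count"
```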






&lt;p&gt;&lt;strong&gt;Copy your .env file from &lt;em&gt;VPS1&lt;/em&gt; to your HOME directory on &lt;em&gt;VPS2&lt;/em&gt;:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo scp username1@IPaddress1:~/XinFin-Node/.env ~
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;&lt;strong&gt;Open the .env file you just copied from &lt;em&gt;XDC-Client1&lt;/em&gt; in nano:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano ~/.env
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;&lt;strong&gt;Take note of the name of your node and the contact email address you used&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;Type them into a notepad if you need to.&lt;/p&gt;

&lt;p&gt;Then close nano by using:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Ctrl+X&lt;/p&gt;
&lt;/blockquote&gt;




&lt;p&gt;&lt;strong&gt;Open the NEW .env for &lt;em&gt;XDC-Client2&lt;/em&gt; in nano (we don't need to specify a path as we're already in the ~/XinFin-Node/mainnet/ directory):&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano .env
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Put the name of your node into the INSTANCE_NAME field.&lt;br&gt;
Put the email address you used into the CONTACT_DETAILS field.&lt;br&gt;
Ignore the other fields in the file (including the Private Key field).&lt;/p&gt;



&lt;p&gt;&lt;strong&gt;Then close and save the .env file by using:&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Ctrl+X&lt;br&gt;
Y&lt;br&gt;
Press Enter&lt;/p&gt;
&lt;/blockquote&gt;



&lt;p&gt;&lt;strong&gt;Delete the old .env file from your HOME directory:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo rm -rf ~/.env
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;&lt;strong&gt;Start &lt;em&gt;XDC-Client2&lt;/em&gt;:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo bash ./docker-up.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;Wait 10 minutes and then check your node's status on &lt;a href="https://xinfin.network"&gt;https://xinfin.network&lt;/a&gt; or &lt;a href="https://stats.xdc.org"&gt;https://stats.xdc.org&lt;/a&gt;&lt;/p&gt;
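&lt;p&gt;You can also query the client locally over JSON-RPC to see whether it is still syncing. This is a sketch and assumes RPC is exposed on the default localhost port 8545; adjust if your port mappings differ:&lt;/p&gt;

```shell
# Build a standard eth_syncing JSON-RPC request. When the node is fully
# synced this call returns "result":false; while syncing it returns the
# current/highest block numbers.
payload='{"jsonrpc":"2.0","method":"eth_syncing","params":[],"id":1}'
echo "$payload"
# Send it to the node once the client is running (uncomment to use):
# curl -s -X POST -H 'Content-Type: application/json' -d "$payload" http://localhost:8545
```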

&lt;p&gt;If you are operating a masternode (validator or standby node), you should also check your node is showing properly on &lt;a href="https://master.xinfin.network"&gt;https://master.xinfin.network&lt;/a&gt; and isn't slashed. If your node is showing as slashed or isn't showing at all on the site then there may be another issue that you'll need to troubleshoot.&lt;/p&gt;

&lt;p&gt;If you are operating a masternode (validator or standby node), you should also check that your rewards continue to arrive in the correct wallet at the expected intervals. If you are not receiving your expected rewards then there may be another issue that you'll need to troubleshoot.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If all looks fine then congratulations, you have now migrated your node!&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;Once you are happy that your node migration has been successful, and if you want to free up a few hundred GB of drive space on &lt;em&gt;VPS2&lt;/em&gt; to allow for growth of the chain, you can delete the chain snapshot file you downloaded by using whichever of the following commands matches the snapshot you used:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo rm -rf ~/XinFin-Node/mainnet/xdcchain.tar
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;OR:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo rm -rf ~/XinFin-Node/mainnet/xdcchain.xyz_snapshot.tar
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
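&lt;p&gt;Before and after deleting, a quick look at the archive size and the remaining drive space confirms what you're reclaiming (a sketch using the same paths as the commands above):&lt;/p&gt;

```shell
# Show the size of any snapshot archives still present, then free space
# on the filesystem holding the home directory.
du -sh ~/XinFin-Node/mainnet/*.tar 2>/dev/null || echo "no snapshot archives found"
df -h ~ | tail -n 1
```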






&lt;p&gt;In case of any technical queries on XDC Network, feel free to drop your queries on &lt;a href="https://www.xdc.dev/"&gt;XDC.Dev&lt;/a&gt; forum.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Quick links:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://xinfin.org/"&gt;XinFin.org&lt;/a&gt;&lt;br&gt;
&lt;a href="https://xinfin.org/xdc-chain-network-tools-and-documents"&gt;XDC Chain Network Tools and Documents&lt;/a&gt;&lt;br&gt;
&lt;a href="https://xdc.network/"&gt;XDC Network Explorer&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.xdc.dev/"&gt;XDC Dev Forum&lt;/a&gt;&lt;br&gt;
&lt;a href="https://betawallet.xinfin.network/"&gt;Beta — XDC Web Wallet&lt;/a&gt;&lt;br&gt;
&lt;a href="https://faucet.apothem.network/"&gt;XDC faucet&lt;/a&gt;&lt;br&gt;
&lt;a href="https://faucet.blocksscan.io/"&gt;XDC faucet - Blocksscan&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;XinFin — XDC Social Links:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://twitter.com/XinFin_Official"&gt;Twitter&lt;/a&gt;&lt;br&gt;
&lt;a href="https://github.com/XinFinorg"&gt;GitHub&lt;/a&gt;&lt;br&gt;
&lt;a href="https://t.me/xinfin"&gt;Telegram&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.facebook.com/XinFinHybridBlockchain/"&gt;Facebook&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.linkedin.com/company/xinfin/"&gt;LinkedIn&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.youtube.com/channel/UCQaL6FixEQ80RJC0B2egX6g"&gt;YouTube&lt;/a&gt;&lt;/p&gt;




</description>
    </item>
    <item>
      <title>[Informative] Upgrading XDC Clients from v1.4.4 (Old Directory Structure) to Newer v1.6.0</title>
      <dc:creator>s4njk4n</dc:creator>
      <pubDate>Sun, 19 May 2024 04:10:39 +0000</pubDate>
      <link>https://www.xdc.dev/s4njk4n/upgrading-xdc-clients-from-v144-old-directory-structure-to-newer-v160-590n</link>
      <guid>https://www.xdc.dev/s4njk4n/upgrading-xdc-clients-from-v144-old-directory-structure-to-newer-v160-590n</guid>
      <description>&lt;p&gt;&lt;em&gt;Upgrade by node MIGRATION is a safer option for masternode operators (both validators and standby nodes). New article to upgrade via Migration is &lt;a href="https://www.xdc.dev/s4njk4n/upgrade-xdc-client-v14-to-v16-by-migration-2ifh"&gt;here&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Original upgrade article is below but please keep in mind if using it that you MUST ensure you have adequate backups of your keystore file or you may risk losing masternode status, losing any associated income and losing your masternode stake. The migration method is much safer so recommended approach is to use that via the link just above instead of the information below.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Info below kept only for reference for those who really need it.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;We've had some operators request help on how to upgrade from the old v1.4.4 clients to the new v1.6.0 after unsuccessfully using &lt;em&gt;upgrade.sh&lt;/em&gt;. (The reason it doesn't work is that the directory structures in the older client versions were different and have since been updated). Information below is a quick update from memory of how we've done it for other v1.4.4 clients.&lt;/p&gt;

&lt;p&gt;Note that the steps below involve waiting for the new client to sync the chain, which still seems to take around a week even on nodes we run on Gigabit Ethernet.&lt;/p&gt;

&lt;p&gt;Clients running in a production environment will need to minimise downtime, so they will benefit from obtaining and decompressing a chain snapshot in advance so it can be copied straight into the client as soon as installation is complete. We encountered some issues when doing this (details in the instructions below) and are working on putting together a solution which should be available in the next few days.&lt;/p&gt;

&lt;p&gt;The instructions below are a quick memory-dump of the steps we used, so we definitely recommend you go through them first to make sure you understand and are happy with them before applying anything.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;TO UPDATE AN OLDER v1.4.4 XDC CLIENT to v1.6.0:&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;ssh to VPS&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;UPDATE THE OS:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt update
sudo apt upgrade
sudo apt autoremove
sudo apt clean
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;BACKUP REQUIRED FILES TO HOME DIRECTORY:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ~
sudo cp ~/XinFin-Node/.env .
sudo cp ~/XinFin-Node/xdcchain/coinbase.txt .
sudo cp ~/XinFin-Node/xdcchain/keystore/UTC* .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;SHUTDOWN THE CLIENT:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ~/XinFin-Node
sudo bash ./docker-down.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;DELETE THE v1.4.4 CLIENT:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ~
sudo rm -rf XinFin-Node
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;WARNING: THE ABOVE COMMANDS WILL DELETE ALL NODE FILES ON YOUR MACHINE, INCLUDING THE CHAIN, WHICH THEN NEEDS TO BE RESYNCED ONCE THE CLIENT IS REINSTALLED. YOU MAY BE ABLE TO BACK UP THE CHAIN FILES BUT WE HAVE NOT TRIED THIS SO CANNOT SUGGEST ANYTHING. For the v1.4.4 clients we helped with, we just deleted the whole client and let them resync the chain from scratch after installing the new v1.6.0 client, restoring coinbase.txt and the keystore file, and updating the .env details. We were unable to get the current xdcchain.tar snapshot file working, and noted it contains some extra files/folders (it is also around a 500GB download, whereas a completely synced client normally has a drive footprint of only around 360GB even including the OS). If you can get that snapshot working, it is the best option to start with. As an alternative, we are in the process of creating a new chain snapshot from one of our own v1.6.0 clients in a few days' time and plan to make download access to it available online for an XDC fee, but we suggest everyone first try the official xdcchain.tar if going down this route, as it is free.&lt;br&gt;
The official snapshot is available at:&lt;/em&gt;&lt;br&gt;
&lt;a href="https://download.xinfin.network/xdcchain.tar"&gt;https://download.xinfin.network/xdcchain.tar&lt;br&gt;
&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;INSTALL THE v1.6.0 CLIENT VIA BOOTSTRAP SCRIPT:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo su -c "bash &amp;lt;(wget -qO- https://raw.githubusercontent.com/XinFinOrg/XinFin-Node/master/setup/bootstrap.sh)" root
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;SHUTDOWN THE NEW v1.6.0 CLIENT:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ~/XinFin-Node/mainnet
sudo bash ./docker-down.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;DELETE THE NEW coinbase.txt AND keystore FILES:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ~
sudo rm -rf ~/XinFin-Node/mainnet/xdcchain/coinbase.txt
sudo rm -rf ~/XinFin-Node/mainnet/xdcchain/keystore/UTC*
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;COPY THE OLD coinbase.txt AND keystore FILES INTO THE NEW CLIENT:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo cp ~/coinbase.txt ~/XinFin-Node/mainnet/xdcchain/
sudo cp ~/UTC* ~/XinFin-Node/mainnet/xdcchain/keystore/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;MAKE SURE YOU DON'T HAVE ANY OTHER SIMILARLY-NAMED KEYSTORE FILES IN THE HOME DIRECTORY BEFORE YOU RUN THE COPY COMMAND ABOVE ON YOUR UTC* FILE, AS IT WILL COPY THEM ALL TO THE NEW CLIENT'S keystore DIRECTORY. IF YOU HAVE MORE THAN ONE FILE IN YOUR HOME DIRECTORY BEGINNING WITH "UTC", YOU WILL NEED TO PUT THE SPECIFIC FILENAME INTO THE COPY COMMAND SO ONLY THE RIGHT ONE IS COPIED INTO THE NEW CLIENT&lt;/em&gt;&lt;/p&gt;
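&lt;p&gt;A quick way to check this before running the copy (a sketch using the same home-directory location as above):&lt;/p&gt;

```shell
# List keystore backups in the home directory; exactly one UTC* file
# should appear. If more than one shows up, use the full filename in
# the cp command instead of the UTC* wildcard.
matches=$(ls ~/UTC* 2>/dev/null | wc -l)
echo "UTC* files in home directory: $matches"
```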

&lt;p&gt;&lt;strong&gt;CHECK AND MAKE NOTE OF NODE NAME AND CONTACT DETAILS FROM OLD .env FILE:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano .env
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To exit use Ctrl+X&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;OPEN THE NEW .env FILE IN THE NEW CLIENT:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano ~/XinFin-Node/mainnet/.env
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;UPDATE THE NODE NAME&lt;br&gt;
UPDATE THE CONTACT DETAILS&lt;br&gt;
IGNORE THE NEW PRIVATE KEY FIELD. APPARENTLY THERE'S NOTHING WE NEED TO DO WITH THIS&lt;br&gt;
SAVE THE FILE: CTRL+X, press Y, press Enter&lt;/p&gt;

&lt;p&gt;&lt;em&gt;NOTE YOU WILL ALSO NEED TO CONSIDER RESTORING ANY OTHER CUSTOMISATIONS (eg custom port mappings in .yml file, or customisations to your start-node.sh script for --enable-0x-prefix etc.. if you have made any modifications you will need to redo them at this point).&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;REBOOT YOUR VPS:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo reboot
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;SSH TO YOUR VPS AGAIN&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;RESTART THE NEW v1.6.0 CLIENT:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ~/XinFin-Node/mainnet
sudo bash ./docker-up.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;LOGOUT OF YOUR VPS, WAIT FOR 30-60 MINS THEN CHECK YOUR NODE'S STATUS AND CLIENT VERSION ON:&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://xinfin.network"&gt;https://xinfin.network&lt;/a&gt;&lt;br&gt;
or&lt;br&gt;
&lt;a href="https://stats.xdc.org"&gt;https://stats.xdc.org&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;YOU CAN ALSO CHECK THE CLIENT VERSION BY ATTACHING THE JAVASCRIPT CONSOLE:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ~/XinFin-Node/mainnet
sudo bash ./xdc-attach.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;To exit the console, use the "exit" command&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;IF THE NODE SHOWS AS BEING ONLINE, WITH THE CORRECT VERSION, AND IS SYNCING OK, THEN CONSIDER WHETHER YOU NEED ANY FURTHER BACKUPS OF YOUR KEYSTORE FILE. ONCE DONE WITH THAT, WE CAN DELETE THE .env , coinbase.txt , AND keystore FILES WE PLACED IN THE HOME DIRECTORY:&lt;/strong&gt;&lt;br&gt;
SSH TO THE VPS&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ~
sudo rm -rf .env
sudo rm -rf coinbase.txt
sudo rm -rf UTC*
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;MAKE SURE YOU DON'T HAVE ANY OTHER SIMILARLY-NAMED KEYSTORE FILES IN THE HOME DIRECTORY BEFORE YOU RUN THIS COMMAND, AS IT WILL DELETE EVERYTHING STARTING WITH "UTC"&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;LOGOUT OF YOUR NODE AND YOU'RE ALL DONE!&lt;/strong&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>[WIP]Nonce utility in broadcast.xinfin.network not working</title>
      <dc:creator>s4njk4n</dc:creator>
      <pubDate>Sat, 04 May 2024 05:48:00 +0000</pubDate>
      <link>https://www.xdc.dev/s4njk4n/nonce-utility-in-broadcastxinfinnetwork-not-working-5605</link>
      <guid>https://www.xdc.dev/s4njk4n/nonce-utility-in-broadcastxinfinnetwork-not-working-5605</guid>
      <description>&lt;p&gt;When going to &lt;a href="https://broadcast.xinfin.network"&gt;https://broadcast.xinfin.network&lt;/a&gt; and selecting the Nonce utility, usually we can either paste in an address or type it in manually.&lt;/p&gt;

&lt;p&gt;Pasting into the &lt;em&gt;Address&lt;/em&gt; box at present doesn't seem to work (is pasting disabled?). Tested with Firefox/Brave/Chrome.&lt;/p&gt;

&lt;p&gt;If manually typing an address into the &lt;em&gt;Address&lt;/em&gt; field, the &lt;em&gt;Type&lt;/em&gt; field then says Invalid and the &lt;em&gt;Nonce&lt;/em&gt; remains as &lt;em&gt;-&lt;/em&gt; (and no number is shown).&lt;/p&gt;

&lt;p&gt;Inspecting via the browser console shows the errors below, so it looks like part of it may be an RPC and CORS issue:&lt;/p&gt;




&lt;p&gt;Access to XMLHttpRequest at '&lt;a href="https://xinpayrpc.xinfin.network/"&gt;https://xinpayrpc.xinfin.network/&lt;/a&gt;' from origin '&lt;a href="https://broadcast.xinfin.network"&gt;https://broadcast.xinfin.network&lt;/a&gt;' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource.&lt;/p&gt;




&lt;p&gt;2.8bd6b7bd.chunk.js:1 &lt;br&gt;
 POST &lt;a href="https://xinpayrpc.xinfin.network/"&gt;https://xinpayrpc.xinfin.network/&lt;/a&gt; net::ERR_FAILED&lt;/p&gt;




&lt;p&gt;Uncaught (in promise) Error: Invalid JSON RPC response: ""&lt;br&gt;
    at Object.InvalidResponse (2.8bd6b7bd.chunk.js:1:1546114)&lt;br&gt;
    at i.onreadystatechange (2.8bd6b7bd.chunk.js:1:1726034)&lt;/p&gt;
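&lt;p&gt;In the meantime, the nonce can be fetched directly over JSON-RPC, bypassing the broken UI. This sketch assumes the public erpc.xinfin.network endpoint is reachable and uses a placeholder address (note that eth_getTransactionCount expects the 0x-prefixed form, not the xdc prefix):&lt;/p&gt;

```shell
# Build an eth_getTransactionCount request; the hex "result" field in
# the response is the account's nonce. The address below is a placeholder.
addr="0x0123456789abcdef0123456789abcdef01234567"
payload='{"jsonrpc":"2.0","method":"eth_getTransactionCount","params":["'"$addr"'","latest"],"id":1}'
echo "$payload"
# Send it (uncomment to use):
# curl -s -X POST -H 'Content-Type: application/json' -d "$payload" https://erpc.xinfin.network/
```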




&lt;p&gt;Can someone take a look and/or change the RPC being used?&lt;/p&gt;

&lt;p&gt;Thanks&lt;/p&gt;




&lt;p&gt;ADD: Interestingly, it is possible to paste 44 characters as an address (including the xdc prefix), but XDC addresses normally have only 43 characters including the xdc prefix. And if you add an extra character so pasting works, the &lt;em&gt;Address&lt;/em&gt; textbox then won't allow deleting any characters; it looks like it wants to maintain 44 characters. So the &lt;em&gt;Address&lt;/em&gt; remains incorrect, the &lt;em&gt;Type&lt;/em&gt; field remains Invalid, and the &lt;em&gt;Nonce&lt;/em&gt; field remains as -.&lt;/p&gt;
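&lt;p&gt;For reference, the expected length works out as follows: an address body is 40 hex characters, so with the 3-character xdc prefix the total is 43. A quick sketch with a placeholder address:&lt;/p&gt;

```shell
# "xdc" prefix (3 chars) + 40 hex chars = 43 characters total.
addr="xdc0123456789abcdef0123456789abcdef01234567"  # placeholder, not a real account
len=${#addr}
echo "address length: $len"   # prints: address length: 43
```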




</description>
    </item>
  </channel>
</rss>
