Developers Forum for XinFin XDC Network

11ppm

Repeated Error When Setting Up XDC Node

I am attempting to set up an XDC node on a server that was hard reset, but I keep encountering the error below. I have tried various fixes without success, and I saw the same result on two different servers. Others appear to be hitting the same error as well, so I would like the XDC team to look into this issue.

ERROR[01-14|03:10:09] Failed to retrieve block author          err="recovery failed"                                                                number=0 hash=4a9d74…42d6b1
ERROR[01-14|03:10:24] Failed to retrieve block author          err="recovery failed"                                                                number=0 hash=4a9d74…42d6b1
WARN [01-14|03:10:44] Full stats report failed                 err="ping timed out"
WARN [01-14|03:10:44] Failed to retrieve stats server message  err="read tcp 172.18.0.2:39586->45.82.64.150:3000: use of closed network connection"
WARN [01-14|03:10:54] Stats server unreachable                 err="dial tcp 45.82.64.150:3000: i/o timeout"
WARN [01-14|03:11:10] Stats server unreachable                 err="read tcp 172.18.0.2:34086->45.82.64.150:3000: i/o timeout"
WARN [01-14|03:11:30] Stats server unreachable                 err="read tcp 172.18.0.2:37066->45.82.64.150:3000: i/o timeout"
ERROR[01-14|03:11:49] Failed to retrieve block author          err="recovery failed"                                                                number=0 hash=4a9d74…42d6b1
WARN [01-14|03:12:09] Full stats report failed                 err="ping timed out"
WARN [01-14|03:12:09] Failed to retrieve stats server message  err="read tcp 172.18.0.2:36694->45.82.64.150:3000: use of closed network connection"
WARN [01-14|03:12:19] Stats server unreachable                 err="read tcp 172.18.0.2:43042->45.82.64.150:3000: i/o timeout"
ERROR[01-14|03:12:41] Failed to retrieve block author          err="recovery failed"                                                                number=0 hash=4a9d74…42d6b1
WARN [01-14|03:13:01] Full stats report failed                 err="ping timed out"
WARN [01-14|03:13:01] Failed to retrieve stats server message  err="read tcp 172.18.0.2:58750->45.82.64.150:3000: use of closed network connection"
WARN [01-14|03:13:11] Stats server unreachable                 err="read tcp 172.18.0.2:35534->45.82.64.150:3000: i/o timeout"
WARN [01-14|03:13:31] Stats server unreachable                 err="read tcp 172.18.0.2:35760->45.82.64.150:3000: i/o timeout"
WARN [01-14|03:13:51] Stats server unreachable                 err="read tcp 172.18.0.2:52106->45.82.64.150:3000: i/o timeout"
WARN [01-14|03:14:11] Stats server unreachable                 err="read tcp 172.18.0.2:53050->45.82.64.150:3000: i/o timeout"
WARN [01-14|03:14:31] Stats server unreachable                 err="read tcp 172.18.0.2:58428->45.82.64.150:3000: i/o timeout"
WARN [01-14|03:14:51] Stats server unreachable                 err="read tcp 172.18.0.2:54984->45.82.64.150:3000: i/o timeout"
WARN [01-14|03:15:11] Stats server unreachable                 err="read tcp 172.18.0.2:36082->45.82.64.150:3000: i/o timeout"
WARN [01-14|03:15:31] Stats server unreachable                 err="dial tcp 45.82.64.150:3000: i/o timeout"
WARN [01-14|03:15:50] Stats server unreachable                 err="read tcp 172.18.0.2:38446->45.82.64.150:3000: i/o timeout"
WARN [01-14|03:16:10] Stats server unreachable                 err="read tcp 172.18.0.2:49742->45.82.64.150:3000: i/o timeout"
WARN [01-14|03:16:27] Stats server unreachable                 err="read tcp 172.18.0.2:39772->45.82.64.150:3000: i/o timeout"
WARN [01-14|03:16:47] Stats server unreachable                 err="read tcp 172.18.0.2:57710->45.82.64.150:3000: i/o timeout"
WARN [01-14|03:17:07] Stats server unreachable                 err="read tcp 172.18.0.2:38510->45.82.64.150:3000: i/o timeout"

Discussion (8)

Daniel Liu (gzliudan):

We will look into this issue. Which docker image and network are you using?

11ppm (author):

I'm using the docker image xinfinorg/xdposchain:v2.4.0 and the network is Mainnet (mainnet_xinfinnetwork_1).

Doraemon@XDC:~$ docker ps
CONTAINER ID   IMAGE                         COMMAND                 CREATED          STATUS          PORTS                                                                                                                                               NAMES
6a68b6f81ad5   xinfinorg/xdposchain:v2.4.0   "bash /work/entry.sh"   15 minutes ago   Up 15 minutes   8555/tcp, 0.0.0.0:30303->30303/tcp, :::30303->30303/tcp, 0.0.0.0:8989->8545/tcp, [::]:8989->8545/tcp, 0.0.0.0:8888->8546/tcp, [::]:8888->8546/tcp   mainnet_xinfinnetwork_1
Doraemon@XDC:~$ docker logs -f mainnet_xinfinnetwork_1

Daniel Liu (gzliudan):

Here is a solution: you can set up a node from a snapshot file, then sync it with mainnet. The snapshot links:
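
For reference, once a snapshot link is chosen, the restore might look roughly like the following. This is a sketch only, assuming the default XinFin-Node docker-compose layout; <SNAPSHOT_URL> and the xdcchain data directory are placeholders to adjust for your checkout:

# Sketch only: <SNAPSHOT_URL> and the xdcchain path are placeholders --
# substitute the real link and your node's data directory.
cd XinFin-Node/mainnet
docker-compose down                      # stop the node before touching its data
wget <SNAPSHOT_URL> -O snapshot.tar      # download the snapshot archive
tar -xf snapshot.tar -C ./xdcchain       # unpack over the chaindata
docker-compose up -d                     # restart and let the node catch up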

11ppm (author):

Thank you for your reply. I have already tried using snapshots several times, but the number of peers did not increase, so I decided to start from scratch; even then, the peer count did not grow. Today I performed another hard reset and started from the snapshot again, and I will monitor the situation for 24 hours to see whether the number of peers increases.
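
One quick way to watch the peer count is the node's JSON-RPC endpoint, which the docker ps output above shows mapped to host port 8989 (container port 8545). Assuming the standard net API is enabled, a stock net_peerCount call should work:

# Query peer count over JSON-RPC (port 8989 -> 8545 per `docker ps` above)
curl -s -X POST -H 'Content-Type: application/json' \
  --data '{"jsonrpc":"2.0","method":"net_peerCount","params":[],"id":1}' \
  http://localhost:8989
# e.g. {"jsonrpc":"2.0","id":1,"result":"0x2"}   # 0x2 = 2 peers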

Daniel Liu (gzliudan):

I tested it again yesterday. It takes a few minutes to connect to other nodes, and then the node syncs with mainnet.
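
To confirm the node is actually syncing once peers appear, the standard eth_syncing call against the same mapped RPC port should help; it reports progress while catching up and returns false once fully synced:

# Check sync status over JSON-RPC (same mapped port as above)
curl -s -X POST -H 'Content-Type: application/json' \
  --data '{"jsonrpc":"2.0","method":"eth_syncing","params":[],"id":1}' \
  http://localhost:8989
# while syncing:  "result":{"currentBlock":"0x...","highestBlock":"0x...",...}
# when caught up: {"jsonrpc":"2.0","id":1,"result":false}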

Daniel Liu (gzliudan):

We updated the bootnodes for mainnet just now. Please run git pull to update https://github.com/XinFinOrg/XinFin-Node, then sync from genesis or from a snapshot file. BTW: the snapshot file is no longer needed now.
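
A plausible update sequence, again assuming the stock docker-compose layout (the directory names are assumptions; adjust to your checkout):

cd XinFin-Node
git pull                                 # pick up the refreshed bootnodes.list
cd mainnet
docker-compose down                      # stop the running node
docker-compose up -d                     # restart with the new bootnodes
docker logs -f mainnet_xinfinnetwork_1   # watch for peer connections and sync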

11ppm (author):

I have updated bootnodes.list. Although there are only 1-2 peers, synchronization has completed successfully. Thank you very much.
