Node stuck in "Waiting for farmer to receive"

My node is stuck in “Waiting for farmer to receive” for hours:

Node Log:

2022-09-19 19:18:15 [PrimaryChain] ⚙️  Syncing  0.0 bps, target=#190664 (75 peers), best: #1 (0x5b02…32b6), finalized #0 (0x43d1…9101), ⬇ 1.0MiB/s ⬆ 24.1kiB/s
2022-09-19 19:18:19 [PrimaryChain] Waiting for farmer to receive and acknowledge archived segment
2022-09-19 19:18:20 [PrimaryChain] ⚙️  Syncing  0.0 bps, target=#190665 (75 peers), best: #1 (0x5b02…32b6), finalized #0 (0x43d1…9101), ⬇ 1.3MiB/s ⬆ 17.8kiB/s
2022-09-19 19:18:24 [PrimaryChain] Waiting for farmer to receive and acknowledge archived segment
2022-09-19 19:18:25 [PrimaryChain] ⚙️  Syncing  0.0 bps, target=#190666 (75 peers), best: #1 (0x5b02…32b6), finalized #0 (0x43d1…9101), ⬇ 1.3MiB/s ⬆ 20.5kiB/s
2022-09-19 19:18:29 [PrimaryChain] Waiting for farmer to receive and acknowledge archived segment
2022-09-19 19:18:30 [PrimaryChain] ⚙️  Syncing  0.0 bps, target=#190667 (75 peers), best: #1 (0x5b02…32b6), finalized #0 (0x43d1…9101), ⬇ 1.1MiB/s ⬆ 28.0kiB/s
2022-09-19 19:18:34 [PrimaryChain] Waiting for farmer to receive and acknowledge archived segment
2022-09-19 19:18:35 [PrimaryChain] ⚙️  Syncing  0.0 bps, target=#190667 (75 peers), best: #1 (0x5b02…32b6), finalized #0 (0x43d1…9101), ⬇ 1.2MiB/s ⬆ 26.5kiB/s
2022-09-19 19:18:39 [PrimaryChain] Waiting for farmer to receive and acknowledge archived segment
2022-09-19 19:18:40 [PrimaryChain] ⚙️  Syncing  0.0 bps, target=#190668 (74 peers), best: #1 (0x5b02…32b6), finalized #0 (0x43d1…9101), ⬇ 1.3MiB/s ⬆ 18.1kiB/s
2022-09-19 19:18:44 [PrimaryChain] Waiting for farmer to receive and acknowledge archived segment
2022-09-19 19:18:45 [PrimaryChain] ⚙️  Syncing  0.0 bps, target=#190668 (75 peers), best: #1 (0x5b02…32b6), finalized #0 (0x43d1…9101), ⬇ 1.3MiB/s ⬆ 21.5kiB/s
2022-09-19 19:18:49 [PrimaryChain] Waiting for farmer to receive and acknowledge archived segment
2022-09-19 19:18:50 [PrimaryChain] ⚙️  Syncing  0.0 bps, target=#190668 (75 peers), best: #1 (0x5b02…32b6), finalized #0 (0x43d1…9101), ⬇ 1.1MiB/s ⬆ 21.4kiB/s
2022-09-19 19:18:54 [PrimaryChain] Waiting for farmer to receive and acknowledge archived segment
2022-09-19 19:18:55 [PrimaryChain] ⚙️  Syncing  0.0 bps, target=#190669 (75 peers), best: #1 (0x5b02…32b6), finalized #0 (0x43d1…9101), ⬇ 1.3MiB/s ⬆ 17.6kiB/s
2022-09-19 19:18:59 [PrimaryChain] Waiting for farmer to receive and acknowledge archived segment

Farmer Log:

2022-09-19T18:34:18.151042Z  INFO subspace_farmer::utils: Increase file limit from soft to hard (limit is 1048576)
2022-09-19T18:34:18.164602Z  INFO subspace_farmer::commands::farm: Connecting to node at ws://node-g1:40340
2022-09-19T18:34:18.165031Z  INFO subspace_farmer::commands::farm: Relay listening on /memory/1432192818473954409/p2p/12D3L7AUyxEp579UboTGwoEXRDMTEuTRHWqNa8qckBeKCKgY7E5D
2022-09-19T18:34:18.169559Z  INFO subspace_farmer::commands::farm: Relay listening on /ip4/172.22.0.11/tcp/40340/p2p/12D3L7AUyxEp579UboTGwoEXRDMTEuTRHWqNa8qckBeKCKgY7E5D
2022-09-19T18:34:18.169667Z  INFO subspace_farmer::commands::farm: Relay listening on /ip4/127.0.0.1/tcp/40340/p2p/12D3L7AUyxEp579UboTGwoEXRDMTEuTRHWqNa8qckBeKCKgY7E5D
2022-09-19T18:34:18.172714Z  INFO jsonrpsee_client_transport::ws: Connection established to target: Target { sockaddrs: [], host: "node-g1", host_header: "node-g1:40340", _mode: Plain, path_and_query: "/" }
2022-09-19T18:34:18.175893Z  INFO jsonrpsee_client_transport::ws: Connection established to target: Target { sockaddrs: [], host: "node-g1", host_header: "node-g1:40340", _mode: Plain, path_and_query: "/" }
2022-09-19T18:34:18.279530Z  INFO single_plot_farm{single_plot_farm_id=01GDBGF56P80T6VQ8NGGX0SPW1}: subspace_farmer::single_plot_farm: Opening plot
2022-09-19T18:34:18.283513Z  INFO single_plot_farm{single_plot_farm_id=01GDBGF56P57F0R89WNJN7MPK7}: subspace_farmer::single_plot_farm: Opening plot
2022-09-19T18:34:18.285758Z  INFO single_plot_farm{single_plot_farm_id=01GDBGF56P454TC75CAK05CQZ2}: subspace_farmer::single_plot_farm: Opening plot
2022-09-19T18:34:18.287344Z  INFO single_plot_farm{single_plot_farm_id=01GDBGF56P5RW07YP7DVSS37XS}: subspace_farmer::single_plot_farm: Opening plot
2022-09-19T18:34:18.287707Z  INFO single_plot_farm{single_plot_farm_id=01GDBGF56PN38KK84MF4KY5ZN3}: subspace_farmer::single_plot_farm: Opening plot
2022-09-19T18:34:18.288084Z  INFO single_plot_farm{single_plot_farm_id=01GDBGF56P0E7MYSV12EDHWWFE}: subspace_farmer::single_plot_farm: Opening plot
2022-09-19T18:34:18.291413Z  INFO single_plot_farm{single_plot_farm_id=01GDBGF56PTRFEWPC6ABESW4QY}: subspace_farmer::single_plot_farm: Opening plot
2022-09-19T18:34:18.296098Z  INFO single_plot_farm{single_plot_farm_id=01GDBGF56PPQ01AQR1S79XJYWB}: subspace_farmer::single_plot_farm: Opening plot
2022-09-19T18:34:18.343940Z  INFO single_plot_farm{single_plot_farm_id=01GDBGF56PSGQV2Z4WSCJBM09Q}: subspace_farmer::single_plot_farm: Opening plot

It looks like the farmer is stuck trying to open a plot.

What plot size are you trying to use, and how much available space do you have on the disk you are trying to plot to?

The plot size is 100G. The SSD has more than 2T of available space. After trying for almost a day, and right after I posted this query here, the node started moving, albeit very slowly.

Is your node fully synced atm?

If it is, and the problem worked itself out, keep an eye out for the same thing happening again.

Try to note the environment when it happens (e.g. computer asleep, whether the farmer and node files were up to date, or anything else relevant).

Also, if you could, please fill out this info:

Environment

  • Operating System: e.g. Windows 10, Ubuntu 22.04, MacOS, Raspbian
  • CPU Architecture: e.g. x86/x64, Mac M2
  • RAM: e.g. 16GiB
  • Storage: e.g. NVMe, SSD, HDD (7200), Hardware RAID 1
  • Plot Size: e.g. 100G, 10T
  • Subspace Deployment Method: e.g. Windows Desktop, Docker Compose, pre-built CLI, self-built CLI

In the past, one instance of this was caused by a lack of RAM.

No, the node is far from fully synced. The current status is:

2022-09-19 22:40:52 [PrimaryChain] ⚙️  Syncing 0.2 bps, target=#192478 (75 peers), best: #553 (0x8cfe…97f0), finalized #452 (0x14e8…06cd), ⬇ 146.8kiB/s ⬆ 3.2kiB/s
2022-09-19 22:40:57 [PrimaryChain] ⚙️  Syncing 0.2 bps, target=#192478 (75 peers), best: #554 (0xb942…4fcb), finalized #453 (0xa7bc…b4c3), ⬇ 97.1kiB/s ⬆ 6.3kiB/s
2022-09-19 22:41:02 [PrimaryChain] ⚙️  Syncing 0.4 bps, target=#192478 (75 peers), best: #556 (0x9289…79be), finalized #455 (0x7cd1…6600), ⬇ 200.9kiB/s ⬆ 8.3kiB/s
2022-09-19 22:41:07 [PrimaryChain] ⚙️  Syncing 0.2 bps, target=#192478 (75 peers), best: #557 (0x0712…ace8), finalized #456 (0xb048…d2c7), ⬇ 52.8kiB/s ⬆ 4.2kiB/s
2022-09-19 22:41:12 [PrimaryChain] ⚙️  Syncing 0.2 bps, target=#192478 (75 peers), best: #558 (0xfe0e…b2fc), finalized #457 (0x9f5b…3847), ⬇ 120.4kiB/s ⬆ 4.2kiB/s
2022-09-19 22:41:17 [PrimaryChain] ⚙️  Syncing 0.2 bps, target=#192402 (75 peers), best: #559 (0x2693…f784), finalized #458 (0x35c7…7db2), ⬇ 180.2kiB/s ⬆ 3.2kiB/s
2022-09-19 22:41:22 [PrimaryChain] ⚙️  Syncing 0.2 bps, target=#192478 (75 peers), best: #560 (0xa906…32ef), finalized #460 (0x5d6b…b1f1), ⬇ 149.9kiB/s ⬆ 4.6kiB/s
2022-09-19 22:41:27 [PrimaryChain] ⚙️  Syncing 0.4 bps, target=#192450 (74 peers), best: #562 (0xc788…3664), finalized #461 (0xfa6a…74d9), ⬇ 156.6kiB/s ⬆ 5.0kiB/s
2022-09-19 22:41:32 [PrimaryChain] ⚙️  Syncing 0.2 bps, target=#192336 (75 peers), best: #563 (0x090d…1e63), finalized #462 (0x6cee…bbfa), ⬇ 146.4kiB/s ⬆ 5.2kiB/s

The no-sync problem persisted for almost a day. Is there a way to expedite the sync? I want to sync a few more nodes, but I am holding off until this node fully syncs before starting a new one.

  • Operating System: Windows 10 (10.0.1.19403)
  • CPU Architecture: x64 - E5-2680V2 X 2
  • RAM: 112GB
  • Storage: NVMe (2TB available)
  • Plot Size: 100G
  • Subspace Deployment Method: Docker Compose

We have only observed what you saw in your first message with a really weak network connection. But since your node is syncing now, just let it be; after ~3000 blocks it'll accelerate, and you will be able to sync other nodes from it afterwards.

Got it, I will keep an eye on it. I have a gigabit up/down network, so the network connection should not be a problem. I was able to sync one node using the CLI and one with Docker without a problem. The problem started when I tried to add more nodes using Docker. Thank you!

I am at block 55K and the sync is moving at 10-15 blocks/minute. At this rate there is no way I can catch up with the chain. Is there any parameter I can use to speed up the process? @ALPHANOMICON

You can try adjusting the number of peer connections, but bear in mind this can bog your network down at the upper end.

The flags to do this are:
--in-peers [number]
--out-peers [number]

You only need to add these to your node, not the farmer.
Ideally this will allow your system to connect to more peers and sync better. I believe the defaults are around 50-75, to give you an idea of where to start.
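For a Docker Compose deployment like the one used in this thread, the flags would go on the node service's `command`. A minimal sketch; the service name, image tag, and everything other than the two peer flags are placeholder assumptions, not taken from the actual compose file:

```yaml
# Illustrative docker-compose.yml fragment (assumed layout).
# Only the --in-peers/--out-peers flags are the point here;
# service name and image are placeholders.
services:
  node:
    image: node-image:latest   # placeholder image
    command: [
      "--in-peers", "100",
      "--out-peers", "100"
    ]
```

After editing the file, recreating the node container (e.g. `docker compose up -d`) picks up the new flags; the farmer service does not need to change.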

Let me know if this helps. Additionally, may I ask approximately what location you are syncing from? Geolocation can have an effect on sync speeds in a peer-to-peer network.

I will try out your suggestion.

I have noticed some acceleration between 10K-60K height, but the sync slows down above 60K. I have seen this happen with three of my nodes in the last couple of days. I am in the northeast of the USA, and I can see plenty of nodes in my vicinity.

So, I changed the in/out peers to 100 for a node which is currently at 2K height. The total number of peers initially increased to 180-190, but it has now dropped to 70-90. Does the drop signify a problem?

I see the sync speed increase momentarily around 8K blocks before it drops back to 0.2-0.3 bps. I do not see any log message that can help debug the issue. Any pointers on where else I can look would be very helpful.