Farmer Node stops every time it syncs to the top... Please help!

Every time my Node is synced to the top, the Farmer Node exits with the message “Salt updated, recommitting in background new_salt=0000000000000000”. This causes the problem of having to restart the Farmer Node every few hours so it can sync back up to the top. Any solution to this would be very much appreciated. Thank you!


@nazar-pc Looks like the farmer is crashing on salt recommitments; some other users have reported this issue as well.


That message is not an exit message; it must have been logged long before the exit.

Can you attach the actual full logs of the farmer as text, ideally including the command you used to start the farmer?

Thanks for getting back to me, Nazar! Here is the farmer log, which contains the command I used to start the farmer.

I’ve got this issue as well. In my case the farmer is launched with the default arguments, except for reward-address and plot-size = 500G.

It usually runs for 15–40 minutes. The last line in the log will be a new_salt message. But after a relaunch it picks up a few segments ahead, so something is probably being done silently.
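One way to check how far plotting got before a restart is to pull the last “Plotted segment” line out of the log. A rough sketch, assuming you redirect the farmer’s output to a file; an inline sample log is used here so it runs as-is:

```shell
# Hypothetical sample standing in for a real farmer log file
cat > /tmp/farmer-sample.log <<'EOF'
INFO subspace_farmer::archiving: Plotted segment segment_index=103721
INFO subspace_farmer::archiving: Plotted segment segment_index=103722
EOF

# Last plotted segment before the crash/restart
grep 'Plotted segment' /tmp/farmer-sample.log | tail -n 1
```

Running the same grep against logs from before and after a relaunch makes the gap in segment indexes easy to spot.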

2022-06-07T03:17:10.577700Z  INFO subspace_farmer::commands::farm: WS RPC server listening on 0.0.0.0:9955
2022-06-07T03:17:10.577785Z  INFO subspace_farmer::multi_farming: Sync done
2022-06-07T03:18:19.561572Z  INFO subspace_farmer::archiving: Plotted segment segment_index=103710
...
2022-06-07T03:38:22.879392Z  INFO subspace_farmer::archiving: Plotted segment segment_index=103721
2022-06-07T03:40:36.859632Z  INFO subspace_farmer::archiving: Plotted segment segment_index=103722
2022-06-07T03:42:42.002038Z  INFO subspace_farmer::farming: Salt updated, recommitting in background new_salt=0000000000000000
2022-06-07T03:42:42.002102Z  INFO subspace_farmer::farming: Salt updated, recommitting in background new_salt=0000000000000000
2022-06-07T03:42:42.002065Z  INFO subspace_farmer::farming: Salt updated, recommitting in background new_salt=0000000000000000
2022-06-07T03:42:42.002066Z  INFO subspace_farmer::farming: Salt updated, recommitting in background new_salt=0000000000000000
[RESTARTED HERE]
2022-06-07T03:47:21.873031Z  INFO subspace_farmer::utils: Increase file limit from soft to hard (limit is 1048576)
2022-06-07T03:47:21.873066Z  INFO subspace_farmer::commands::farm: Connecting to node at ws://node:9944
2022-06-07T03:47:21.874503Z  INFO jsonrpsee_client_transport::ws: Connection established to target: Target { sockaddrs: [], host: "node", host_header: "node:9944", _mode: Plain, path_and_query: "/" }
2022-06-07T03:47:21.875355Z  INFO subspace_farmer::commands::farm: Opening object mapping
2022-06-07T03:51:58.134595Z  INFO subspace_farmer::multi_farming: Opening plot
2022-06-07T03:51:58.134600Z  INFO subspace_farmer::multi_farming: Opening plot
2022-06-07T03:51:58.137457Z  INFO subspace_farmer::multi_farming: Opening plot
2022-06-07T03:51:58.137842Z  INFO subspace_farmer::multi_farming: Opening plot
2022-06-07T03:52:14.171838Z  INFO subspace_farmer::multi_farming: Opening commitments
2022-06-07T03:52:15.347986Z  INFO subspace_farmer::multi_farming: Opening commitments
2022-06-07T03:52:15.415417Z  INFO subspace_farmer::multi_farming: Opening commitments
2022-06-07T03:52:22.462378Z  INFO subspace_farmer::farming: Subscribing to slot info notifications
2022-06-07T03:52:22.837577Z  INFO subspace_farmer::farming: Subscribing to slot info notifications
2022-06-07T03:52:22.838001Z  INFO subspace_farmer::farming: Subscribing to slot info notifications
2022-06-07T03:52:23.002371Z  INFO subspace_farmer::farming: Salt updated, recommitting in background new_salt=0000000000000000
2022-06-07T03:52:23.002431Z  INFO subspace_farmer::farming: Salt updated, recommitting in background new_salt=0000000000000000
2022-06-07T03:52:23.002491Z  INFO subspace_farmer::farming: Salt updated, recommitting in background new_salt=0000000000000000
2022-06-07T03:52:39.516710Z  INFO subspace_farmer::multi_farming: Opening commitments
2022-06-07T03:52:48.205340Z  INFO subspace_farmer::farming: Subscribing to slot info notifications
2022-06-07T03:52:48.208110Z  INFO subspace_farmer::archiving: Subscribing to archived segments
2022-06-07T03:52:48.208335Z  INFO subspace_farmer::multi_farming: Sync done
2022-06-07T03:52:48.208355Z  INFO subspace_farmer::commands::farm: WS RPC server listening on 0.0.0.0:9955
2022-06-07T03:52:52.002258Z  INFO subspace_farmer::farming: Salt updated, recommitting in background new_salt=0000000000000000
2022-06-07T03:57:32.870047Z  INFO subspace_farmer::utils: Increase file limit from soft to hard (limit is 1048576)
2022-06-07T03:57:32.870073Z  INFO subspace_farmer::commands::farm: Connecting to node at ws://node:9944
2022-06-07T03:57:32.871302Z  INFO jsonrpsee_client_transport::ws: Connection established to target: Target { sockaddrs: [], host: "node", host_header: "node:9944", _mode: Plain, path_and_query: "/" }
2022-06-07T03:57:32.872114Z  INFO subspace_farmer::commands::farm: Opening object mapping
2022-06-07T03:57:32.991865Z  INFO subspace_farmer::multi_farming: Opening plot
2022-06-07T03:57:32.991897Z  INFO subspace_farmer::multi_farming: Opening plot
2022-06-07T03:57:32.991922Z  INFO subspace_farmer::multi_farming: Opening plot
2022-06-07T03:57:32.991937Z  INFO subspace_farmer::multi_farming: Opening plot
2022-06-07T03:57:33.436915Z  INFO subspace_farmer::multi_farming: Opening commitments
2022-06-07T03:57:33.439716Z  INFO subspace_farmer::multi_farming: Opening commitments
2022-06-07T03:57:33.463223Z  INFO subspace_farmer::multi_farming: Opening commitments
2022-06-07T03:57:33.483446Z  INFO subspace_farmer::multi_farming: Opening commitments
2022-06-07T03:57:33.513562Z  INFO subspace_farmer::farming: Subscribing to slot info notifications
2022-06-07T03:57:34.291848Z  INFO subspace_farmer::farming: Subscribing to slot info notifications
2022-06-07T03:57:34.333394Z  INFO subspace_farmer::farming: Subscribing to slot info notifications
2022-06-07T03:57:34.336195Z  INFO subspace_farmer::farming: Subscribing to slot info notifications
2022-06-07T03:57:34.338604Z  INFO subspace_farmer::archiving: Subscribing to archived segments
2022-06-07T03:57:34.338812Z  INFO subspace_farmer::multi_farming: Sync done
2022-06-07T03:57:34.338846Z  INFO subspace_farmer::commands::farm: WS RPC server listening on 0.0.0.0:9955
2022-06-07T03:58:06.590718Z  INFO subspace_farmer::archiving: Plotted segment segment_index=103725
...

Somehow it gets from 103722 to 103725. Whether the intermediate segments were silently plotted or skipped, I don’t know.

If it’s not restarted, the farmer’s RAM usage starts to climb linearly, consumes all 32 GB of my system, and starts swapping. Then eventually:

kernel: [24093.163210] Out of memory: Kill process 11598 (subspace-farmer) score 619 or sacrifice child
kernel: [24093.171658] Killed process 11598 (subspace-farmer) total-vm:36731164kB, anon-rss:21247844kB, file-rss:0kB
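For anyone else hitting this: the OOM killer leaves those records in the kernel log, so you can confirm an out-of-memory kill after the fact. On a live box the command would be `dmesg | grep -i 'out of memory'` (or `journalctl -k`); the sketch below runs the same filter over an inline copy of the lines above so it is self-contained:

```shell
# Inline stand-in for real `dmesg` output
cat > /tmp/kern-sample.log <<'EOF'
kernel: [24093.163210] Out of memory: Kill process 11598 (subspace-farmer) score 619 or sacrifice child
kernel: [24093.171658] Killed process 11598 (subspace-farmer) total-vm:36731164kB, anon-rss:21247844kB, file-rss:0kB
EOF

# Count OOM events (same filter you would apply to real `dmesg` output)
grep -ci 'out of memory' /tmp/kern-sample.log   # → 1
```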

Text would have been nicer than a screenshot; that way people would be able to find it by searching :confused:

I see you stopped the farmer most of the time, and once it was killed. That might mean, for instance, that you don’t have enough RAM on your system.

Does it help if I reduce the plot size in this case? TIA.

Shouldn’t matter too much. How much space did you allocate, and how much RAM do you have for it?

I have 4 GB of RAM and a 300 GB hard drive, and I am trying to allocate 150 GB to the plot.

Sorry, but where are the logs stored, i.e. the folder location? (I’m running via CLI on Ubuntu.) Thanks.

2022-06-07T11:07:06.001392Z INFO subspace_farmer::farming: Salt updated, recommitting in background new_salt=0000000000000000
2022-06-07T11:07:06.055176Z INFO subspace_farmer::farming: Subscribing to slot info notifications
2022-06-07T11:07:06.055236Z INFO subspace_farmer::farming: Subscribing to slot info notifications
2022-06-07T11:07:06.189603Z INFO subspace_farmer::farming: Subscribing to slot info notifications
2022-06-07T11:07:06.195149Z INFO subspace_farmer::archiving: Subscribing to archived segments
2022-06-07T11:07:06.195371Z INFO subspace_farmer::multi_farming: Sync done
2022-06-07T11:07:06.195621Z INFO subspace_farmer::commands::farm: WS RPC server listening on 127.0.0.1:9955
2022-06-07T11:07:07.001504Z INFO subspace_farmer::farming: Salt updated, recommitting in background new_salt=0000000000000000
2022-06-07T11:07:07.001646Z INFO subspace_farmer::farming: Salt updated, recommitting in background new_salt=0000000000000000
2022-06-07T11:07:07.002449Z INFO subspace_farmer::farming: Salt updated, recommitting in background new_salt=0000000000000000

Same problem on Linux.


4 GB may not be sufficient for the current iteration of the network. It will work, but the application will likely crash often and need to be restarted. Docker handles that for you, though, if you use it.

Nowhere; they’re printed to the terminal, and it is up to a tool like Docker or systemd to collect them long-term.
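If you are on Ubuntu without Docker, a systemd unit is the usual way to get both automatic restarts and log collection in one place. A minimal sketch; the binary path, flags, and address are placeholders for your own setup, and it writes under /tmp here purely for illustration:

```shell
mkdir -p /tmp/unit-demo
cat > /tmp/unit-demo/subspace-farmer.service <<'EOF'
[Unit]
Description=Subspace farmer
After=network-online.target

[Service]
# Placeholder path and arguments -- adjust to your installation
ExecStart=/usr/local/bin/subspace-farmer farm --reward-address YOUR_ADDRESS --plot-size 100G
Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF

# Copy it to /etc/systemd/system/, then:
#   systemctl daemon-reload && systemctl enable --now subspace-farmer
#   journalctl -u subspace-farmer -f    # follow the collected logs
cat /tmp/unit-demo/subspace-farmer.service
```

With `Restart=on-failure`, systemd relaunches the farmer after a crash, and journald keeps the logs so you can inspect them after the fact.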

Hi Nazar,

I can certainly give Docker a try, but please also read Jusana’s post above. She seems to be getting the same error message even when running on Docker with a lot more RAM. Is my error message different from hers?
Also, do you see any problem with allocating 70% of the hard drive to the plot? I read a couple of threads here saying that when the Node tries to sync past bad blocks, it can overuse the allocated space. Do I have enough overhead to avoid such a problem?

Thanks!

I finally saw someone’s log where the salt process completed without hanging. There is a “Salt updated, recommitting in background” message and then a “Finished recommitment” message. There are several pairs in a row, but they are always paired.

Since I got 4x “recommitting in background” messages without a finish message, I think it was stacking up background tasks: either it was too slow to finish a recommit before the next one came in, or it started a new recommit without waiting for the previous operation to complete.
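A rough way to check whether recommit tasks are piling up is to compare how many recommitments started vs finished in the log. A sketch using an inline sample; point the greps at your real farmer log instead:

```shell
# Inline stand-in for a real farmer log
cat > /tmp/salt-sample.log <<'EOF'
INFO subspace_farmer::farming: Salt updated, recommitting in background new_salt=0000000000000000
INFO subspace_farmer::farming: Salt updated, recommitting in background new_salt=0000000000000000
INFO subspace_farmer::farming: Finished recommitment salt=0000000000000000 took_seconds=12.5
EOF

started=$(grep -c 'recommitting in background' /tmp/salt-sample.log)
finished=$(grep -c 'Finished recommitment' /tmp/salt-sample.log)
echo "started=$started finished=$finished"   # → started=2 finished=1
```

If `started` keeps growing ahead of `finished`, recommitments are backing up rather than completing in pairs.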

Do you recall whether it is the farmer or the node that is consuming a lot of RAM?

@ivan-subspace please check this thread.

Nazar,

I currently have a situation where my Node is completely synced without the Farmer running. The Farmer stopped on the new salt error (segmentation fault), but the Node continued to sync. This is different from what I had before, where every time the Farmer stopped, the Node waited for the Farmer to be restarted and would not continue to sync higher blocks. Were there changes implemented on your side?

Thanks.

Actually, never mind what I wrote there… I was somehow able to fully sync my Node now. The Farmer was also able to load the new salt in the background, and it was signing blocks. It said that it took 560 seconds to recommit for the next salt in the background.

The farmer.
I left the node running, restarted the farmer, and the memory usage dropped from 100% to < 10%.


More information: it is definitely the farmer. But on a hunch I terminated the node. The farmer errored with:
2022-06-08T23:43:47.295575Z ERROR jsonrpsee_core::client::async_client: Error: Connection(Closed) terminating client
The memory utilization dropped from 80% (of 32 GB) to < 10%, BUT the salting process kept running. Over an hour later it completed:

2022-06-08T22:49:31.014437Z  INFO subspace_farmer::farming: Salt updated, recommitting in background new_salt=0000000000000000
2022-06-08T22:49:31.014776Z  INFO subspace_farmer::farming: Salt updated, recommitting in background new_salt=0000000000000000
2022-06-08T22:49:31.014924Z  INFO subspace_farmer::farming: Salt updated, recommitting in background new_salt=0000000000000000
2022-06-08T22:49:31.014975Z  INFO subspace_farmer::farming: Salt updated, recommitting in background new_salt=0000000000000000
2022-06-08T22:49:31.015039Z  INFO subspace_farmer::farming: Salt updated, recommitting in background new_salt=0000000000000000
2022-06-08T23:43:47.295575Z ERROR jsonrpsee_core::client::async_client: Error: Connection(Closed) terminating client
2022-06-09T00:28:57.756523Z  INFO subspace_farmer::farming: Finished recommitment salt=0000000000000000 took_seconds=5966.74169921875
2022-06-09T00:38:23.522783Z  INFO subspace_farmer::farming: Finished recommitment salt=0000000000000000 took_seconds=6532.5078125
2022-06-09T00:38:27.013587Z  INFO subspace_farmer::farming: Finished recommitment salt=0000000000000000 took_seconds=6535.9990234375
2022-06-09T00:39:15.308187Z  INFO subspace_farmer::farming: Finished recommitment salt=0000000000000000 took_seconds=6584.29345703125
2022-06-09T00:39:24.297227Z  INFO subspace_farmer::farming: Finished recommitment salt=0000000000000000 took_seconds=6593.28271484375

After relaunching the node, it printed salting messages again, but they completed 8 orders of magnitude faster:

2022-06-09T01:13:12.002473Z  INFO subspace_farmer::farming: Salt updated, recommitting in background new_salt=0000000000000000
2022-06-09T01:13:12.002472Z  INFO subspace_farmer::farming: Salt updated, recommitting in background new_salt=0000000000000000
2022-06-09T01:13:12.002506Z  INFO subspace_farmer::farming: Finished recommitment salt=0000000000000000 took_seconds=5.151000004843809e-5
2022-06-09T01:13:12.002503Z  INFO subspace_farmer::farming: Finished recommitment salt=0000000000000000 took_seconds=4.8070000048028305e-5
2022-06-09T01:13:12.002624Z  INFO subspace_farmer::farming: Salt updated, recommitting in background new_salt=0000000000000000
2022-06-09T01:13:12.002633Z  INFO subspace_farmer::farming: Finished recommitment salt=0000000000000000 took_seconds=1.047000023390865e-5
2022-06-09T01:13:12.002667Z  INFO subspace_farmer::farming: Salt updated, recommitting in background new_salt=0000000000000000
2022-06-09T01:13:12.002690Z  INFO subspace_farmer::farming: Finished recommitment salt=0000000000000000 took_seconds=3.265999839641154e-5

My working theory is that the jsonrpsee_core::client::async_client thread is waiting on the salt updates to complete and buffering something up in the meantime. Since the number of salt updates appears to scale with the number of plot files, this gets worse as capacity increases. If you don’t have enough RAM to buffer whatever is going on before the salting completes, your system runs out of memory.
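In case it helps anyone reproduce this, here is a minimal way to sample a process’s resident memory over time. The process name `subspace-farmer` is assumed; the sketch samples the current shell’s own PID so it runs anywhere:

```shell
pid=$$   # on a real system: pid=$(pidof subspace-farmer)

# Take a few RSS samples, one second apart
for i in 1 2 3; do
  rss_kb=$(ps -o rss= -p "$pid")
  echo "sample $i: rss=${rss_kb} kB"
  sleep 1
done
```

A steadily climbing RSS between salt updates would support the buffering theory.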
