Regarding the fact that there are multiple servers in the LAN, can only

This means the network state is not very healthy and the farmer had to reconstruct some pieces instead of retrieving them directly. This is not an error as such, but it will slow down data retrieval significantly. @shamil might be something for you to look at.


BTW, make sure you're on the latest release; we addressed issues of exactly this kind in the last updates. Judging by the logs above, you might be running a pre-latest release.

My farmer should be on the latest version, gemini-3f-2023-sep-05. BTW, is there a way to collect all the data for the farmer, like what we did with subspace-node --xxx archive?

I mean, I could set up one farmer with the full data size, then set up more farmers connecting to it locally.

Something confuses me a lot: does the farmer collect data from other farmers, or from the connected node? I have a fully archived node with --state-pruning archive --blocks-pruning archive, but most farmers connected to this node are still stuck here.

In Gemini 3f both node and farmer pull data from other farmers. Farmers form the DSN (Distributed Storage Network); this is possible because plots contain useful data, the history of the blockchain itself. There is no need to have archival nodes in Gemini 3f anymore unless you want to run an RPC node and be able to query all the history.
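To illustrate why a piece missing from the DSN can still be recovered from plots: segments of history are erasure-coded, so any piece can be rebuilt from enough of the other pieces in its segment. The sketch below is a toy single-parity (XOR) code, not the actual Reed-Solomon coding Subspace uses, and the function names are made up for illustration:

```python
# Toy illustration (NOT real Subspace code): a segment gains one parity
# piece that XORs all data pieces together, so any single missing piece
# can be reconstructed from the rest. Subspace uses Reed-Solomon coding,
# which tolerates many more erasures, but the recovery idea is the same.

def make_segment(data_pieces: list[bytes]) -> list[bytes]:
    """Append one parity piece that is the XOR of all data pieces."""
    parity = bytes(len(data_pieces[0]))
    for piece in data_pieces:
        parity = bytes(a ^ b for a, b in zip(parity, piece))
    return data_pieces + [parity]

def recover_piece(segment: list, missing_index: int) -> bytes:
    """Rebuild the single missing piece by XOR-ing every other piece."""
    length = len(next(p for p in segment if p is not None))
    recovered = bytes(length)
    for i, piece in enumerate(segment):
        if i != missing_index:
            recovered = bytes(a ^ b for a, b in zip(recovered, piece))
    return recovered

segment = make_segment([b"hist", b"ory!", b"data"])
lost = segment[1]
segment[1] = None              # piece 1 is missing on the network
assert recover_piece(segment, 1) == lost
```

This is also why reconstruction is slower than a direct fetch: instead of downloading one piece, the farmer must obtain and combine many pieces of the segment.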

Can we, by any chance, run a farmer with all the data, so that we can speed up data fetching locally?

By default the farmer uses 1% of its allocated space for caching purposes. You can increase it with a CLI option to store more data locally if you want, but you'll get fewer rewards because you will leave less space for plotting.
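As a back-of-the-envelope sketch of that trade-off (the 1% default comes from the reply above; the allocation sizes are made-up examples, and the helper name is hypothetical):

```python
# Sketch of the cache-vs-plot trade-off described above.
# Assumption: the cache is carved out of the space allocated to the
# farmer, so a larger cache percentage leaves less room for the
# reward-earning plots.

def split_allocation(allocated_gib: float, cache_percent: float):
    """Return (cache_gib, plot_gib) for a given cache percentage."""
    cache = allocated_gib * cache_percent / 100
    return cache, allocated_gib - cache

# Default 1% cache on a 1000 GiB allocation:
cache, plots = split_allocation(1000, 1)
print(f"cache={cache} GiB, plots={plots} GiB")   # cache=10.0 GiB, plots=990.0 GiB

# Raising the cache to hold more history locally costs plot space:
cache, plots = split_allocation(1000, 10)
print(f"cache={cache} GiB, plots={plots} GiB")   # cache=100.0 GiB, plots=900.0 GiB
```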

I want to store the full chain data on farmer A, then connect farmers B/C to farmer A locally to speed up data fetching.

This might be technically possible right now by increasing the cache size and adding the peers as reserved peers of each other on the DSN side as well, but you need to understand that eventually the blockchain will be so large that no one will be able to fit it on their machine anyway.

As far as I know, the current blockchain size is about 6 GB, right?

I’m on the Advanced CLI, gemini-3f-2023-sep-05. I have launched two farmers, both on powerful hardware, on commercial Internet plans of 200 Mbit and 500 Mbit. Both suffer from this problem. The binaries are compiled from source with optimization for the current architecture. At least on the second farmer, the 500 Mbit channel is not occupied by anything else at all. One currently has 92 plots, the other 36 (it was started with 97 plots; I reduced the number hoping that it would help).
:man_shrugging:
Now I have set the following settings on both:
--out-connections 100 --in-connections 100 --pending-in-connections 100 --pending-out-connections 100 --target-connections 400
But still:

Sep 10 19:58:07 lenovo25 subspace-farmer[800980]: 2023-09-10T16:58:07.823524Z  INFO single_disk_farm{disk_farm_index=29}: subspace_farmer_components::segment_reconstruction: Recovering missing piece... missing_piece_index=3105
Sep 10 19:58:09 lenovo25 subspace-farmer[800980]: 2023-09-10T16:58:09.533650Z  INFO single_disk_farm{disk_farm_index=7}: subspace_farmer_components::segment_reconstruction: Recovering missing piece succeeded. missing_piece_index=3360
Sep 10 19:58:09 lenovo25 subspace-farmer[800980]: 2023-09-10T16:58:09.536085Z  INFO single_disk_farm{disk_farm_index=7}: subspace_farmer_components::segment_reconstruction: Recovering missing piece... missing_piece_index=5463

And it is unknown when it will end.


In my case it helped to reduce the number of plots being plotted in parallel (5×185 → 3×185 plots).


I have the same problem: the node syncs blocks fine, but the farmer never plots. This has been going on for a long time; can you help me?


2023-09-12T07:55:51.557172Z  INFO single_disk_farm{disk_farm_index=0}: subspace_farmer_components::segment_reconstruction: Recovering missing piece succeeded. missing_piece_index=5130
2023-09-12T07:55:51.566738Z  INFO single_disk_farm{disk_farm_index=0}: subspace_farmer_components::segment_reconstruction: Recovering missing piece... missing_piece_index=5130
2023-09-12T08:10:18.459502Z  INFO single_disk_farm{disk_farm_index=0}: subspace_farmer_components::segment_reconstruction: Recovering missing piece succeeded. missing_piece_index=5130
2023-09-12T08:10:18.466318Z  INFO single_disk_farm{disk_farm_index=0}: subspace_farmer_components::segment_reconstruction: Recovering missing piece... missing_piece_index=4018


Hey @tight_sleep, you may want to update to the latest version if you are not already on the sept-11 release; it has some fixes for larger plots such as yours, and you may have better luck farming.

As for the recovery of missing pieces, the warning is expected overall; we just want to ensure it's making progress and not stuck or fully erroring out.

Yes, I’m using the September 11th version and still have this problem. I’m struggling to figure out how to deal with it.

md5sum farmer
09fef9b487f231a4dba39dd5da9ba796  farmer

Are you saying that if we leave it to itself, it will one day finish plotting successfully?

I still can’t start plotting on one farmer. I have already left just one 278 GB plot on each of the four NVMe drives, but it's useless: only segment_reconstruction: Recovering missing piece... missing_piece_index=X

Is it too high for you to continue?

I’m sorry, I didn’t understand the question. :man_shrugging: