Can we have an option for the Subspace farmer to specify a main node and backup nodes (multiple-node option)?

From a chat on Discord, we had an interesting idea that I think would be good for Subspace. Right now, remote farmers can only connect to a single node on the same LAN. What if we gave the farmer the option to set node A, node B, and node C (2 or 3 nodes) to connect to?

  • If node A goes down, the farmer will look for node B to connect to.
  • If nodes A and B both go down, the farmer will look for node C.
  • If node A comes back up, the farmer will connect back to node A (see the sketch after this list).
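
A minimal sketch of this failover logic, assuming a hypothetical `try_connect`/`NodeClient` stand-in for the farmer's real node-RPC client (the endpoints below are made-up LAN addresses, not anything from the actual farmer):

```rust
use std::thread::sleep;
use std::time::Duration;

/// Hypothetical handle to a node connection (stand-in for the real RPC client).
struct NodeClient {
    url: &'static str,
}

/// Hypothetical connect attempt; the real farmer would dial the node's RPC here.
fn try_connect(url: &'static str) -> Option<NodeClient> {
    Some(NodeClient { url }) // stubbed out for the sketch
}

fn main() {
    // Priority-ordered endpoints: node A first, then backups B and C.
    let nodes = [
        "ws://192.168.1.10:9944",
        "ws://192.168.1.11:9944",
        "ws://192.168.1.12:9944",
    ];

    loop {
        // Find the highest-priority node that accepts a connection.
        let Some((rank, client)) = nodes
            .iter()
            .copied()
            .enumerate()
            .find_map(|(i, url)| try_connect(url).map(|c| (i, c)))
        else {
            sleep(Duration::from_secs(5)); // all nodes are down; retry shortly
            continue;
        };

        println!("farming against {}", client.url);

        // Farm until the connection drops, or until a higher-priority node
        // (e.g. node A after its update) becomes reachable again.
        loop {
            sleep(Duration::from_secs(5));
            if nodes[..rank].iter().copied().any(|url| try_connect(url).is_some()) {
                break; // a better node is back; reconnect from the top
            }
            // (a real implementation would also break here on connection loss)
        }
    }
}
```

The key design point is that the list is always scanned from the top, so node A is preferred again as soon as it answers.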

This would help farmers when a node is out of sync, or when the PC running node A needs a restart for an update, etc. It would also encourage people to run 2-3 nodes, not only for resiliency but also as a backup of the node database, just in case.

It’s also a win for Subspace, since netspace would be more stable. And even if nodes A, B, and C run over the same internet line, having them would still benefit the network’s P2P connectivity.

Thanks,

It makes sense to me overall, but as historical context, it was not designed in such a way that many farmers connect to the same node all at once. They were meant to be paired, but we know how that played out in the end :slightly_smiling_face:

I’m wondering, though, how relevant this will be if/when the farming cluster exists. Would that solve a similar issue, or do you still see value in supporting multiple nodes in the current architecture? This looks like a problem for a big-ish farmer that might be better addressed in another way.

What I love most about the farming cluster is the cache role: we can have a centralized cache for all farmers.

So we can run 2-3 nodes and 2-3 cache roles in the network for resiliency, and an unlimited number of farmers on the same LAN.
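
As a sketch of why redundant cache roles help: piece reads can simply fall through to the next cache instance, so one cache going down doesn’t stall the farmers behind it. `fetch_piece` below is a made-up stand-in for the cluster’s real piece-retrieval call (which in practice goes over NATS), not the actual API:

```rust
/// Made-up stand-in for the cluster's piece-retrieval call.
fn fetch_piece(cache_addr: &str, piece_index: u64) -> Option<Vec<u8>> {
    let _ = (cache_addr, piece_index); // stubbed out for the sketch
    None
}

/// Ask each cache instance in turn, so any single cache can be down
/// without stalling the farmers behind it.
fn get_piece(caches: &[&str], piece_index: u64) -> Option<Vec<u8>> {
    caches.iter().find_map(|cache| fetch_piece(cache, piece_index))
}

fn main() {
    // Hypothetical addresses of two cache instances on the LAN.
    let caches = ["192.168.1.20:8080", "192.168.1.21:8080"];
    match get_piece(&caches, 42) {
        Some(piece) => println!("got piece ({} bytes)", piece.len()),
        None => println!("all cache instances are unavailable"),
    }
}
```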

If this can be implemented, it will encourage big farmers to stay solo. What we’ve seen in Chia is that even multi-PiB farmers still join pools, maybe just an old habit from their ETH mining days (I heard the database there was many TB).

Regarding the nodes, what you seem to be talking about is load balancing and failover for the nodes, which is not difficult to implement and doesn’t even require changes to the current code. As for centralized caching, that is more challenging to handle unless Nazar is willing to modify some of the implementation.
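
For example, node failover without touching the farmer or the node can be done with any TCP proxy that supports backup servers (HAProxy, nginx stream, etc.). Below is a minimal sketch of the same idea, assuming tokio 1.x with the "full" feature and made-up LAN addresses; the farmer is simply pointed at the proxy instead of at a single node:

```rust
use tokio::net::{TcpListener, TcpStream};

// Nodes in priority order: A first, then backups B and C (made-up addresses).
const NODES: &[&str] = &[
    "192.168.1.10:9944",
    "192.168.1.11:9944",
    "192.168.1.12:9944",
];

#[tokio::main]
async fn main() -> std::io::Result<()> {
    // Farmers connect to this proxy instead of to a single node.
    let listener = TcpListener::bind("0.0.0.0:9944").await?;
    loop {
        let (mut inbound, _) = listener.accept().await?;
        tokio::spawn(async move {
            // Dial the nodes in priority order; the first that accepts wins.
            for node in NODES {
                if let Ok(mut outbound) = TcpStream::connect(*node).await {
                    // Shuttle bytes both ways until either side closes.
                    let _ = tokio::io::copy_bidirectional(&mut inbound, &mut outbound).await;
                    return;
                }
            }
            // All nodes are down; drop the connection and let the farmer retry.
        });
    }
}
```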

If you can make a tool/tutorial for this, it’ll be good for the community. Thanks.

Have you finished auditing the tool? It is still in the testing phase, and should any issues arise, the impact could be significant and might make it impossible to earn farming rewards. Thank you very much for your testing.