With multiple servers in the LAN, can only one be used as a node?
My configuration is as follows:
192.168.100.1
192.168.100.2
192.168.100.3
192.168.100.4
192.168.100.5

I created a node at 192.168.1.1, let it synchronize, and then started plotting on 192.168.1.1.

Then I moved on to 192.168.1.2 and realized that if 192.168.1.2 also runs its own node, it will put pressure on the network.

Is there a way to have the farmers on .2, .3, .4 and .5 connect directly to the node at 192.168.1.1? The farmer seems to connect to 127.0.0.1:9944; can that be changed to 192.168.1.1:9944?


Hello @z_W,

Thank you for reaching out on the forums!

To address your query about node/farmer configurations, you can indeed set up a farmer to connect to other nodes. Here’s a step-by-step guide on how to achieve this:

  1. For your node's launch parameters, add --rpc-external and --rpc-methods unsafe.
  2. For your farmer's launch parameters, add --node-rpc-url ws://192.168.1.1:9944.
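Put together, the two steps above might look like the following. This is a sketch, not a verbatim setup: the binary names and the 192.168.1.1 address are taken from this thread, and you would keep your other usual flags alongside these.

```shell
# On the node machine (192.168.1.1): expose the RPC endpoint to the LAN.
./subspace-node \
  --chain gemini-3f \
  --rpc-external \
  --rpc-methods unsafe

# On each farmer machine (192.168.1.2-.5): point the farmer at that node
# instead of the default 127.0.0.1:9944 (add your usual plotting flags,
# such as plot location and size, as before).
./subspace-farmer farm --node-rpc-url ws://192.168.1.1:9944
```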

However, it’s crucial to highlight that even though this configuration is feasible, it’s not necessarily recommended. The primary concern is potential latency issues between the farmers. For optimal performance, the general practice is to have a dedicated node for each farmer. Nonetheless, feel free to experiment and share your findings with the community. We appreciate your insights! :pray:

For further insights, consider checking the docker compose commands. These commands are structured in a way that mirrors the connection of a node/farmer to a distinct computer. This is especially relevant since docker containers operate in isolation, making the principle consistent. Refer to the docker documentation here: Advanced CLI | Farm from Anywhere

If this answers your question, please select Mark As Solution at the bottom of my reply.

Error: Networking or low-level protocol error: Connection rejected with status code: 403

Caused by:
Connection rejected with status code: 403

In a local area network, if I use the public IP to connect, the error above appears. Why?
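One way to narrow down where the 403 comes from is to probe the node's RPC port directly from another machine. This is a diagnostic sketch (the IP and port are from this thread); it sends the WebSocket upgrade headers manually so the node's HTTP status line is visible:

```shell
# Probe the node's RPC endpoint from another machine.
# "HTTP/1.1 101 Switching Protocols" means the WebSocket handshake succeeded;
# "HTTP/1.1 403 Forbidden" means the node rejected the request before the upgrade.
curl -i -N \
  -H "Connection: Upgrade" \
  -H "Upgrade: websocket" \
  -H "Sec-WebSocket-Version: 13" \
  -H "Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==" \
  http://192.168.1.1:9944/
```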

Do you have --rpc-cors all set? Could you provide all the flags you are supplying to the node?

Oh, and do you have this set? --no-private-ipv4?

./subspace-node-ubuntu-x86_64-skylake-gemini-3f-2023-aug-31 \
--chain gemini-3f \
--execution wasm \
--blocks-pruning 256 \
--state-pruning archive \
--validator \
--name "ooplay-node" \
--in-peers 2000 \
--out-peers 2000 \
--rpc-methods unsafe \
--unsafe-rpc-external \
--rpc-external \
--rpc-cors all \
--no-private-ipv4

Yes, I can connect this way, thank you


CORS matters for browsers, and --no-private-ipv4 doesn't affect RPC. Both flags are important, but in different scenarios.

BTW --unsafe-rpc-external is outdated, use --rpc-external instead and remove --unsafe-rpc-external.


Where can I find these additional parameters myself, so that I don't have to ask someone every time?

Just run the app with --help and it will print all the parameters and the supported values for each.

Why do we need it for our Docker setup?

Hm, I’m not sure we actually do, have you tried without it?

After running for a while, the following prompt appears


ERROR single_disk_farm{disk_farm_index=3}: subspace_farmer::utils::farmer_piece_getter: Failed to retrieve first segment piece from node error=The background task been terminated because: Networking or low-level protocol error: WebSocket connection error: connection closed; restart required piece_index=3672
2023-09-08T11:24:36.323646Z ERROR single_disk_farm{disk_farm_index=3}: subspace_farmer::utils::farmer_piece_getter: Failed to retrieve first segment piece from node error=The background task been terminated because: Networking or low-level protocol error: WebSocket connection error: connection closed; restart required piece_index=3719
2023-09-08T11:24:36.607200Z ERROR single_disk_farm{disk_farm_index=3}: subspace_farmer::utils::farmer_piece_getter: Failed to retrieve first segment piece from node error=The background task been terminated because: Networking or low-level protocol error: WebSocket connection error: connection closed; restart required piece_index=3641
2023-09-08T11:24:45.806556Z ERROR single_disk_farm{disk_farm_index=3}: subspace_farmer::utils::farmer_piece_getter: Failed to retrieve first segment piece from node error=The background task been terminated because: Networking or low-level protocol error: WebSocket connection error: connection closed; restart required piece_index=3606

Can you help me take a look? What’s the problem?

With 5 devices connected to the node there is no problem, but once I add up to 13 devices this problem tends to occur. The node's bandwidth is not abnormal and is within tolerance.

The WebSocket connection from the farmer to the node was dropped. Is there a chance your LAN connection dropped or the node restarted? For a large number of farmers you might want to increase the number of RPC connections, but the default is 100, so it should be fine for ~50 farmers.
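If you do need to raise that limit, Substrate-based nodes expose a flag for it. The flag name below (`--rpc-max-connections`) is an assumption based on upstream Substrate and may differ in your node version, so verify it with `--help` first:

```shell
# Sketch: raise the RPC connection cap on the node
# (upstream Substrate defaults to 100 connections).
./subspace-node \
  --chain gemini-3f \
  --rpc-external \
  --rpc-max-connections 500
```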

This error occurs at the same time.
Is there a command that can increase these parameters?
I ran .\subspace-farmer-windows-x86_64-skylake-gemini-3f-2023-sep-05.exe --help and could not find all the additional parameters.

There are multiple commands there; --help tells you the whole list. If you want to see options for farming, run subspace-farmer farm --help and it will show farming-specific options. The same works for the node (it is on the node where you might need to increase the number of RPC connections).
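For example, with the binaries named in this thread (paths shortened for readability):

```shell
# Top-level commands of the farmer binary:
./subspace-farmer --help

# Options specific to the farming subcommand:
./subspace-farmer farm --help

# The node binary works the same way:
./subspace-node --help
```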

  1. --max-pieces-in-sector <MAX_PIECES_IN_SECTOR>:

What does this parameter do?
My hard disk reads quickly; maybe this parameter should be configured?

It can be, but it shouldn't. There is documentation for that option, and it clearly does not recommend specifying it.

@nazar-pc I have started some farmers this way, but most of them always get stuck. What could be the reason? They have been running for 4 days, always in this state.

Recovering missing piece succeeded. missing_piece_index=2854
2023-09-09T19:14:29.261083Z  INFO single_disk_farm{disk_farm_index=0}: subspace_farmer_components::segment_reconstruction: Recovering missing piece... missing_piece_index=480
2023-09-09T20:08:57.434766Z  INFO single_disk_farm{disk_farm_index=0}: subspace_farmer_components::segment_reconstruction: Recovering missing piece succeeded. missing_piece_index=480
2023-09-09T20:08:57.440409Z  INFO single_disk_farm{disk_farm_index=0}: subspace_farmer_components::segment_reconstruction: Recovering missing piece... missing_piece_index=3252
2023-09-09T20:51:31.333295Z  INFO single_disk_farm{disk_farm_index=0}: subspace_farmer_components::segment_reconstruction: Recovering missing piece succeeded. missing_piece_index=3252
2023-09-09T20:51:31.338946Z  INFO single_disk_farm{disk_farm_index=0}: subspace_farmer_components::segment_reconstruction: Recovering missing piece... missing_piece_index=2314
2023-09-09T21:33:54.590466Z  INFO single_disk_farm{disk_farm_index=0}: subspace_farmer_components::segment_reconstruction: Recovering missing piece succeeded. missing_piece_index=2314
2023-09-09T21:33:54.595514Z  INFO single_disk_farm{disk_farm_index=0}: subspace_farmer_components::segment_reconstruction: Recovering missing piece... missing_piece_index=4698
2023-09-09T22:01:46.579015Z  INFO single_disk_farm{disk_farm_index=0}: subspace_farmer_components::segment_reconstruction: Recovering missing piece succeeded. missing_piece_index=4698
2023-09-09T22:01:46.583994Z  INFO single_disk_farm{disk_farm_index=0}: subspace_farmer_components::segment_reconstruction: Recovering missing piece... missing_piece_index=1449
2023-09-09T22:45:39.242800Z  INFO single_disk_farm{disk_farm_index=0}: subspace_farmer_components::segment_reconstruction: Recovering missing piece succeeded. missing_piece_index=1449
2023-09-09T22:45:39.246460Z  INFO single_disk_farm{disk_farm_index=0}: subspace_farmer_components::segment_reconstruction: Recovering missing piece... missing_piece_index=1411