Node Synced, Farmer farming for multiple days, no rewards

Issue Report

We are seeing multiple users report farming for multiple days with no rewards. In most cases the users have been synced for many days and have sizable plots. In some cases a user might report finally winning a vote or block and getting a single 0.1 tSSC. I have verified that in most cases users are checking the correct RPC endpoint on the explorer.

Things to double check

  1. Ensure you are on the latest version as shown on our GitHub
  2. Verify your farmer is present and on the highest block on our telemetry server (you can also check sync status locally; see the sketch after this list)
  3. Verify you're checking your balance on the proper explorer
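
For a quick local sanity check of sync status, you can also hit the node's RPC directly (a minimal sketch using the standard Substrate `system_health` method; the port is an assumption - 9944 on recent builds, 9933 for HTTP on older ones - adjust to your setup):

```
# Ask the node whether it is still syncing and how many peers it has.
curl -s -H "Content-Type: application/json" \
  -d '{"id":1, "jsonrpc":"2.0", "method":"system_health", "params":[]}' \
  http://127.0.0.1:9944
# A synced node returns something like:
# {"jsonrpc":"2.0","result":{"peers":100,"isSyncing":false,"shouldHavePeers":true},"id":1}
```

If `isSyncing` is `false` and the peer count is healthy, the node side is likely fine and attention shifts to the farmer.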

Are you facing this issue?

Please include your information below so we can continue to investigate this issue; use the template below for your replies.

Issue Template

### Farmer Specifications
 - **Plot Size:**
 - **Subspace Deployment Method:** 
 - **Estimated time you have been farming:** 

### System Specifications
 - **Operating System:**
 - **CPU Architecture:**
 - **CPU Model:**
 - **# of CPU Cores:**
 - **CPU Clock Speed (GHz):**
 - **RAM:**
 - **Storage:**

### Logs (In Text Format)
    ```
    [Logs Here]
    ```

### Additional Details and Notes
- 

Reports of this issue

cc: @nazar-pc @Jeremy_Frank


Yes, I am the one who opened this thread in Discord about this issue.

Update for the moment: no errors at all.
All nodes look alive: “Imported”, “Idle”, and so on…

Accumulated rewards so far (from 5 nodes): 10.9000 TSSC

My 5 nodes:

Farmer Specifications

System Specifications

  • Operating System: Ubuntu 20.04 Server
  • CPU Architecture: v3, v2
  • RAM: 8 GB
  • Storage: NVMe (4×2 TB Intel server SSDs in RAID0)

Logs (In Text Format)

All 5 look like this:

:arrow_up: 160.4kiB/s
2023-07-12T18:29:20.038958Z INFO substrate: :sparkles: Imported #101425 (0xf7fa…cca8)
2023-07-12T18:29:21.847610Z INFO substrate: :sparkles: Imported #101426 (0xe6f1…3862)
2023-07-12T18:29:24.671887Z INFO substrate: :zzz: Idle (100 peers), best: #101426 (0xe6f1…3862), finalized #0 (0xa3cd…c1ae), :arrow_down: 140.7kiB/s :arrow_up: 163.2kiB/s
2023-07-12T18:29:29.672617Z INFO substrate: :zzz: Idle (100 peers), best: #101426 (0xe6f1…3862), finalized #0 (0xa3cd…c1ae), :arrow_down: 58.1kiB/s :arrow_up: 61.7kiB/s
2023-07-12T18:29:34.673016Z INFO substrate: :zzz: Idle (100 peers), best: #101426 (0xe6f1…3862), finalized #0 (0xa3cd…c1ae), :arrow_down: 78.5kiB/s :arrow_up: 167.2kiB/s
2023-07-12T18:29:36.212272Z INFO substrate: :sparkles: Imported #101427 (0x0ff9…46fa)
2023-07-12T18:29:39.673304Z INFO substrate: :zzz: Idle (100 peers), best: #101427 (0x0ff9…46fa), finalized #0 (0xa3cd…c1ae), :arrow_down: 99.5kiB/s :arrow_up: 157.2kiB/s
2023-07-12T18:29:43.847677Z INFO substrate: :sparkles: Imported #101428 (0x84d3…b95c)
2023-07-12T18:29:44.673553Z INFO substrate: :zzz: Idle (100 peers), best: #101428 (0x84d3…b95c), finalized #0 (0xa3cd…c1ae), :arrow_down: 90.3kiB/s :arrow_up: 149.7kiB/s
2023-07-12T18:29:49.673829Z INFO substrate: :zzz: Idle (100 peers), best: #101428 (0x84d3…b95c), finalized #0 (0xa3cd…c1ae), :arrow_down: 63.7kiB/s :arrow_up: 55.4kiB/s
2023-07-12T18:29:54.674198Z INFO substrate: :zzz: Idle (100 peers), best: #101428 (0x84d3…b95c), finalized #0 (0xa3cd…c1ae), :arrow_down: 67.2kiB/s :arrow_up: 66.7kiB/s
2023-07-12T18:29:58.280760Z INFO substrate: :sparkles: Imported #101429 (0x0e71…ac74)
2023-07-12T18:29:59.674492Z INFO substrate: :zzz: Idle (100 peers), best: #101429 (0x0e71…ac74), finalized #0 (0xa3cd…c1ae), :arrow_down: 56.5kiB/s :arrow_up: 60.2kiB/s
2023-07-12T18:30:04.674768Z INFO substrate: :zzz: Idle (100 peers), best: #101429 (0x0e71…ac74), finalized #0 (0xa3cd…c1ae), :arrow_down: 57.9kiB/s :arrow_up: 57.7kiB/s
2023-07-12T18:30:09.675195Z INFO substrate: :zzz: Idle (100 peers), best: #101429 (0x0e71…ac74), finalized #0 (0xa3cd…c1ae), :arrow_down: 67.0kiB/s :arrow_up: 68.1kiB/s
2023-07-12T18:30:13.464052Z INFO substrate: :sparkles: Imported #101430 (0xe302…cd72)
2023-07-12T18:30:14.675553Z INFO substrate: :zzz: Idle (100 peers), best: #101430 (0xe302…cd72), finalized #0 (0xa3cd…c1ae), :arrow_down: 94.8kiB/s :arrow_up: 73.6kiB/s
2023-07-12T18:30:19.676330Z INFO substrate: :zzz: Idle (100 peers), best: #101430 (0xe302…cd72), finalized #0 (0xa3cd…c1ae), :arrow_down: 65.1kiB/s :arrow_up: 67.8kiB/s
2023-07-12T18:30:24.676613Z INFO substrate: :zzz: Idle (100 peers), best: #101430 (0xe302…cd72), finalized #0 (0xa3cd…c1ae), :arrow_down: 31.0kiB/s :arrow_up: 83.5kiB/s
2023-07-12T18:30:29.677070Z INFO substrate: :zzz: Idle (100 peers), best: #101430 (0xe302…cd72), finalized #0 (0xa3cd…c1ae), :arrow_down: 110.5kiB/s :arrow_up: 138.0kiB/s
2023-07-12T18:30:32.585372Z INFO substrate: :sparkles: Imported #101431 (0x6a14…2453)

I would be more than happy to provide any additional info to help the Subspace devs and community reach a positive result.

Breaking news:

Log update from one of my 5 nodes:

2023-07-12T18:47:10.012393Z WARN peerset: Ignoring request to disconnect reserved peer 12D3KooWDJrXNiFUihtB96HGW8ZRXkK4M3Q9YqvHgqXptWwmfNVC from SetId(2).

It would be helpful if you could collect the CPU model/frequency and the number of cores (lscpu on Linux will tell you in most cases). The most likely cause is a slow CPU or too few cores, in which case the farmer is not able to produce a solution in time.
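
For example, on Linux (standard util-linux `lscpu`; the grep pattern below is just one way to pull out the relevant fields):

```
# Print CPU model, core/thread counts and clock speed
lscpu | grep -E 'Model name|^CPU\(s\)|Thread|Core|MHz'
```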

This will change with the introduction of Proof of Time into the protocol: instead of having 750 ms to produce a solution, the farmer will have a few seconds to do so.


Farmer Specifications

  • Plot Size: 7100 GiB (6661 sectors finished)
  • Subspace Deployment Method: CLI
  • Estimated time you have been farming: 5 days

System Specifications

  • Operating System: Windows 10
  • CPU Architecture: Intel 11400
  • RAM: 64 GB

Additional Details and Notes

Only getting about 1% of the rewards it should. Farming consumes almost 100% of one CPU thread.

I use 2 Proxmox servers (a 3950X and a dual Xeon 2690 v1) with containers,
a separate container for each node.
I gave 4 cores per node before.
Now I have increased the per-node cores to 8/12/16 (different per node, for the test).
*I already tried giving all CPUs to 1 node and powering off the other containers during this test - 0 TSSC came in.
Even before the increase, the average load level (for 4 cores) was about 7-7.5%.
Now, for example, with 12 cores of the 3950X the CPU load level is 2-2.5%.
Memory load: 1.4 GB.
*My Internet connection: 500 Gbit

Maybe you will say that the VM is the problem, but this config worked fine during G2 and brought rewards in G3d and G3e during plotting (I guess), although during plotting the CPU load was close to 95%.

You can check it yourself. Here is the list of my telemetry addresses and wallets for G3:
101SS - st7VAYHjfbcbDU6yCTcJAN9Q13CosCBQzyowVAjSpwbrcdKpC
102SS - st8koGG2U8tKLa8PZStbHSXBhbhzbNr73BmchrMcXwWfuPkFh
103SS - stBi3XveahaBLD2BigzwFHkZzRvAxKsDQqiCozLhCzWuhwWio
220SS - st6Kp6P2br4X5oXWh7gthegiPWQoEbSxHce68EAmsB8Mp3zGZ
221SS - stC2PQqmaMWH5GU9kGmyngVUZbWFEscpaBU5UPFyxmKqEE77M

Waiting for your reply. :pray:

The Xeon 2690 v1 is really old, but the 3950X is perfectly fine.

Try enabling debug logs on the farmer and see if it is even trying to farm at all.

You can do that by setting the environment variable RUST_LOG=subspace_farmer=debug. Note that this will generate quite a lot of logs.
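
For example, on Linux it would look something like this (a sketch; substitute however you normally launch the farmer, since the exact command and arguments depend on your deployment method):

```
# Enable debug-level logs for the farmer crate only, and save them to a
# file since the volume will be high.
RUST_LOG=subspace_farmer=debug ./subspace-farmer <your usual arguments> 2>&1 | tee farmer-debug.log
```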

If the farmer does appear to produce a solution, then we can move on from there. Also, if you can provide a few lines of logs where the farmer receives a challenge and then produces a solution, that’d be helpful. We’ll see how much time passes between those two events (it is possible that NVMe performance is significantly lower inside a VM).

Like this?
2023-07-14T02:09:34.001757Z DEBUG single_disk_plot{disk_farm_index=0}: subspace_farmer::single_disk_plot: New slot slot_info=SlotInfo { slot_number: 844650287, global_challenge: [217, 155, 92, 22, 73, 1, 94, 240, 35, 35, 65, 136, 191, 194, 34, 136, 109, 151, 220, 245, 235, 121, 51, 248, 54, 10, 89, 245, 39, 190, 255, 218], solution_range: 54008931701, voting_solution_range: 540089317010 }
2023-07-14T02:09:34.001810Z DEBUG single_disk_plot{disk_farm_index=0}: subspace_farmer::single_disk_plot: Reading sectors slot=844650287 sector_count=425
2023-07-14T02:09:36.001893Z DEBUG single_disk_plot{disk_farm_index=0}: subspace_farmer::single_disk_plot: New slot slot_info=SlotInfo { slot_number: 844650288, global_challenge: [17, 244, 150, 202, 51, 198, 150, 253, 163, 86, 210, 195, 165, 27, 245, 233, 1, 111, 136, 60, 224, 66, 120, 102, 16, 156, 57, 122, 46, 112, 60, 207], solution_range: 54008931701, voting_solution_range: 540089317010 }
2023-07-14T02:09:36.001965Z DEBUG single_disk_plot{disk_farm_index=0}: subspace_farmer::single_disk_plot: Reading sectors slot=844650288 sector_count=425
2023-07-14T02:09:38.002421Z DEBUG single_disk_plot{disk_farm_index=0}: subspace_farmer::single_disk_plot: New slot slot_info=SlotInfo { slot_number: 844650289, global_challenge: [237, 151, 55, 81, 212, 4, 149, 59, 182, 38, 85, 228, 0, 64, 91, 85, 67, 173, 59, 187, 180, 85, 33, 4, 60, 98, 105, 8, 255, 137, 168, 9], solution_range: 54008931701, voting_solution_range: 540089317010 }
2023-07-14T02:09:38.002463Z DEBUG single_disk_plot{disk_farm_index=0}: subspace_farmer::single_disk_plot: Reading sectors slot=844650289 sector_count=425
2023-07-14T02:09:40.000973Z DEBUG single_disk_plot{disk_farm_index=0}: subspace_farmer::single_disk_plot: New slot slot_info=SlotInfo { slot_number: 844650290, global_challenge: [82, 121, 186, 3, 67, 213, 168, 98, 190, 45, 46, 159, 64, 121, 167, 17, 210, 201, 43, 171, 89, 100, 200, 61, 97, 239, 131, 220, 238, 187, 11, 230], solution_range: 54008931701, voting_solution_range: 540089317010 }
2023-07-14T02:09:40.001024Z DEBUG single_disk_plot{disk_farm_index=0}: subspace_farmer::single_disk_plot: Reading sectors slot=844650290 sector_count=425
2023-07-14T02:09:42.001120Z DEBUG single_disk_plot{disk_farm_index=0}: subspace_farmer::single_disk_plot: New slot slot_info=SlotInfo { slot_number: 844650291, global_challenge: [75, 15, 54, 107, 44, 12, 72, 43, 30, 51, 230, 97, 153, 134, 62, 144, 184, 76, 88, 30, 2, 51, 201, 190, 135, 193, 143, 90, 116, 221, 11, 45], solution_range: 54008931701, voting_solution_range: 540089317010 }
2023-07-14T02:09:42.001166Z DEBUG single_disk_plot{disk_farm_index=0}: subspace_farmer::single_disk_plot: Reading sectors slot=844650291 sector_count=425
2023-07-14T02:09:44.001058Z DEBUG single_disk_plot{disk_farm_index=0}: subspace_farmer::single_disk_plot: New slot slot_info=SlotInfo { slot_number: 844650292, global_challenge: [125, 75, 42, 41, 141, 99, 253, 230, 40, 151, 133, 63, 11, 146, 14, 235, 109, 116, 161, 74, 44, 78, 209, 160, 0, 87, 140, 30, 186, 107, 19, 140], solution_range: 54008931701, voting_solution_range: 540089317010 }
2023-07-14T02:09:44.001104Z DEBUG single_disk_plot{disk_farm_index=0}: subspace_farmer::single_disk_plot: Reading sectors slot=844650292 sector_count=425
2023-07-14T02:09:46.001115Z DEBUG single_disk_plot{disk_farm_index=0}: subspace_farmer::single_disk_plot: New slot slot_info=SlotInfo { slot_number: 844650293, global_challenge: [113, 229, 148, 246, 96, 84, 31, 125, 236, 159, 214, 51, 11, 120, 235, 250, 6, 35, 106, 90, 206, 162, 182, 14, 17, 158, 45, 72, 211, 78, 245, 218], solution_range: 54008931701, voting_solution_range: 540089317010 }
2023-07-14T02:09:46.001155Z DEBUG single_disk_plot{disk_farm_index=0}: subspace_farmer::single_disk_plot: Reading sectors slot=844650293 sector_count=425
2023-07-14T02:09:48.001264Z DEBUG single_disk_plot{disk_farm_index=0}: subspace_farmer::single_disk_plot: New slot slot_info=SlotInfo { slot_number: 844650294, global_challenge: [218, 79, 64, 33, 92, 149, 205, 180, 178, 51, 122, 165, 3, 197, 118, 113, 250, 3, 222, 233, 246, 194, 86, 202, 81, 140, 27, 192, 103, 220, 131, 85], solution_range: 54008931701, voting_solution_range: 540089317010 }
2023-07-14T02:09:48.001307Z DEBUG single_disk_plot{disk_farm_index=0}: subspace_farmer::single_disk_plot: Reading sectors slot=844650294 sector_count=425
2023-07-14T02:09:50.001408Z DEBUG single_disk_plot{disk_farm_index=0}: subspace_farmer::single_disk_plot: New slot slot_info=SlotInfo { slot_number: 844650295, global_challenge: [179, 66, 141, 242, 53, 181, 97, 68, 154, 176, 150, 59, 108, 104, 235, 97, 19, 189, 10, 200, 68, 101, 141, 240, 197, 78, 79, 52, 129, 198, 116, 235], solution_range: 54008931701, voting_solution_range: 540089317010 }
2023-07-14T02:09:50.001449Z DEBUG single_disk_plot{disk_farm_index=0}: subspace_farmer::single_disk_plot: Reading sectors slot=844650295 sector_count=425

Yes, the first indicator of not enough time there is that the farmer is solving every other second rather than every second.
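
A quick way to see this from the debug logs is to print the gap between consecutive `New slot` lines (a sketch assuming the log was saved to `farmer-debug.log` and uses the timestamp format shown above; a healthy farmer should show ~1 s gaps):

```
# Extract HH:MM:SS.micros from each "New slot" timestamp and print the
# interval between consecutive slots in seconds.
grep 'New slot' farmer-debug.log | awk '{
  split(substr($1, 12, 15), t, ":")
  s = t[1] * 3600 + t[2] * 60 + t[3]
  if (NR > 1) printf "%.3f s since previous slot\n", s - prev
  prev = s
}'
```

In the log above every gap is ~2.0 s, which matches the “every other second” diagnosis.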

I think trying to plot a single sector (specify 2G for this) would be a good test to see whether it starts farming every second. If that helps, look at the disk settings in the VM: try disabling caching layers and, ideally, pass the disks through to the VM directly.

Also, I see you are using RAID0 - that is not a good idea with Subspace. Unless you’re very short on RAM, having multiple separate disks will result in better performance (though I don’t recall whether the simple CLI supports multiple disks yet).

wiping and plotting 2G :ok_hand:

How can decreasing R/W speed and total IOPS (compared with RAID0 on the same disks) help with performance? :thinking:

Looks like a failure:

2023-07-14T05:48:27.504577Z INFO single_disk_plot{disk_farm_index=0}: subspace_farmer_components::segment_reconstruction: Recovering missing piece… missing_piece_index=242
2023-07-14T05:48:27.511045Z ERROR single_disk_plot{disk_farm_index=0}: subspace_farmer_components::segment_reconstruction: Recovering missing piece failed. missing_piece_index=242
2023-07-14T05:48:27.511058Z INFO single_disk_plot{disk_farm_index=0}: subspace_farmer_components::segment_reconstruction: Recovering missing piece… missing_piece_index=374
2023-07-14T05:48:27.517042Z ERROR single_disk_plot{disk_farm_index=0}: subspace_networking::utils::piece_provider: Couldn’t get a piece from DSN. No retries left. piece_index=613 current_attempt=3 max_retries=3
2023-07-14T05:48:27.517062Z ERROR single_disk_plot{disk_farm_index=0}: subspace_farmer_components::segment_reconstruction: Recovering missing piece failed. missing_piece_index=374
2023-07-14T05:48:27.517070Z INFO single_disk_plot{disk_farm_index=0}: subspace_farmer_components::segment_reconstruction: Recovering missing piece… missing_piece_index=38
2023-07-14T05:48:27.523548Z ERROR single_disk_plot{disk_farm_index=0}: subspace_farmer_components::segment_reconstruction: Recovering missing piece failed. missing_piece_index=38
2023-07-14T05:48:27.533485Z WARN single_disk_plot{disk_farm_index=0}: subspace_farmer_components::plotting: Sector plotting attempt failed, will retry later sector_offset=0 sector_index=7255540903447569405 error=Failed to retrieve piece 38: Not enough pieces to reconstruct a segment
2023-07-14T05:48:29.800377Z INFO substrate: :zzz: Idle (100 peers), best: #111925 (0xfea9…72f9), finalized #0 (0xa3cd…c1ae), :arrow_down: 27.5kiB/s :arrow_up: 27.7kiB/s
2023-07-14T05:48:34.800532Z INFO substrate: :zzz: Idle (100 peers), best: #111925 (0xfea9…72f9), finalized #0 (0xa3cd…c1ae), :arrow_down: 77.6kiB/s :arrow_up: 79.6kiB/s
2023-07-14T05:48:39.800678Z INFO substrate: :zzz: Idle (100 peers), best: #111925 (0xfea9…72f9), finalized #0 (0xa3cd…c1ae), :arrow_down: 67.5kiB/s :arrow_up: 68.8kiB/s
2023-07-14T05:48:43.322682Z INFO substrate: :sparkles: Imported #111926 (0x0451…b92b)

Contabo - 4 vCPU cores, 8 GB RAM, 450 GB SSD storage

processor :
vendor_id : AuthenticAMD
cpu family : 23
model : 49
model name : AMD EPYC 7282 16-Core Processor
stepping : 0
microcode : 0x1000065
cpu MHz : 2794.748
cache size : 512 KB

System: Ubuntu 20.04.6 LTS
First: Plot Size: 2 GB, Nodes Guru - Subspace (RU) - ran for 96 h
Second: Plot Size: 300 GB, Nodes Guru - Subspace (RU) - ran for 96+ h
Now: Plot Size: 100 GB, Simple CLI (Recommended) - running for 24+ h

  • no rewards

2023-07-14T12:44:38.353032Z INFO substrate: :zzz: Idle (100 peers), best: #113985 (0x4c11…4bfc), finalized #0 (0xa3cd…c1ae), :arrow_down: 68.8kiB/s :arrow_up: 50.5kiB/s
2023-07-14T12:44:43.353743Z INFO substrate: :zzz: Idle (100 peers), best: #113985 (0x4c11…4bfc), finalized #0 (0xa3cd…c1ae), :arrow_down: 45.6kiB/s :arrow_up: 43.5kiB/s
2023-07-14T12:44:44.369126Z INFO substrate: :sparkles: Imported #113986 (0x7147…5472)
2023-07-14T12:44:44.801968Z INFO substrate: :sparkles: Imported #113986 (0x8af8…68aa)
2023-07-14T12:44:48.410566Z INFO substrate: :zzz: Idle (100 peers), best: #113986 (0x7147…5472), finalized #0 (0xa3cd…c1ae), :arrow_down: 90.9kiB/s :arrow_up: 86.3kiB/s
2023-07-14T12:44:50.076277Z INFO substrate: :sparkles: Imported #113987 (0x01a2…a3c4)
2023-07-14T12:44:50.864543Z INFO substrate: :sparkles: Imported #113988 (0x10a1…b8eb)
2023-07-14T12:44:53.447050Z INFO substrate: :zzz: Idle (100 peers), best: #113988 (0x10a1…b8eb), finalized #0 (0xa3cd…c1ae), :arrow_down: 80.2kiB/s :arrow_up: 62.7kiB/s
2023-07-14T12:44:58.454783Z INFO substrate: :zzz: Idle (100 peers), best: #113988 (0x10a1…b8eb), finalized #0 (0xa3cd…c1ae), :arrow_down: 59.1kiB/s :arrow_up: 40.6kiB/s
2023-07-14T12:45:03.480066Z INFO substrate: :zzz: Idle (100 peers), best: #113988 (0x10a1…b8eb), finalized #0 (0xa3cd…c1ae), :arrow_down: 80.3kiB/s :arrow_up: 53.0kiB/s
2023-07-14T12:45:07.642263Z INFO substrate: :sparkles: Imported #113989 (0x0f89…f294)
2023-07-14T12:45:08.520687Z INFO substrate: :zzz: Idle (100 peers), best: #113989 (0x0f89…f294), finalized #0 (0xa3cd…c1ae), :arrow_down: 100.0kiB/s :arrow_up: 81.9kiB/s
2023-07-14T12:45:13.524156Z INFO substrate: :zzz: Idle (100 peers), best: #113989 (0x0f89…f294), finalized #0 (0xa3cd…c1ae), :arrow_down: 98.0kiB/s :arrow_up: 68.7kiB/s
2023-07-14T12:45:14.011095Z INFO substrate: :sparkles: Imported #113990 (0x605a…8a18)
2023-07-14T12:45:18.542645Z INFO substrate: :zzz: Idle (100 peers), best: #113990 (0x605a…8a18), finalized #0 (0xa3cd…c1ae), :arrow_down: 62.5kiB/s :arrow_up: 82.7kiB/s
2023-07-14T12:45:23.424499Z INFO substrate: :sparkles: Imported #113991 (0x6018…b22d)
2023-07-14T12:45:23.555529Z INFO substrate: :zzz: Idle (100 peers), best: #113991 (0x6018…b22d), finalized #0 (0xa3cd…c1ae), :arrow_down: 78.5kiB/s :arrow_up: 72.5kiB/s
2023-07-14T12:45:28.568060Z INFO substrate: :zzz: Idle (100 peers), best: #113991 (0x6018…b22d), finalized #0 (0xa3cd…c1ae), :arrow_down: 86.9kiB/s :arrow_up: 73.8kiB/s
2023-07-14T12:45:33.576485Z INFO substrate: :zzz: Idle (100 peers), best: #113991 (0x6018…b22d), finalized #0 (0xa3cd…c1ae), :arrow_down: 67.8kiB/s :arrow_up: 49.4kiB/s
2023-07-14T12:45:38.583609Z INFO substrate: :zzz: Idle (100 peers), best: #113991 (0x6018…b22d), finalized #0 (0xa3cd…c1ae), :arrow_down: 49.6kiB/s :arrow_up: 34.5kiB/s
2023-07-14T12:45:41.358976Z INFO substrate: :sparkles: Imported #113992 (0x8577…d454)
2023-07-14T12:45:43.584563Z INFO substrate: :zzz: Idle (100 peers), best: #113992 (0x8577…d454), finalized #0 (0xa3cd…c1ae), :arrow_down: 77.5kiB/s :arrow_up: 101.5kiB/s
2023-07-14T12:45:48.586177Z INFO substrate: :zzz: Idle (100 peers), best: #113992 (0x8577…d454), finalized #0 (0xa3cd…c1ae), :arrow_down: 38.1kiB/s :arrow_up: 51.0kiB/s
2023-07-14T12:45:50.245544Z INFO substrate: :sparkles: Imported #113993 (0x2fe7…610f)
2023-07-14T12:45:53.594193Z INFO substrate: :zzz: Idle (100 peers), best: #113993 (0x2fe7…610f), finalized #0 (0xa3cd…c1ae), :arrow_down: 64.7kiB/s :arrow_up: 42.7kiB/s
2023-07-14T12:45:58.599742Z INFO substrate: :zzz: Idle (100 peers), best: #113993 (0x2fe7…610f), finalized #0 (0xa3cd…c1ae), :arrow_down: 55.1kiB/s :arrow_up: 61.9kiB/s
2023-07-14T12:46:03.612329Z INFO substrate: :zzz: Idle (100 peers), best: #113993 (0x2fe7…610f), finalized #0 (0xa3cd…c1ae), :arrow_down: 65.1kiB/s :arrow_up: 57.6kiB/s
2023-07-14T12:46:08.664076Z INFO substrate: :zzz: Idle (100 peers), best: #113993 (0x2fe7…610f), finalized #0 (0xa3cd…c1ae), :arrow_down: 35.3kiB/s :arrow_up: 44.6kiB/s
2023-07-14T12:46:13.671229Z INFO substrate: :zzz: Idle (100 peers), best: #113993 (0x2fe7…610f), finalized #0 (0xa3cd…c1ae), :arrow_down: 27.5kiB/s :arrow_up: 57.0kiB/s
2023-07-14T12:46:18.704395Z INFO substrate: :zzz: Idle (100 peers), best: #113993 (0x2fe7…610f), finalized #0 (0xa3cd…c1ae), :arrow_down: 38.6kiB/s :arrow_up: 27.2kiB/s
2023-07-14T12:46:21.062686Z WARN peerset: Ignoring request to disconnect reserved peer 12D3KooWJ16ztKBNyMvXZTZnJyiiLDmSTPxtFw2YxJhCmV7ipxYv from SetId(2).
2023-07-14T12:46:23.705254Z INFO substrate: :zzz: Idle (82 peers), best: #113993 (0x2fe7…610f), finalized #0 (0xa3cd…c1ae), :arrow_down: 65.5kiB/s :arrow_up: 79.8kiB/s
2023-07-14T12:46:24.576249Z WARN peerset: Ignoring request to disconnect reserved peer 12D3KooWHqspuNqpVcuNuimXJgiijpuaAhWFGVyfjFy8jXHifiTq from SetId(2).
2023-07-14T12:46:28.707819Z INFO substrate: :zzz: Idle (100 peers), best: #113993 (0x2fe7…610f), finalized #0 (0xa3cd…c1ae), :arrow_down: 92.7kiB/s :arrow_up: 81.4kiB/s
2023-07-14T12:46:33.710202Z INFO substrate: :zzz: Idle (100 peers), best: #113993 (0x2fe7…610f), finalized #0 (0xa3cd…c1ae), :arrow_down: 43.4kiB/s :arrow_up: 34.0kiB/s
2023-07-14T12:46:36.634440Z INFO substrate: :sparkles: Imported #113994 (0xef77…73d1)
2023-07-14T12:46:38.152845Z INFO substrate: :sparkles: Imported #113994 (0xaaf4…6067)
2023-07-14T12:46:38.711708Z INFO substrate: :zzz: Idle (100 peers), best: #113994 (0xef77…73d1), finalized #0 (0xa3cd…c1ae), :arrow_down: 76.9kiB/s :arrow_up: 98.2kiB/s
2023-07-14T12:46:43.737917Z INFO substrate: :zzz: Idle (100 peers), best: #113994 (0xef77…73d1), finalized #0 (0xa3cd…c1ae), :arrow_down: 59.3kiB/s :arrow_up: 61.9kiB/s

Miracle!
I didn’t make any changes to my 4 nodes and got 0.3 (in total) over the last 2 days. :face_with_peeking_eye:

Another 0.2 TSSC came in (from 4 nodes). :rofl:

You’re decreasing total performance by using RAID0 if the workload (farming) can utilize all individual disks at the same time. RAID0 is not free (though it is cheap) and abstracts the hardware away from the application, which is sometimes useful. But Subspace farming can make better performance decisions when it sees how many physical disks you have and can parallelize work across them appropriately, rather than hoping RAID0 behaves optimally for this use case.
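
If you want to measure this yourself, a random-read benchmark against a single member disk versus the RAID0 device will show the difference (an illustrative `fio` run; the device path is a placeholder, and `--readonly` keeps it non-destructive):

```
# Random 4K reads at moderate queue depth, roughly resembling farming's
# access pattern. Run against one physical disk, then against the RAID0
# device, and compare IOPS and completion latency.
fio --name=farmtest --readonly --filename=/dev/nvme0n1 \
    --rw=randread --bs=4k --iodepth=32 --direct=1 \
    --ioengine=libaio --runtime=30 --time_based
```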

The issue there is with plotting, not farming; see Creating plotts error

Is that from the smaller plot size or from the same plots that were not farming before?

It’s from the already-working nodes.
The new one, which you suggested I plot with a 2G plot, is not operating - it cannot plot because of the plotting issues.

Try wiping and starting over with Release v0.5.3-alpha · subspace/subspace-cli · GitHub on that new instance; it should work.