Rewards vs plot-size bug

Issue Report

As promised earlier, here are some bugs.

  1. This is reproducible; tested on different configurations. The main problem is with large plots. The reward for plots larger than roughly 130 GB drops sharply, to the point where 100 GB plots yield more than 200 GB plots. This behavior was observed on the following configurations:
  • 2*2696v3/128Gb/first disk - nvme (OS + node), second disk - nvme (plots only)
  • 2*2696v3/128Gb/first disk - nvme (OS + node), second disk - ssd-sata (plots only)
  • 2*2696v3/128Gb/first disk - nvme (OS), second disk - ssd-sata (node), third disk - nvme (plots only)
  • 2*2696v3/128Gb/first disk - nvme (OS), second disk - ssd-sata (node), third disk - ssd-sata (plots only)
  • ryzen 9 5950x/128Gb/first disk - nvme (OS + node), second disk - ssd-nvme (plots only)
  • ryzen 9 5950x/128Gb/first disk - nvme (OS + node), second disk - ssd-sata (plots only)
  • xeon e3-1280v5/16Gb/first disk - nvme (OS + node), second disk - ssd-sata (plots only)
  • i7-3660qm/8Gb/first disk - ssd-sata (OS + node), second disk - ssd-sata
  • i3-7320/40Gb DDR4/ssd-nvme (OS + node + plots)
    Ubuntu 20.04-22.04 is installed everywhere.
    All internet connections are at least 100 Mbit/s and the number of peers is roughly the same, yet this peculiarity persists. It does not depend on the client version: CLI or Advanced, it makes no difference.
  2. CLI version, started with “subspace-cli farm --verbose”: first the node synchronizes, then the farmer should start synchronizing. But the farmer sync does not start unless the process is restarted.

Best regards

Thanks for the feedback and testing results!

@nazar-pc tagging in case any of this is helpful with large plots

@user7 Regarding the CLI issue, is this on the latest 4.1 version?


I think this is related to audit performance: if the audit doesn’t complete in time, the reward can’t be claimed.
Things will be different in the upcoming Gemini 3e; we’ll revisit this then.
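To make the audit-performance explanation concrete, here is a minimal sketch (not actual Subspace code; the slot interval, per-GB audit cost, and function names are all assumptions for illustration). The idea: the time to audit a plot grows with its size, and a reward can only be claimed if the audit finishes before the slot deadline, so past a certain plot size the expected reward collapses even though a bigger plot should win more often.

```python
# Hypothetical model, not the real farmer implementation.
SLOT_SECONDS = 1.0            # assumed challenge/slot interval
AUDIT_SECONDS_PER_GB = 0.008  # assumed per-GB audit cost on a SATA SSD

def audit_in_time(plot_gb: float) -> bool:
    """Does a full-plot audit finish before the slot deadline?"""
    return plot_gb * AUDIT_SECONDS_PER_GB <= SLOT_SECONDS

def expected_reward_units(plot_gb: float) -> float:
    # Win chance is proportional to plot size, but only counts if the
    # audit for each challenge completes before the deadline; a late
    # audit means the reward cannot be claimed at all.
    return plot_gb if audit_in_time(plot_gb) else 0.0

for gb in (100, 130, 200):
    print(gb, expected_reward_units(gb))
```

With these assumed numbers the break-even point lands near 125 GB, so a 100 GB plot out-earns a 200 GB one, matching the reported behavior around ~130 GB. In reality audit times are noisy, so large plots see a sharp drop rather than a hard zero.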
