Some bugs in Gemini 3e

Issue Report

A few bugs to report. I’m writing everything in one topic so as not to clog the feed with unnecessary messages. First, some minor bugs:

  1. Fast node synchronization (100+ blocks per second) only lasts up to roughly 18k blocks; after that the sync speed drops by about 10x.
  2. When running ‘subspace-cli farm’ after node synchronization (at least on the first run), plot synchronization does not start and an error is reported. The workaround was to run ‘subspace-cli farm --verbose’ instead (see the command sketch after this list).
  3. One node produces errors comparable to those from Gemini 3d (screenshot below). Wiping the plot and the summary does not help; I had to wipe the node as well. The error occurred during plot synchronization.
  4. Large plots do not work correctly. In a project whose main focus is farming on hard disks, this seems to me like a very big problem. For clarity, I am attaching screenshots with different measurements, which show that a large plot does not work (does not bring rewards). No rewards are earned on virtual machines either.
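To make the workaround for bug 2 easy to reproduce, here are the exact commands from my runs (only the subcommand and flag already mentioned above; paths and configuration on your machines may differ):

    # First run of farming after node sync - this failed before plotting started:
    subspace-cli farm
    # Workaround that got plot synchronization going on my machines:
    subspace-cli farm --verbose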

Environment

  • Operating System: Ubuntu 20.04
  • CPU Architecture: Various: i7-3630, i3-7320, E3-1280 v5, E5-2696 v3, Ryzen 9 5900X/5950X
  • RAM: 8 GB - 256 GB
  • Storage: Various: 480 GB SATA SSD, 2 TB NVMe SSD, 7.68 TB SATA SSD, 7.68 TB NVMe SSD
  • Plot Size: 449 GB and more
  • Subspace Deployment Method: 0.5.1-alpha-2, subspace-cli, compiled from source

Screenshot for bug 3

Screenshots for bug 4


For " Etalon " I took the first position - i7-3630qm/8Gb DDR - similar characteristics in the recommended by the developers. In any case, this is the simplest configuration and the other machines should be awarded at least as much as this one.
In reality, it turns out quite differently. If we compare it with 3600Gb Plott, we can see that 449Gb brings more than 3600Gb. When you look at the disk load, you can see that the IOPS have very high values. This is more likely the reason why larger plottes don’t bring the expected rewards.
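For context on the disk-load numbers above, this is roughly how such measurements can be captured; iostat (from the sysstat package) is my own example and is not part of the original report:

    # Extended disk statistics every 5 seconds (all devices); watch the r/s, w/s
    # and %util columns for the disk that holds the plot while farming.
    iostat -x 5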

PS. Unfortunately, I can’t fully express my thoughts here; perhaps more screenshots or data are needed. I will be happy to provide them, so please ask me questions.


Hey Penguin, thanks for these interesting performance insights.

  1. That is somewhat expected: as the database gets fuller and blocks get bigger (farmers have started producing votes), the sync speed is expected to settle at a lower number. We made some performance improvements in Gemini 3e and will do more performance tuning once the protocol is functionally complete.

  2. @Jeremy_Frank something to look into I think

  3. Yep, known issue, see Creating plotts error

  4. I have seen reports of this too (see Node Synced, Farmer farming for multiple days, no rewards). I’ll look into adding more parallelism for larger plots, and there are still some protocol changes coming that will help with this. Certainly useful feedback, thanks a lot!

I think we should be able to address most of these soon. We have been focused on Gemini 3f and further protocol changes, to finish things functionality-wise before performance tuning. It is great to see detailed feedback like this so we can see what to focus on next.
