The CPU is an EPYC 7V13. Both plotting programs only utilized around 20 to 30 cores, leaving many of the 7V13's 64 cores idle.
The launch parameters are the same for both. With one physical SSD, plotting for a single farm program reaches 35%; with two physical SSDs, plotting for a single farm program reaches only 20%. These are the startup parameters, which are essentially identical.
Based on the sector index, the farmer with 1 plot has plotted about 2500 GB, and the farmer with 2 plots has plotted about 3600 GB in total. So 2 plots > 1 plot overall.
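To make the comparison concrete, here is a quick back-of-envelope calculation with the numbers above. The GB figures are read off the sector index, so treat them as approximate; the split of the 2-plot total into equal halves is an assumption for illustration.

```python
# Back-of-envelope comparison of plotting throughput (numbers from this thread).
# Assumes the two plots in the 2-plot setup progressed at roughly equal rates.
one_plot_total_gb = 2500     # single farmer instance
two_plot_total_gb = 3600     # two farmer instances combined

per_plot_two = two_plot_total_gb / 2
print(f"per-instance: 1-plot = {one_plot_total_gb} GB, "
      f"2-plot = {per_plot_two:.0f} GB each")
print(f"aggregate speedup: {two_plot_total_gb / one_plot_total_gb:.2f}x")
```

In other words, each individual plot progresses more slowly with two running (about 1800 GB vs 2500 GB), but the aggregate throughput is still roughly 1.44x higher.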
However, I think the CPU load for 2 plots can be 3x that of 1 plot; at least that's what I observe on my PC.
In the screenshots, whether it's two plots or one, only 20-30 cores are utilized. This is with the EPYC 7V13 CPU. I also have a server with an EPYC 7K62, but for some reason plotting on the 7K62 is more than 3 times slower than on the 7V13.
I'm not sure about the underlying principles of plotting, and plotting performance varies across different servers.
Yes, same here. It's not clear at all yet, and I have no idea how to optimize the hardware. I've decided to stop changing or buying hardware for now and will wait for the team's next development. In any case, the protocol will change, and with it the node and farmer instances.
Check per-core CPU usage, not the total. Plotting is memory-bound, so cores may look under-utilized when they are in fact stalled waiting for data to arrive from memory and can't do anything else.
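For example, on Linux you can sample per-core utilization with nothing but the standard library by reading `/proc/stat` twice; this is a rough sketch, and tools like `htop` or `mpstat -P ALL` report the same numbers.

```python
import time

def read_per_cpu_ticks():
    """Return {cpu_name: (busy_ticks, total_ticks)} from /proc/stat (Linux)."""
    ticks = {}
    with open("/proc/stat") as f:
        for line in f:
            # Only per-core lines ("cpu0", "cpu1", ...), not the aggregate "cpu".
            if line.startswith("cpu") and line[3].isdigit():
                parts = line.split()
                vals = [int(v) for v in parts[1:]]
                idle = vals[3] + vals[4]  # idle + iowait
                ticks[parts[0]] = (sum(vals) - idle, sum(vals))
    return ticks

# Sample twice, one second apart, and report utilization per core.
before = read_per_cpu_ticks()
time.sleep(1)
after = read_per_cpu_ticks()
for cpu, (busy1, total1) in after.items():
    busy0, total0 = before[cpu]
    dt = total1 - total0
    util = 100 * (busy1 - busy0) / dt if dt else 0.0
    print(f"{cpu}: {util:.0f}%")
```

If only 20-30 of the cores show sustained high utilization here while the rest sit near 0%, that matches a plotter whose parallelism (or memory bandwidth) tops out well below the core count.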
As for "reaches X%", I'm not exactly sure what you mean by that. How much time does it take to get there, for example? Can you define it more precisely, along with your expectations?
I mean that under the exact same configuration, plotting speed varies significantly, which seems unreasonable. What causes this? Why do some plots finish very quickly while others are extremely slow?
Hard to say without very detailed logs.