Compressed plots and increasing the k size of plots

I have the following concerns; I hope the team can share whether these have been considered for Subspace.

  1. Compressed plots: What we know from Chia is that after a while, third parties can offer farmers a new plot format with compression, which gives them more effective farming capacity; e.g. someone with 100 TB of 60%-compressed plots can win the same rewards as someone with 140 TB of original plots. This resulted in many farmers having to re-plot 1-2 times. I understand Chia has 3x the block time of Subspace (18 s vs 6 s), so there is less time for the decompression task, leading to lower gains from compression. But I want to know: has the Subspace team ever thought about this? And do you have any plan to mitigate it?

  2. Increasing the K size. From my humble understanding (please correct me if I’m wrong), each plot is created from a combination of pieces in the network. So is it correct that if someone has all the pieces and a super-powerful system that can receive a challenge and make the proof ‘on the fly’ within 4 seconds, he’ll be able to win the reward every round? If the answer is YES, I wonder whether we’ll have to increase the K size of the plot. Right now a good CPU can make a 1 GiB sector in 60-80 s; surely a couple of 4090s can make a 1 GiB sector in 4 s?


This is not applicable to Subspace. Explaining why would be too long to write here; I can’t recommend Introduction | Autonomys Academy enough if you’re interested in this kind of stuff: Plotting | Autonomys Academy
Also, I made an implementation walkthrough a while ago about this topic.

Why would you run a couple of 4090s all the time instead of plotting 1 GiB with a 10-year-old quad-core CPU and letting it basically run idle until you win the challenge? It makes no economic sense :slightly_smiling_face:

Thanks for your thoughtful questions @dragonP!
Let me try to address your concerns!

  1. Compression of plots:
    There are several things at play here. First, and generally: since we do not use the plotting function the same way Chia does, a lot of the issues Chia faced do not apply to Subspace. Nazar has beaten me to the Academy link :slight_smile: We do not store the Chia plot files; we generate them, encode the history pieces, and discard them.
  2. Even if you were able to produce 1 GiB in 4 s, that sector would still be competing with all other space in the network. To really have a moat, the super-fast system would have to encode petabytes of sectors “on the fly”.
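To make the second point concrete, here is a toy calculation. All figures are made up for illustration, and it simplifies the real consensus down to "each unit of plotted space is equally likely to win a given challenge":

```python
# Toy model: a single fast "on-the-fly" sector still competes with the
# whole network's pledged space. All numbers below are illustrative.

def win_probability(my_space_gib: float, network_space_gib: float) -> float:
    """Chance that a given challenge is won by one of *my* sectors,
    assuming every unit of plotted space is equally likely to win."""
    return my_space_gib / network_space_gib

NETWORK_SPACE_GIB = 100_000_000  # hypothetical total pledged space (~95 PiB)

one_sector = win_probability(1, NETWORK_SPACE_GIB)        # one 1 GiB sector
honest_farm = win_probability(100_000, NETWORK_SPACE_GIB) # ~100 TiB stored plot

print(f"one on-the-fly sector: {one_sector:.2e} per challenge")
print(f"100 TiB stored plot:   {honest_farm:.2e} per challenge")
```

Even with generous assumptions, a single regenerated sector has a vanishingly small per-challenge chance compared to a modest stored farm.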

If you are interested in more formal considerations, you can also take a look at our research paper. The benchmarks there are outdated, though.

Thanks @nazar-pc and @dariolina. I may be wrong, but let me try to elaborate a bit on what I mean by an ‘on-the-fly’ plot.

So Subspace has an audit and a proof-fetch process. During the audit, it scans all the plotted sectors to find a valid sector. During proof fetch, it uses the CPU to read data from that sector and generate the proof. My concern is that if someone is capable of building a tool that makes an ‘on-the-fly’ plot, he could make a plot that always passes the audit stage for every challenge, and then use the remaining time of the allowed 4 s to generate the proof.
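Roughly, the per-challenge timeline I have in mind looks like this (a sketch only; the function names, the exact 4 s budget, and the single-winner handling are my assumptions, not the actual farmer API):

```python
import time

PROVING_DEADLINE_SECS = 4.0  # assumed time budget per challenge

def respond_to_challenge(challenge, sectors, audit, fetch_proof):
    """Audit all plotted sectors for this challenge, then build a proof
    for a winning sector within the remaining time budget.

    `audit` and `fetch_proof` are hypothetical callbacks standing in for
    the real audit and proof-fetch stages.
    """
    start = time.monotonic()
    # Audit stage: scan plotted sectors for one that is eligible.
    winners = [s for s in sectors if audit(challenge, s)]
    if not winners:
        return None  # the common case: no eligible sector this round
    # Proof-fetch stage: must finish inside whatever time is left.
    remaining = PROVING_DEADLINE_SECS - (time.monotonic() - start)
    if remaining <= 0:
        return None  # missed the deadline
    return fetch_proof(challenge, winners[0])
```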

Is this possible?


You don’t know whether a sector passes the audit before you construct the sector, so you need to do the work of sector creation before checking it, and in most cases it will not be valid. As Dariia mentioned above, you’d have to generate a huge number of sectors to be competitive. This is technically possible, but it would be prohibitively expensive; it is much more rational to plot once and store on an SSD.
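A back-of-the-envelope comparison of the two strategies makes the economics plain. The plotting speed and block time come from earlier in the thread; the farm size is an illustrative assumption:

```python
# "Plot once, store on SSD" vs "regenerate on the fly every challenge".
SECONDS_PER_GIB_PLOTTED = 60   # CPU plotting speed mentioned in the thread
CHALLENGE_INTERVAL_SECS = 6    # Subspace block time mentioned in the thread
PLOT_SIZE_GIB = 100_000        # hypothetical ~100 TiB farm

# Stored plot: pay the plotting cost once, then the disk sits mostly idle.
stored_cost_secs = PLOT_SIZE_GIB * SECONDS_PER_GIB_PLOTTED

# On the fly: since you cannot know which sector passes the audit before
# creating it, matching a 100 TiB farm means recreating that much space
# every challenge interval, forever.
on_the_fly_cost_per_day = (
    PLOT_SIZE_GIB * SECONDS_PER_GIB_PLOTTED * (86_400 / CHALLENGE_INTERVAL_SECS)
)

print(f"stored plot, one-off:     {stored_cost_secs:,} compute-seconds")
print(f"on-the-fly, every day:    {on_the_fly_cost_per_day:,.0f} compute-seconds")
```

The one-off cost is paid once; the on-the-fly cost recurs daily and is orders of magnitude larger, which is why storing the plot dominates.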

Thanks Nazar. It is indeed crucial that someone cannot choose a combination of pieces, based on the received challenge, that he knows will pass the audit, and thereby make a single sector/proof that wins the reward every round.

So my concern is irrelevant here.