5 nodes all full at 100%!

Issue Report


  • Operating System: Ubuntu 20.04
  • CPU: 4 vCPU cores
  • RAM: 8 GB
  • Storage: 200 GB SSD
  • Plot Size: 80-90 GB
  • Subspace Deployment Method: CLI


I have five nodes that are at 100% capacity, rendering them unusable. Consequently, I’ve lost four days of progress.

Steps to reproduce

  1. Set up nodes with the following configuration: 8 GB RAM, 4 vCPU cores, 200 GB SSD, and 80-90 GB plot size.
  2. Run the Gemini 3D version of the software.
  3. Observe that the nodes reach 100% capacity, making them inoperable.

Expected result

  • Nodes should function efficiently without reaching full capacity, allowing for continuous operation and optimal performance.

What happens instead

  • Nodes become completely filled, making them unusable and resulting in lost time and progress.

The application panicked (crashed).
Message: Older blocks should always exist
Location: /home/runner/.cargo/git/checkouts/subspace-5c1447fb849a5554/677ab36/crates/sc-consensus-subspace/src/archiver.rs:54
Backtrace omitted. Run with RUST_BACKTRACE=1 environment variable to display it.
Run with RUST_BACKTRACE=full to include source snippets.
subspaced.service: Main process exited, code=exited, status=101/n/a
subspaced.service: Failed with result 'exit-code'.
subspaced.service: Scheduled restart job, restart counter is at 5509.
Stopped Subspace Node.
Started Subspace Node.
2023-04-26T04:28:32.385199Z INFO subspace_cli::utils: Increase file limit from soft to hard (limit is 1024000)
2023-04-26T04:28:32.392628Z INFO validate_config:parse_config: subspace_cli::config: close time.busy=7.24ms time.idle=73.7µs
2023-04-26T04:28:32.392804Z INFO validate_config: subspace_cli::config: close time.busy=7.49ms time.idle=18.1µs

Please advise on how to resolve this issue and prevent it from recurring in the future. Your assistance is greatly appreciated.

root@vmi882676:~# du -ch -d1 ~/.local/share/subspace-cli
32K /root/.local/share/subspace-cli/cache
9.6M /root/.local/share/subspace-cli/logs
123G /root/.local/share/subspace-cli/node
68G /root/.local/share/subspace-cli/plots
191G /root/.local/share/subspace-cli
191G total


Just to clarify: do you have five nodes on five different 200 GB SSDs, or are you trying to run five nodes on one 200 GB SSD?

Keep in mind that in this iteration (Gemini 3D), the node runs archival storage by default in the simple CLI. So you need:

(100+ GB of free space for the node) + (your plot size) + (Any other programs you have on the SSD)
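To make that budget concrete, here is a minimal back-of-envelope sketch for the reporter's setup. All figures are illustrative: the 10 GB for "other programs" is an assumption, and the node portion grows over time.

```shell
# Rough space budget for one 200 GB SSD (illustrative figures only).
NODE_GB=100   # minimum free space for the node archive; this grows with chain history
PLOT_GB=80    # farmer plot size from the report above
OTHER_GB=10   # OS and anything else on the drive (assumed)
TOTAL_GB=$((NODE_GB + PLOT_GB + OTHER_GB))
echo "Required: ${TOTAL_GB} GB"   # 190 GB, leaving almost no headroom on a 200 GB SSD
```

That 190 GB estimate lines up with the 191 GB the `du` output above reports, which explains why the drive fills as soon as sync completes.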

Currently the node is at about 123 GB, but as the chain history grows, that number will grow too.

If you have a 200 GB SSD and are already at 191 GB of data for the node and all its components, you only have 9 GB of wiggle room left. Is the node the only thing saved to the drive?

Hello @tradershort,

I wanted to update you regarding the issue you reported earlier. I reached out to the development team, and they informed me that the problem you encountered may be related to a bug that we are currently addressing. You can find more information about it here: storage_providers_db is growing too large · Issue #1388 · subspace/subspace · GitHub

As a result, it appears that the minimum specifications mentioned in our documentation (https://docs.subspace.network) may not be accurate. I have submitted an update to rectify this.

For the time being, it would be best to have at least ~150 GB available for the node archive, in addition to the desired size for your farmer plot, until we can resolve this issue.
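Under that interim guidance, the implied maximum plot size for a 200 GB SSD can be sketched as follows. The 10 GB headroom for OS and logs is an assumption, not an official figure.

```shell
# Back-of-envelope maximum plot size under the interim ~150 GB node reserve.
SSD_GB=200
NODE_RESERVE_GB=150   # interim node archive reserve suggested above
HEADROOM_GB=10        # margin for OS and logs (assumed)
MAX_PLOT_GB=$((SSD_GB - NODE_RESERVE_GB - HEADROOM_GB))
echo "Max safe plot: ${MAX_PLOT_GB} GB"   # roughly 40 GB on this sizing
```

In other words, an 80 GB plot on a 200 GB drive leaves no room for the node archive under this guidance; something closer to 40 GB would be the safe ceiling until the bug is fixed.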


Thank you for your response and for seeking clarification. To answer your question, I have five different nodes, each running on its own 200 GB SSD.

Based on your information, it seems that having 100 GB of free space for the node might not be enough in my case. I previously tried allocating 80 GB for the plot, and as soon as the nodes were synchronized, they reached full capacity and became inoperable.

Considering this issue, would you recommend allocating a smaller plot size to ensure that there is enough free space for the node and its components to function efficiently? I would appreciate your guidance on the optimal plot size for a 200 GB SSD.

Thank you for your assistance.

Hi @ImmaZoni,

Thank you for your update and for looking into the issue I reported. It’s good to know that the development team is aware of the bug and working on addressing it. I appreciate you sharing the link to the GitHub issue and updating the documentation to reflect more accurate minimum specifications.

Based on your suggestion, I will allocate at least ~150 GB for the node archive in addition to my desired farmer plot size. This should help mitigate the problem until a resolution is implemented.

I would like to mention that it’s unfortunate I lost a week restarting my VPS repeatedly due to reaching full capacity as soon as synchronization was complete. I hope that the Subspace team appreciates the assistance and feedback I have provided, especially considering I have been with you for 15 months. I believe in the project and want to contribute to its success.

Thank you once again for your assistance and for keeping me informed about the progress on this issue. I appreciate your dedication to resolving this matter.


Hey @tradershort

You’re welcome, and thank you for your dedication and patience in helping us identify and resolve this issue. I understand your frustration with the lost time and effort, and I want to assure you that the Subspace team truly appreciates your assistance and feedback, especially considering your long-term commitment to our project.

I’m glad to hear that you found my suggestion helpful and that you’re willing to allocate more space to mitigate the problem until the issue is resolved. We’ll keep you updated on the progress, and please don’t hesitate to reach out if you have any further questions or concerns.

Thank you once again for your valuable contributions to the Subspace community, and we look forward to continuing to work with you in the future.
