Apparent high RAM usage (or RAM leak?) by Subspace farmer on Windows

I have two sets of PCs: one set farming about 14 TB of plots, the other about 28 TB. Both sets run Windows 11 with 64 GB of RAM.

I notice that as soon as I run the Subspace farmer, RAM usage climbs to 60 GB (the PC with the 14 TB plot climbs more slowly, but it eventually reaches the same level). As soon as I close the farmer, RAM usage drops dramatically.

Image 1: Subspace farmer is running

Image 2: Subspace farmer is closed. RAM usage dropped within a few seconds

Image 3: Subspace farmer running again. RAM usage rises steadily until it reaches 60GB after a few minutes

To be fair, I can run the farmer without any issues, and I have also run other programs alongside it for days without problems. But RAM usage rising and falling with the Subspace farmer is consistently reproducible for Windows users.

How much time does it take to reach that high level, or at least to confirm that the issue is still happening (in case we have a potential fix to verify)? I’m asking because I’d like you to run some experiments to roughly pin down the source, which will take a few attempts.
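For timing this kind of experiment, it helps to poll the farmer process's resident memory at a fixed interval instead of eyeballing Task Manager. The sketch below is a hedged, illustrative helper (not part of Subspace): it reads Linux `/proc/<pid>/status`, since that is simplest from the standard library; on Windows the same number is the working set in Task Manager or `(Get-Process).WorkingSet64` in PowerShell.

```python
import os
import time


def rss_kib(pid: int) -> int:
    """Resident set size of a process in KiB (Linux /proc sketch).

    On Windows the analogous figure is the process working set shown in
    Task Manager or Resource Monitor.
    """
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])  # value is reported in kB
    return 0


def watch(pid: int, interval_s: float = 1.0, samples: int = 5) -> list[int]:
    """Take a few RSS readings so any ramp-up over time becomes visible."""
    readings = []
    for _ in range(samples):
        readings.append(rss_kib(pid))
        time.sleep(interval_s)
    return readings


if __name__ == "__main__":
    # Watch our own process as a stand-in for the farmer's PID.
    print(watch(os.getpid(), interval_s=0.1, samples=3))
```

Pointing `watch()` at the farmer's PID every few seconds would give a clean "minutes until 60 GB" number to compare before and after a candidate fix.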

I took the last screenshot only 5 minutes ago, and it has already reached 60 GB of RAM usage. Please see the latest screenshot.

One of my friends reported that a few of his PCs crashed because of this RAM issue, but all of my PCs run stably for weeks. It may be exactly as you have explained on Discord a few times: it’s a Windows display issue, and other programs can take the RAM back when required. But to be fair, I do not see this with other PoS farmers (Chia, Spacemesh) or with GPU/CPU coin miners (lolMiner, SRBMiner, etc.).
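The "display issue" distinction above can be made concrete: cached file pages count as "in use" in some views but are handed back to programs on demand. On Windows this is the "Standby" list in Resource Monitor; on Linux the same idea shows up in `/proc/meminfo`, where `MemAvailable` counts reclaimable page cache while `MemFree` does not. A small illustrative parser (Linux-only sketch, not Subspace code):

```python
def read_meminfo() -> dict[str, int]:
    """Parse /proc/meminfo into {field: KiB} (Linux sketch).

    The Windows analogue of reclaimable memory is the "Standby" list in
    Resource Monitor: memory that looks occupied but is reclaimed on demand.
    """
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, _, rest = line.partition(":")
            fields = rest.split()
            if fields and fields[0].isdigit():
                info[key] = int(fields[0])
    return info


if __name__ == "__main__":
    mi = read_meminfo()
    # MemFree is truly idle memory; MemAvailable additionally includes
    # page cache the kernel would give up for new allocations.
    print("free KiB:", mi["MemFree"], "available KiB:", mi["MemAvailable"])
```

If "available" stays high while "used" looks alarming, the memory is reclaimable cache rather than a leak; machines actually freezing (as reported above) would suggest something beyond cache.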


Is your farmer still plotting/replotting?

Same here. I am running a Ryzen 3600X with 16 GB of RAM; after 5 minutes, RAM usage hits 95%. I am plotting to a 1 TB NVMe drive.

Yes, it’s still plotting/replotting very hard :slight_smile:

Can you restart everything without `--validator` on the node? This will disable farming and give us a hint whether this is caused by farming or by plotting/replotting.

Run it for the same 5 minutes, then restore everything as it was. It shouldn’t be a huge loss, but it will give us useful information about the behavior.

This node has 12 remote farmers connecting to it, so restarting it means restarting all the farmers. But anyway, I’ve done it.

There is no RAM rise due to plotting/replotting. As you can see below, I set the thread pool for plotting/replotting to 20/28 threads of my 14700K; CPU usage is stable at approximately 70% for this task. I’ve been waiting for 10 minutes, and RAM usage remains stable and low.

I’ve checked all the other farmers, and they are all the same: low RAM usage with farming disabled. But I don’t think I need to share that many screenshots here.

Thank you.

I have faced a similar issue, and it is a combination of a display issue and a memory leak. On a 256 GB machine running just the farmer and node, memory usage was close to 40%. On 64 GB and 128 GB machines, Subspace uses 98% and 95% of memory, respectively. The memory usage graphs are very similar to dragonP’s. On the 64 GB machine I was running a few more things, and the computer would freeze due to low memory; killing the Subspace farmer always fixed the freeze. I think you can reproduce the freeze on a 32 GB machine.

While I’d like to help, I don’t have a Windows machine for testing.

Okay, so farming specifically is related to the memory increase. In your experience, what is the minimum farm size that results in large memory usage? I might experiment with a reduced demo app to find the root cause of this behavior.

All the farmer does during farming is read random parts of a large file, with an OS hint not to cache anything because access is random. Yet Windows is apparently still doing something odd.

Also, do you have a farmer that is not plotting/replotting anything, and does the same behavior persist there (just to make sure it is not a combination of plotting and farming at the same time)?

I tried with two 400 GB plots and didn’t see a significant increase in RAM usage. I think you need a few TB to see memory getting blown up.

Plotting 52 TB on this machine. The initial memory ramp is when it subscribes to archived sectors; the bump after that is when plotting starts.

I have confirmed there is no memory inflation when farming is disabled while plotting. On my already-plotted machines there is no persistent inflation, though I cannot confirm whether it happens during replotting.

Here’s what it looks like when a sector is finished.

Can someone try the farmer from Snapshot build · subspace/subspace@8ad87ac · GitHub? It switches to the default memory allocator, and I’m wondering if that results in different behavior by any chance.

It didn’t seem to fix it for me. CPU usage is also down 20% from before, and sector times increased 67%, from 3 minutes per 2 sectors to 5 minutes per 2 sectors.

Okay, thanks!
Then I’m still looking for a hint of a small-ish farm size that still has this issue.
I can create farms of a wide range of sizes and assign almost any amount of RAM in a VM; I just need some indication of what configuration should reproduce the problem.

My farm is a single 4 TB NVMe SSD on the latest Space Acres (0.0.17), with memory usage very similar to what dragonP showed above. My 64 GB of memory is at 95% within minutes of opening the program and starting to farm. Small farms are definitely affected as well.

Well, 4 TB is definitely not small; I’m looking for something smaller. And yes, Space Acres will show similar behavior, as expected, since they share almost all of the relevant code.

I’ve just re-run one of my farmers with an existing 1 TB plot (removing the other plots, keeping only this one). Not sure why, but it’s doing the piece cache sync again :slight_smile:

Never mind, I’ll be able to report tomorrow morning. This PC has only 32 GB of RAM, so we’ll easily see the impact.