Windows version - Subspace farmer: how does it use mapped files in RAM?

I’m farming only 2 x 2TB plots on 2 NVMe disks, but my 64GB of RAM is full all the time. Resource Monitor shows the memory as shareable.
However, when I check with RamMap, this RAM falls under the Mapped File category, not shareable.

It is also very strange that when I check the file details in RamMap, the mapped file sizes of the first plot and the second plot are hugely different. How can that be? Does it mean the mapped file of plot 2 is limited because I only have 64GB of RAM? In other words, if I had 128GB of RAM, would the mapped file for the 2 plots be 2 x 44.8GB?

Just sharing what I’m observing; I hope it helps optimize further for Windows. I have 6 SSDs in this PC, but as soon as I farm 3 plots at the same time, my i9-10900 CPU rises to 80-90% load constantly and gets very hot. I can only farm 2 of the 6 SSDs right now.

We map the whole file. How that looks in the kernel is OS-specific, and the API of the library we use doesn’t allow setting a random-access hint on Windows (it may not even be possible there in general). So the OS will likely fill all of RAM with pages of the plot, but at the same time it can free the vast majority of them whenever memory is needed.
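Not the farmer’s actual code (that is Rust), but a minimal Python sketch of the same mechanism: map a whole file, apply the random-access hint where the platform supports it (POSIX `madvise`; Windows has no equivalent through `mmap`), and fault pages in with random reads. Those pages are what RamMap counts as Mapped File, and the OS can evict them at any time.

```python
import mmap
import os
import random
import tempfile

# Small stand-in for a plot file (a real plot is terabytes).
path = os.path.join(tempfile.mkdtemp(), "plot.bin")
with open(path, "wb") as f:
    f.write(os.urandom(1024 * 1024))  # 1 MiB

with open(path, "rb") as f:
    # Map the whole file, as the farmer does on Windows.
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)

    # On POSIX we can hint random access so the kernel skips
    # aggressive read-ahead; on Windows there is no madvise,
    # so the page cache simply fills with plot pages on demand.
    if hasattr(mm, "madvise") and hasattr(mmap, "MADV_RANDOM"):
        mm.madvise(mmap.MADV_RANDOM)

    # Each random read faults in a page that shows up as
    # "Mapped File" memory but remains reclaimable by the OS.
    for _ in range(8):
        off = random.randrange(len(mm))
        _ = mm[off]

    mm.close()
```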

The reason we use memory-mapped I/O on Windows (but no longer on other operating systems) is that it appears to be significantly faster than direct file reads on a thread pool.
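For comparison, here is a rough Python sketch of the direct-read-on-a-thread-pool approach mentioned above (again, not the farmer’s code). It uses POSIX `os.pread`, which is positioned and needs no shared file cursor, so several reads can run concurrently on one descriptor; on Windows the equivalent would be overlapped I/O, which this sketch does not cover.

```python
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

# Demo file standing in for a plot: a repeating 0..255 pattern.
path = os.path.join(tempfile.mkdtemp(), "plot.bin")
with open(path, "wb") as f:
    f.write(bytes(range(256)) * 4096)  # 1 MiB

fd = os.open(path, os.O_RDONLY)

def read_chunk(offset: int, size: int) -> bytes:
    # os.pread takes an explicit offset, so concurrent calls
    # on the same fd don't race on a shared file position.
    return os.pread(fd, size, offset)

# Fan a few random-offset reads out over a small thread pool.
offsets = [0, 4096, 65536, 1024 * 512]
with ThreadPoolExecutor(max_workers=4) as pool:
    chunks = list(pool.map(lambda o: read_chunk(o, 32), offsets))

os.close(fd)
```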

So in practice you can treat that space as free, even though Windows confusingly reports it as used. In the mid term we are looking for ways to move away from memory-mapped I/O entirely, because we are still not satisfied with performance and there are definitely faster approaches; they are just annoyingly tricky to use sometimes.

Understood. Thank you.