Farmer doesn't use the space allocated to it

Hi there,

I’m running multiple nodes on VPSes with 240 GB and 2 TB SSDs.
I set them up with a custom base path and --plot-size=165G or 1900G respectively, so there is enough space left over for the node to sync.

They’ve been fully synced for a few days at least. The farmer is plotting new segments and the node keeps importing blocks.

But du shows the farmer folders are only using a tiny amount of storage, like 3 GB or 5 GB.
Some are using more, but none are anywhere near full. Any ideas?

I tried wiping and even removing the farmer’s folder completely, but it still only grows to 3-5 GB and stays there.

This is from the 240 GB machine (8 CPUs and 16 GB RAM, by the way).

~/volume$ journalctl -u subspaced -f
-- Logs begin at Sun 2022-06-05 05:58:34 UTC. --
Jun 14 09:46:48 vps-19 subspace-node[142356]: 2022-06-14 09:46:48 [PrimaryChain] ♻️  Reorg on #199299,0xa8c0…c1f1 to #199299,0x9482…bb94, common ancestor #199298,0xf16f…1151
Jun 14 09:46:48 vps-19 subspace-node[142356]: 2022-06-14 09:46:48 [PrimaryChain] ✨ Imported #199299 (0x9482…bb94)
Jun 14 09:46:50 vps-19 subspace-node[142356]: 2022-06-14 09:46:50 [PrimaryChain] 💤 Idle (50 peers), best: #199299 (0x9482…bb94), finalized #0 (0x9ee8…ccf0), ⬇ 312.5kiB/s ⬆ 740.7kiB/s
Jun 14 09:46:52 vps-19 subspace-node[142356]: 2022-06-14 09:46:52 [PrimaryChain] ✨ Imported #199300 (0xc27f…f582)
Jun 14 09:46:55 vps-19 subspace-node[142356]: 2022-06-14 09:46:55 [PrimaryChain] 💤 Idle (50 peers), best: #199300 (0xc27f…f582), finalized #0 (0x9ee8…ccf0), ⬇ 400.8kiB/s ⬆ 535.8kiB/s
Jun 14 09:46:59 vps-19 subspace-node[142356]: 2022-06-14 09:46:59 [PrimaryChain] ✨ Imported #199301 (0xa3e4…7071)
Jun 14 09:47:00 vps-19 subspace-node[142356]: 2022-06-14 09:47:00 [PrimaryChain] 💤 Idle (50 peers), best: #199301 (0xa3e4…7071), finalized #0 (0x9ee8…ccf0), ⬇ 218.2kiB/s ⬆ 1.1MiB/s
Jun 14 09:47:05 vps-19 subspace-node[142356]: 2022-06-14 09:47:05 [PrimaryChain] ✨ Imported #199302 (0x77bb…6bf3)
Jun 14 09:47:05 vps-19 subspace-node[142356]: 2022-06-14 09:47:05 [PrimaryChain] 💤 Idle (50 peers), best: #199302 (0x77bb…6bf3), finalized #0 (0x9ee8…ccf0), ⬇ 240.8kiB/s ⬆ 395.6kiB/s
Jun 14 09:47:07 vps-19 subspace-node[142356]: 2022-06-14 09:47:07 [PrimaryChain] ✨ Imported #199303 (0x8021…ce42)
^C
~/volume$ journalctl -u subspaced-farmer -f
-- Logs begin at Sun 2022-06-05 05:58:34 UTC. --
Jun 14 09:38:39 vps-19 subspace-farmer[142414]: 2022-06-14T09:38:39.257588Z  INFO subspace_farmer::archiving: Plotted segment segment_index=107912
Jun 14 09:39:13 vps-19 subspace-farmer[142414]: 2022-06-14T09:39:13.827382Z  INFO subspace_farmer::archiving: Plotted segment segment_index=107913
Jun 14 09:40:17 vps-19 subspace-farmer[142414]: 2022-06-14T09:40:17.680817Z  INFO subspace_farmer::archiving: Plotted segment segment_index=107914
Jun 14 09:41:10 vps-19 subspace-farmer[142414]: 2022-06-14T09:41:10.720190Z  INFO subspace_farmer::archiving: Plotted segment segment_index=107915
Jun 14 09:41:43 vps-19 subspace-farmer[142414]: 2022-06-14T09:41:43.120687Z  INFO subspace_farmer::archiving: Plotted segment segment_index=107916
Jun 14 09:42:33 vps-19 subspace-farmer[142414]: 2022-06-14T09:42:33.330995Z  INFO subspace_farmer::archiving: Plotted segment segment_index=107917
Jun 14 09:43:18 vps-19 subspace-farmer[142414]: 2022-06-14T09:43:18.400589Z  INFO subspace_farmer::archiving: Plotted segment segment_index=107918
Jun 14 09:44:18 vps-19 subspace-farmer[142414]: 2022-06-14T09:44:18.695225Z  INFO subspace_farmer::archiving: Plotted segment segment_index=107919
Jun 14 09:45:22 vps-19 subspace-farmer[142414]: 2022-06-14T09:45:22.697825Z  INFO subspace_farmer::archiving: Plotted segment segment_index=107920
Jun 14 09:46:33 vps-19 subspace-farmer[142414]: 2022-06-14T09:46:33.542314Z  INFO subspace_farmer::archiving: Plotted segment segment_index=107921
^C
~/volume$ du -h -d2
4.9G	./farmer/plot0
3.4M	./farmer/object-mappings
4.9G	./farmer
51G	./node/chains
51G	./node
55G	.

This is from the 2 TB machine (10 CPUs and 60 GB RAM).
Same as above: synced and plotting.

~/volume$ du -h -d2
51G	./node/chains
51G	./node
4.9G	./farmer/plot11
4.9G	./farmer/plot7
4.9G	./farmer/plot2
4.9G	./farmer/plot10
4.9G	./farmer/plot8
4.9G	./farmer/plot0
4.9G	./farmer/plot12
4.9G	./farmer/plot5
4.9G	./farmer/plot6
4.9G	./farmer/plot9
4.9G	./farmer/plot1
4.9G	./farmer/plot13
4.9G	./farmer/plot4
4.9G	./farmer/plot15
3.4M	./farmer/object-mappings
4.9G	./farmer/plot3
4.9G	./farmer/plot14
78G	./farmer
128G	.

Are there any more plot folders inside ./farmer/?

There should be plot0, plot1, etc., like on your other machine.

On the 10 CPU / 2 TB machine, is your plot size set to ~2 TB?

No, there’s only plot0 on the smaller machine. That is the full output of the du command with two levels of nesting.

Plot sizes are set to 165G on the smaller machine and 1900G on the 2 TB one.

Hm, something is definitely off then, because if you are plotting more than 100 GB you should have at least more than one (one plot# folder is generated for every 100 GB you plot). I would suggest wiping and re-plotting your farm.

Done multiple times already.
With wipe only, with node purge and wipe, and with wipe plus removing the farmer folder via rm.

And the problem is not only the number of plot folders; the 2 TB node uses only ~5 GB in each plot, which is hopelessly ineffective.

That’s odd, each plot should be 100 GB if I’m not mistaken. Let’s get a dev in here.

@ivan-subspace, would you mind taking a look here? Something is off with this user’s plotting experience.

Hmm :thinking:

@Teamoorko Could you post the farmer command from your systemd unit?

Farmer:

$ cat /etc/systemd/system/subspaced-farmer.service 
[Unit]
Description=Subspaced Farm
After=network.target

[Service]
User=theguy
Type=simple
ExecStart=/usr/local/bin/subspace-farmer --base-path /home/theguy/volume/farmer farm --reward-address *** --plot-size=165G
Restart=on-failure
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target

Node:

$ cat /etc/systemd/system/subspaced.service
[Unit]
Description=Subspace Node
After=network.target

[Service]
User=theguy
Type=simple
ExecStart=/usr/local/bin/subspace-node --base-path /home/theguy/volume/node --chain gemini-1 --execution wasm --pruning 1024 --keep-blocks 1024 --validator --name ***
Restart=on-failure
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target

*** - wallet and name removed

And some of my nodes running with the same config are using 103 GB for the farmer and earning some rewards, but shouldn’t they be using 165 GB?

~/volume$ du -h -d2
103G	./farmer/plot0
492M	./farmer/object-mappings
103G	./farmer
53G	./node/chains
53G	./node
156G	.

And some of my nodes running with the same config are using 103 GB for the farmer and earning some rewards, but shouldn’t they be using 165 GB?

No, this is expected. We currently reserve more space than we actually need in order not to exceed the supplied space.

These systemd services look fine. Could you share your setup for the node and farmer with 1900G allocated?

They’re exactly the same; the only difference is that 1900G is allocated.

$ cat /etc/systemd/system/subspaced.service
[Unit]
Description=Subspace Node
After=network.target

[Service]
User=theguy
Type=simple
ExecStart=/usr/local/bin/subspace-node --base-path /home/theguy/volume/node --chain gemini-1 --execution wasm --pruning 1024 --keep-blocks 1024 --validator --name ***
Restart=on-failure
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target


$ cat /etc/systemd/system/subspaced-farmer.service 
[Unit]
Description=Subspaced Farm
After=network.target

[Service]
User=theguy
Type=simple
ExecStart=/usr/local/bin/subspace-farmer --base-path /home/theguy/volume/farmer farm --reward-address *** --plot-size=1900G
Restart=on-failure
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target

This happened to me when I first fast-synced the node without a farmer (by not using the --validator parameter) and then connected a farmer to the already synced node. The farmer then plots only newly added blocks, so its plot size grows very slowly.
Did you sync your node with the farmer connected the whole way?
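
One quick way to check (just a sketch, based on the log lines shown earlier in this thread, and assuming your journal still goes back far enough): look at the earliest “Plotted segment” entry for the farmer. If the first segment_index it ever plotted is far above zero, the farmer joined an already-synced node and skipped the earlier history.

$ journalctl -u subspaced-farmer | grep -m1 'Plotted segment'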

Hmm, that’s interesting…
I’ve never had a node and farmer run cleanly from block 0 all the way to fully synced; I always had to restart the farmer and/or node, do a wipe & purge, etc.
I was running with --validator the whole time, though, so my config never changed.

Can you try increasing the file limit and syncing from scratch (removing all the node and farmer data, then starting both the farmer and the node)?
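
For example, something like this (just a sketch, using the service names and base paths from the units posted above; adjust to your actual setup, and note that it throws away all synced and plotted data):

$ sudo systemctl stop subspaced-farmer subspaced
$ rm -rf /home/theguy/volume/node /home/theguy/volume/farmer
$ sudo systemctl start subspaced
$ sudo systemctl start subspaced-farmer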

What exactly should increasing the file limit look like?

Try increasing the LimitNOFILE value to a bigger number; 500000 should be fine.
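
Concretely (a sketch based on the unit files posted above): edit both /etc/systemd/system/subspaced.service and /etc/systemd/system/subspaced-farmer.service, change LimitNOFILE=65535 to LimitNOFILE=500000, then reload systemd and restart the services so the new limit is applied:

$ sudo systemctl daemon-reload
$ sudo systemctl restart subspaced
$ sudo systemctl restart subspaced-farmer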

I have noticed the same issue on several farmers. It appears that plotting didn’t finish, or only progresses very, very slowly. The CPU is not busy at all. Notice below that several plot folders are only 13 GB or 1.8 GB. The node has been synced for many days.

Note that my open-file limits are already high.

ulimit -Hn

1048576

ulimit -Sn

1048576

Also note that the node and farmer processes are nowhere near the limits:

ls -l /proc/$(pidof subspace-node)/fd |wc -l

655

ls -l /proc/$(pidof subspace-farmer)/fd |wc -l

4691
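
Also keep in mind that ulimit in an interactive shell does not necessarily reflect the LimitNOFILE that systemd applies to the services themselves; as a quick check (a sketch using the same pidof lookups as above), the effective per-process limit can be read from /proc:

grep 'Max open files' /proc/$(pidof subspace-node)/limits
grep 'Max open files' /proc/$(pidof subspace-farmer)/limits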

du -hs plot*

103G plot0
103G plot1
103G plot10
103G plot11
103G plot12
103G plot13
103G plot14
103G plot15
103G plot16
103G plot17
103G plot18
103G plot19
103G plot2
103G plot20
103G plot21
103G plot22
103G plot23
103G plot24
103G plot25
103G plot26
103G plot27
103G plot28
103G plot29
103G plot3
103G plot30
103G plot31
103G plot32
103G plot33
103G plot34
103G plot35
103G plot36
103G plot37
103G plot38
103G plot39
103G plot4
103G plot40
103G plot41
103G plot42
13G plot43
13G plot44
13G plot45
13G plot46
13G plot47
13G plot48
13G plot49
103G plot5
13G plot50
13G plot51
13G plot52
13G plot53
13G plot54
13G plot55
13G plot56
13G plot57
13G plot58
13G plot59
103G plot6
13G plot60
13G plot61
13G plot62
13G plot63
13G plot64
13G plot65
13G plot66
13G plot67
13G plot68
13G plot69
103G plot7
1.8G plot70
1.8G plot71
1.8G plot72
1.8G plot73
103G plot8
103G plot9