No tSSC Rewards on Gemini 3f

Issue Report

Environment

  • Operating System: Ubuntu 20.04
  • CPU Architecture: amd64
  • RAM: 8G
  • Storage: 800G
  • Plot Size: 500G
  • Subspace Deployment Method: Advanced CLI (Docker)

Problem

My node seems synced but it is not generating any rewards. It keeps giving the same output from time to time, which is:

subspace-farmer-1  | 2023-09-06T11:53:59.735569Z  INFO subspace_networking::node_runner: Public address status changed. old=Private new=Public("/ip4/193.46.243.99/tcp/30533/p2p/12D3KooWFBuJhrreBJzj3R2b8MNaVNt61W6AEPsjeEd5tKnB7yco")


Any idea about this? Thanks.

Hey @Nirnaeth! I see those messages on my nodes too. I don’t have a definitive answer as to what’s going on with them. Maybe @nazar-pc could explain what they mean. We’ve had some other reports from the community querying this behaviour. I think this may be a red herring in your case though, and the real question is whether your farmer is getting anything else done.

How does performance look? Are you bottlenecked on CPU/RAM (htop) or IO (I’ve recently been using nmon to check IO utilisation)?
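If you don’t have htop or nmon handy on the box, here’s a minimal Python sketch (assuming psutil is installed, e.g. pip install psutil) that prints a rough snapshot of CPU, RAM and disk throughput:

```
# Quick snapshot of CPU, RAM and disk I/O to spot an obvious bottleneck.
# Assumes `pip install psutil`; run it on the farmer host.
import time
import psutil

cpu = psutil.cpu_percent(interval=1)    # % across all cores, sampled over 1s
mem = psutil.virtual_memory()           # RAM usage
io_before = psutil.disk_io_counters()
time.sleep(5)
io_after = psutil.disk_io_counters()

read_mb = (io_after.read_bytes - io_before.read_bytes) / 5 / 1e6
write_mb = (io_after.write_bytes - io_before.write_bytes) / 5 / 1e6

print(f"CPU: {cpu:.0f}%  RAM: {mem.percent:.0f}% of {mem.total / 1e9:.1f} GB")
print(f"Disk: {read_mb:.1f} MB/s read, {write_mb:.1f} MB/s write (5s average)")
```

If the CPU sits near 100% or the write rate is only a handful of MB/s while plotting, that’s a good hint where the bottleneck is.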

I’ve just seen your other messages in Discord. Are you replotting? Are you using one of the storage boxes from Contabo by any chance?

Hi @Jim-Subspace, thanks for your reply. Here’s more info about my system.

CPU and RAM usage (screenshot)

Disk usage (screenshot)

Farmer logs (screenshot)

Btw, yeah, it is Contabo.


OK, looks like you are replotting, which I’ve seen others mention causes a drop in rewards. The amount of replotting we have to do will drop as the network stabilises (there are still lots of new farmers joining and pledged storage pouring in at the moment).

Note that if this is a Contabo storage box, I’ve had experience with them and the performance was atrocious. Both disk speed and CPU were not suitable for running a node. Other tiers were not so bad.

This is not something to be concerned about; it is just an informational (obviously) message from the networking stack about what the node thinks its publicly reachable multiaddress is. Anything logged at INFO shouldn’t be a cause for concern.

Enable custom thread names in htop settings; you’ll see which parts of the farmer use CPU from the thread names.
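If htop isn’t convenient (e.g. because the farmer runs in a container), a rough sketch like this reads the same per-thread CPU info straight from /proc on Linux; the process name "subspace-farmer" is an assumption and may need adjusting for your setup:

```
# Rough per-thread CPU breakdown for the farmer process, read from /proc (Linux only).
# The process name "subspace-farmer" is an assumption; adjust it if your binary or
# container names the process differently.
import os

def farmer_pids(name="subspace-farmer"):
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/comm") as f:
                if name in f.read():
                    yield int(pid)
        except OSError:
            pass  # process exited while we were scanning

for pid in farmer_pids():
    print(f"PID {pid}:")
    for tid in os.listdir(f"/proc/{pid}/task"):
        try:
            with open(f"/proc/{pid}/task/{tid}/comm") as f:
                tname = f.read().strip()
            with open(f"/proc/{pid}/task/{tid}/stat") as f:
                fields = f.read().rsplit(")", 1)[1].split()
        except OSError:
            continue  # thread exited mid-scan
        utime, stime = int(fields[11]), int(fields[12])  # CPU ticks used so far
        print(f"  {tid:>8}  {tname:<24} {utime + stime:>10} ticks")
```

Threads with the largest tick counts are the ones doing most of the work (plotting, proving, etc.).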

Looks like plotting uses the most CPU.

Yes, and the farmer cache is still filling up, it seems. Both of those will eventually finish, but it will take some time.

OK, but I want to mention that my node synced on 5th September. On the other hand, I did not check the farmer state at that time.

Yes, this machine is on a storage-type VPS. Because tSSC was disabled before, I did not notice such issues. I will try a bit more; if it stays stuck in the plotting/caching phase I will change the machine.

I learned this lesson the hard way. It started out fine but as it filled up it got worse and worse. It may work for you - keep us updated!

Does this mean only 4.91% of my plotting is finished?

Yes. How long have you been running? I am seeing 2-4 minutes per sector on a modest SSD rig at home.

According to the logs it plotted from 2% to 6.62% in almost 8 hours, 15-20 minutes per sector on average. This looks super slow to me. It is also inefficient because the network is going to be reset from time to time.
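For anyone who wants to measure this on their own setup, here is a quick sketch that averages the time between "Sector plotted successfully" lines in the docker logs (the file name plot_rate.py is just a placeholder):

```
# Rough estimate of minutes per plotted sector from the farmer logs.
# Usage (container name from this thread, adjust if yours differs):
#   docker logs subspace-farmer-1 2>&1 | python3 plot_rate.py
import re
import sys
from datetime import datetime

pattern = re.compile(r"(\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2})\.\d+Z.*Sector plotted successfully")
times = []
for line in sys.stdin:
    m = pattern.search(line)
    if m:
        times.append(datetime.fromisoformat(m.group(1)))

if len(times) > 1:
    total = (times[-1] - times[0]).total_seconds()
    per_sector = total / (len(times) - 1)
    print(f"{len(times)} sectors in {total / 3600:.1f} h, "
          f"~{per_sector / 60:.1f} min/sector on average")
else:
    print("Not enough 'Sector plotted successfully' lines found")
```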

Both I/O and CPU performance could be a factor here. Plotting is designed for modern processors and shouldn’t take a long time; on old, throttled or otherwise slow CPUs it may take longer, but it is hard to imagine why it would take 15 minutes even at half the expected speed.

If performance is really bad farming may suffer as well BTW.
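If you want a crude sanity check of sequential write speed on the plot disk (not a real benchmark, just a ballpark), a sketch like this works; the path is hypothetical and should point at the drive that holds your plot:

```
# Crude sequential-write check for the plot disk (hypothetical path; point it at
# the drive that holds your plot). Writes ~1 GiB of zeros and reports MB/s.
import os
import time

path = "/path/to/plot-disk/iotest.tmp"   # adjust to your farm directory
chunk = b"\0" * (4 * 1024 * 1024)        # 4 MiB chunks
total = 1024 * 1024 * 1024               # ~1 GiB total

start = time.monotonic()
with open(path, "wb") as f:
    written = 0
    while written < total:
        f.write(chunk)
        written += len(chunk)
    f.flush()
    os.fsync(f.fileno())                 # make sure the data actually hits the disk
elapsed = time.monotonic() - start

print(f"Wrote {written / 1e6:.0f} MB in {elapsed:.1f}s -> {written / 1e6 / elapsed:.0f} MB/s")
os.remove(path)
```

Single-digit MB/s here would point at the storage tier rather than the farmer itself.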


Environment

  • Operating System: Ubuntu 22.04
  • CPU Architecture: Intel
  • RAM: 8G
  • Storage: 260G
  • Plot Size: 180G
  • Subspace Deployment Method: Pulsar

I also don’t get the rewards

I changed the plot size from 100 GB to 180 GB. Up until 2 days ago it was plotting; after that, the plotting ended and the replotting started.

I suspect the provider (Contabo storage VPS) uses very low-end CPU and SSD, so I will go for another one. Thanks for your assistance, ser.

The issue is not with CPU or I/O. According to the logs the node is fully synced and plotting is completed, while Pulsar shows the farmer is active but plotting is not finished, so there is an inconsistency, and no rewards have come in for over 2 days.


Why is plotting not finished yet if not because of CPU or I/O then?

Nope. I meant that the logs show plotting as finished, like these:

2023-09-06T23:25:00.927580Z INFO single_disk_farm{disk_farm_index=0}: subspace_farmer::single_disk_farm::plotting: Sector plotted successfully (100.00%) sector_index=92

2023-09-08T08:30:42.887029Z INFO single_disk_farm{disk_farm_index=0}: subspace_farmer::single_disk_farm::plotting: Node is synced, resuming plotting

2023-09-08T09:29:59.194554Z INFO single_disk_farm{disk_farm_index=0}: subspace_farmer::single_disk_farm::plotting: Subscribing to archived segments

while the info output (after 2 days) still says plotting is not finished:

A farmer instance is active!
You have pledged to the network: 100.0 GB
Farmed 0 block(s)
Voted on 0 block(s)
0 SSC(s) earned!
This data is derived from the first 0 blocks in the chain!
Initial plotting is not finished…