Is --farming-thread-pool-size limited to 32 threads at most? For some large EPYC processors, isn't that too few?

It defaults to 32 threads at most. You should never need more threads than that for a single farm. On those processors you'll likely have multiple SSDs and, as a result, multiple farming thread pools spread across all of the available cores.
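
As a rough illustration of how that plays out, here is a hedged sketch of a multi-farm setup; the binary name, reward address, paths, and sizes are placeholders, and each path=...,size=... farm gets its own farming thread pool:

# Each farm below gets its own farming thread pool (capped at 32 threads each),
# so a large EPYC still ends up using most of its cores across several farms.
# Reward address, paths, and sizes are placeholders.
./farmer farm \
--reward-address REWARD_ADDRESS \
path=/mnt/ssd1,size=4TiB \
path=/mnt/ssd2,size=4TiB \
path=/mnt/ssd3,size=4TiB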

This EPYC server has been set up with a total of 40 TiB for plotting, and it should have completed about 20 TiB by now. However, I only see about one reward_signing: Successfully signed reward hash per hour.

Is there a problem somewhere? My Intel server, with the same capacity, gets more Successfully signed reward hashes.

Check the node logs, but generally it's better not to mix different questions in the same topic.

The log contains many errors like “2024-02-26T20:08:13.104074Z WARN Consensus: sc_proof_of_time::source: Proof of time chain was extended from block import from_next_slot=2120440 to_next_slot=2120441.”

I will observe for half a day to gather more information.

If you see those regularly, it means you are not receiving challenges in time, so the farmer has no challenges to solve. That can be due to network issues or to the node being misconfigured network-wise.
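
One quick sanity check on the network side is to confirm, from a machine outside your network, that the node's P2P and DSN listen ports are actually reachable; the ports below are whatever you pass to --listen-on and --dsn-listen-on, and the host is a placeholder:

# Run from outside your network; host and ports are placeholders
nc -vz your.public.ip 30333
nc -vz your.public.ip 30433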

./node run \
--chain gemini-3h \
--blocks-pruning 256 \
--state-pruning archive-canonical \
--farmer \
--base-path /root/sub \
--listen-on /ip4/0.0.0.0/tcp/30333 \
--dsn-listen-on /ip4/0.0.0.0/tcp/30433 \
--name "ooplay"

Is it because I only use TCP?

Can you upload node logs somewhere for me to see?

I reran it and collected several hours of logs.

That log file looks fine

Is there a specific way to check this? Many Chinese users are getting low rewards for the same SSD capacity, and I suspect there may be issues with nodes in Mainland China.

Hard to say; I'm not an expert on the Chinese firewall. A good way to ensure you get proofs of time is to have timekeepers running. Since crossing the Chinese firewall likely adds significant latency and is a bottleneck, having a few powerful timekeepers (14900K/KS) might help get proofs of time a bit sooner and not miss rewards. I know there are quite a few in the community in addition to those the Subspace team runs, but I have no idea how many are within Mainland China right now.

Does this mean that running a timekeeper inside China might improve the situation?

Exactly, ideally a few folks would do that. There is no explicit reward for doing this, but as you can see, it is crucial for network operation.
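
For reference, a minimal sketch of what running one might look like, assuming the node binary exposes a --timekeeper flag among its advanced options; the flag name and paths are assumptions, so confirm against ./node run --help first:

# Dedicated machine with a fast CPU (e.g. 14900K); the --timekeeper flag
# is an assumption here, verify with ./node run --help
./node run \
--chain gemini-3h \
--base-path /root/timekeeper \
--timekeeper \
--name "timekeeper-cn"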

How can I check whether my node has this issue? Do I just need to watch for the following two log lines?

2024-03-04T11:57:42.717469Z  WARN Consensus: sc_proof_of_time::source: Proof of time chain reorg happened from_next_slot=2694952 to_next_slot=2695588
sc_proof_of_time::source: Proof of time chain was extended from block import from_next_slot=2120440 to_next_slot=2120441.

Generally, yes, but mostly the chain extension from block import. It means that you got the block with its PoT sooner than the PoT itself, which in practice should not really happen unless the node is misconfigured (too many or too few connections) or there are other issues. Too many users customize those options without understanding the consequences.
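
If you prefer a quick count over scanning the log by eye, something along these lines works; node.log is a placeholder for wherever your node output is written:

# Count late-challenge events (chain extended from block import)
grep -c "Proof of time chain was extended from block import" node.log
# Count proof-of-time reorgs
grep -c "Proof of time chain reorg happened" node.log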

I will try to set up a node using an i9 14900K to see if it can solve the problem. If it can, is there any way for my node to help other users in mainland China?

2024-03-09T09:33:43.605959Z  WARN Consensus: sc_consensus_subspace::slot_worker: Proof of time is invalid, skipping block authoring at slot slot=3118817
2024-03-09T09:33:46.922265Z  WARN Consensus: sc_consensus_subspace::slot_worker: Proof of time is invalid, skipping block authoring at slot slot=3118818
2024-03-09T09:33:47.118944Z  INFO Consensus: substrate: 💤 Idle (40 peers), best: #553301 (0xf6e2…1736), finalized #481918 (0x27a2…b426), ⬇ 46.2kiB/s ⬆ 46.3kiB/s    
2024-03-09T09:33:47.921854Z  WARN Consensus: sc_consensus_subspace::slot_worker: Proof of time is invalid, skipping block authoring at slot slot=3118819
2024-03-09T09:33:48.924079Z  WARN Consensus: sc_consensus_subspace::slot_worker: Proof of time is invalid, skipping block authoring at slot slot=3118820
2024-03-09T09:33:49.662497Z  INFO Consensus: substrate: ✨ Imported #553302 (0xd159…01f1)
2024-03-09T09:33:49.996862Z  WARN Consensus: sc_consensus_subspace::slot_worker: Proof of time is invalid, skipping block authoring at slot slot=3118821

After running a timekeeper, the warnings above started appearing. My CPU frequency is 5.9 GHz.

How are you running the timekeeper? Is it on the farmer machine? Can you provide all the logs since node start, not just these few lines?

https://raw.githubusercontent.com/qingyuhuaming/log/main/node_log.txt

The node is still showing 0 bps at the end; why is that?