Gemini-3d-2023-apr-25, sync problem

Issue Report

Environment

  • Operating System: Ubuntu 20.04
  • CPU: Ryzen 9 5950X
  • RAM: 128 GB
  • Storage: NVMe SSD
  • Plot size: ~50 GB synchronized / 5 TB total
  • Subspace Deployment Method: binaries built locally with cargo build (subspace-node and subspace-farmer), version gemini-3d-2023-apr-25
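For reference, a sketch of the local build, assuming the standard Subspace source workflow (the repository URL, tag name, and bin targets below are illustrative assumptions, not copied from the original report):

  # clone the repository and check out the release tag matching this version (tag name assumed)
  git clone https://github.com/subspace/subspace.git
  cd subspace
  git checkout gemini-3d-2023-apr-25
  # build both binaries in release mode
  cargo build --release --bin subspace-node --bin subspace-farmer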

Problem

Verification failed for block 0x68d21c5bad67a60953b82ebb725f99083a72ae312ea4bff847b2d9a110c8f525 received from peer: 12D3KooWSPTwsKU9eAp8st9cgrFcGbGFbw5rLDdKrj2nrcZKXtev, “Runtime code error: :code hash not found”

Node synchronization stopped and did not continue.

Steps to reproduce

  1. It happens consistently; restarting the node and the farmer does not help.

Expected result

  • I expect you to shoot a laser from a space satellite at my server so that everything works again. Or give advice on how to fix the problem.

What happens instead

  • Nothing

Thanks for the report! I see you are on the apr-25 version. Is this the first-time install, or have you run prior versions successfully?

Would you mind posting your runtime commands for the farmer and node?

  • I see you are on the apr-25 version. Is this the first-time install, or have you run prior versions successfully?

  • No, this is not the first launch. I have tried previous versions as well and had no problems with them. Moreover, the same version is running on another computer next door and works fine.

  • Would you mind posting your runtime commands for the farmer and node?

  • Sure.
    Node: subspace-node --execution wasm --name TrollsBattleShip --base-path /home/subspace3d1/.local/share/subspace-node --state-pruning archive --keep-blocks archive --chain gemini-3d --validator --in-peers 100 --max-parallel-downloads 30 --segment-publish-concurrency 100 --port 30363 --ws-port 9996

Farmer:
subspace-farmer
--base-path /home/subspace3d1/.local/share/subspace-farmer
--farm path=/mnt/mp600/1,size=53G
--farm path=/mnt/mp600/2,size=53G
--farm path=/mnt/mp600/3,size=53G
--farm path=/mnt/mp600/4,size=53G
--farm path=/mnt/mp600/5,size=53G
--farm path=/mnt/mp600/6,size=53G
--farm path=/mnt/mp600/7,size=53G
--farm path=/mnt/mp600/8,size=53G
--farm path=/mnt/mp600/9,size=53G
--farm path=/mnt/mp600/10,size=53G
--farm path=/mnt/mp600/11,size=53G
--farm path=/mnt/mp600/12,size=53G
--farm path=/mnt/mp600/13,size=53G
--farm path=/mnt/mp600/14,size=53G
--farm path=/mnt/mp600/15,size=53G
--farm path=/mnt/mp600/16,size=53G
--farm path=/mnt/mp600/17,size=53G
--farm path=/mnt/mp600/18,size=53G
--farm path=/mnt/mp600/19,size=53G
--farm path=/mnt/mp600/20,size=53G
--farm path=/mnt/mp600/21,size=53G
--farm path=/mnt/mp600/22,size=53G
--farm path=/mnt/mp600/23,size=53G
--farm path=/mnt/mp600/24,size=53G
--farm path=/mnt/mp600/25,size=53G
--farm path=/mnt/mp600/26,size=53G
--farm path=/mnt/mp600/27,size=53G
--farm path=/mnt/mp600/28,size=53G
--farm path=/mnt/mp600/29,size=53G
--farm path=/mnt/mp600/30,size=53G
--farm path=/mnt/mp600/31,size=53G
--farm path=/mnt/mp600/32,size=53G
--farm path=/mnt/mp600/33,size=53G
--farm path=/mnt/mp600/34,size=53G
--farm path=/mnt/nvme3n1/201,size=53G
--farm path=/mnt/nvme3n1/202,size=53G
--farm path=/mnt/nvme3n1/203,size=53G
--farm path=/mnt/nvme3n1/204,size=53G
--farm path=/mnt/nvme3n1/205,size=53G
--farm path=/mnt/nvme3n1/206,size=53G
--farm path=/mnt/nvme3n1/207,size=53G
--farm path=/mnt/nvme3n1/208,size=53G
--farm path=/mnt/nvme3n1/209,size=53G
--farm path=/mnt/nvme3n1/210,size=53G
--farm path=/mnt/nvme3n1/211,size=53G
--farm path=/mnt/nvme3n1/212,size=53G
--farm path=/mnt/nvme3n1/213,size=53G
--farm path=/mnt/nvme3n1/214,size=53G
--farm path=/mnt/nvme3n1/215,size=53G
--farm path=/mnt/nvme3n1/216,size=53G
--farm path=/mnt/nvme3n1/217,size=53G
--farm path=/mnt/nvme3n1/218,size=53G
--farm path=/mnt/nvme3n1/219,size=53G
--farm path=/mnt/nvme3n1/220,size=53G
--farm path=/mnt/nvme3n1/221,size=53G
--farm path=/mnt/nvme3n1/222,size=53G
--farm path=/mnt/nvme3n1/223,size=53G
--farm path=/mnt/nvme3n1/224,size=53G
--farm path=/mnt/nvme3n1/225,size=53G
--farm path=/mnt/nvme3n1/226,size=53G
--farm path=/mnt/nvme3n1/227,size=53G
--farm path=/mnt/nvme3n1/228,size=53G
--farm path=/mnt/nvme3n1/229,size=53G
--farm path=/mnt/nvme3n1/230,size=53G
--farm path=/mnt/nvme3n1/231,size=53G
--farm path=/mnt/nvme3n1/232,size=53G
--farm path=/mnt/nvme3n1/233,size=53G
--farm path=/mnt/nvme3n1/234,size=53G
--farm path=/mnt/nvme3n1/235,size=53G
--farm path=/mnt/nvme3n1/236,size=53G
--farm path=/mnt/nvme3n1/237,size=53G
--farm path=/mnt/nvme3n1/238,size=53G
--farm path=/mnt/nvme3n1/239,size=53G
--farm path=/mnt/nvme3n1/240,size=53G
--farm path=/mnt/nvme3n1/241,size=53G
--farm path=/mnt/nvme3n1/242,size=53G
--farm path=/mnt/nvme3n1/243,size=53G
--farm path=/mnt/nvme3n1/244,size=53G
--farm path=/mnt/nvme3n1/245,size=53G
--farm path=/mnt/nvme3n1/246,size=53G
--farm path=/mnt/nvme3n1/247,size=53G
--farm path=/mnt/nvme3n1/248,size=53G
--farm path=/mnt/nvme3n1/249,size=53G
--farm path=/mnt/nvme3n1/250,size=53G
--farm path=/mnt/nvme3n1/251,size=53G
--farm path=/mnt/nvme3n1/252,size=53G
--farm path=/mnt/nvme3n1/253,size=53G
--farm path=/mnt/nvme3n1/254,size=53G
--farm path=/mnt/nvme3n1/255,size=53G
--farm path=/mnt/nvme3n1/256,size=53G
--farm path=/mnt/nvme3n1/257,size=53G
--farm path=/mnt/nvme3n1/258,size=53G
--farm path=/mnt/nvme3n1/259,size=53G
--farm path=/mnt/nvme3n1/260,size=53G
--farm path=/mnt/nvme3n1/261,size=53G
--farm path=/mnt/nvme3n1/262,size=53G
--farm path=/mnt/nvme3n1/263,size=53G
--farm path=/mnt/nvme3n1/264,size=53G
--farm path=/mnt/nvme3n1/265,size=53G
farm
--disable-private-ips
--reward-address stAWnNUo5ohfT2VXaG7StSgmWyHLiiMkdrTzVqSW1gbYZ8bWU
--disk-concurrency 100
--node-rpc-url ws://127.0.0.1:9996

For testing purposes, let's try farming a single drive and see if that syncs a bit better. This could be related to some of the farming issues the dev team is looking into, but I just want to make sure.

Let me know how a single drive does for you :smile:
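For example, something roughly like this, reusing the flags from your paste with all but one --farm entry removed and --disk-concurrency dropped since only one disk is farmed (a sketch based on your command, not a separately verified invocation):

subspace-farmer
--base-path /home/subspace3d1/.local/share/subspace-farmer
--farm path=/mnt/mp600/1,size=53G
farm
--disable-private-ips
--reward-address stAWnNUo5ohfT2VXaG7StSgmWyHLiiMkdrTzVqSW1gbYZ8bWU
--node-rpc-url ws://127.0.0.1:9996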

I have not tried the latest version yet (from May 15), but I have tried the advanced CLI on one drive and on several drives in different combinations. The result is the same: sometimes it works, sometimes it doesn't, and I can't pin down what determines whether it works or not.

But based on previous versions of the simple CLI and the advanced farmer, I'm leaning towards the CLI version.
The thing is, plot synchronization in the advanced version is much slower than in the CLI version.

Speaking of plot synchronization, is there any way to speed it up? I've tried it on different CPUs and the result is almost the same: XXX kbytes/sec. Under these conditions I can't use large plots, because their synchronization time is close to a year. Will plot synchronization speed be increased in the future?
I remember with sadness the gemini-1 version, when an 8 TB plot synced very fast.