Delay before starting to sync on build sep-13-2

A few community members are reporting that they are unable to sync on the latest version, sep-13-2, and are switching back to sep-11. I have tested it out and there is a long stretch spent at 0.0 bps, but I’ve seen this delay before syncing starts ever since the beginning of Gemini 3. Are there any concerns about these logs (10 minutes to start) or these ones (25 minutes to start)?

It should eventually finish syncing (it reports 0.0 bps because it isn’t importing any blocks yet, but that doesn’t mean it isn’t doing anything). There are a few Substratisms we’re fighting here that prevent us from making the process faster; for example, Pruning doesn't have expected behavior · Issue #14758 · paritytech/substrate · GitHub describes why we can’t finalize the blocks we want to finalize, which sync from DSN depends on. We’ll get there, but currently this is somewhat expected.

But it does start eventually. The only reason sep-11 “works better” is that its sync from DSN fails quickly, while sep-13-2 actually succeeds, which takes more time. But sep-11 and Substrate sync in general suffer from the still-unresolved Node can't sync with the network · Issue #493 · paritytech/polkadot-sdk · GitHub, which DSN sync is eventually able to bypass, so there is no perfect solution here.

As more nodes in the Subspace network become pruned, sep-11 will eventually fail to sync from genesis, while sep-13-2 will keep working.


I don’t know what to say about this. I had 5 nodes updated to sep-13-2 and none of them was able to sync overnight (they were synced before); they stayed out of sync and my farmers didn’t earn any rewards for a window of 6-8 hours. I rolled them back to sep-11, each node was back in sync in under 1 hour, and the farmers went back online.

Was there anything interesting in the logs other than the 0.0 bps speed? Was the node using any CPU at all?

Nothing obvious to me other than the 0.0 bps speed (I usually do a grep -v INFO to find the unwanted stuff). I have upgraded one of these 5 nodes back to sep-13-2 since you insist this is the version to go with; I can post results in a few hours. I didn’t look at the CPU load of the node specifically.
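For reference, this is roughly what I run to look for anything unusual, plus the CPU check I’ll do next time. The log path is just an example and depends on how the node output is captured; it also assumes a single subspace-node process is running:

grep -v INFO /sub/node/node.log              # anything other than routine INFO lines
top -b -n 1 -p "$(pgrep -x subspace-node)"   # one-shot CPU/memory snapshot of the node process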

If there was CPU load, then the node was busy trying to catch up from the last finalized block; the reason for that might be a very low last finalized block, as explained above with the Substratisms we’re still fighting. There are things we can and will do about this, both upstream and downstream.

So far it hasn’t picked up a new block:

2023-09-16 13:32:32 [Consensus] Last archived block 40345
2023-09-16 13:32:32 [Consensus] Archiving already produced blocks 40346..=54821
2023-09-16 13:38:23 [Consensus] :adult::ear_of_rice: Starting Subspace Authorship worker
2023-09-16 13:38:23 [Consensus] :computer: Operating system: linux
2023-09-16 13:38:23 [Consensus] :computer: CPU architecture: x86_64
2023-09-16 13:38:23 [Consensus] :computer: Target environment: gnu
2023-09-16 13:38:23 [Consensus] :computer: CPU: AMD EPYC 7513 32-Core Processor
2023-09-16 13:38:23 [Consensus] :computer: CPU cores: 64
2023-09-16 13:38:23 [Consensus] :computer: Memory: 257748MB
2023-09-16 13:38:23 [Consensus] :computer: Kernel: 6.4.13-100.fc37.x86_64
2023-09-16 13:38:23 [Consensus] :computer: Linux distribution: Fedora Linux 37 (Server Edition)
2023-09-16 13:38:23 [Consensus] :computer: Virtual machine: no
2023-09-16 13:38:23 [Consensus] :package: Highest known block at #54921



2023-09-16 13:38:24 [Consensus] Received notification to sync from DSN reason=WentOnlineSubstrate
2023-09-16 13:38:28 [Consensus] :gear: Syncing, target=#456954 (20 peers), best: #54921 (0xc776…6620), finalized #0 (0x92e9…5095), :arrow_down: 34.8kiB/s :arrow_up: 20.0kiB/s
2023-09-16 13:38:33 [Consensus] :gear: Syncing 0.0 bps, target=#456954 (40 peers), best: #54921 (0xc776…6620), finalized #0 (0x92e9…5095), :arrow_down: 61.2kiB/s :arrow_up: 28.3kiB/s
2023-09-16 13:38:38 [Consensus] :gear: Syncing 0.0 bps, target=#456954 (40 peers), best: #54921 (0xc776…6620), finalized #0 (0x92e9…5095), :arrow_down: 53.6kiB/s :arrow_up: 37.0kiB/s
2023-09-16 13:38:43 [Consensus] :gear: Syncing 0.0 bps, target=#456954 (40 peers), best: #54921 (0xc776…6620), finalized #0 (0x92e9…5095), :arrow_down: 51.7kiB/s :arrow_up: 40.5kiB/s
2023-09-16 13:38:43 [Consensus] :x: Error while dialing /dns/telemetry.subspace.network/tcp/443/x-parity-wss/%2Fsubmit%2F: Custom { kind: Other, error: Timeout }
2023-09-16 13:38:48 [Consensus] :gear: Syncing 0.0 bps, target=#456954 (40 peers), best: #54921 (0xc776…6620), finalized #0 (0x92e9…5095), :arrow_down: 65.2kiB/s :arrow_up: 50.6kiB/s
2023-09-16 13:38:53 [Consensus] :gear: Syncing 0.0 bps, target=#456954 (40 peers), best: #54921 (0xc776…6620), finalized #0 (0x92e9…5095), :arrow_down: 64.2kiB/s :arrow_up: 57.9kiB/s
2023-09-16 13:38:58 [Consensus] :gear: Syncing 0.0 bps, target=#456958 (33 peers), best: #54921 (0xc776…6620), finalized #0 (0x92e9…5095), :arrow_down: 68.6kiB/s :arrow_up: 61.5kiB/s
2023-09-16 13:39:03 [Consensus] :gear: Syncing 0.0 bps, target=#456960 (40 peers), best: #54921 (0xc776…6620), finalized #0 (0x92e9…5095), :arrow_down: 59.1kiB/s :arrow_up: 48.8kiB/s



2023-09-16 13:59:48 [Consensus] :gear: Syncing 0.0 bps, target=#457181 (28 peers), best: #54921 (0xc776…6620), finalized #0 (0x92e9…5095), :arrow_down: 38.6kiB/s :arrow_up: 0.8kiB/s
2023-09-16 13:59:53 [Consensus] :gear: Syncing 0.0 bps, target=#457181 (28 peers), best: #54921 (0xc776…6620), finalized #0 (0x92e9…5095), :arrow_down: 28.6kiB/s :arrow_up: 116.9kiB/s
2023-09-16 13:59:58 [Consensus] :gear: Syncing 0.0 bps, target=#457183 (27 peers), best: #54921 (0xc776…6620), finalized #0 (0x92e9…5095), :arrow_down: 33.2kiB/s :arrow_up: 87.3kiB/s
2023-09-16 14:00:03 [Consensus] :gear: Syncing 0.0 bps, target=#457183 (27 peers), best: #54921 (0xc776…6620), finalized #0 (0x92e9…5095), :arrow_down: 22.7kiB/s :arrow_up: 35.5kiB/s
2023-09-16 14:00:08 [Consensus] :gear: Syncing 0.0 bps, target=#457183 (28 peers), best: #54921 (0xc776…6620), finalized #0 (0x92e9…5095), :arrow_down: 26.4kiB/s :arrow_up: 1.6kiB/s

8% CPU load

I looked at the entire log since I posted earlier. The only thing that changes is the target. It’s not syncing and the CPU is not busy (~5% for the subspace-node process). The only ERROR since earlier this afternoon is posted below.

2023-09-16 18:31:54 [Consensus] :gear: Syncing 0.0 bps, target=#459972 (40 peers), best: #54921 (0xc776…6620), finalized #0 (0x92e9…5095), :arrow_down: 49.6kiB/s :arrow_up: 2.0kiB/s
2023-09-16 18:31:59 [Consensus] :gear: Syncing 0.0 bps, target=#459973 (40 peers), best: #54921 (0xc776…6620), finalized #0 (0x92e9…5095), :arrow_down: 54.2kiB/s :arrow_up: 2.2kiB/s
2023-09-16 18:32:04 [Consensus] :gear: Syncing 0.0 bps, target=#459973 (40 peers), best: #54921 (0xc776…6620), finalized #0 (0x92e9…5095), :arrow_down: 45.4kiB/s :arrow_up: 1.2kiB/s
2023-09-16 18:32:09 [Consensus] :gear: Syncing 0.0 bps, target=#459974 (40 peers), best: #54921 (0xc776…6620), finalized #0 (0x92e9…5095), :arrow_down: 63.1kiB/s :arrow_up: 1.3kiB/s
2023-09-16 18:32:14 [Consensus] :gear: Syncing 0.0 bps, target=#459975 (40 peers), best: #54921 (0xc776…6620), finalized #0 (0x92e9…5095), :arrow_down: 67.5kiB/s :arrow_up: 2.2kiB/s
2023-09-16 18:32:18 [Consensus] Error when syncing blocks from DSN error=Other: Error during data shards reconstruction: Impossible to recover, too many shards are missing
2023-09-16 18:32:18 [Consensus] Received notification to sync from DSN reason=WentOnlineSubspace
2023-09-16 18:32:19 [Consensus] :gear: Syncing 0.0 bps, target=#459976 (40 peers), best: #54921 (0xc776…6620), finalized #0 (0x92e9…5095), :arrow_down: 50.0kiB/s :arrow_up: 1.5kiB/s
2023-09-16 18:32:24 [Consensus] :gear: Syncing 0.0 bps, target=#459977 (40 peers), best: #54921 (0xc776…6620), finalized #0 (0x92e9…5095), :arrow_down: 68.3kiB/s :arrow_up: 1.3kiB/s
2023-09-16 18:32:29 [Consensus] :gear: Syncing 0.0 bps, target=#459978 (40 peers), best: #54921 (0xc776…6620), finalized #0 (0x92e9…5095), :arrow_down: 51.3kiB/s :arrow_up: 1.8kiB/s
2023-09-16 18:32:34 [Consensus] :gear: Syncing 0.0 bps, target=#459978 (40 peers), best: #54921 (0xc776…6620), finalized #0 (0x92e9…5095), :arrow_down: 61.2kiB/s :arrow_up: 0.9kiB/s
2023-09-16 18:32:39 [Consensus] :gear: Syncing 0.0 bps, target=#459978 (40 peers), best: #54921 (0xc776…6620), finalized #0 (0x92e9…5095), :arrow_down: 46.0kiB/s :arrow_up: 0.6kiB/s

This is a problem. Is it possible that you have bad connectivity to the rest of the network? RUST_LOG=info,subspace_service=trace will tell you more about what the node is doing.
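A minimal sketch of enabling it, assuming you start the binary directly; adjust the binary path and keep the rest of your usual flags:

export RUST_LOG=info,subspace_service=trace   # more verbose output for the DSN sync code paths
./subspace-node --chain gemini-3f             # ...plus your usual arguments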

The network of this machine is stable; it’s located in a high-end data center. I won’t rule out issues, but the server rack has 5 nodes: 4 are in sync on sep-11 and the 5th, on sep-13-2, doesn’t sync. I can restart the node with RUST_LOG on.

Here is the trace from when the error was happening again (Error during data shards reconstruction: Impossible to recover, too many shards are missing). The node is not synced at all and has not made a single block of progress.

2023-09-17 05:04:06.334 INFO tokio-runtime-worker substrate: [Consensus] :gear: Syncing 0.0 bps, target=#466156 (40 peers), best: #54921 (0xc776…6620), finalized #0 (0x92e9…5095), :arrow_down: 46.3kiB/s :arrow_up: 2.8kiB/s
2023-09-17 05:04:11.334 INFO tokio-runtime-worker substrate: [Consensus] :gear: Syncing 0.0 bps, target=#466156 (40 peers), best: #54921 (0xc776…6620), finalized #0 (0x92e9…5095), :arrow_down: 38.1kiB/s :arrow_up: 2.0kiB/s
2023-09-17 05:04:16.334 INFO tokio-runtime-worker substrate: [Consensus] :gear: Syncing 0.0 bps, target=#466156 (40 peers), best: #54921 (0xc776…6620), finalized #0 (0x92e9…5095), :arrow_down: 37.7kiB/s :arrow_up: 1.3kiB/s
2023-09-17 05:04:21.335 INFO tokio-runtime-worker substrate: [Consensus] :gear: Syncing 0.0 bps, target=#466157 (40 peers), best: #54921 (0xc776…6620), finalized #0 (0x92e9…5095), :arrow_down: 81.9kiB/s :arrow_up: 0.9kiB/s
2023-09-17 05:04:26.335 INFO tokio-runtime-worker substrate: [Consensus] :gear: Syncing 0.0 bps, target=#466159 (40 peers), best: #54921 (0xc776…6620), finalized #0 (0x92e9…5095), :arrow_down: 50.4kiB/s :arrow_up: 1.8kiB/s
2023-09-17 05:04:31.335 INFO tokio-runtime-worker substrate: [Consensus] :gear: Syncing 0.0 bps, target=#466160 (40 peers), best: #54921 (0xc776…6620), finalized #0 (0x92e9…5095), :arrow_down: 63.6kiB/s :arrow_up: 1.1kiB/s
2023-09-17 05:04:36.336 INFO tokio-runtime-worker substrate: [Consensus] :gear: Syncing 0.0 bps, target=#466160 (40 peers), best: #54921 (0xc776…6620), finalized #0 (0x92e9…5095), :arrow_down: 60.2kiB/s :arrow_up: 1.7kiB/s
2023-09-17 05:04:39.231 TRACE tokio-runtime-worker subspace_service::sync_from_dsn::import_blocks: [Consensus] Piece request succeeded piece_index=PieceIndex(511) piece_found=false
2023-09-17 05:04:41.336 INFO tokio-runtime-worker substrate: [Consensus] :gear: Syncing 0.0 bps, target=#466162 (40 peers), best: #54921 (0xc776…6620), finalized #0 (0x92e9…5095), :arrow_down: 39.4kiB/s :arrow_up: 1.5kiB/s
2023-09-17 05:04:46.336 INFO tokio-runtime-worker substrate: [Consensus] :gear: Syncing 0.0 bps, target=#466163 (40 peers), best: #54921 (0xc776…6620), finalized #0 (0x92e9…5095), :arrow_down: 67.7kiB/s :arrow_up: 0.7kiB/s
2023-09-17 05:04:51.337 INFO tokio-runtime-worker substrate: [Consensus] :gear: Syncing 0.0 bps, target=#466164 (40 peers), best: #54921 (0xc776…6620), finalized #0 (0x92e9…5095), :arrow_down: 52.4kiB/s :arrow_up: 0.6kiB/s
2023-09-17 05:04:56.337 INFO tokio-runtime-worker substrate: [Consensus] :gear: Syncing 0.0 bps, target=#466165 (40 peers), best: #54921 (0xc776…6620), finalized #0 (0x92e9…5095), :arrow_down: 42.5kiB/s :arrow_up: 0.6kiB/s
2023-09-17 05:05:01.338 INFO tokio-runtime-worker substrate: [Consensus] :gear: Syncing 0.0 bps, target=#466166 (40 peers), best: #54921 (0xc776…6620), finalized #0 (0x92e9…5095), :arrow_down: 45.6kiB/s :arrow_up: 1.0kiB/s
2023-09-17 05:05:06.338 INFO tokio-runtime-worker substrate: [Consensus] :gear: Syncing 0.0 bps, target=#466167 (40 peers), best: #54921 (0xc776…6620), finalized #0 (0x92e9…5095), :arrow_down: 59.2kiB/s :arrow_up: 1.1kiB/s
2023-09-17 05:05:11.338 INFO tokio-runtime-worker substrate: [Consensus] :gear: Syncing 0.0 bps, target=#466168 (40 peers), best: #54921 (0xc776…6620), finalized #0 (0x92e9…5095), :arrow_down: 54.5kiB/s :arrow_up: 1.2kiB/s
2023-09-17 05:05:14.859 TRACE tokio-runtime-worker subspace_service::sync_from_dsn::import_blocks: [Consensus] Piece request succeeded piece_index=PieceIndex(507) piece_found=false
2023-09-17 05:05:14.861 WARN tokio-runtime-worker subspace_service::sync_from_dsn: [Consensus] Error when syncing blocks from DSN error=Other: Error during data shards reconstruction: Impossible to recover, too many shards are missing
2023-09-17 05:05:14.861 INFO tokio-runtime-worker subspace_service::sync_from_dsn: [Consensus] Received notification to sync from DSN reason=WentOnlineSubspace
2023-09-17 05:05:14.861 TRACE tokio-runtime-worker subspace_service::sync_from_dsn::segment_header_downloader: [Consensus] Searching for latest segment header last_known_segment_index=30
2023-09-17 05:05:14.861 TRACE tokio-runtime-worker subspace_service::sync_from_dsn::segment_header_downloader: [Consensus] Downloading last segment headers retry_attempt=1
2023-09-17 05:05:16.339 INFO tokio-runtime-worker substrate: [Consensus] :gear: Syncing 0.0 bps, target=#466170 (40 peers), best: #54921 (0xc776…6620), finalized #0 (0x92e9…5095), :arrow_down: 47.5kiB/s :arrow_up: 0.5kiB/s
2023-09-17 05:05:21.339 INFO tokio-runtime-worker substrate: [Consensus] :gear: Syncing 0.0 bps, target=#466171 (40 peers), best: #54921 (0xc776…6620), finalized #0 (0x92e9…5095), :arrow_down: 45.0kiB/s :arrow_up: 0.8kiB/s
2023-09-17 05:05:26.339 INFO tokio-runtime-worker substrate: [Consensus] :gear: Syncing 0.0 bps, target=#466172 (40 peers), best: #54921 (0xc776…6620), finalized #0 (0x92e9…5095), :arrow_down: 58.4kiB/s :arrow_up: 0.7kiB/s
2023-09-17 05:05:30.611 TRACE tokio-runtime-worker subspace_service::sync_from_dsn::segment_header_downloader: [Consensus] Last segment headers request succeeded peer_id=12D3KooWFXNp8WnDbaJuwZFXAzP2hdD4WdaZEw5T5483C4AJmKm4 segment_headers_number=2
2023-09-17 05:05:30.945 TRACE tokio-runtime-worker subspace_service::sync_from_dsn::segment_header_downloader: [Consensus] Last segment headers request succeeded peer_id=12D3KooWLJXmLXFA7GHkZqJwXAFAjWHYv95W3hgQku3eeTjo3Aer segment_headers_number=2
2023-09-17 05:05:31.180 TRACE tokio-runtime-worker subspace_service::sync_from_dsn::segment_header_downloader: [Consensus] Last segment headers request succeeded peer_id=12D3KooW9pzn6TS737XoaB9qEN7XqnJc4N2hzv353Wieda1AwifN segment_headers_number=2
2023-09-17 05:05:31.340 INFO tokio-runtime-worker substrate: [Consensus] :gear: Syncing 0.0 bps, target=#466172 (40 peers), best: #54921 (0xc776…6620), finalized #0 (0x92e9…5095), :arrow_down: 45.2kiB/s :arrow_up: 0.9kiB/s
2023-09-17 05:05:31.762 TRACE tokio-runtime-worker subspace_service::sync_from_dsn::segment_header_downloader: [Consensus] Last segment headers request succeeded peer_id=12D3KooWHLsojXfP5M2ugsoffR5emqbvMhivAYkgZVudzP6xSYcU segment_headers_number=2
2023-09-17 05:05:32.673 TRACE tokio-runtime-worker subspace_service::sync_from_dsn::segment_header_downloader: [Consensus] Last segment headers request succeeded peer_id=12D3KooWKFqqcpjn2o3pARMFsFWDyFhzMux7p1aAeJzoJhaBdcyc segment_headers_number=2
2023-09-17 05:05:33.327 TRACE tokio-runtime-worker subspace_service::sync_from_dsn::segment_header_downloader: [Consensus] Last segment headers request succeeded peer_id=12D3KooWJYTJwA1zsuHSdke6YjZLXprijmws4qvAEL6F1SzeUdkJ segment_headers_number=2
2023-09-17 05:05:34.315 TRACE tokio-runtime-worker subspace_service::sync_from_dsn::segment_header_downloader: [Consensus] Last segment headers request succeeded peer_id=12D3KooWSGSEvSb1CpbpknxMTPisCK8giQmeC3Web9RhRusJb7qN segment_headers_number=2
2023-09-17 05:05:34.708 TRACE tokio-runtime-worker subspace_service::sync_from_dsn::segment_header_downloader: [Consensus] Last segment headers request succeeded peer_id=12D3KooWQfpyBYW7GirTL7gnpvCJgJZQzFKEL6vFQR7b1ha98ZBG segment_headers_number=2
2023-09-17 05:05:35.221 TRACE tokio-runtime-worker subspace_service::sync_from_dsn::segment_header_downloader: [Consensus] Last segment headers request succeeded peer_id=12D3KooWLSrTDT5rL1W9GDpB1KcE2u545j5Jc1MiEG3XxP3hzHV5 segment_headers_number=2
2023-09-17 05:05:35.446 TRACE tokio-runtime-worker subspace_service::sync_from_dsn::segment_header_downloader: [Consensus] Last segment headers request succeeded peer_id=12D3KooWSDXR57C9z2T1EbcwSJfwCJPPQDrG7Nex3UjT8cpHN9a9 segment_headers_number=2
2023-09-17 05:05:35.740 TRACE tokio-runtime-worker subspace_service::sync_from_dsn::segment_header_downloader: [Consensus] Last segment headers request succeeded peer_id=12D3KooWHtcxFAceo8RABudUYkMdjfTx5ntT9aBGjM5CpJ86apVr segment_headers_number=2
2023-09-17 05:05:36.137 TRACE tokio-runtime-worker subspace_service::sync_from_dsn::segment_header_downloader: [Consensus] Last segment headers request succeeded peer_id=12D3KooWFR1U1GuTLMq7NTocJBkKD7YXEfDxBnaAfQXfv3K8kUbg segment_headers_number=2
2023-09-17 05:05:36.340 INFO tokio-runtime-worker substrate: [Consensus] :gear: Syncing 0.0 bps, target=#466173 (40 peers), best: #54921 (0xc776…6620), finalized #0 (0x92e9…5095), :arrow_down: 36.1kiB/s :arrow_up: 1.0kiB/s
2023-09-17 05:05:36.357 TRACE tokio-runtime-worker subspace_service::sync_from_dsn::segment_header_downloader: [Consensus] Last segment headers request succeeded peer_id=12D3KooWKGD72BshrijyJFryGFyQBpi7BHEV97fxeb8tmp2ifCKu segment_headers_number=2
2023-09-17 05:05:36.690 TRACE tokio-runtime-worker subspace_service::sync_from_dsn::segment_header_downloader: [Consensus] Last segment headers request succeeded peer_id=12D3KooWE7JMP3JsNmgHZ5KnoBkYiJ3wTLHwH5WJMyzfvgExQ5y7 segment_headers_number=2

This is an indication of a problem. It didn’t find even half of the 256 pieces it was looking for. Moreover, judging from this:

There were zero Subspace networking connections for some reason, which is not good at all. I’m not sure what caused it, but essentially your node manages to establish a few connections, makes some requests, then effectively goes “offline”, and when it comes back up the process repeats in a loop :confused:

Longer logs would help clarify things some more, but so far something is definitely working suboptimally or effectively not working at all.

I’ll get the entire file to you sometime Monday.


I sent you the file via Discord, @nazar-pc.

Something is bad in there: piece retrieval is extremely slow and full of piece_found=false.

I do not see such an issue on my machine; all pieces my node tries to download are retrieved successfully. On your node, though, most of them fail.
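If you want to quantify it on your side, counting the piece request outcomes in the trace log gives a rough success rate; the log file name is just an example:

grep -c 'piece_found=true' node.log    # piece requests that actually returned a piece
grep -c 'piece_found=false' node.log   # piece requests that came back empty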

Another suspicious thing is this:

2023-09-16 23:38:25.412 DEBUG tokio-runtime-worker subspace_service::sync_from_dsn::segment_header_downloader: [Consensus] Last segment header matches last known segment header, nothing to download last_known_segment_index=30
2023-09-16 23:38:25.412 DEBUG tokio-runtime-worker subspace_service::sync_from_dsn::import_blocks: [Consensus] Found 0 new segment headers
2023-09-16 23:38:25.412 DEBUG tokio-runtime-worker subspace_service::sync_from_dsn::import_blocks: [Consensus] Processing segment segment_index=1
2023-09-16 23:38:25.412 TRACE tokio-runtime-worker subspace_service::sync_from_dsn::import_blocks: [Consensus] Checking segment header segment_index=1 last_archived_block_number=14493 last_archived_block_progress=Partial(384)

Your node thinks segment_index==1 is the latest, which is very far from the truth. And most of the nodes you’re connected to seem to agree, which actually makes little sense. Here is what my node sees (just a few days later than your logs):

2023-09-19 00:21:43.495 DEBUG tokio-runtime-worker subspace_service::sync_from_dsn::segment_header_downloader: [Consensus] Downloading segment headers last_known_segment_index=0 last_segment_index=32
2023-09-19 00:21:43.495 TRACE tokio-runtime-worker subspace_service::sync_from_dsn::segment_header_downloader: [Consensus] Getting segment header batch.. segment_indexes=[SegmentIndex(31), SegmentIndex(30), SegmentIndex(29), SegmentIndex(28), SegmentIndex(27), SegmentIndex(26), SegmentIndex(25), SegmentIndex(24), SegmentIndex(23), SegmentIndex(22), SegmentIndex(21), SegmentIndex(20), SegmentIndex(19), SegmentIndex(18), SegmentIndex(17), SegmentIndex(16), SegmentIndex(15), SegmentIndex(14), SegmentIndex(13), SegmentIndex(12), SegmentIndex(11), SegmentIndex(10), SegmentIndex(9), SegmentIndex(8), SegmentIndex(7), SegmentIndex(6), SegmentIndex(5), SegmentIndex(4), SegmentIndex(3), SegmentIndex(2), SegmentIndex(1), SegmentIndex(0)]
2023-09-19 00:21:43.495 TRACE tokio-runtime-worker subspace_service::sync_from_dsn::segment_header_downloader: [Consensus] get_closest_peers returned an item peer_id=12D3KooWBzZsQTYUwx4XxsFe5oWcSUNADey5mmCSipW2jQuYagxy
2023-09-19 00:21:43.578 TRACE tokio-runtime-worker subspace_service::sync_from_dsn::segment_header_downloader: [Consensus] Segment header request succeeded peer_id=12D3KooWBzZsQTYUwx4XxsFe5oWcSUNADey5mmCSipW2jQuYagxy segment_indexes=[SegmentIndex(31), SegmentIndex(30), SegmentIndex(29), SegmentIndex(28), SegmentIndex(27), SegmentIndex(26), SegmentIndex(25), SegmentIndex(24), SegmentIndex(23), SegmentIndex(22), SegmentIndex(21), SegmentIndex(20), SegmentIndex(19), SegmentIndex(18), SegmentIndex(17), SegmentIndex(16), SegmentIndex(15), SegmentIndex(14), SegmentIndex(13), SegmentIndex(12), SegmentIndex(11), SegmentIndex(10), SegmentIndex(9), SegmentIndex(8), SegmentIndex(7), SegmentIndex(6), SegmentIndex(5), SegmentIndex(4), SegmentIndex(3), SegmentIndex(2), SegmentIndex(1), SegmentIndex(0)]
2023-09-19 00:21:43.578 DEBUG tokio-runtime-worker subspace_service::sync_from_dsn::import_blocks: [Consensus] Found 33 new segment headers

It found segment indices up to 31!

What are the CLI arguments of the node in your case? There is either something off there or you’re incredibly unlucky to be stuck in these conditions.

Is there a chance it’s related to the region where the node is located, like North America? See below for how I am starting this node.

name="foo-02"
base_path="/sub/node/"
mkdir -p "$base_path"
export RUST_LOG=info,subspace_service=trace
cmd="./target/x86_64-unknown-linux-gnu/production/subspace-node
--name $name
--base-path $base_path
--port 33930
--rpc-port 33931
--prometheus-port 33933
--dsn-listen-on /ip4/0.0.0.0/tcp/33934
--chain gemini-3f
--execution wasm
--blocks-pruning 256
--state-pruning archive
--no-private-ipv4
--validator
"
$cmd

Anything is possible, of course, but we have a lot of nodes in North America according to telemetry.

Try stopping the node and deleting/renaming the known_addresses.bin file in the node’s directory, then see if it helps at all.
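Something along these lines, using the base path from your script; the exact location of the file under the base path may vary, so locate it first:

# stop the node first, then find and rename the cached peer addresses
find /sub/node -name known_addresses.bin
mv /sub/node/known_addresses.bin /sub/node/known_addresses.bin.bak   # adjust to whatever path find printed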

OK, I’ll try that and will see tomorrow if it helped at all.