Thread 'main' panicked at 'Failed to make runtime API call during last archived block search: UnknownBlock

Issue Report

Not sure if related, but this happened after the node ran out of disk space.

Environment

  • Ubuntu 22.04.3 LTS
  • Advanced CLI

Problem

2023-09-01 22:15:50 Subspace
2023-09-01 22:15:50 :v: version 0.1.0-a5336b5f501
2023-09-01 22:15:50 :heart: by Subspace Labs https://subspace.network, 2021-2023
2023-09-01 22:15:50 :clipboard: Chain specification: Subspace Gemini 3f
2023-09-01 22:15:50 :label: Node name: motko1
2023-09-01 22:15:50 :bust_in_silhouette: Role: AUTHORITY
2023-09-01 22:15:50 :floppy_disk: Database: ParityDb at /mnt/Buffer/subspace1/chains/subspace_gemini_3f/paritydb/full
2023-09-01 22:15:50 :chains: Native runtime: subspace-2 (subspace-0.tx0.au0)
2023-09-01 22:15:52 [Consensus] DSN instance configured. allow_non_global_addresses_in_dht=false peer_id=12D3KooWJr9wgCHD4sh5LLVRNWWJQM7R5v3WLeR8sqD5HFt1EKwW protocol_version=/>
2023-09-01 22:15:52 [Consensus] Subspace networking initialized: Node ID is 12D3KooWJr9wgCHD4sh5LLVRNWWJQM7R5v3WLeR8sqD5HFt1EKwW
2023-09-01 22:15:52 [Consensus] DSN listening on /ip4/127.0.0.1/tcp/30433/p2p/12D3KooWJr9wgCHD4sh5LLVRNWWJQM7R5v3WLeR8sqD5HFt1EKwW
2023-09-01 22:15:52 [Consensus] DSN listening on /ip4/10.0.4.199/tcp/30433/p2p/12D3KooWJr9wgCHD4sh5LLVRNWWJQM7R5v3WLeR8sqD5HFt1EKwW
2023-09-01 22:15:52 [Consensus] :label: Local node identity is: 12D3KooWJr9wgCHD4sh5LLVRNWWJQM7R5v3WLeR8sqD5HFt1EKwW
2023-09-01 22:15:53 [Consensus] Added observed address as external: /ip4/193.87.163.5/tcp/30433
2023-09-01 22:16:08 [Consensus] Public address status changed. old=Unknown new=Private

====================

Version: 0.1.0-a5336b5f501

0: sp_panic_handler::set::{{closure}}
1: std::panicking::rust_panic_with_hook
2: std::panicking::begin_panic_handler::{{closure}}
3: std::sys_common::backtrace::__rust_end_short_backtrace
4: rust_begin_unwind
5: core::panicking::panic_fmt
6: subspace_service::new_full::{{closure}}
7: subspace_node::main::{{closure}}::{{closure}}
8: sc_cli::runner::Runner::run_node_until_exit
9: subspace_node::main
10: std::sys_common::backtrace::__rust_begin_short_backtrace
11: std::rt::lang_start::{{closure}}
12: main
13: __libc_start_call_main
at ./csu/…/sysdeps/nptl/libc_start_call_main.h:58:16
14: __libc_start_main_impl
at ./csu/…/csu/libc-start.c:392:3
15: _start

Thread 'main' panicked at 'Failed to make runtime API call during last archived block search: UnknownBlock("State already discarded for 0xdf1cff42ae685ed432e26134c171b0f3f7c0ff>

This is a bug. Please report it at:

    https://forum.subspace.network

Hey @Motko222 thank you for the submission!

You mentioned that this occurs when the node runs out of disk space? You may need to clean up some space, or lower your plot size if this keeps occurring. Once there is space available, you should be able to restart your farmer with no issues.

Feel free to get back to me and let me know if this issue persists even with available space.

Hi, already did that, did not help.

If you ran out of space, the database might be corrupted. I think you'll need to wipe the node and start again. The farmer doesn't need to be wiped.
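For reference, wiping just the node typically means removing the node's chain database while leaving the farmer's plot untouched. A minimal sketch, assuming the database path from the log above and a Substrate-style `purge-chain` subcommand (whether your binary exposes it depends on the CLI version, so treat both commands as assumptions to verify against your setup):

```shell
# Option 1 (assumed): use the node's purge subcommand, if available,
# pointing at the same base path the node was started with.
subspace-node purge-chain --base-path /mnt/Buffer/subspace1

# Option 2: remove the chain database directory manually.
# Path taken from the "Database: ParityDb at ..." log line above.
rm -rf /mnt/Buffer/subspace1/chains/subspace_gemini_3f

# Do NOT delete the farmer's plot directory - only the node is wiped.
```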

Yes, I wiped the node and it is syncing again. Thanks!


Hello, I have this problem again with another node, so it would appear it is not related to the lack of disk space.
This time the node was stuck at block 433530. I tried to restart it and am getting this error again.

So the node was synced, then went out of sync, you restarted it, and it broke like that? Did I understand that correctly?

If so, can you provide as much of the logs as possible from before the restart and then from the crash right after? That will help with understanding what is going on there.

Will try to reproduce and save logs.
Now I have realized, I was still using start scripts from Gemini II and was starting the node like this: “–execution wasm --state-pruning 1024 --keep-blocks 1024 --validator”.
Could that be the problem?

This is your problem! Such a value is not supported; you should use `archive` instead. State will still be pruned, but only when it is safe to do so.
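Concretely, the fix is to replace the numeric pruning value with `archive`. A sketch of the change, assuming the same flags as the Gemini II script quoted above (your script's other flags such as base path, chain, and node name are omitted here and should be kept as they are):

```shell
# Before (Gemini II era script - unsupported numeric pruning value):
#   --execution wasm --state-pruning 1024 --keep-blocks 1024 --validator

# After (per the advice above - state is still pruned, but only when safe):
subspace-node \
  --state-pruning archive \
  --validator
```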

Yes sir, thanks! I already tried with `archive` and I can restart the node without problems.
However, I do have other nodes running with state pruning. What is the drill here: wipe the node and the plot, or just the node?

Just the node, nothing wrong with your plot in this case.
