The node crashes occasionally after running for a few days

#365690 (0xd644…b85d), ⬇ 52.5kiB/s ⬆ 2.0kiB/s
2024-03-01T11:51:58.806417Z  INFO Consensus: substrate: ⚙️  Preparing  0.0 bps, target=#439136 (40 peers), best: #439116 (0x69f9…2f7f), finalized #365690 (0xd644…b85d), ⬇ 74.6kiB/s ⬆ 2.5kiB/s
2024-03-01T11:52:03.807318Z  INFO Consensus: substrate: ⚙️  Preparing  0.0 bps, target=#439137 (40 peers), best: #439116 (0x69f9…2f7f), finalized #365690 (0xd644…b85d), ⬇ 77.2kiB/s ⬆ 3.3kiB/s
2024-03-01T11:52:08.808492Z  INFO Consensus: substrate: ⚙️  Preparing  0.0 bps, target=#439137 (40 peers), best: #439116 (0x69f9…2f7f), finalized #365690 (0xd644…b85d), ⬇ 60.3kiB/s ⬆ 1.9kiB/s
2024-03-01T11:52:13.809554Z  INFO Consensus: substrate: ⚙️  Preparing  0.0 bps, target=#439139 (40 peers), best: #439116 (0x69f9…2f7f), finalized #365690 (0xd644…b85d), ⬇ 95.1kiB/s ⬆ 3.3kiB/s
2024-03-01T11:52:18.810819Z  INFO Consensus: substrate: ⚙️  Preparing  0.0 bps, target=#439139 (40 peers), best: #439116 (0x69f9…2f7f), finalized #365690 (0xd644…b85d), ⬇ 36.2kiB/s ⬆ 2.5kiB/s
2024-03-01T11:52:23.811644Z  INFO Consensus: substrate: ⚙️  Preparing  0.0 bps, target=#439139 (40 peers), best: #439116 (0x69f9…2f7f), finalized #365690 (0xd644…b85d), ⬇ 37.5kiB/s ⬆ 1.7kiB/s
2024-03-01T11:52:28.812447Z  INFO Consensus: substrate: ⚙️  Preparing  0.0 bps, target=#439141 (40 peers), best: #439116 (0x69f9…2f7f), finalized #365690 (0xd644…b85d), ⬇ 53.4kiB/s ⬆ 2.2kiB/s
2024-03-01T11:52:33.813349Z  INFO Consensus: substrate: ⚙️  Preparing  0.0 bps, target=#439141 (40 peers), best: #439116 (0x69f9…2f7f), finalized #365690 (0xd644…b85d), ⬇ 55.3kiB/s ⬆ 1.8kiB/s
2024-03-01T11:52:38.814219Z  INFO Consensus: substrate: ⚙️  Preparing  0.0 bps, target=#439143 (40 peers), best: #439116 (0x69f9…2f7f), finalized #365690 (0xd644…b85d), ⬇ 63.1kiB/s ⬆ 2.6kiB/s
2024-03-01T11:52:43.815152Z  INFO Consensus: substrate: ⚙️  Preparing  0.0 bps, target=#439136 (40 peers), best: #439116 (0x69f9…2f7f), finalized #365690 (0xd644…b85d), ⬇ 53.7kiB/s ⬆ 1.6kiB/s
2024-03-01T11:52:48.815981Z  INFO Consensus: substrate: ⚙️  Preparing  0.0 bps, target=#439136 (40 peers), best: #439116 (0x69f9…2f7f), finalized #365690 (0xd644…b85d), ⬇ 44.2kiB/s ⬆ 2.2kiB/s
2024-03-01T11:52:53.816702Z  INFO Consensus: substrate: ⚙️  Preparing  0.0 bps, target=#439136 (40 peers), best: #439116 (0x69f9…2f7f), finalized #365690 (0xd644…b85d), ⬇ 36.0kiB/s ⬆ 2.5kiB/s

Full log: https://raw.githubusercontent.com/qingyuhuaming/log/main/sub1.log

Is it because the CPU I'm using is too weak?

According to the logs, the node simply got stuck and stopped importing blocks. This lasted for several minutes; it could be caused by insufficient CPU power if you also see the CPU overloaded at the time, but otherwise something else is going on.

By the end of the log, the node process had already exited on its own.

What causes the node to get stuck and then crash and exit on its own? Is the CPU frequency too low, or are there too few cores? Or did something go wrong because of farming? I have encountered a similar issue before, but at that time the node was just showing 0 bps.

A node doesn’t simply exit unexpectedly. Check the kernel logs with dmesg or similar; most likely it was killed (for example by the kernel’s OOM killer) or crashed in some way.
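One way to run that check is to search the kernel log for the messages the kernel emits when it kills a process for running out of memory. This is a generic sketch, not specific to any one node binary:

```shell
# Dump kernel messages to a file and look for OOM-killer activity.
# (dmesg usually needs root; failures are tolerated so the sketch runs anywhere.)
dmesg -T 2>/dev/null > /tmp/kernel.log || true
grep -iE 'out of memory|oom[-_ ]kill|killed process' /tmp/kernel.log \
  || echo "no OOM events found in current kernel log"
```

On systemd-based distributions, `journalctl -k -b -1` shows kernel messages from the previous boot, which helps if the machine rebooted after the kill.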

It seems the node crash was indeed due to a memory issue, but I also want to note that sometimes it doesn’t crash and instead just stays stuck at 0 bps.

My farm and node are not on the same network, and there is occasional packet loss and high latency between the two networks. I wonder if that could be the reason, but I can’t deploy a node on every one of my computers.
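A quick way to quantify the loss and latency between the two networks is a plain ping from the farm host toward the node. `NODE_HOST` below is a placeholder (it defaults to loopback so the sketch runs anywhere); a tool like mtr gives a per-hop breakdown if the loss needs to be localized:

```shell
# Send 20 probes and show only the loss/latency summary lines.
# NODE_HOST is a placeholder for the node's actual address.
NODE_HOST=${NODE_HOST:-127.0.0.1}
ping -c 20 "$NODE_HOST" | tail -n 2
```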

Hypothetically that can be related. If there is a networking or other hiccup somewhere in the software, it can start buffering things in RAM and use much more memory than it normally does until it crashes. On Linux, running the node under Docker or systemd can help automate restarts.
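As a sketch of the restart-on-failure approach, a minimal systemd unit might look like the following. The unit name, binary path, and arguments are placeholders for your actual setup; `MemoryMax` is an optional hard cap so a runaway node gets killed and restarted instead of taking the whole machine down:

```ini
# /etc/systemd/system/subspace-node.service (hypothetical paths and flags)
[Unit]
Description=Subspace node
After=network-online.target

[Service]
# Placeholder binary and arguments; adjust for your installation.
ExecStart=/usr/local/bin/subspace-node run
Restart=on-failure
RestartSec=10
# Optional: kill the service if it exceeds this memory limit;
# Restart=on-failure then brings it back up automatically.
MemoryMax=8G

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable --now subspace-node`. Under Docker, `--restart=unless-stopped` together with `--memory` achieves a similar effect.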