
So I recently decided to rebuild my full node[1] on the same hardware (RPi4 8 GB) but with a larger drive (2 TB USB3 external SSD). It's basically the same setup as the last full node I built in 2022, just with a bigger drive.
Some observations:
  • IBD took about 3 days in 2022
  • IBD now: I'm 7 days in and only 78% synced. Progress has whittled down to ~1-2% per day. At this rate I think it will take another 2-3 weeks to finish the initial sync.
  • Looking at my hardware utilization rates, I believe CPU is the bottleneck. RAM is not being pressured nor bandwidth, but CPU is at consistently high usage.
  • Progress started to slow around blocks mined in 2023. This leads me to believe it is the bloating of the UTXO set with ordinals that is now the bottleneck.
If my hypothesis is correct, it makes sense why Core devs are concerned about UTXO bloat. I can't see any normie, or even most techies, considering a 2-3 week sync a smooth experience. Maybe the days of running a full node on a Pi are long gone.
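For anyone wanting to check the same figure on their own node, the node reports a progress estimate via the getblockchaininfo RPC; a minimal example (assuming bitcoin-cli is set up and can reach the node):

    bitcoin-cli getblockchaininfo
    # relevant fields: "blocks", "headers", and "verificationprogress"
    # (a verificationprogress of 0.78 corresponds to roughly 78% synced)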

Footnotes

  1. Don't ask me why I didn't just copy the blockchain from my old SSD to my new SSD. I stupid ok? Plus, I did want to try the experience from scratch a second time.
It completely exploded in 2023. No wonder your 8 GB RAM Pi performs poorly on an 11 GB UTXO set.
You could still try increasing the value of dbcache= in your bitcoin.conf to a number of megabytes that still fits in your memory. It might improve performance a little.
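For example, a minimal bitcoin.conf sketch for an 8 GB Pi might look like this (the exact number is a judgment call; leave headroom for the OS and anything else running):

    # ~/.bitcoin/bitcoin.conf
    # dbcache is in MiB; a larger value keeps more of the UTXO set in RAM during IBD
    dbcache=4000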
reply
Little eerie seeing this post...
I also dusted off an Umbrel install I had last running in 2022, plugged it in and started re-syncing. Same specs: RPi4, but with a 2 TB spinning hard disk. It has Bitcoin Core v22.0 installed. Also at 78% of the full height as of this morning.
In top I noticed that, beyond bitcoind, the elctr (?) container was taking a third of the CPU in use. When I get home I'll verify this and check out some of these diagnostic commands.
reply
Redundant LevelDB work is the culprit. Here is the bug report: https://github.com/bitcoin/bitcoin/issues/31882
In short, the current implementation doesn't scale well; there is huge room for improvement.
reply
130 sats \ 1 reply \ @jakoyoh629 4h
It's getting harder and harder for the volunteers who get absolutely nothing for what they do. Things need to change, like, now.
reply
True. We want the barrier to becoming a node runner to go down, or at least stay constant, not get higher.
reply
550 sats \ 0 replies \ @anon 4h
You are correct, it is probably the spamming of the UTXO set by token creators. Over the last few years spammers have spent several hundred million dollars on fees (Greg Maxwell writes that it's been over $280 million) to essentially gamble on tokens.
I have read that the RPi5 with an NVMe is still very fast, as in IBD within a day or two.
reply
130 sats \ 5 replies \ @optimism 4h
Looking at my hardware utilization rates, I believe CPU is the bottleneck. RAM is not being pressured nor bandwidth, but CPU is at consistently high usage.
What are the values for wa/si/hi/st (you can see them in top) during sync? In the past, the RPi would have massive soft interrupts (si) when under network/USB load.
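(If it helps, one way to grab that line non-interactively is top in batch mode:)

    top -bn1 | grep '%Cpu'
    # prints a summary line like:
    # %Cpu(s):  ... us,  ... sy,  ... ni,  ... id,  X.X wa,  X.X hi,  X.X si,  X.X st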
reply
wa hovers at around 10-15. si hovers around 1-3. The other two are zero.
reply
30 sats \ 3 replies \ @optimism 3h
wa = wait -> waiting for disk.
Which version are you running? 28.0 had an improvement for this that may help: https://github.com/bitcoin/bitcoin/blob/master/doc/release-notes/release-notes-28.0.md#chainstate

1-3 si is high but not insane. What base distro are you running? I'll try to ask a friend later today because I know he was digging into this on his pi4.
reply
I'm running Bitcoin Core 29.0 on 64-bit Raspberry Pi OS (Debian bookworm).
reply
30 sats \ 1 reply \ @optimism 3h
Interesting.
Someone was talking about RPi tuning relating to the caches on the repo, but I can't for the life of me remember who it was. Also, there is a ton of work sitting in master right now, but from what I gathered glancing at the changes as they got merged, those would slow down IBD by ~5%.
reply
Thanks for the insights. It's quite possible that there are some changes since 2022 that aren't well optimized for the Pi. Anyway, not much for me to do right now except wait. Good thing I don't really need the node for anything right now. But it is a bit concerning... I wonder how many people tried to run a node and gave up because of technical difficulties. Not a good sign for decentralization.
Anyone remember that HTC phone that supposedly performed a full node sync? I'm curious if it can even manage to sync to 100% nowadays.
reply
Patience is a virtue. It took me a few weeks to download, but what's the rush? True, all the extraneous crap being put on the blockchain is a problem; I'm not sure how that can be fixed. The cost of data storage is still coming down, though more slowly, as is the cost of data transmission itself.
reply
0 sats \ 0 replies \ @anon 15m
Hey @SimpleStacker, I'm working full-time on speeding up Bitcoin Core IBD, see https://github.com/bitcoin/bitcoin/pull/32043.
I'm really surprised by the 7-day, 78% sync you're seeing. I run multiple IBDs and reindex-chainstates per day to hunt new bottlenecks, and even the worst finish in about 12 hours. I should get some Raspberry Pi benchmarking servers somehow.
A few of us are also working on an experimental IBD alternative called SwiftSync (https://delvingbitcoin.org/t/swiftsync-speeding-up-ibd-with-pre-generated-hints-poc/1562). The latest prototype reindex-chainstated up to block 888888 in 29 minutes on my laptop (with profiling enabled!). Granted, it's a really powerful laptop.
My first guess is that your disk I/O is probably very slow, something that can be mitigated by keeping more data in memory with, for example, -dbcache=5000. You can also increase the batch size of writes to LevelDB with -dbbatchsize=67108864. Lastly, you can turn off script verification by setting -assumevalid=000000000000000000013e40cf3ae6464f5f99d415d6a1fb31577841103df5d8 to the hash of the block you want to re-enable it from.
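Put together, a rough sketch of how these could be passed on the command line (the assumevalid hash is just the block mentioned above; pick whichever block you're comfortable trusting):

    bitcoind -daemon \
      -dbcache=5000 \
      -dbbatchsize=67108864 \
      -assumevalid=000000000000000000013e40cf3ae6464f5f99d415d6a1fb31577841103df5d8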
No wonder your 8 GB RAM Pi performs poorly on an 11 GB UTXO set.
That's the size on disk - in memory it's almost 30 GiB.
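(For anyone curious about the on-disk figure on their own node, gettxoutsetinfo reports it, though the RPC scans the whole set and can take a while:)

    bitcoin-cli gettxoutsetinfo
    # "disk_size" is the serialized chainstate size on disk in bytes;
    # the in-memory coins-cache representation per entry is considerably larger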
reply