Tried to run the parser, but ended up with this error. What could it be?
```
thread 'main' has overflowed its stack
fatal runtime error: stack overflow
./run.sh: line 25: 1954942 Aborted (core dumped) cargo run -r -- "$HOME/.bitcoin/"
```
I pushed an update of run.sh, it should be better now!
If you end up using Satonomics for your website, a mention would be cool and appreciated 🤙
reply
My bad, it's not `ulimit -n 1000000` (you probably still need that line though, it raises the maximum number of files that can be open at a time) but `ulimit -s $(ulimit -Hs)`, which increases the stack size.
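For clarity, the two lines do different jobs (these are the values mentioned in this thread, the exact run.sh may differ):

```
ulimit -n 1000000         # raise the max number of files that can be open at once
ulimit -s $(ulimit -Hs)   # raise the soft stack size limit up to the hard limit
```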
reply
So basically the default stack size is too small for all the different things the program needs (the structs tree is an absolute monster!).
If you open run.sh there is a hack for that (roughly sketched below), but I only enabled it for macOS as I didn't know how it would react on Linux.
What you can do is try `ulimit -n 1000000 && ./run.sh` (or just move the line inside the run.sh file) and see if it helps; otherwise don't hesitate to ping me.
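The macOS-only part looks something like this (a simplified sketch, not the exact script):

```
# Simplified sketch of run.sh, not the real file: only bump the stack
# size on macOS, then launch the parser.
if [ "$(uname)" = "Darwin" ]; then
  ulimit -s $(ulimit -Hs)   # increase the stack size to the hard limit
fi
ulimit -n 1000000           # allow a large number of open files
cargo run -r -- "$HOME/.bitcoin/"
```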
reply
I already had a high max-open-files limit on my Linux server. The missing command was `ulimit -s $(ulimit -Hs)` to increase the stack size.
Now it worked, the parser is running, thanks. I'll report back to you when it finishes.
reply
Sure! Like I said in the other replies, which you seem to have missed, I swapped the two lines.
reply
I messaged you on Nostr, too many questions to ask here.
reply
Apparently I cannot go further than 2024-07-05 in the parser, due to the lack of price data:
```
2024-07-08 14:28:19 - Processing 2024-07-05 (height: 850735)...
2024-07-08 14:28:19 - fetch kraken daily
2024-07-08 14:32:44 - kraken: fetch 1mn
2024-07-08 14:32:44 - binance: fetch 1mn
2024-07-08 14:32:45 - binance: read har file
The application panicked (crashed).
Message: Can't find price for 850839 - 1720211220 - 2024-07-05, please update binance.har file
Location: src/datasets/price/mod.rs:306

Backtrace omitted. Run with RUST_BACKTRACE=1 environment variable to display it.
Run with RUST_BACKTRACE=full to include source snippets.
```
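One way to see how far the bundled price data goes, assuming binance.har is a standard HAR (JSON) export, is to list the request timestamps it contains:

```
# Assumes binance.har follows the standard HAR layout (log.entries[]).
jq -r '.log.entries[].startedDateTime' binance.har | sort | tail -n 5
```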
reply
The parser just finished, it took 52 hours on a powerful Linux machine.
To update its database, all I have to do is run the parser again and it will pick up at the last height it processed?
How can I access the generated datasets?
reply