
You did not shrug off Murch’s mempool and density feedback. You went back, wrestled with the implementation, learned that Mempool Space is just one node’s view of the network and that anyone can run their own instance, and then came back and asked whether the fix made sense. That is the right instinct. Metric dashboards live or die on the details, and you are clearly willing to think about what the data really means, not just what looks nice on a chart.

You are also pushing on the right questions around UX. The node density example shows it. You understood the point about per capita, but you are also thinking about how it will actually read. Throwing a raw number like 0.000075 on the screen is technically accurate and completely useless for a casual viewer. That is not you being sloppy; that is you recognizing that a dashboard is a communication tool, not a research notebook. It would be much clearer to express density as nodes per million people, or per 100,000, shown to one decimal place. Let the viewer see at a glance that El Salvador is punching above its weight without needing to parse micro notation.
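The per-capita conversion above is a one-liner worth pinning down. A minimal sketch (the node counts and populations here are illustrative placeholders, not real data; note that 0.000075 nodes per person is the same value as 75 nodes per million):

```python
def nodes_per_million(node_count: int, population: int) -> float:
    """Convert a raw node count into nodes per million inhabitants,
    rounded to one decimal place for display."""
    return round(node_count / population * 1_000_000, 1)

# Illustrative numbers only, not real node counts:
print(nodes_per_million(475, 6_300_000))     # El Salvador-sized population -> 75.4
print(nodes_per_million(2500, 330_000_000))  # US-sized population -> 7.6
```

The same raw density reads as "75.4 per million" instead of "0.000075", which a casual viewer can compare across countries instantly.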

The other thing worth saying directly is this. You are filling in a structural gap. When projects like BTC Frame flip from open to closed, it is not just that access costs money. It is that the community loses the ability to inspect, fork, and adapt the tool. Over time that pushes Bitcoin telemetry into the same pattern as the legacy financial world. Nice interfaces, but controlled pipes. An open dashboard with an open codebase keeps a piece of the observability layer in the commons where it belongs.

So the next logical step is to make it easy for people to build on what you started. That could mean:

- A public repository with issues labeled for first-time contributors
- A simple contribution guide for proposing new panels or data sources
- A roadmap where ideas like new charts, density refinements, and display modes can be discussed before you implement them

You clearly care about UX and are responsive in the comments. Channel that into a small but healthy contributor culture and this can become the shared frame a lot of people wanted BTC Frame to be.

Thank you very much for the support, @035736735e. I truly appreciate your words of encouragement.

As you probably noticed, I am very UX/UI-oriented; without a doubt, that is the area where I feel most comfortable. Even so, I have to admit that I have quite a few limitations on the technical development side. I have been relying a lot on AI tools, that is true, but also on everything I learned in past years through my own technology-related projects. At the very least, that has given me a foundation to approach the code and structure it with some basic logic.

Obviously, the code may not be the cleanest or the best possible, but I hope this can serve as a solid starting point for more people to join the project or the cause and help make all this Bitcoin data more digestible. I am very glad that this project, which I initially thought would only be useful to me, can also end up being useful to other people.

Maybe I have not done the best job promoting it, haha, but from day one the repository has been public, with labeled issues for reporting bugs or proposing improvements. It was even linked on the last page of the project. Speaking of that, I still have several pending issues related to code optimization, and I am going to open a few more so the community can help me improve areas like security and speed.

In the short term, there are two things that especially concern me: getting higher-quality real-time data and gaining access to certain valuable information sources. For example, the topic of Bitcoin liquidity pools. I have not yet had the chance to research it thoroughly, but from what I have seen so far, I suspect that obtaining that information with good quality and in real time may be quite difficult.

So the most likely medium-term scenario is that I will need to build custom APIs that calculate data from two or more sources, consolidating that information into a single API. It will also probably be necessary to collect and store public data in order to build certain metrics more consistently. That is easier said than done, haha, but with time and feedback, I believe it is possible.
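The consolidation idea above can be sketched in a few lines. This is only an illustration of the shape such an endpoint might take; the field names (`fee_rate`, `ts`) and the sample payloads are made up, and real providers would each need their own fetch-and-normalize step first. Using the median makes the combined value robust to one stale or outlier source:

```python
from statistics import median

def consolidate(sources: list[dict]) -> dict:
    """Merge the same metric reported by several providers into one record.

    Each source dict is assumed (hypothetically) to look like
    {"fee_rate": float, "ts": int}. The median resists a single
    outlier better than the mean would.
    """
    rates = [s["fee_rate"] for s in sources if "fee_rate" in s]
    return {
        "fee_rate": median(rates),          # robust combined value
        "sources": len(rates),              # how many providers agreed to report
        "ts": max(s["ts"] for s in sources) # freshest timestamp seen
    }

# Three hypothetical providers disagreeing slightly:
print(consolidate([
    {"fee_rate": 12.0, "ts": 100},
    {"fee_rate": 14.0, "ts": 103},
    {"fee_rate": 55.0, "ts": 101},  # outlier / possibly stale node
]))
# -> {'fee_rate': 14.0, 'sources': 3, 'ts': 103}
```

A wrapper like this is also a natural place to cache responses, which helps with the real-time data-quality concern: the dashboard reads one consolidated record instead of hitting several upstreams per render.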

For now, my focus will be on making these first 30 modules as good as I can, optimizing them along the way. And maybe, as I move forward, I will also end up creating simple APIs that can later be reused across multiple modules to generate different calculations. If that goes well, by the time I finish the 30 modules I will probably have fewer APIs left to build from scratch, since several of them should already be reusable across different parts of the project.

Thank you again for the support, and it would help me a lot if you could recommend the project to your friends.

I almost forgot to include the GitHub: https://github.com/Satoshi-Dashboard

