Great article! The PRISM signature scheme sounds quite promising, while being only ~5x the size of an existing public key + Schnorr signature.
I'm not a mathematician/cryptographer and I didn't quite follow it all, but I appreciate how you applied the signature schemes to existing bitcoin features like HD wallet derivation and key tweaks. (Mostly it made me appreciate all of the cool features we have now in Schnorr sigs and taproot.) This isn't perfect, but it's impressive how much can be replicated with isogeny cryptography.
I feel like we have a moving target, where we need to upgrade to quantum resistant signatures with an unknown deadline, but meanwhile, the cryptographic state of the art is advancing too. The verification times for PRISM are a little concerning, but maybe that won't be the case 10 years from now. I see this as an argument for allowing quantum resistant signature scheme research to cook a while longer before we push an upgrade path. It's all a little fraught when it's urgent but not imminent.
UltrafastSecp256k1 v3.21 released.
Major improvements:
• constant‑time SafeGCD scalar inverse
• faster CT ECDSA signing
• stricter BIP‑340 parsing
• expanded audit infrastructure
The project now includes cross‑platform benchmarks and reproducible Docker CI.
oh, but mempool.space has it: https://mempool.space/lightning/penalties
not sure why it's hidden
edit: if i'm counting right there have been 187 penalty transactions total on mainnet, ever
Hello @BlokchainB and @Doung,
I wanted to share an update on the project. There is already a first visible version and I’d really appreciate hearing your thoughts:
https://www.satoshidashboard.com
There is still a lot left to develop, but I think it's a good starting point. It would help me a lot if you could take a look and share any feedback, whether it's potential bugs, improvements, or any ideas that come to mind.
I also think it could be interesting to consider adding new sections beyond the ones originally proposed in BTC Frame. In the image I attached, those are the base modules I started from, and so far I’ve replicated 13 out of 30.
You can also check the repository here if you want to explore the code:
https://github.com/Satoshi-Dashboard
cryptographic algorithm (this could be the wrong word, but I can't find the better term to refer to it: eg. curve only applies to elliptic curve cryptography. what if I also want to reference sha-256?)
It's not the wrong word. Cryptographic algorithms are e.g.: ECDSA, Schnorr, SHA256, RIPEMD-160
I'm curious if you think we will still be happily and safely relying on secp256k1 in the year 2085?
Hard to tell what will be in 59 years, but let's speculate.
Right now the best known practicable algorithm to find private keys from exposed public keys, without knowing a single bit of the private key, is Pollard's Rho. This algorithm has a time complexity of O(√n). So it effectively cuts the number of bits of security in half: a 256-bit private key can be found with about 2^128 iterations.
If computers ever become fast enough to be a threat, we could simply switch to an elliptic curve over a larger prime field with, let's say, 512 bits. Then Pollard's Rho would require about 2^256 iterations.
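For intuition on what a square-root attack looks like, here is a toy discrete-log solver in the same complexity class as Pollard's Rho: baby-step giant-step, which also needs only ~√n group operations. All the numbers here (the small prime, the generator, the secret) are made up purely for illustration; a real attack would target a 256-bit elliptic curve group, not a tiny multiplicative group.

```python
from math import isqrt

def bsgs(g, h, p):
    """Solve g^x ≡ h (mod p) in about sqrt(p) work (baby-step giant-step)."""
    n = p - 1                 # group order bound (p is prime)
    m = isqrt(n) + 1
    # Baby steps: a lookup table of g^j for j in [0, m)
    table = {pow(g, j, p): j for j in range(m)}
    # Giant steps: multiply h by g^(-m) repeatedly; g^(-1) = g^(p-2) by Fermat
    factor = pow(g, (p - 2) * m, p)
    gamma = h
    for i in range(m):
        if gamma in table:
            return i * m + table[gamma]
        gamma = gamma * factor % p
    return None

p = 1000003                   # small prime, for illustration only
g = 2
secret = 123456
h = pow(g, secret, p)         # the "public key"
print(bsgs(g, h, p))          # → 123456 (recovers the secret)
```

With ~√p ≈ 1000 steps instead of ~p, this is the "cut the bits in half" effect: doubling the field size (as in the hypothetical 512-bit curve above) squares the attacker's work.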
But the likelihood of that becoming necessary any time soon seems very low. Looking at the records for breaking the Elliptic Curve Discrete Logarithm Problem (ECDLP) over time, we can see a growth rate of roughly 1 bit of security per 4 years:
| Year | Bits of security |
|------|------------------|
| 2000 | 54 |
| 2002 | 54.5 |
| 2009 | 56 |
| 2014 | 56.5 |
| 2016 | 58.675 |
If we project this out to 2085 we get:
58.675 + (2085 − 2016) / 4 ≈ 75.9 bits of security
Unless some more efficient algorithm for solving the ECDLP is found, we are probably going to be fine.
Now, just for fun, I projected this very rough estimate even further into the future. At this growth rate, we would be able to break 128-bit security in the year 2294.
By this reasoning, it will take a war to put an end to America's war machine, since Americans vote for the peace candidate almost every time and only ever get more war.
I'm a little lost on this too. Fair enough, per Dana's clip, there are always more things to do; human desires are endless. But are they worth it? Are there customers or an audience for it?
[trigger warning]
IMHO what you're seeing is an explosion of slop. Not "software", but "slopware". Most of the things I look at don't pass review past "oh this looks like a nice diff". It all looks like a nice diff now. But looks can be deceiving.
Not to be that guy, but he posts this on X. Not here on Stacker News, or on nostr, shoot, even Pubky. Sure, X is where the people are, but I just get annoyed when great stuff is being built, with all the challenges devs face, and he posts it on X. Hard to take it seriously.
The pace of things does feel like it's increasing. Almost as though you're missing out if you aren't participating in this frantic pace.
I'm not vibe coding anything, but I think about this with respect to writing: people are able to increase their output with more words, more research, more details, more arguments, and more ideas. So there's lots more writing out there now.
But this doesn't mean there's much more thinking.
I feel the pressure we are all feeling now to hurry and rush and get more stuff out there, but I've been trying to focus on writing things that still require me to think deeply...and which hopefully inspire reflection and thought in the people who read what I write.
I think there are going to be lots of great projects that people produce with agents, but I hope we don't come to worship speed so much that we forget the awful slowness of thinking deeply.
Well... you are that guy now. haha
Seriously though: I think we should ask: what is needed to get Giacomo here on SN? He's often quoted here and these posts often have a lot of traction.
When you use bitcoin, you are mostly using two cryptographic algorithms: secp256k1 and SHA-256.
secp256k1 is not a cryptographic algorithm, rather it's a specific elliptic curve over a specific prime field.
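To make that concrete: secp256k1 is the curve y² = x³ + 7 over the prime field F_p. A few lines of Python can sanity-check the published generator point against the curve equation (the constants below are the standard SEC 2 domain parameters):

```python
# secp256k1: y^2 = x^3 + 7 over F_p, with these standardized parameters.
p  = 2**256 - 2**32 - 977   # the field prime
Gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
Gy = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8

# The generator point G = (Gx, Gy) must lie on the curve:
print((Gy * Gy - (Gx**3 + 7)) % p == 0)   # True
```

So "secp256k1" names this specific curve and field; the cryptographic algorithms (ECDSA, Schnorr/BIP-340) are then defined on top of it.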
The quantum FUD is so silly. Don't fall for the marketing hype.
In 2001 the number 15 was factored with Shor's algorithm on a quantum computer.
In 2012 Shor's algorithm was applied on a quantum computer to factor 21.
And now it is the year 2026 and we still haven't gotten past 21, not to mention factoring numbers which are actually used in cryptography.
Also note the quantum circuits were compiled beforehand with the knowledge of the solution already.
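For context: the only quantum part of Shor's algorithm is finding the order r of a mod N; extracting the factors from r is purely classical. A sketch with brute-force order finding standing in for the quantum step (the bases 7 and 2 are chosen here so the method succeeds; in general some bases give odd orders or trivial factors and you retry):

```python
from math import gcd

def shor_classical(N, a):
    """Shor's factoring reduction, with the quantum order-finding step
    replaced by classical brute force: find r with a^r ≡ 1 (mod N),
    then read factors off gcd(a^(r/2) ± 1, N)."""
    r, x = 1, a % N
    while x != 1:             # brute-force order finding (the quantum part)
        x = x * a % N
        r += 1
    if r % 2:
        return None           # odd order: retry with a different base a
    y = pow(a, r // 2, N)
    return sorted({gcd(y - 1, N), gcd(y + 1, N)})

print(shor_classical(15, 7))  # → [3, 5]  (the 2001 demo)
print(shor_classical(21, 2))  # → [3, 7]  (the 2012 demo)
```

The entire speedup of Shor is in doing the order-finding loop in polynomial time; the experiments above did that for numbers whose factorizations fit in a few qubits, and with circuits simplified using the known answer.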
And when it comes to classical computers, we have a live view of the progress on cracking private keys thanks to the Bitcoin Puzzles:
https://bitcointalk.org/index.php?topic=5218972.msg53649852#msg53649852
As of now, the best someone managed was finding the remaining bits of a private key with 126 bits exposed by applying Pollard's Kangaroo onto the respective public key.
And using brute force the best someone managed was finding the remaining bits of a private key with 187 bits exposed.
So absolutely nothing to worry about if you expose 0 bits of your private key.
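As a toy model of why exposed bits matter (all numbers made up; the real puzzles expose bits of EC private keys and attackers work against the public key, not a hash): once only k bits are unknown, recovery is a 2^k search, which is why "126 bits exposed" is solvable and "0 bits exposed" is not.

```python
from hashlib import sha256

# Hypothetical puzzle: the "public key" is a hash of the private key,
# and the attacker knows all but the low 20 bits of the key.
secret = 0xDEADBEEF12345
pub = sha256(secret.to_bytes(32, "big")).digest()

known_high_bits = (secret >> 20) << 20   # attacker's partial knowledge
recovered = None
for low in range(1 << 20):               # only 2^20 candidates remain
    guess = known_high_bits | low
    if sha256(guess.to_bytes(32, "big")).digest() == pub:
        recovered = guess
        break

print(recovered == secret)               # True
```

Each additional hidden bit doubles the search, so going from 20 unknown bits (instant) to 128+ unknown bits (the unexposed case) is an astronomical, not incremental, jump in difficulty.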
Exactly. Speaking purely from a software R&D industry [1] perspective, filling up a backlog to keep devs busy is easy. Filling it with profitable work is much harder and there's been a ton of inefficiency in many companies regarding this already. But the cost to replace has been a moat, and customers have been extorted for years. I know, because I have been a C-level part of said problem for a part of my career.
This is why, after that existing backlog clears and we've upped service excellence to heights never seen before, there's going to be friction in terms of workload. We need to invent new features and new product lines. Add features that until now didn't make the cut. With each of these there is risk involved; risks we weren't willing to take before. If we go fully API-first or expose graphql (customers will ask for this because they wanna vibe code their own bespoke things on the platform we sold them) then we will see a dip in feature requests, so we'll even get less orders [2].
Development costs at an item level may go down, and of course there is opportunity. But in a competitive landscape, where every player in an industry is chasing this opportunity at once, there will be a lot of losers too. The markets (unless more commie government measures bail out big software firms like was done with Intel) will be relentless. And then, instead of having a sustainable business with a smaller team, we keep the same team, don't trim any fat, and go under as a whole as the bloated dinosaur we have become. Now, instead of saving 40-60% of our jobs and 100% of the business, we lose it all. Customers will just vibe the replacement of whatever we thought was a moat.
And that's why I think Wall St. is right in selling overvalued software stonks, why Jack is right in trying to be an early mover now that he can be reasonably sure to use his local cluster of GLM-5 if Anthropic and OpenAI ever go bust, and why Oracle, another jobs-obese company that has for decades lived off the replacement-cost moat that is now gone, is reorganizing in much the same way.
Hard times ahead for devs.
Software/IT, per the claim in #1448839 (no source link tho) is the most affected industry of AI takeover right now. ↩
You'll get less orders because your entire company is now competing with bespoke AI-generated solutions on the customer side. Thus I think it would be audacious to claim the market wrong on putting software stonks on sale, i.e. #1441980 ↩
IMO, it's not that deep: Stacker News is more like Reddit. Nostr is more like Twitter.
They don't post on Stacker News for the same reason(s) they don't post on Reddit (I assume, didn't verify).
They don't post on nostr because twitter is better at being twitter than nostr.
Good question. I am shocked that so many bitcoin folks couldn't care less about SN, when it does everything that was promised by LN micropayments at scale.
The 500 stackers here isn’t enough reach for folks.
I don't feel the need to code by hand, character for character, anymore, and enjoy being free to think at higher abstraction levels, and try random ideas out at low cost.
I'm also not yolo dev-ing. If the code is doing something important (low bar: I'll use it or give/ship it for other people to use), I'll painstakingly review the code that's generated (95% of my dev time is code review and QA now). Judging from what I see online and the chuckles (heh, skill issue) I get from programmer frens, that might make my situation odd and explain why I don't feel like I'm losing anything.
I’ve noticed a diminishing ability to code without AI since I started using Claude.
If you're anything like me, it's not diminished ability as much as it is diminished motivation. IME our feelings are smart. In particular, our feelings are smart about us. If you believe you'll never code by hand again, and don't find purpose or pleasure in by-hand coding, it's going to be hard to lift a finger to do it, let alone get your brain to cooperate when thinking through a one-shottable code generation problem.
I suspect this is why folks have zero reservations about sharing essay slop; they never understood why many people learned, and continue, to write by hand: to think. If you enjoy thinking, you'll enjoy writing by hand and do it even without an external goal. Most of us write code for the output, and you don't need to write code by hand to get output anymore. However, if you write code for some reason other than output, for some reason that depends on you writing it by hand (e.g. learning a new programming language), I suspect you'll remain motivated and able to do that.
If you are a roofer, and the nail gun were invented, how often would you feel the need to use a hammer for practice's sake?
stackmilkers! ~lol
I am reminded of two things:
First, Szabo's "Money, Blockchains, and Social Scalability" where he makes the distinction between technological scalability and social scalability. Sure, fees go up when lots of people use a chain, but I think Shin is probably ignoring a similar pressure in centralized banking: regulators seem to tend to want to increase regulation which drives people toward the least regulated means of transacting. Shin's own BIS acknowledged this in a recent paper (#1450816). Szabo's paper is interesting to put into juxtaposition with Shin's:
Second, Voskuil's "Utility Threshold Property" where he argues that
Shin says
But I think that the fragmentation of liquidity that Shin finds so troubling also exists in traditional finance: I may be able to pay with Visa or Mastercard, but those are effectively separate systems. The only reason they work well together is that we have decades of building infrastructure around them.