I've been reading the book "Real-World Cryptography" and came across something I absolutely love about Bitcoin: how everything is built in the open. I didn't know, though, that it's actually something really old called Kerckhoffs's principle.
I wrote a few notes as I researched the topic and thought I'd share them here to tease and provoke y'all into looking at this amazing idea, so we can keep it alive through open source.

Kerckhoffs's principle

The principle holds that a cryptosystem should be secure, even if everything about the system, except the key, is public knowledge. This concept is widely embraced by cryptographers, in contrast to security through obscurity, which is not.[^1]
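To make that concrete, here's a minimal Python sketch (my own illustration, not from the book): a one-time pad, where the algorithm is completely public and every bit of the security lives in the key.

```python
# A one-time pad: the algorithm below is completely public, yet the
# scheme stays secure as long as the key is random, secret, and never
# reused for another message.
import secrets

def encrypt(plaintext: bytes, key: bytes) -> bytes:
    assert len(key) == len(plaintext), "key must be as long as the message"
    return bytes(p ^ k for p, k in zip(plaintext, key))

# Decryption is the same XOR; only knowledge of the key matters.
decrypt = encrypt

message = b"the enemy knows the system"
key = secrets.token_bytes(len(message))  # the ONLY secret in the system

ciphertext = encrypt(message, key)
assert decrypt(ciphertext, key) == message
```

An attacker who reads this code learns everything about the system except `key`, and still learns nothing about the message. That's exactly Kerckhoffs's point.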

The enemy knows the system

Kerckhoffs's principle was phrased by the American mathematician Claude Shannon as "the enemy knows the system"; in other words, "one ought to design systems under the assumption that the enemy will immediately gain full familiarity with them". In that form, it is called Shannon's maxim.[^1]

Open gardens

This idea has inspired people to build secure systems well beyond messaging. My favorite example is Bitcoin: a money system whose protocol and algorithms are public knowledge, and yet the system remains safe.
In that spirit, the term "open gardens" refers to the practice of building systems with every detail about them public, except the secrets that keep the system safe while in use. Making a system public helps build trust with the people who can benefit from it, makes it safer as engineers audit it and find its flaws, and, as people contribute to it or fork it, helps it fulfill its social function even if not through the hands of the original authors.
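As a small taste of how public Bitcoin's algorithms really are, here's a Python sketch (mine, just for illustration) that recomputes the hash of the genesis block using the double SHA-256 rule anyone can read in the protocol documentation; the header bytes and the expected hash are public chain data.

```python
# Verifying Bitcoin proof-of-work with nothing but public knowledge:
# a block header must hash (double SHA-256) to a value below its target.
import hashlib

def block_hash(header: bytes) -> bytes:
    """Double SHA-256, the hash rule published in the Bitcoin protocol."""
    return hashlib.sha256(hashlib.sha256(header).digest()).digest()

# The 80-byte genesis block header, public since 2009 (little-endian fields).
genesis_header = bytes.fromhex(
    "01000000"                                                          # version
    "0000000000000000000000000000000000000000000000000000000000000000"  # prev block
    "3ba3edfd7a7b12b27ac72c3e67768f617fc81bc3888a51323a9fb8aa4b1e5e4a"  # merkle root
    "29ab5f49"                                                          # time
    "ffff001d"                                                          # bits (target)
    "1dac2b7c"                                                          # nonce
)

# Block hashes are conventionally displayed big-endian, so reverse the bytes.
print(block_hash(genesis_header)[::-1].hex())
# -> 000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f
```

Nothing secret is needed to verify the chain; the secrets (private keys) only sit with the people spending their own coins.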

Security through obscurity

In the short term, making the inner workings of a system secret might improve security, but in the long term it's often the other way around, since such systems are not widely audited to find their flaws. As a matter of fact, in today's world red teams and black hats specialize in finding these flaws, which are common knowledge to them while usually unknown to software developers, giving attackers the upper hand.
Disciplines like static analysis, network observation, debugging, and fuzzing are well documented and taught nowadays, so if a system has flaws, they will eventually be found and exploited. The focus shouldn't be on keeping flaws from being discovered, but on fixing them faster than malicious actors can exploit them.
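To show how mechanical flaw-finding can be, here's a toy fuzzer in Python. The `fragile_parser` and its planted bug are hypothetical, invented for this sketch, but the loop is the essence of what real fuzzers like AFL or libFuzzer automate far more cleverly.

```python
# A toy fuzzer: throw random bytes at a parser until it crashes.
import random

def fragile_parser(data: bytes) -> None:
    # Planted flaw: a length field the code trusts blindly.
    if len(data) >= 2:
        length = data[0]
        payload = data[1:]
        if length > len(payload):
            raise IndexError("read past end of buffer")

def fuzz(rounds: int = 100_000) -> None:
    rng = random.Random(1337)  # seeded so the run is reproducible
    for i in range(rounds):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 16)))
        try:
            fragile_parser(data)
        except Exception as exc:
            print(f"crash after {i} inputs: {exc!r} on {data.hex()}")
            return

fuzz()
```

Note that the loop treats the parser as a black box: no source code, no documentation, just inputs and crashes. Obscurity buys very little against this.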
Security through obscurity is still widely used in many industries, often without anyone realizing it. When it's applied knowingly, alongside other secure practices, it can be a good thing, as with "moving target defense" and cyber deception; anti-malware software is among the largest proponents of these techniques.

Footnotes

[^1]: Kerckhoffs's principle, Wikipedia: https://en.wikipedia.org/wiki/Kerckhoffs%27s_principle

0 sats \ 0 replies \ @adlai 5h
I first encountered it after reading about scientific mistakes in Dan Brown's early novel, Deception Point; his "Bergofsky[^1] Principle" is the hilarious [and possibly deliberately humorous] fabrication: "any cryptosystem can be broken if you know the key".

Footnotes

[^1]: after some old guy about whose math classes lots of kids complained, although I actually enjoyed the few months I had in his classroom. I think we mostly covered early AP Calc, and somehow it was much less difficult than pre-calc.