[HN Gopher] Cryptography is not magic
       ___________________________________________________________________
        
       Cryptography is not magic
        
       Author : loup-vaillant
       Score  : 89 points
       Date   : 2020-07-25 15:02 UTC (7 hours ago)
        
 (HTM) web link (loup-vaillant.fr)
 (TXT) w3m dump (loup-vaillant.fr)
        
       | [deleted]
        
       | deanCommie wrote:
       | > Cryptography has rules, which are both simpler and more
       | fundamental than we might think.
       | 
       | <Proceeds to describe Cryptography in a way that confirms that it
       | is exactly as complex as I set it out in my head>
       | 
        | e.g. from just the first paragraph:
       | 
       | > " Never ignore timings, they're part of most threat models."
       | 
       | > On most CPUs, the timing side channel can be eliminated by
       | removing all secret dependent branches and all secret dependent
       | indices.
       | 
       | > Some CPUs also have variable time arithmetic operations.
       | 
       | > Watch out for multiplications and shifts by variable amounts in
       | particular.
       | 
       | Yeah dude, stuff like this is EXACTLY what most people don't want
       | to think about, and shouldn't have to think about, and which is
       | why the guidance is "don't roll your own".
       | 
       | I reject his premise as well that this guidance prevents good
       | people from pursuing Crypto as a field of study - as far as I can
       | tell it's not discouraging anyone with actual interest in it.
        
         | loup-vaillant wrote:
         | Oh, I did not mean to say it would be a piece of cake. Yes,
         | side channels can be a nightmare to track down. But it's not a
         | matter of knowing cryptography. It's a matter of knowing your
          | _platform_. The rule (don't let data flow from secrets to the
          | side channel) remains dead simple.
         | 
         | Think Go (the board game). The rules are much simpler than
         | Chess, and the game itself arguably deeper (for instance,
         | effective Go AIs appeared much later than effective Chess AIs).
         | 
          | > _Yeah dude, stuff like this is EXACTLY what most people
          | don't want to think about, and shouldn't have to think about_
         | 
         | Selecting yourself out is fine. And apparently you're doing it
         | for all the right reasons: too much investment, not worth your
         | time.
         | 
          | One of my goals was to address the "how hard can it be?"
          | wide-eyed would-be cryptographer. Well, _this_ hard. More or
          | less.
         | 
         | > _I reject his premise as well that this guidance prevents
         | good people from pursuing Crypto as a field of study - as far
          | as I can tell it's not discouraging anyone with actual
         | interest in it._
         | 
         | I confess I'm not quite sure about this one. I'll just note
         | that we've seen people bullied out of some fields. (I recall
          | stories of women being driven out of competitive gaming that
          | way.) A constant stream of "don't roll your own crypto" is not
          | exactly bullying, but I can see it being a tad discouraging. To
         | give you an example, here's the kind of mockery I have to face,
         | even now.
         | 
         | https://twitter.com/bascule/status/1287113393439035392
        
           | dependenttypes wrote:
           | > Yes, side channels can be a nightmare to track down
           | 
            | I think that this is extremely overrated. As long as you are
            | using C (rather than some weird language), avoid branches
            | with secrets, avoid indexing arrays with secrets, and avoid
            | multiplication, division, and mod with secrets, it should be
            | fine.
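            | 
            | For illustration, a constant-time comparison in C might look
            | like this (a minimal sketch, not taken from any particular
            | library):
            | 
            |   #include <stddef.h>
            |   
            |   /* No secret-dependent branches or indices: only the
            |    * (public) length affects timing. Returns 0 if equal. */
            |   int ct_compare(const unsigned char *a,
            |                  const unsigned char *b, size_t n)
            |   {
            |       unsigned char diff = 0;
            |       for (size_t i = 0; i < n; i++)
            |           diff |= a[i] ^ b[i];
            |       return diff;
            |   }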
           | 
           | > https://twitter.com/bascule/status/1287113393439035392
           | 
           | Low quality posts like that which encourage dunking on people
           | rather than discussion are what made me stop using twitter.
           | Extremely disgusting on his part, I am sorry that you have to
           | deal with this sort of bullying. (I also did not find any
           | signed shift despite the claim of the person responding)
        
             | loup-vaillant wrote:
             | Agreed, timings are not too hard to address in C. Though I
             | confess I gave up on multiplication. Monocypher's manual
             | warns users about that, but I can't avoid it without
             | incurring unacceptable slowdowns on most platforms.
             | 
              | The other side channels, however, I gave up on: only custom
              | silicon can meaningfully squash the energy consumption side
              | channel, for instance. Software approaches are, in my
              | opinion, brittle mitigations at best.
             | 
             | About Twitter, I may have overplayed it: I don't use it, so
             | I mostly don't see these things, which in reality are
             | really infrequent. The worst I got was at the time I
             | disclosed the signature vulnerability. It was like a dozen
             | tweets, and only a couple were openly mocking (for the
             | anecdote, I only saw those tweets a year later). In any
             | case, I don't give them much weight: writing this kind of
             | drivel requires some degree of ignorance about my work.
        
           | tzs wrote:
           | > Think Go (the board game). The rules are much simpler than
           | Chess, and the game itself arguably deeper (for instance,
           | effective Go AIs appeared much later than effective Chess
           | AIs).
           | 
           | Arguably, that was because until recently our Chess and Go
           | machines relied too heavily on extensive deep search. For
           | much of the game Go has a branching factor at least an order
           | of magnitude higher than Chess, and a Go game typically lasts
           | many more moves than a Chess game, and the consequences of a
           | bad move in Go can take a lot longer to become apparent than
           | in Chess.
           | 
            | When DeepMind came along with an approach that was not as
           | heavily reliant on extensive deep search, their machines
           | didn't seem to have much more difficulty with Go than with
           | Chess.
        
         | gregmac wrote:
         | > Yeah dude, stuff like this is EXACTLY what most people don't
         | want to think about, and shouldn't have to think about, and
         | which is why the guidance is "don't roll your own".
         | 
         | I'd even go a step further and say a lot of the problem is that
         | people don't even know about this as a thing to even _consider_
         | thinking about. Unknown unknowns are the dangerous bit. Does
         | this article cover every aspect of the things you need to
          | consider? I know enough about it to say both I don't know and
         | I highly doubt it.
        
           | zrm wrote:
           | There's also the matter that they keep adding things to the
           | list. Remember three years ago when we didn't know we had to
           | worry about Spectre?
        
             | ljhsiung wrote:
             | Minor nitpick, but "three years ago" the crypto world still
             | did "worry" about Spectre and general transient execution
             | attacks [1][2][3][4].
             | 
             | To me, the biggest thing about Spectre et al. wasn't the
             | proof that CPU uArch can negatively impact security.
              | Rather, it's the proof of information leakage from other
              | processes and privilege modes, which is a far more serious
              | vector than side channels within cryptographic code alone.
             | 
              | [1] https://ts.data61.csiro.au/projects/TS/cachebleed/
             | 
             | [2] On Subnormal Floating Point and Abnormal Timing
             | https://cseweb.ucsd.edu/~dkohlbre/papers/subnormal.pdf
             | 
             | [3] Predicting Secret Keys via Branch Prediction
             | https://eprint.iacr.org/2006/288.pdf
             | 
             | [4] Exploiting I-Cache https://eprint.iacr.org/2007/164.pdf
        
         | vore wrote:
         | > Also be careful around compilers and interpreters. No
         | mainstream language specifies how to make constant time code,
         | so your tools could insert secret dependent branches where the
         | source code had none. High level languages in particular tend
         | to have variable time arithmetic. In practice, low level
         | languages like C are fairly reasonable, but I've seen
         | exceptions.
         | 
         | This is by definition not simple if the very tools you're using
         | can cause what you think is correct to be wrong and dangerous!
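          | 
          | For example, a mask-based constant-time select has no visible
          | branch, but nothing in the C standard stops a compiler from
          | turning it back into one, which is exactly the hazard
          | described:
          | 
          |   #include <stdint.h>
          |   
          |   /* Return x when cond is 1, y when cond is 0, with no
          |    * secret-dependent branch in the source code. */
          |   uint32_t ct_select(uint32_t cond, uint32_t x, uint32_t y)
          |   {
          |       uint32_t mask = (uint32_t)0 - cond; /* 0 or all 1s */
          |       return (x & mask) | (y & ~mask);
          |   }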
        
         | wildmanxx wrote:
         | >> Cryptography has rules, which are both simpler and more
         | fundamental than we might think.
         | 
         | > <Proceeds to describe Cryptography in a way that confirms
         | that it is exactly as complex as I set it out in my head>
         | 
         | This. Soooooo much this.
        
         | bjoli wrote:
         | This reminds me of when I tried implementing AES-GCM. AES-OCB3
         | was fine. I just read the paper on ocb3 and it worked.
         | 
         | GCM on the other hand took me a gazillion tries, and even
         | though I ended up more or less copying a reference
         | implementation I still got weird edge case errors.
        
       | codysc wrote:
       | >Perhaps surprisingly, implementing cryptographic primitives &
       | protocols requires little cryptographic knowledge.
       | 
        | That's a dangerous statement on its own. Making proper use of
       | primitives is not at all a simple concept. Developers can
       | absolutely undermine their systems with poor choices/mistakes.
       | 
        | Self promotion: I wrote a blog post on a very high-level
        | screw-up with type conversions to show just the very surface of
        | how to screw up using solid crypto primitives. Time allowing, I
        | want to
       | do more entries on topics within the crypto realm itself. IV
       | reuse, etc.
       | 
       | https://pritact.com/blog/posts/crypto-mistakes-part-1.html
        
         | na85 wrote:
         | Is that really a crypto-specific problem, though? It seems like
         | it's just yet another reason not to use JavaScript for anything
         | serious.
         | 
         | Wouldn't a more strongly-typed language have prevented that bug
         | at compile time?
        
           | codysc wrote:
           | Yes, the goal was/is to start with a very high level example
           | then get more into crypto specific concerns.
           | 
           | The audience is for devs without any real
           | experience/knowledge of using crypto that might go into it
           | too casually.
        
         | loup-vaillant wrote:
          | > _That's a dangerous statement on its own._
         | 
         | I'm not sure how best to say it.
         | 
         | Implementing primitives & protocols requires little
         | _cryptographic_ knowledge. It does however require significant
         | knowledge about program correctness: testing methods, proofs,
         | and if side channels are important, the characteristics of your
          | platform, and an accurate enough idea of how your compiler or
          | interpreter works.
         | 
          | Likewise, to implement Chacha20 in silicon with constant energy
          | consumption, you don't need a cryptographer, you need a
          | _hardware designer_. The only thing you need a cryptographer
          | for is telling the hardware designer to make it constant
          | energy -- or convincing the higher-ups why the extra cost is
          | justified.
         | 
         | The blog post you link (which I love by the way) seems to
         | confirm my view: many problems are ones of correctness. I
         | believe most such bugs would be caught by corrupting the
          | inputs, as I alluded to. Here, a corrupted password would fail
          | to abort the check, and you'd catch the bug.
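          | 
          | A sketch of the general corrupt-the-inputs test (aead_encrypt
          | and aead_decrypt are hypothetical stand-ins for whatever API
          | is under test):
          | 
          |   #include <assert.h>
          |   #include <stddef.h>
          |   #include <stdint.h>
          |   
          |   /* Hypothetical API; decrypt returns -1 on forgery. */
          |   int aead_encrypt(uint8_t *ct, uint8_t mac[16],
          |                    const uint8_t k[32], const uint8_t n[24],
          |                    const uint8_t *pt, size_t len);
          |   int aead_decrypt(uint8_t *pt, const uint8_t mac[16],
          |                    const uint8_t k[32], const uint8_t n[24],
          |                    const uint8_t *ct, size_t len);
          |   
          |   /* Flip one bit of the ciphertext at a time; decryption
          |    * must reject every corrupted message. len <= 256. */
          |   void test_corrupt(const uint8_t k[32], const uint8_t n[24],
          |                     const uint8_t *pt, size_t len)
          |   {
          |       uint8_t ct[256], out[256], mac[16];
          |       aead_encrypt(ct, mac, k, n, pt, len);
          |       for (size_t i = 0; i < len; i++) {
          |           ct[i] ^= 1;
          |           assert(aead_decrypt(out, mac, k, n, ct, len) == -1);
          |           ct[i] ^= 1;
          |       }
          |   }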
        
           | codysc wrote:
           | I don't think I exactly understand your points.
           | 
           | Using an example of IV reuse in AES-GCM:
           | 
           | The weaknesses resulting from this wouldn't be discoverable
           | with a test like corrupting the password from the first
           | example. If the developer wasn't aware that IV reuse
           | introduced that weakness then they would be using strong
           | primitives but in a way that dramatically undermines the
           | actual encryption.
           | 
           | Not to put words in your mouth, but I assume your answer
           | would be to say that this would be a matter of correctness.
           | If yes, then where I'm coming from is that the majority of
           | devs don't have the skillset to be correct and sometimes
           | wouldn't dive deep enough to discover these kinds of
           | pitfalls.
        
             | loup-vaillant wrote:
             | > _Using an example of IV reuse in AES-GCM:_
             | 
             | Yes, that one wouldn't be caught by corrupting inputs. You
             | need to make sure you don't reuse the IV in the first
             | place. And that's indeed a cryptography related bug.
             | 
             | > _I assume your answer would be to say this would be a
             | matter of correctness_
             | 
             | It would be.
             | 
              | > _where I'm coming from is that the majority of devs
             | don't have the skillset to be correct_
             | 
             | Unfortunately, I can believe that. Correctness is hard. Or
             | expensive. Let's try with this example.
             | 
             | If you're designing an AEAD yourself, you can notice the
             | error by trying (and failing) to prove IND-CCA2. If you're
              | implementing or using AEAD, code review should catch the
              | problem... _unless_ it's a bug like the one you've shown
              | before. Tough one to spot.
             | 
             | One way to avoid the IV bug with a reasonable degree of
             | certainty would be to use a counter or a ratchet. Don't
             | send the IV over the network, let it be implicit. Then
             | write two implementations in two very different languages.
             | It is very unlikely that _both_ happen to repeat the same
             | hard coded IV.
             | 
             | If we still want to use random IVs, we probably need to
             | mitigate replay attacks: have the receiver store the last
             | few IVs of the messages it received this session, and have
             | it compare any new IVs with this set. Won't stop all replay
             | attacks (the attacker could wait until old IVs are
             | forgotten), but it will at least catch the accidental
             | reuse.
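              | 
              | A minimal sketch of the implicit counter idea (the session
              | struct and the 12-byte nonce are assumptions, not any
              | specific library's API):
              | 
              |   #include <stdint.h>
              |   #include <string.h>
              |   
              |   typedef struct {
              |       uint8_t  key[32];
              |       uint64_t counter; /* never reused, never sent */
              |   } session;
              |   
              |   /* Both sides derive the same nonce from the count
              |    * of messages sent, so nothing is on the wire. */
              |   static void next_nonce(session *s, uint8_t nonce[12])
              |   {
              |       memset(nonce, 0, 12);
              |       for (int i = 0; i < 8; i++) {
              |           nonce[i] = (uint8_t)(s->counter >> (8 * i));
              |       }
              |       s->counter++;
              |   }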
        
       | rini17 wrote:
       | Who are the experts here? For example, does any TLS protocol
        | version or its implementations fulfill the following?
       | 
       | "The slightest error may throw cryptographic guarantees out the
       | window, so we cannot tolerate errors. Your code must be bug free,
       | period. It's not easy, but it is simple: it's all about tests and
       | proofs."
       | 
        | (Please, I don't intend this as flamebait; I'm only asking what
        | this belief is founded on.)
        
         | loup-vaillant wrote:
         | There are a number of components there.
         | 
         | (1) Protocols are very sensitive to errors, possibly more than
         | primitives. If you screw up the internals of a primitive, the
         | results will be different (and visibly so), but it stands a
         | good chance at still being _secure_. Protocols however tend to
         | be very tightly designed. Modifications that have a working
          | happy path are more likely to have significant holes: a
         | missing check, failure to authenticate part of the transcript,
         | loss of forward secrecy... No real justification there, it just
         | has been my experience dealing with modern primitives and
         | protocols.
         | 
         | (2) Correctness subsumes security. A program is correct if it
          | fulfils its requirements. A program is secure if it fulfils its
          | _security_ requirements, which by definition are a subset of
          | all requirements. That said, while immunity to relevant side
          | channels is definitely part of the security requirements, it
          | helps to separate it from the correctness of end results.
         | 
         | (3) Bug free code is possible. It's not easy, but it can be
         | done. Constant time implementations of modern primitives are
         | among the easiest code to test, ever: since code paths only
         | depend on the lengths of parameters, testing them all against a
         | reference is trivial. That's not just "100% code coverage",
         | it's 100% _path_ coverage. As for the proofs, while they may
         | not be easy to produce, the good ones are fairly easy to follow
         | (though extremely tedious), and the great ones can be checked
         | by a machine, making them trivial to verify.
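          | 
          | Roughly the kind of length-coverage test meant above (my_hash
          | and ref_hash are hypothetical names for the implementation
          | under test and a trusted reference):
          | 
          |   #include <assert.h>
          |   #include <stddef.h>
          |   #include <stdint.h>
          |   #include <string.h>
          |   
          |   /* Hypothetical implementation under test and reference. */
          |   void my_hash (uint8_t h[32], const uint8_t *m, size_t len);
          |   void ref_hash(uint8_t h[32], const uint8_t *m, size_t len);
          |   
          |   /* Constant-time code paths depend only on len, so testing
          |    * every length up to some bound covers every path. */
          |   void test_all_lengths(void)
          |   {
          |       uint8_t in[512], a[32], b[32];
          |       for (size_t i = 0; i < sizeof in; i++)
          |           in[i] = (uint8_t)i;
          |       for (size_t len = 0; len <= sizeof in; len++) {
          |           my_hash (a, in, len);
          |           ref_hash(b, in, len);
          |           assert(memcmp(a, b, 32) == 0);
          |       }
          |   }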
        
           | tialaramex wrote:
           | > As for the proofs, while they may not be easy to produce,
           | the good ones are fairly easy to follow (though extremely
           | tedious), and the great ones can be checked by a machine,
           | making them trivial to verify.
           | 
           | The main value from protocol level proofs is that since you
           | need to tell the machine your assumptions before it spits out
           | a proof, a careful proof development process can discover
           | unstated assumptions.
           | 
           | The TLS Selfie attack is an example. In principle this attack
           | could have been found during proof generation for TLS 1.3,
           | but in practice the proofs generated during TLS 1.3
           | development smuggle in an unstated assumption that means
           | Tamarin rules out Selfie even though in some cases it would
           | be a viable attack.
           | 
           | [Selfie goes like this: Alice and Bob have a PSK for
           | authentication, Mallory doesn't know the PSK, but Mallory can
           | interfere with the network between Alice and Bob. Alice
           | intends to ask Bob, "Do you have the car?". Mallory can't
           | read this question or write an answer Alice will accept
           | because they don't know the PSK. However, Mallory just
           | redirects the question back to Alice. "Do you have the car?"
           | and Alice doesn't have the car, so she answers "No" and she
           | knows the PSK so her answer is proper. Now Alice gets an
           | answer to her question, "No" and so she concludes Bob doesn't
           | have the car. But actually Bob was never asked!]
           | 
           | Knowing about this, it can be repaired. Alice and Bob simply
           | address the intended recipient in each message, "Bob, do you
           | have the car?" "No Alice, I don't" - and check for their own
           | name in messages they receive. Or they use a separate PSK for
           | each direction, not just one per pair of participants in
           | their system. Or they can choose only to either be a TLS
            | client or a server and never both. But none of these steps are
            | obvious if the assumption that you did one of these three
            | things is never stated. Intuitively it seems as though
           | since Mallory doesn't know the PSK and the Tamarin prover
           | says this protocol works you're fine.
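            | 
            | As a sketch of the per-direction repair (deriving two
            | direction keys from the one PSK; prf is a hypothetical keyed
            | hash, not any specific API):
            | 
            |   #include <stddef.h>
            |   #include <stdint.h>
            |   
            |   /* Hypothetical keyed PRF; the label separates uses. */
            |   void prf(uint8_t out[32], const uint8_t key[32],
            |            const char *label, size_t label_len);
            |   
            |   /* One shared PSK, two derived keys: a message
            |    * reflected back at its sender no longer verifies. */
            |   void direction_keys(const uint8_t psk[32],
            |                       uint8_t a_to_b[32],
            |                       uint8_t b_to_a[32])
            |   {
            |       prf(a_to_b, psk, "Alice->Bob", 10);
            |       prf(b_to_a, psk, "Bob->Alice", 10);
            |   }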
        
           | zrm wrote:
           | > Protocols are very sensitive to errors, possibly more than
           | primitives. If you screw up the internals of a primitive, the
           | results will be different (and visibly so), but it stands a
           | good chance at still being _secure_.
           | 
           | I don't know if I agree with this. You can easily write an
           | implementation of a primitive that even creates the correct
           | output bytes while leaking secrets via every side channel, or
           | via the one side channel you didn't realize existed.
           | 
           | I also think that "protocol" is too wide to be a useful
           | category. TLS is a protocol, right? But what about HTTPS?
           | Your site's API? There is always going to be cryptography at
           | the bottom, but at some point you have to draw a line or
           | "don't roll your own crypto" becomes "don't write your own
           | software" because everything is in scope.
           | 
           | Or maybe you can't draw that line because the upper layer
           | stuff still has implications. Think about the compression
           | oracle attacks. The upper layer has secret data, compresses
           | it and then shovels it through a "secure" protocol but has
           | already leaked the secret contents through the content size
           | difference due to the compression. But if that means
           | everything _is_ in scope, what then?
        
             | loup-vaillant wrote:
              | > _or via the one side channel you didn't realize
             | existed._
             | 
             | Can't do much about that one. Gotta have someone telling
             | you, or (worst case) the very state of the art advancing
             | under your feet.
             | 
             | > _at some point you have to draw a line or "don't roll
             | your own crypto" becomes "don't write your own software"
             | because everything is in scope._
             | 
             | Yes, there is a point beyond which you don't have a choice.
             | The natural (and utterly impractical) line to draw is
             | untrusted input. Talking through the internet is the
             | obvious one, but merely playing a video exposes you to
              | untrusted data that might take over your program and wreak
              | havoc.
             | 
             | Compression is a tough one. I'd personally try padding.
             | Something like PADME should work well in many cases.
             | https://en.wikipedia.org/wiki/PURB_(cryptography)
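              | 
              | For reference, a sketch of the PADME padded length as
              | described in the PURB paper (using a GCC/Clang builtin
              | for the logarithm):
              | 
              |   #include <stdint.h>
              |   
              |   /* Round len up so the padded size leaks only
              |    * O(log log len) bits of the true length. */
              |   uint64_t padme(uint64_t len)
              |   {
              |       if (len < 2) return len;
              |       int e = 63 - __builtin_clzll(len);
              |       int s = 64 - __builtin_clzll((uint64_t)e);
              |       uint64_t mask = ((uint64_t)1 << (e - s)) - 1;
              |       return (len + mask) & ~mask;
              |   }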
        
       | leafboi wrote:
        | Crypto/security is cool and amazing, but it's also one of the
        | least flashy parts of Computer Science. Purely in terms of overall
       | reputation I don't think people view crypto as "magic..." a more
       | accurate analogy is "plumbing."
       | 
       | The Magic comes more from things like games and computer graphics
       | and deep learning.
        
         | anonymousDan wrote:
         | I don't know about that, I think a lot of people view
         | codebreaking/hacking as pretty magic.
        
       | [deleted]
        
       | rsj_hn wrote:
       | This article rubs me the wrong way, because the number 1 problem
       | I see when people implement crypto is that they don't have a well
       | defined threat model or understand how cryptography can assist
       | them in addressing threats.
       | 
       | The issue is not so much that there might be a flaw in their
       | implementation -- indeed you should use reviewed libraries
       | instead of rolling your own to avoid flaws -- this is not
       | specific to crypto, it's just as much true for IO libraries or
       | memory management libraries as it is for crypto.
       | 
       | But what makes crypto unique is that cryptographic algorithms
       | have very strict, well-defined, limited behaviors and there is
       | generally a big gap between what people want to accomplish "don't
       | let an attacker see this file" and what crypto will actually do
       | for them "encrypt the file" and very often the use of crypto
       | doesn't end up creating much value.
       | 
       | Here, people _do_ think cryptography is some kind of dark magic,
       | where they can  "secure" something just by encrypting it, and
       | it's incredibly frustrating to have to implement cargo cult
       | crypto that doesn't add much in the way of real security just
       | because a PO views encryption as an end in itself -- e.g. as a
       | feature.
        
       | badrabbit wrote:
        | OP, I don't know you, but I'm a big fan of your posts,
        | especially the ChaCha20 writeup.
       | 
       | What this article talks about somewhat translates to the larger
       | infosec community.
       | 
       | A lot of the presumptions I had about working in infosec were
       | false:
       | 
       | - You need to be good at and understand software exploitation
       | well
       | 
       | - You need to be a good programmer
       | 
       | - You need to know how to code (I do fwiw)
       | 
       | - Your soft-skills should be great (not more than any regular
       | office job)
       | 
       | - You should know offensive techniques well, including breaking
       | crypto
       | 
       | - You need to go to cons and do heavy infosec social networking
       | 
       | - You need to be good at math
       | 
       | - You need to master every single IT discipline
       | 
       | - How can you work in infosec if you never hacked a gibson?
       | (joke)
       | 
       | And many more.
       | 
        | I can tell you, this type of elitist gate-keeping is why infosec
        | always complains about a "skills shortage". I am nowhere near
        | exhausting my mental/skill capacity with my day-to-day work and I
        | do fairly well. I meet people all the time who never coded before
        | and never heard of an elliptic curve who do well and impress me
        | in very technical infosec disciplines.
       | 
       | My suggestion to anyone considering infosec is, if you have
       | strong interest in the subject and you enjoy the very technical
       | aspects of it (even if you don't understand some things well), I
       | say go for it regardless of what you lack so long as you don't
       | lack motivation and free time to pursue your studies. There are
        | plenty of jobs in infosec that need people with passion, and you
        | need to be an elite hacker about as much as a sports team needs
        | every player to be an egotistical superstar.
        
       | jeffrallen wrote:
       | I work with cryptographers daily and crypto is magic, and the
       | amount of variables you need to consider when working on crypto
       | systems is so large and varied that it takes a team to succeed.
       | You should not roll your own crypto because your recruiting is
       | not good enough to gather that team around you.
       | 
       | The only thing from that article that I agree with is that gate
       | keeping is bad. Crypto is so hard that we need more
       | cryptographers to help us, not less.
        
         | tptacek wrote:
         | Do we need more cryptographers? Do you have a sense of how easy
         | it is for strong cryptographers to get meaningful work in
         | industry doing this stuff? We all know cryptography engineering
         | rates are very high, but that doesn't mean there's a surfeit of
         | open reqs for them; some high-value specialties are performed
         | mostly by consultants because most companies don't need them
         | full-time.
        
           | quadrifoliate wrote:
           | > some high-value specialties are performed mostly by
           | consultants because most companies don't need them full-time
           | 
           | I think there is a conflation here of "cryptographer" in the
           | sense of "person who studies, and maybe invents cryptographic
           | algorithms" v/s "someone who needs a broad understanding of
           | cryptographic algorithms in their day-to-day software work,
           | studies them in some depth, but likely doesn't invent new
           | ones".
           | 
           | The former is the kind you mean, but likely most people
           | (including the OP, I think) are referring to the latter when
           | they say "cryptographers". An unscientific test of this can
           | be done by searching jobs.lever.co for "cryptographer" v/s
           | "cryptography" - the former yield _one_ position, whereas the
           | latter yields _ten pages_. [1][2]
           | 
           | And maybe some of this is just due to the term being
           | imprecise. For example, would you term this position as being
           | a "cryptographer"? https://jobs.lever.co/protocol/9afbc1c9-8b
           | 3b-4c03-856d-6b0cb.... It's certainly full time.
           | 
           | -------
           | 
           | [1] https://www.google.com/search?q=%22cryptographer%22+site%
           | 3Aj...
           | 
           | [2] https://www.google.com/search?q=%22cryptography%22+site%3
           | Ajo...
        
           | Ar-Curunir wrote:
           | I think as more advanced crypto starts being deployed (stuff
            | based on MPC and zkSNARKs), we'll see a need for more long-
            | term cryptographer positions.
        
         | loup-vaillant wrote:
         | Oh, I didn't consider the perspective of a team or a company.
         | For those, I believe I agree with you, if only because it's
         | pretty much impossible to assess a cryptographer if you aren't
         | already one.
         | 
         | For my part, I think I can evaluate people who know less than I
         | do. Those who are _more_ competent than I am however, I could
         | not tell by how much. I 'd have to rely on reputation or past
         | achievements.
        
           | beefhash wrote:
           | At least PhDs in cryptography establish some amount of a
           | baseline in that regard.
        
       | caseymarquis wrote:
       | Ironically, the article has increased my commitment to not
       | writing my own cryptography code except as a hobby.
        
       | RcouF1uZ4gsC wrote:
       | > Chacha20 tends to be naturally immune to timing attacks on most
       | platforms, while AES requires special care if you don't have
       | hardware support.
       | 
       | One of the nice things about the crypto designed by djb is the
       | effort to make it easy to implement safely. For example, as
       | mentioned, Chacha20 is designed to avoid timing attacks.
       | Curve25519 is designed so every 32 byte key is a valid public
       | key.
       | 
        | Just like programming languages are shifting from the C-like view
        | that it is solely the programmer's responsibility to avoid
        | screwing up, to languages like Rust which emphasize safety and
        | make it harder to have an inadvertent memory safety issue, so our
        | crypto algorithms should ideally be designed so that a competent
        | general software engineer can implement them without screwing up.
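        | 
        | For illustration, the ChaCha20 quarter-round (per RFC 8439) uses
        | only 32-bit additions, XORs, and rotations by fixed amounts,
        | which is a big part of why a straightforward implementation
        | tends to be constant time:
        | 
        |   #include <stdint.h>
        |   
        |   #define ROTL32(x, n) (((x) << (n)) | ((x) >> (32 - (n))))
        |   
        |   /* Only add, xor and fixed-distance rotations: no secret-
        |    * dependent branches, indices or shift amounts. */
        |   static void quarter_round(uint32_t *a, uint32_t *b,
        |                             uint32_t *c, uint32_t *d)
        |   {
        |       *a += *b; *d ^= *a; *d = ROTL32(*d, 16);
        |       *c += *d; *b ^= *c; *b = ROTL32(*b, 12);
        |       *a += *b; *d ^= *a; *d = ROTL32(*d,  8);
        |       *c += *d; *b ^= *c; *b = ROTL32(*b,  7);
        |   }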
        
         | greesil wrote:
         | It certainly helps that we have the last 30 years of mistakes
         | to learn from.
        
       | hn_acc_2 wrote:
       | This article completely misses the forest for the trees.
       | 
       | Of course someone can roll their own crypto, if they've a
       | willingness to study and internalize the concepts, have a
       | commitment to doing it right, and spend time doing things like
       | "Make it bug free. Test, test, test. Prove what you can. Be
       | extra-rigorous".
       | 
       | The whole point of that common advice is that the overwhelming
       | majority of developers have none of those things and it would
       | behoove them to lean on a library instead.
        
         | loup-vaillant wrote:
         | One of my hopes is that my article gives an idea of what one
         | would be getting into. That someone without the dedication or
         | rigour to do this would notice right away and back off, _with
         | no hard feelings_.
         | 
          | My other hope is that it would help newcomers focus their
         | learning. Had I read this article 4 years ago, it would have
         | taken me less time to design a reliable enough test suite.
        
           | anticristi wrote:
           | @loup-vaillant This is an excellent write-up! But I would say
           | that the net benefit is that it teaches developers why they
           | should not roll their own crypto, instead of telling them
           | off.
           | 
           | Small tangent, I feel the same way about car servicing: I
           | watch YouTube videos on how to do it, then I pay a mechanic
           | to do it.
           | 
            | Besides not needing to invest time in learning a non-core
            | competence -- most devs need to deliver a user experience and
            | features -- not rolling your own crypto also transfers risk.
            | I doubt you would like to hear that your bank uses a custom
            | IV because they thought it would be cool. :)
           | 
           | In a follow-up, it would be cool if you wrote about SRP
           | authentication.
        
             | loup-vaillant wrote:
             | > _In a follow-up, it would be cool if you wrote about SRP
             | authentication._
             | 
             | First time I hear of this. Looks interesting, but as PAKE
             | goes I know B-SPEKE, AuCPACE, and OPAQUE better. I'm trying
             | to determine which one I want right now, possibly even
             | design my own, but I found those protocols are
             | significantly harder to get right than authenticated key
             | exchange. I also don't know them well enough to competently
             | write about them just yet.
        
       ___________________________________________________________________
       (page generated 2020-07-25 23:00 UTC)