[HN Gopher] Removing the Linux /dev/random blocking pool
       ___________________________________________________________________
        
       Removing the Linux /dev/random blocking pool
        
       Author : lukastyrychtr
       Score  : 234 points
       Date   : 2020-01-07 09:50 UTC (13 hours ago)
        
 (HTM) web link (lwn.net)
 (TXT) w3m dump (lwn.net)
        
       | quotemstr wrote:
       | Not blocking under insufficient entropy does not suddenly make
       | that entropy available. Punting entropy collection to userspace
       | doesn't magically allow for DoS-free random number generation ---
       | it just transforms, silently, a condition of insufficient entropy
       | into a subtle security vulnerability. It feels like a form of
       | reality denial, a bit like overcommit. The more time goes by, the
        | more I wish there were a Unix-like built on robustness,
       | determinacy, and strict resource accounting.
        
         | Hendrikto wrote:
         | > Not blocking under insufficient entropy does not suddenly
         | make that entropy available.
         | 
         | That's why it is still blocking until it has been sufficiently
         | initialized. After it has gathered sufficient entropy, the
         | pool's entropy is _not_ exhausted by reading from it.
         | /dev/random assumes that reading 64 bits from it will decrease
         | the entropy in its pool by 64 bits, which is nonsense.
         | 
         | Linux's PRNG is based on cryptographically strong primitives,
         | and reading output from /dev/random does _not_ expose its
         | internal state.
         | 
         | Your pointless rant just indicates that you do not really
         | understand what's going on.
        
           | cesarb wrote:
           | > /dev/random assumes that reading 64 bits from it will
           | decrease the entropy in its pool by 64 bits, which is
           | nonsense.
           | 
           | It's not complete nonsense. Suppose for explanation purposes
           | that you had a pool with only 256 bits (2^256 possible
           | states), and you read 64 bits from it. Of these 2^256
           | possible states, most would not have output that exact 64-bit
           | value you just read; on average only 2^192 of the possible
           | states would have resulted in that output value. Therefore,
           | once you know that 64-bit value, the pool now has only 192
           | bits of entropy (2^192 possible states). Read three more
           | 64-bit values, and on average only one of the 2^256
           | originally possible states could have resulted in these four
           | 64-bit values; since there's only one possible state, the
           | pool's entropy is zero.
           | 
            | However, like you said "Linux's PRNG is based on
            | cryptographically strong primitives": the _only_ known way to
            | find which of the 2^256 possible states could have resulted
            | in these four 64-bit values would be to try all of them.
            | Which is simply not viable, even with all the computing power
            | in the world. That is, once the number of possible states
            | (the pool's entropy) gets above a threshold, even if you
            | later exhaust all the theoretical entropy from it, there's
            | still no way to know the pool's state.
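            | 
            | To make the state-elimination argument concrete, here is a
            | toy sketch (my own illustration, not kernel code): enumerate
            | every state of a 16-bit "pool" and count how many remain
            | consistent with each observed 4-bit output.
            | 
            |     # Toy entropy accounting; SHA-256 stands in for the
            |     # pool's output function.
            |     import hashlib
            | 
            |     def out4(state, k):
            |         # 4 output bits for read k, derived from the state
            |         msg = state.to_bytes(2, "big") + bytes([k])
            |         return hashlib.sha256(msg).digest()[0] & 0xF
            | 
            |     secret = 0xBEEF                # the unknown pool state
            |     cands = set(range(1 << 16))    # attacker: 2^16 guesses
            |     for k in range(4):
            |         seen = out4(secret, k)
            |         cands = {s for s in cands if out4(s, k) == seen}
            |         print(f"read {k + 1}: {len(cands)} states left")
            | 
            | Each read cuts the candidate set roughly 16x, which is the
            | entropy-debit intuition; at 256 bits that same enumeration
            | is exactly the attack that is computationally infeasible.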
        
             | tptacek wrote:
             | This is not how the LRNG works, or has ever worked. Think
             | of CSPRNG output as the keystream of a stream cipher, which
             | is what it practically is. Draw 256 bytes of keystream out
             | of a stream cipher, and you have in no meaningful sense
             | depleted that cipher's remaining store of unpredictable
             | keystream bytes. Were it otherwise, no modern cryptography
             | would function.
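              | 
              | A toy sketch of that view (mine; SHA-256 in counter mode
              | stands in for the kernel's actual stream cipher): one
              | short key yields an effectively unbounded keystream, and
              | reading more of it depletes nothing.
              | 
              |     import hashlib, os
              | 
              |     key = os.urandom(32)  # the only randomness consumed
              |     ctr = 0
              | 
              |     def keystream(nbytes):
              |         # hash(key || counter): stand-in stream cipher
              |         global ctr
              |         out = b""
              |         while len(out) < nbytes:
              |             blk = key + ctr.to_bytes(8, "big")
              |             out += hashlib.sha256(blk).digest()
              |             ctr += 1
              |         return out[:nbytes]
              | 
              |     chunk = keystream(4096)  # key remains unguessable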
        
           | jerf wrote:
           | "/dev/random assumes that reading 64 bits from it will
           | decrease the entropy in its pool by 64 bits, which is
           | nonsense."
           | 
           | To amplify Hendrikto's point, /dev/random is implemented to
           | "believe" that if it has 128 bits of randomness, and you get
           | 128 bits from it, it now has 0 bits of randomness in it. 0
           | bits of randomness means that you ought to now be able to
           | tell me exactly what the internal state of /dev/random is. I
           | don't mean it vaguely implies that in the English sense, I
           | mean, that's what it _mathematically means_. To have zero
           | bits of randomness _is_ to be fully determined. Yet this is
           | clearly false. There is no known and likely no feasible
            | process to read all the "bits" out of /dev/random and tell
           | me the resulting internal state. Even if there was some
           | process to be demonstrated, it would still not necessarily
           | result in a crack of any particular key, and it would be on
           | the order of a high-priority security bug, but nothing more.
           | It's not an "end of the world" scenario.
        
             | throwaway2048 wrote:
              | Yes, this depleting-entropy argument is like arguing that a
              | 128 bit AES key is no longer secure after it has encrypted
              | 128 bits of data, and that encrypting more will give up the
              | AES secret key, so the ONLY thing to do is block.
              | 
              | It's completely nuts.
        
             | quotemstr wrote:
             | > There is no known and likely no feasible process to read
             | all the "bits" out of /dev/random and tell me the resulting
             | internal state
             | 
             | That's fine if you trust the PRNG. Linux used to at least
             | attempt to provide a source of _true_ randomness. You and
             | Hendrikto are essentially asserting that everyone ought to
             | accept the PRNG output in lieu of true randomness. Given
              | various compromises in RNG primitives over the years, I'm
             | not so sure it's a good idea to completely close off the
             | true entropy estimation to userspace. I prefer punting that
             | choice to applications, which can use urandom or random
             | today at their choice.
             | 
              | Maybe everyone _should_ be happy with the PRNG output.
              | Ts'o goes further and argues, however, that if you provide
             | any mechanism to block on entropy (even to root only),
             | applications will block on it (due to a perception of
             | superiority) and so the interface must be removed from the
             | kernel. I see this change as an imposition of policy on
             | userspace.
        
               | aidenn0 wrote:
               | > That's fine if you trust the PRNG. Linux used to at
               | least attempt to provide a source of true randomness. You
               | and Hendrikto are essentially asserting that everyone
               | ought to accept the PRNG output in lieu of true
               | randomness. Given various compromises in RNG primitives
               | over the years, I'm not so sure it's a good idea to
               | completely close off the true entropy estimation to
               | userspace. I prefer punting that choice to applications,
               | which can use urandom or random today at their choice.
               | 
               | Linux never provided a source of true randomness through
               | /dev/random. The output of both /dev/random and
               | /dev/urandom is from the same PRNG. The difference is
               | that /dev/random would provide an _estimate_ of the
               | entropy that was input to the PRNG, and if the estimate
               | was larger than the number of bits output, it would
               | block.
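                | 
                | In toy pseudocode (my sketch of the policy, not the
                | kernel's actual code), the old difference was roughly:
                | 
                |     import hashlib, itertools, os, time
                | 
                |     estimate_bits = 128  # running entropy *estimate*
                |     _key = os.urandom(32)
                |     _ctr = itertools.count()
                | 
                |     def csprng(n):  # same source urandom draws from
                |         # toy: n <= 32 per call
                |         blk = _key + next(_ctr).to_bytes(8, "big")
                |         return hashlib.sha256(blk).digest()[:n]
                | 
                |     def dev_random_read(n):
                |         global estimate_bits
                |         while estimate_bits < 8 * n:
                |             time.sleep(0.1)     # "block" on events
                |             estimate_bits += 8  # pretend one credited
                |         estimate_bits -= 8 * n  # debit the estimate
                |         return csprng(n)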
        
               | jerf wrote:
               | "You and Hendrikto are essentially asserting that
               | everyone ought to accept the PRNG output in lieu of true
               | randomness."
               | 
               | No, what I am asserting is simply that the idea that you
               | drain one bit of randomness out of a pool per bit you
               | take is not practically true unless you can actually
               | _fully_ determine the state of the randomness generator
                | when you've "drained" it. No less, no more. You can't
                | have "zero bits of entropy" and also "I still can't tell
                | you the internal contents of the random number
                | generator" at the same time, because the latter _is_
                | "non-zero bits of entropy". Either you've got zero or
                | you don't.
               | 
               | As of right now, nobody can so determine the state of the
               | random number generator from enough bits of output, we
               | have no reason to believe anybody ever will [1], and the
                | real kicker is _even if they someday do_, it's a _bug_,
               | not the retroactive destruction of all encryption ever. A
               | SHA-1 pre-image attack is a much more serious practical
               | attack on cryptography than someone finding a way to
               | drain /dev/random today and get the internal state.
               | 
               | It's only true in theory that you've "drained" the
               | entropy when you have access to amounts of computation
               | that do not fit into the universe. Yes, it is still true
               | in theory, but not in a useful way. We do not need to
               | write our kernels as if our adversaries are turning
               | galaxy clusters into computronium to attack our random
               | number generator.
               | 
               | [1]: Please carefully note distinction between "we have
               | no reason to believe anyone ever will" and "it is
               | absolutely 100% certain nobody ever will". We have no
               | choice but to operate on our best understandings of the
                | world now. "But maybe somebody will break it" doesn't
                | apply to just the current random generator... it applies to
               | _everything_ including all possible proposed
               | replacements, so it does not provide a way to make a
               | choice.
        
             | jlokier wrote:
              | The entropy value was designed to be a _known
              | underestimate_, not an accurate estimate of the entropy
              | available.
              | 
              | With that in mind, zero is OK as a value. You may not be
             | able to calculate the state of /dev/random given the tools
             | and techniques available to you, but that doesn't make zero
             | an incorrect lower bound on what you could _mathematically_
             | calculate from the extracted data.
        
               | tptacek wrote:
               | In reality, the entropy estimate is of no value. See
               | Ferguson and Schneier, who have a chapter on this.
               | 
               | The meaningful states of a CSPRNG are "initialized" or
               | "not". Once initialized, there is never a diminishment of
               | "entropy".
        
               | jlokier wrote:
               | I agree with that, subject to the assumption that your
               | CSPRNG is built on CS-enough primitives.
               | 
               | (What I disagreed with is the argument made by the GP,
               | not you, that the Linux entropy value was incompatible
               | with their in-principle mathematical description of
               | "true" entropy. Pretty irrelevant to real cryptography.)
        
         | saagarjha wrote:
         | Isn't overcommitment a feature of most modern operating
         | systems?
        
         | imtringued wrote:
         | Actually you are making exactly the mistake that this change is
         | intended to avoid. The distinction between /dev/random and
         | /dev/urandom makes it appear that /dev/urandom is inferior and
         | /dev/random is the "true" random number generator but that
         | isn't the case. They are equally good since they both wait for
         | an initial amount of entropy and receive additional entropy
         | from the OS even after boot. Additional entropy does make the
         | RNG more secure [0] but reusing existing entropy doesn't make
         | your random numbers less secure because they were created from
         | an unpredictable source.
         | 
          | [0] Let's say you plug in a USB-based hardware RNG after
          | booting, or turn on a software entropy source after booting.
         | /dev/random would immediately take advantage of the RNG.
         | Already opened /dev/urandom streams wouldn't, until they are
         | closed. (For how many people is this a critical feature?)
        
           | ryukafalz wrote:
           | > The distinction between /dev/random and /dev/urandom makes
           | it appear that /dev/urandom is inferior and /dev/random is
           | the "true" random number generator but that isn't the case.
           | They are equally good since they both wait for an initial
           | amount of entropy and receive additional entropy from the OS
           | even after boot.
           | 
           | I was under the impression that urandom does not wait for
           | initial entropy on Linux. Am I mistaken/has this changed?
        
         | [deleted]
        
       | edoceo wrote:
       | This looks like it explains why syslog-ng is hanging on boot?
        | It's trying to read from /dev/random. At least, it hangs until
        | there is some randomness available (I have to mash the keyboard
        | a bit).
        
         | beefhash wrote:
         | I am now somewhat curious why syslog-ng needs random bytes on
         | boot.
        
         | lanternslight wrote:
         | Ugh! That explains so much! I did not correlate the mashing of
         | keys and mouse clicks with RNG.
        
         | downerending wrote:
         | This reminds me of the (possibly apocryphal) story of old VMS
         | (?) systems hanging if their console, which was often a
         | printer, ran out of paper.
        
       | latchkey wrote:
       | Filed this one in 2011 and it got a lot of heated discussion...
       | 
       | https://bugs.launchpad.net/bugs/706011
        
         | jancsika wrote:
         | A professional response on a bug report seeks to narrow down
         | the possible source of a bug (if any) so it may be understood,
         | tested, and addressed properly.
         | 
          | The first response to start such a process is taligent's, in
          | response #22.
         | 
         | A useful addition to that is #23 where Steven Ayre suggests
         | opening that as a separate bug that focuses solely on this
         | issue.
         | 
         | I'm not sure what the purpose is for the other responses you
         | received. They seem to seek to use the breadth of your issue
         | report to _widen_ the discussion to maximally contentious
         | security topics.
        
           | JdeBP wrote:
           | That's not a fair assessment of the other responses. Steve
           | McIntyre's in #7, for one example.
        
       | saagarjha wrote:
       | Am I correct in my understanding that /dev/random will not block
       | anymore and behave similarly to /dev/urandom after it has been
       | initialized? Or is there still some inherent difference between
       | the two?
        
         | hannob wrote:
         | The "after it has been initialized" is the inherent difference.
        
       | devit wrote:
       | There are two threat models against code using RNGs:
       | 
       | 1. The adversary has an amount of computing power that is
       | feasible as currently foreseeable: in this case, all you need are
       | K "truly random" bits where K=128/256/512 and can then use a
       | strong stream cipher or equivalent to generate infinite random
       | bits, so you only need to block at boot to get those K bits, and
       | you can even store them on disk from the previous boot and have
       | them passed from an external source at install time
       | 
       | 2. The adversary has unlimited computing power: in this case, you
       | need hardware that can generate truly random bits and can only
       | return randomness at the rate the hardware gives you the bits
       | 
       | Now obviously if you are using the randomness to feed an
       | algorithm that is only strong against feasible computing power
       | (i.e. all crypto except one-time pads) then it doesn't make sense
       | to require resistance against unlimited computing power for the
       | RNG.
       | 
       | So in practice both /dev/urandom, /dev/random, getrandom(), etc.
       | should only resist feasible computing power, and resisting
       | against unlimited computing power should be a special interface
        | that is never used by default except by tools that generate
        | one-time pads or equivalent.
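        | 
        | For case 1, a hedged sketch of carrying those K bits across
        | reboots (file path and sizes are my own choices; systemd-
        | random-seed does the production version of this):
        | 
        |     import hashlib, os
        | 
        |     SEED_FILE = "/var/lib/toy-seed"   # hypothetical path
        | 
        |     def load_and_refresh_seed():
        |         old = b""
        |         if os.path.exists(SEED_FILE):
        |             with open(SEED_FILE, "rb") as f:
        |                 old = f.read()
        |         fresh = os.urandom(32)        # entropy from this boot
        |         seed = hashlib.sha256(old + fresh).digest()
        |         # Overwrite so the same seed is never used twice.
        |         with open(SEED_FILE, "wb") as f:
        |             f.write(hashlib.sha256(seed + b"next").digest())
        |         return seed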
        
         | fooker wrote:
         | With truly unlimited computing power, you would just brute
         | force it.
         | 
         | Hence, this is not a credible threat model.
        
         | xyzzyz wrote:
         | > 2. The adversary has unlimited computing power: in this case,
         | you need hardware that can generate truly random bits and can
         | only return randomness at the rate the hardware gives you the
         | bits
         | 
          | What would you need those bits for in that case? Literally the
          | only thing that comes to mind is generating one-time pads,
          | as standard cryptography is useless in such a scenario.
        
           | firethief wrote:
           | Game-theoretically, you want a source of random numbers when
           | you need to make a decision your adversary can't predict.
           | Traditionally some cultures have (accidentally?) used bird
           | augury for this, but obviously that won't do when you're up
           | against Unlimited Computing Power, as birds are basically
           | deterministic.
        
           | [deleted]
        
         | pdonis wrote:
         | _> There are two threat models against code using RNGs_
         | 
         | Actually, there are three:
         | 
         | 3. The adversary has the ability to put backdoors in your
         | hardware so you can't trust it to give you truly random bits at
         | all.
        
       | Erlich_Bachman wrote:
       | Summarizing the article, `cat /dev/random` will still work but
       | will never block, possibly returning random data based on less
       | entropy than before. They claim that in the modern situation
       | there is already enough entropy in it even for secure key
        | generation. There will seemingly still be a programmatic way to
        | get a random stream based on a predictable amount of entropy,
       | but not through reading this filesystem node.
        
         | gioele wrote:
         | > Summarizing the article, `cat /dev/random` will still work
         | but will never block
         | 
         | `cat /dev/random` may still block, but only once per reboot. It
         | may block if it is called so early that not enough entropy has
          | been gathered yet. Once enough entropy has been gathered, it
          | will never block again.
        
           | simias wrote:
            | As mentioned in the article, that's already the default
           | behaviour of getrandom() and the BSDs have symlinked
           | /dev/random to /dev/urandom for a long time already.
           | 
           | I think this is a change for the best, in particular this bit
           | sounds completely true to my ears:
           | 
           | > Theodore Y. Ts'o, who is the maintainer of the Linux
           | random-number subsystem, appears to have changed his mind
           | along the way about the need for a blocking pool. He said
           | that removing that pool would effectively get rid of the idea
           | that Linux has a true random-number generator (TRNG), which
           | "is not insane; this is what the *BSD's have always done".
           | He, too, is concerned that providing a TRNG mechanism will
           | just serve as an attractant for application developers. He
           | also thinks that it is not really possible to guarantee a
           | TRNG in the kernel, given all of the different types of
           | hardware supported by Linux. Even making the facility only
           | available to root will not solve the problem: Application
           | programmers would give instructions requiring that their
           | application be installed as root to be more secure, "because
            | that way you can get access to the _really_ good random
           | numbers".
           | 
            | The number of times I've had to deal with security-related
            | software and scripts that insisted on sampling /dev/random
            | and stalled for minutes at a time...
        
             | JdeBP wrote:
             | A minor note:
             | 
             | * Only FreeBSD symbolically links, and it does it in the
             | other direction. urandom is the symbolic link to random.
             | 
             | * OpenBSD has four distinct character device files: random,
             | arandom, srandom, and urandom.
             | 
             | * NetBSD (as of 2019) has two distinct character device
             | files: random and urandom. They have different semantics
             | from each other. https://netbsd.gw.com/cgi-bin/man-
             | cgi?rnd+4+NetBSD-current
        
               | aquabeagle wrote:
                | On OpenBSD:
                | 
                |     $ ls -l /dev/*random*
                |     lrwxr-xr-x  1 root  wheel       7 Dec 10 15:05 /dev/random@ -> urandom
                |     crw-r--r--  1 root  wheel  45,   0 Jan  6 15:30 /dev/urandom
        
               | JdeBP wrote:
                | That must be a recent change.
                | 
                |     $ ls -F /dev/*random*
                |     /dev/arandom  /dev/random  /dev/srandom  /dev/urandom
                |     $
        
               | ben_bai wrote:
                | Deleted in 2017: https://marc.info/?l=openbsd-
                | cvs&m=151069089605712&w=2 ("you can delete arandom and
                | srandom").
               | 
               | Edit: better link
        
         | ale_jrb wrote:
          | My understanding is that reading from `/dev/random` will still
          | block just after boot (i.e. before it's initialised) unless you
         | pass the new flag `GRND_INSECURE` to `getrandom`. After it's
         | initialised, however, it will never block because it's now just
         | using the CRNG.
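          | 
          | For reference, a sketch using Python's os.getrandom, which
          | wraps the syscall (GRND_INSECURE is new enough that I spell
          | out its kernel uapi value by hand):
          | 
          |     import os
          | 
          |     # Blocks only until the CRNG is initialised, then never
          |     # again.
          |     buf = os.getrandom(32)
          | 
          |     # GRND_INSECURE (0x0004 in the kernel uapi) returns best-
          |     # effort bytes even before initialisation; kernels older
          |     # than 5.6 reject the flag with EINVAL.
          |     GRND_INSECURE = 0x0004
          |     early = os.getrandom(32, GRND_INSECURE)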
        
           | richardwhiuk wrote:
           | Please don't confuse `/dev/random` and `getrandom` - they are
           | separate interfaces.
        
       | JdeBP wrote:
        | Interestingly, this follows the systemd people's 2018 change to
        | their seed-at-boot tool, systemd-random-seed, making it write
        | the machine-ID as the first 16 bytes of seed data to /dev/random
        | at every seed write.
       | 
       | * https://github.com/systemd/systemd/commit/8ba12aef045ba1a766...
       | 
       | * https://www.freedesktop.org/software/systemd/man/systemd-ran...
       | 
       | * http://jdebp.uk./Softwares/nosh/guide/commands/machine-id.xm...
        
       | walterbell wrote:
       | On the subject of TRNG, John Denker wrote a 2005 paper for using
       | soundcard data as a source of randomness,
       | http://www.av8n.com/turbid/
       | 
       |  _> We discuss how to configure and use turbid, which is a
       | Hardware Random Number Generator (HRNG), also called a True
       | Random Generator (TRNG). It is suitable for a wide range of
       | applications, from the simplest benign applications to the most
       | demanding high-stakes adversarial applications, including
       | cryptography and gaming. It relies on a combination of physical
       | process and cryptological algorithms, rather than either of those
       | separately. It harvests randomness from physical processes, and
       | uses that randomness efficiently. The hash saturation principle
       | is used to distill the data, so that the output is virtually 100%
       | random for all practical purposes. This is calculated based on
       | physical properties of the inputs, not merely estimated by
       | looking at the statistics of the outputs. In contrast to a
       | Pseudo-Random Generator, it has no internal state to worry about.
       | In particular, we describe a low-cost high-performance
        | implementation, using the computer's audio I/O system._
       | 
       | On randomness, http://www.av8n.com/turbid/paper/turbid.htm#sec-
       | raw-randomne...
       | 
       |  _> Understanding turbid requires some interdisciplinary skills.
       | It requires physics, analog electronics, and cryptography._
        
         | b2ccb2 wrote:
          | Reminds me of Cloudflare's wall of lava lamps[1], discussed
         | here: https://news.ycombinator.com/item?id=16041295
         | 
         | [1] https://www.cloudflare.com/learning/ssl/lava-lamp-
         | encryption...
        
           | brian_herman wrote:
           | How long does the lava lamp system take to start from a cold
           | boot?
        
             | RL_Quine wrote:
             | About half an hour to get a solid glob going for mine, too
             | long and you tend to end up with a large bias towards the
             | top. I looked into this because I wanted to pre warm my
             | lava lamp because it's depressing to wake up to cold blobs.
        
         | hannob wrote:
         | This is one of these approaches that are for all practical
         | purposes completely useless.
         | 
          | There are basically only three problems in generating good
          | randomness:
          | 
          | 1. Very early during boot you don't have a lot of good
          | sources.
          | 
          | 2. On very constrained devices you have limited external
          | input.
          | 
          | 3. Bugs in the implementation.
         | 
         | Randomness from soundcards doesn't help with any of these. They
         | probably aren't yet initialized at the point in time where it
         | matters most, they don't exist for the most problematic devices
         | and bugs are bugs, no matter what your source of randomness.
        
           | bigiain wrote:
           | > Randomness from soundcards doesn't help with any of these.
           | 
           | "Randomness from soundcards" is also spectacularly unlikely
           | to help on the cloud. Pretty sure they don't fit
           | SoundBlaster16s to EC2 instances or Digital Ocean Droplets...
        
             | skrebbel wrote:
             | I want a cloud version of Dr. Sbaitso!
        
             | simias wrote:
             | Given that these instances are typically virtualized I
             | wouldn't be surprised if you could extract a decent amount
             | of entropy just by using the system timings (interrupts,
             | RTC sampling etc...) given that they would be affected by
             | the other running systems. And of course there's always the
             | network card.
             | 
             | In my experience it's not usually super difficult to get a
             | decent amount of entropy on complex desktop or servers,
             | it's only a real issue on simple embedded hardware where
             | you might have no way to sample the environment and all the
             | timings are ultra-deterministic. In this case I've resorted
             | to using temperature measurements and fan speed as a source
             | of entropy during early boot, which isn't ideal.
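              | 
              | A crude sketch of the timing idea (a toy, emphatically not
              | a vetted entropy source): sample a high-resolution clock
              | around events whose latency the guest can't fully predict
              | and hash the jitter into a pool.
              | 
              |     import hashlib, time
              | 
              |     pool = hashlib.sha256()
              |     for _ in range(10000):
              |         t0 = time.perf_counter_ns()
              |         time.sleep(0)  # yield; reschedule latency varies
              |         dt = time.perf_counter_ns() - t0
              |         pool.update(dt.to_bytes(8, "big"))
              |     # Still needs real entropy estimation and whitening.
              |     candidate_seed = pool.digest()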
        
               | pjc50 wrote:
               | > given that they would be affected by the other running
               | systems
               | 
               | Philosophically, does that make it more or less random?
               | The whole category of side-channels suggests that there
               | are problems with sharing a system like this.
               | 
               | There's also the philosophical question of how
               | cryptographically secure a fully virtual system can be
               | when the host has full inspection and control capability.
               | If you're running in the cloud you need to think
               | carefully about how to delegate this to the platform if
               | at all possible.
        
               | tinus_hn wrote:
               | It would seem quite helpful if these virtual machines
                | provided a virtual device that exposes random numbers
                | supplied by the host.
        
           | simias wrote:
           | It could potentially help with 1 if you have some early
           | bootstrap code to configure the sound card and get some
           | samples. I agree with your general point however.
        
             | atoav wrote:
             | I probably don't know enough about computers, but wouldn't
              | there be a way to use e.g. the digital output of an ADC
             | directly even in the earliest part of boot? Is there a good
             | reason why CPUs aren't doing this already?
        
               | pjc50 wrote:
               | Well, if you have one, and if the thing it's listening to
               | is sufficiently random - both of which are in question
               | here.
               | 
               | This is sort of the problem the CPU random number
               | generator is intended to solve, but see the discussion on
               | trust.
        
         | amelius wrote:
         | CPUs already have a physics-based random number generator.
         | 
         | https://en.wikipedia.org/wiki/RDRAND
        
           | CodeArtisan wrote:
           | https://sharps.org/wp-content/uploads/BECKER-CHES.pdf
           | 
           | This paper demonstrates that by adding a small amount of
            | dopant[1] to the RDRAND circuitry, you can weaken it enough
            | to break keys generated from it, while it still passes the
            | NIST suite. And the modification is undetectable by optical
            | inspection.
           | 
           |  _In this paper we introduced a new type of sub-transistor
           | level hardware Trojan that only requires modification of the
           | dopant masks. No additional transistors or gates are added
           | and no other layout mask needs to be modified. Since only
            | changes to the metal, polysilicon or active area can be
            | reliably detected with optical inspection, our dopant Trojans
            | are immune to optical inspection, one of the most important
            | Trojan detection mechanisms. Also, without the ability to use
           | optical inspection to distinguish Trojan-free from Trojan
           | designs, it is very difficult to find a chip that can serve
           | as a golden chip, which is needed by most post-manufacturing
           | Trojan detection mechanisms. To demonstrate the feasibility
           | of these Trojans in a real world scenario and to show that
           | they can also defeat functional testing, we presented two
           | case studies. The first case study targeted a design based on
           | Intel's secure RNG design. The Trojan enabled the owner of
           | the Trojan to break any key generated by this RNG.
           | Nevertheless, the Trojan passes the functional testing
           | procedure recommended by Intel for its RNG design as well as
            | the NIST random number test suite. This shows that the dopant
           | Trojan can be used to compromise the security of a meaningful
           | real-world target while avoiding detection by functional
           | testing as well as Trojan detection mechanisms. To
           | demonstrate the versatility of dopant Trojans, we also showed
           | how they can be used to establish a hidden side-channel in an
           | otherwise side-channel resistant design. The introduced
           | Trojan does not change the logic value of any gate, but
           | instead changes only the power profile of two gates. An
           | evaluator who is not aware of the Trojan cannot attack the
           | Trojan design using common side-channel attacks. The owner of
           | the Trojan however can use his knowledge of the Trojan power
           | model to establish a hidden side-channel that reliably leaks
           | out secret keys._
           | 
           | [1] https://en.wikipedia.org/wiki/Doping_(semiconductor)
        
             | littlestymaar wrote:
             | Amazing!
        
           | noodlesUK wrote:
            | An implementation that is not good by itself. Not least
            | because some AMD CPUs take the XKCD approach [1][2].
           | 
           | [1] https://www.xkcd.com/221/ [2]
           | https://arstechnica.com/gadgets/2019/10/how-a-months-old-
           | amd...
        
           | saagarjha wrote:
            | Which some worry may be backdoored by intelligence
           | agencies: https://github.com/torvalds/linux/blob/6398b9fc818e
           | ea79dcd6e...
        
             | kalleboo wrote:
             | I'm a lot less worried about an NSA backdoor than I am that
              | Intel (or MediaTek, or whatever cheap ARM licensee is in my
             | router) just fucked up the implementation
        
               | throwaway2048 wrote:
                | First generation AMD Zen CPUs returned all 1s for RDRAND
                | instructions, for instance.
        
             | Jasper_ wrote:
             | What's stopping the NSA from inserting a backdoor to
             | recognize it's running kernel randomness code and change
             | the results too? If you don't trust your CPU, you can't
             | trust anything it does. Expecting the backdoor to show up
              | in a single instruction is hopelessly naive.
        
               | feanaro wrote:
               | Why does anyone even continue to bother arguing this?
               | 
               | There are ways of mixing RDRAND into the entropy pool
               | safely and this can be done easily. Why would you
               | deliberately _choose to_ not mix RDRAND and use it
                | directly instead? You wouldn't. It makes no sense.
               | Therefore, RDRAND should be mixed into the pool, it _is_
               | being mixed into the pool and there is no more reason to
               | debate this.
        
               | tytso wrote:
               | Yes, and Linux has done it for years. The problem is
                | whether or not, in the absence of sufficient estimated
                | entropy, RDRAND should be trusted to unblock the CRNG
                | during the boot process. This is what
                | CONFIG_RANDOM_TRUST_CPU or random.trust_cpu=on on the
                | boot command line is all about. Should RDRAND be trusted
                | in isolation? And I'm not going to answer that for you; a
               | cypherpunk and someone working at the NSA might have
               | different answers to that question. And it's
               | fundamentally a social, not a technical question.
        
               | throwaway2048 wrote:
               | The blocking CRNG (besides the necessary early seeding)
                | is an entirely artificial problem, however.
        
               | littlestymaar wrote:
               | "you can't be safe from all attacks" doesn't means you
               | should not attempt to protect yourself from the obvious
               | ones.
        
               | Jasper_ wrote:
               | If you expect RDRAND to change its behavior to be
               | nefarious, also expect ADD and MUL to do the same. The
               | RDRAND conspiracy theory is bizarre because underneath
               | lies a core belief that a malicious entity would go so
               | far as to insert a backdoor, but put it strictly in a
               | simple and easily avoidable place? So they're malicious
               | entities, but can only touch certain instructions?
               | 
               | Malicious entities don't play by made up rules. RDRAND
               | being a convenient scapegoat seems like exactly the thing
               | they'd want, too.
        
               | CJefferson wrote:
                | There have been bugs in RDRAND; AMD processors would
                | keep returning 0 in some cases.
        
               | saagarjha wrote:
               | 0xffffffff: https://arstechnica.com/gadgets/2019/10/how-
               | a-months-old-amd...
        
               | imtringued wrote:
                | Because obtaining a seed is the only non-deterministic
                | part of an RNG. Once you have the seed you can trivially
               | predict the next numbers. Since random numbers are used
               | to generate encryption keys, being able to manipulate
               | random numbers also allows you to defeat encryption. The
               | way the RDRAND backdoor would work is pretty simple. When
               | it is activated (system wide) it simply needs to return
               | numbers based on a deterministic RNG (with a seed known
               | to the owners of the backdoor). To an observer it would
               | still work as intended and there is no way to prove that
               | there is a backdoor.
               | 
               | Verifying an ADD instruction is very easy and if it were
               | to return wrong results then it would be obvious that it
               | is buggy. Programs would cease to work. Alternatively it
               | would have to be incredibly smart and detect the
               | currently executed program and exfiltrate the encryption
               | key during the execution of the encryption function.
               | 
               | The first backdoor scales to millions of potential
               | targets. The second is a carefully targeted attack
               | against a known enemy. Targeted attacks have cheaper
               | alternatives than silicon back doors.
        
               | littlestymaar wrote:
               | I don't get your point. Should we assume there is no
                | backdoor at all just because we can't predict every kind
                | of backdoor there might be?
               | 
               | Being defensive against RDRAND is just one (relatively
               | easy) way to defend oneself against a (relatively easy to
               | implement) backdoor. Yes, this defense isn't perfect,
               | because there can be other (more difficult to escape)
                | backdoors, but that's not a compelling reason to ignore
                | the "easy" ones...
        
               | pdpi wrote:
               | There's a reasonable argument to be made that limiting
               | the backdoor to very specific instructions that
               | specifically target the domains you care about makes it
               | less likely that your backdoor will be triggered by
               | accident. Escaping detection is just as important here.
        
               | tialaramex wrote:
               | The concern for RDRAND was really that it might be used
               | to exfiltrate data.
               | 
               | Imagine a parallel universe where everybody happily just
               | uses RDRAND to get random numbers. For example when you
                | connect to an HTTPS server, as part of the TLS protocol it
               | sends you a whole bunch of random bytes (these are
               | crucial to making TLS work, similar approaches happen in
               | other modern protocols). But in that universe those bytes
               | came directly from RDRAND, after all it's random...
               | 
               | Except, if RDRAND is actually an exfiltration route, what
               | you've just done is carve a big hole in your security
               | perimeter to let RDRAND's owners export whatever they
               | want.
               | 
               | XORing RDRAND into an apparently random bitstream is thus
               | safe because the bitstream is now random if either RDRAND
               | works as intended OR your apparently random bitstream is
               | indeed random.
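                | 
                | A toy sketch of why that combine is safe (assuming the
                | two inputs are independent; the kernel's real mixing is
                | more involved than a bare XOR):
                | 
                |     import os
                | 
                |     def combine(a, b):
                |         # Uniform if EITHER input is uniform and
                |         # independent of the other.
                |         return bytes(x ^ y for x, y in zip(a, b))
                | 
                |     maybe_rdrand = os.urandom(32)  # pretend: RDRAND
                |     pool_bytes = os.urandom(32)    # other sources
                |     output = combine(maybe_rdrand, pool_bytes)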
        
               | mnw21cam wrote:
               | > XORing RDRAND into an apparently random bitstream is
               | thus safe because the bitstream is now random if either
               | RDRAND works as intended OR your apparently random
               | bitstream is indeed random.
               | 
               | That's making assumptions. For instance, it wouldn't be
               | beyond the realms of possibility for the compromised CPU
               | to also track which registers contain a result produced
               | from RDRAND, and make (X XOR RDRAND) produce a
               | predictable result. After all, RDRAND is already an
               | undefined number, so the system can legitimately decide
               | later what it would like it to be. Yes, it would require
               | more silicon in the registers, re-ordering and dispatch
               | system, and ALU, but it would be feasible.
        
               | cwzwarich wrote:
               | A change that large (probably requiring new custom RAMs
               | and modifications to fairly timing-constrained register
               | renaming logic) doesn't seem feasible for somebody to
               | insert below the RTL level without being noticed. It
               | would be much easier to just make RDRAND somehow less
               | random, while still passing whatever randomness test is
               | used for qualification.
        
               | jlokier wrote:
               | I don't see how the RDRAND change would be gotten away
               | with either, if someone else is looking at the silicon
               | design.
               | 
               | To modify RDRAND so that it is less random in a way
               | that's useful for an attacker, yet passes statistical
               | randomness testing by the OS and other software, would
               | require RDRAND to implement something cryptographic, so
               | that only the attacker, knowing a secret, can "undo" the
               | not-really-randomness.
               | 
               | A new crypto block would surely be very noticable at the
               | RTL level.
        
               | littlestymaar wrote:
               | Another comment[1] gave a link to an existing
                | implementation of such a backdoor using only doping. It
                | doesn't implement a cryptographic scheme, but weakens
                | the randomness in a way that still passes the NIST test
                | suite.
               | 
               | [1]: https://news.ycombinator.com/item?id=21979268
        
               | cesarb wrote:
               | > If you expect RDRAND to change its behavior to be
               | nefarious, also expect ADD and MUL to do the same.
               | 
               | There's a very important difference: ADD and MUL are
               | deterministic, while the output of RDRAND is random. If
               | ADD or MUL or FDIV return incorrect results, that can be
               | detected (as the Pentium has shown). If RDRAND returns
               | backdoored values, it cannot be detected by only looking
               | at its output; you have to check the circuit.
        
               | pbhjpbhj wrote:
               | CodeArtisan posted upstream,
               | https://news.ycombinator.com/item?id=21979268, about a
               | method of doping that effectively makes a side-channel
               | (crazy!).
               | 
               | Surely you can check the output and see if it's random?
                | Don't these attacks rely on perturbing the RNG so it's no
                | longer a TRNG, and isn't output the only way to tell??
        
               | imtringued wrote:
               | >Surely you can check the output and see if it's random?
               | 
               | No you can't. That's an inherent property of randomness.
               | If you're lucky you can win the lottery 10 times in a
               | row. The only thing you can verify is that the random
               | number generator is not obviously broken (see AMD's
               | RDRAND bug) but you can't verify that it's truly random.
               | 
               | > isn't output the only way to tell??
               | 
               | Looking at the implementation is the only way to tell.
               | 
               | Let's say I generate a list of one million random numbers
               | through a trustworthy source. Now I build a random number
               | generator that does nothing but just return numbers from
               | the list.
               | 
               | There is an impractical way of verifying a random number
               | generator that is only useful in theory. Remember how you
               | can flip a coin and take 1000 samples and you get roughly
               | 1/2 probability for each side? The number of samples you
               | have to take grows with the number of outcomes. If your
               | RNG returns a 64 bit number you can take 2^64*x samples
                | where x is a very large number (the larger the better). x
                | = 1 is already impractical (roughly 200 years @ 3 billion
                | RDRAND/s) but to be really sure you would probably need x
               | > 10000. Nobody on earth has that much time. Especially
               | not CPU manufacturers that release new chips every year.
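                | 
                | The back-of-envelope arithmetic behind that figure (my
                | own check):
                | 
                |     # Time to draw 2^64 samples at 3e9 samples/second.
                |     seconds = 2**64 / 3e9
                |     years = seconds / (365 * 24 * 3600)
                |     print(round(years))   # ~195 years for x = 1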
        
               | littlestymaar wrote:
               | > but you can't verify that it's truly random.
               | 
               | Even if you cannot be 100% sure that something is not
               | really random, there are plenty of statistical
               | measurements that can be used to assess the quality of
               | the output (in terms of entropy).
               | 
               | > Let's say I generate a list of one million random
               | numbers through a trustworthy source. Now I build a
               | random number generator that does nothing but just return
               | numbers from the list.
               | 
                | In less than a second your generator would stall, which
                | is a pretty obvious flaw to see.
                | 
                | The real issue is cryptographic algorithms, because they
                | are designed to simulate randomness, and they improve
                | adversarially as statistical methods of analysis become
                | more powerful. At every single point in time, state of
                | the art cryptography is going to be able to produce fake
                | randomness that the current state of the art
                | cryptanalysis cannot prove as non-random.
        
               | jlokier wrote:
               | > At every single point in time, state of the art
               | cryptography is going to be able to produce fake random
               | that the current state of the art cryptanalysis cannot
               | prove as not random.
               | 
               | That may not be true. (As in, I'm not sure it is.)
               | 
               | For many useful crypto algorithms where we give a nominal
               | security measure (in bits), there is a theoretical attack
               | that requires several fewer bits but is still infeasible.
               | 
               | For example, we might say a crypto block takes a 128-bit
               | key and needs an order of magnitude 2^128 attempts to
               | brute force the key.
               | 
               | The crypto block remains useful when academic papers
                | reveal how they could crack it in about 2^123 attempts.
               | 
               | The difference between 2^128 and 2^123 is irrelevant in
               | practice as long as we can't approach that many
               | calculations. But it does represent a significant bias
               | away from "looks random if you don't know the key".
               | 
               | It seems plausible to me that a difference like that
               | would manifest in statistical analysis that state of the
               | art cryptanalysis _could_ prove as not random by
               | practical means, while still unable to crack (as in
               | obtain inputs) by practical means.
        
               | saagarjha wrote:
               | Detecting backdoored random number generators is quite
               | difficult.
        
               | littlestymaar wrote:
               | Or even impossible: if the not-so-random value comes from
               | an AES stream (or any encryption method actually), it's
               | impossible to prove as long as AES is not broken (proving
               | that an AES stream can be distinguished from random is a
               | sufficient definition of "broken" in the crypto
               | community).
        
               | cesarb wrote:
               | > What's stopping the NSA from inserting a backdoor to
               | recognize it's running kernel randomness code and change
               | the results too?
               | 
               | It's much much harder. They'd have to insert something on
               | the frontend (where the instruction decoder is) or on the
               | L1 instruction cache to recognize when it's running that
               | specific piece of code; both parts are very critical for
               | the processor performance, so every gate of delay counts.
               | And that's before considering that the Linux kernel code
               | changes unpredictably depending on the compiler, kernel
               | configuration options, and kernel release. Oh, and you
               | have to be very precise in detecting that code, to make
               | sure nothing else misbehaves or even gets slower (some
               | people count cycles on parts of their code, so an
               | unexpected slowness would get noticed).
               | 
               | Contrast with RDRAND, an instruction which is _defined_
               | to return an unpredictable value; it would be simple to
               | make its output depend on a counter mixed with a serial
               | number and a couple of bits of real randomness, instead
                | of being fully random. It's not even on the performance-
               | critical part of the chip; it's isolated on its own
               | block, so adding a backdoor to it would cause no
               | performance problems, and would break no other software.
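                | 
                | A toy of that kind of construction (mine, purely
                | illustrative; CHIP_SECRET and the hash are stand-ins):
                | output that looks uniformly random from the outside but
                | is fully replayable by whoever holds the per-chip
                | secret.
                | 
                |     import hashlib, itertools
                | 
                |     CHIP_SECRET = b"known only to the attacker"  # toy
                |     _ctr = itertools.count()
                | 
                |     def backdoored_rdrand():
                |         msg = CHIP_SECRET + next(_ctr).to_bytes(8, "big")
                |         h = hashlib.sha256(msg).digest()
                |         return int.from_bytes(h[:8], "big")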
        
       | emilfihlman wrote:
        | Changing getrandom was just idiotic and breaks goddamn
        | userspace. The man page documentation was extremely clear in
        | the first place.
        
       | CiPHPerCoder wrote:
       | Making /dev/random behave like getrandom(2) will finally put to
       | rest one of the most frustrating arguments in the public
       | understanding of cryptography. Please do it.
        
         | jdormit wrote:
         | What argument are you referring to?
        
           | geofft wrote:
           | The question of whether "true random numbers" as defined by
           | this weird government standard exist.
           | 
           | More fundamentally, an important conceptual part of crypto is
           | that you can use a random key of a very small, fixed size,
           | like 16 bytes, to generate an infinite amount of output, such
           | that knowing any part of the output doesn't help you guess at
           | any other part of the output nor the input key. If true, this
           | is an amazingly powerful property because you can securely
           | exchange (or derive) a short key once and then you can send
           | as much encrypted data back and forth as you want. If you
           | needed to re-seed the input with as many bits as you take
           | from the output, disk encryption wouldn't be sound (you'd
           | need a key or passphrase as long as your disk), SSL would be
           | way more expensive (and couldn't be forward-secure) if it
           | even worked at all, Kerberos wouldn't work, Signal wouldn't
           | work, etc.
           | 
           | The claim of /dev/random and this wacky government standard
           | is that in fact disk encryption, SSL, etc. are flawed
           | designs, good enough for securing a single communication but
           | suboptimal because they encrypt more bits of data than the
           | size of the random key, and so when generating random keys,
           | you really ought to use "true random numbers" so that
           | breaking one SSL connection doesn't help you break another.
           | Whether it's a pen-and-paper cipher like Vigenere or a fancy
           | modern algorithm like AES, anything with a fixed-size key can
           | be cryptanalyzed and you shouldn't provide too much output
            | with it; for your secure stuff you must use a one-time pad.
           | The claim of the cryptography community is that, no, in fact,
           | there's nothing flawed about this approach and stretching
           | fixed-size keys is the very miracle of cryptography. We know
           | how to do it securely for any worthwhile definition of
           | "securely" (including definitions where quantum computers are
           | relevant) and we should, because key-based encryption has
           | changed our world for the better.
        
             | throw0101a wrote:
             | > ... _like 16 bytes, to generate an infinite amount of
             | output, such that knowing any part of the output doesn 't
             | help you guess at any other part of the output nor the
             | input key._
             | 
             | Isn't that the theory behind every stream cipher? (And
             | stream ciphers are generally just 'simplified' one-time
             | pads.)
             | 
              | That's what OpenBSD's arc4random(4) started as: the output
              | of RC4.
        
               | ben_bai wrote:
                | Yep. The OpenBSD bootloader reads an entropy file from
                | hard disk, mixes it with RDRAND from the CPU (if
                | available) and passes it to the kernel.
                | 
                | The kernel starts a ChaCha20 stream cipher with this
                | supplied entropy while constantly mixing in timing
                | entropy from devices.
                | 
                | This cipher stream supplies the kernel with random data,
                | and once userland is up it is good enough and is also
                | used for /dev/random and /dev/urandom, which on OpenBSD
                | are the same device (non-blocking).
                | 
                | Now the fun part: when a userland process gets created
                | it has a random-data ELF segment that the kernel fills
                | and which is used as entropy for a new ChaCha20 stream,
                | just for the application, should it decide to call
                | arc4random or use random data in any other way (like
                | calling malloc or free, which on OpenBSD make heavy use
                | of random data).
        
               | cosarara wrote:
               | From a more recent OpenBSD man page:
               | 
               | > The original version of this random number generator
               | used the RC4 (also known as ARC4) algorithm. In OpenBSD
               | 5.5 it was replaced with the ChaCha20 cipher, and it may
               | be replaced again in the future as cryptographic
               | techniques advance. A good mnemonic is "A Replacement
               | Call for Random".
        
           | ATsch wrote:
           | The idea of "randomness" being "used up", and of a system
           | somehow "running out of randomness".
           | 
           | So let's look at how a hypothetical CSPRNG might work. We
           | get our random numbers by repeatedly hashing a pool of
           | bytes, then feeding the result, along with various somewhat
           | random events, back into the pool. Since our hash does not
           | leak any information about its input (if it did, we'd have
           | much bigger problems), attackers must guess, bit for bit,
           | the value of the internal entropy pool.
           | 
           | This is essentially how randomness works on Linux (the
           | kernel just uses a stream cipher instead, for performance).
           | 
           | This clarifies a few things:
           | 
           | 1. Even if you assume Intel's randomness instructions are
           | compromised, it is still not an issue to stir them into
           | the pool. Attackers would need to guess every single
           | source of randomness.
           | 
           | 2. "Running out of randomness" is nonsensical. If you
           | couldn't guess the exact pool before, you can't suddenly
           | start guessing the pool after pulling out 200 exabytes of
           | randomness either.
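           | 
           | As a sketch of the design just described (illustrative
           | Python only; the real kernel code is C and considerably
           | more involved):
           | 
           |   import hashlib, os, time
           | 
           |   class ToyPoolRNG:
           |       # Output is a hash of the pool; the output plus a
           |       # fresh event are hashed back in. Toy model only;
           |       # real code should just use os.urandom().
           |       def __init__(self):
           |           self.pool = os.urandom(32)  # initial seed
           | 
           |       def mix(self, event: bytes):
           |           # Stirring in any event, even one an attacker
           |           # fully controls, never reduces pool entropy.
           |           self.pool = hashlib.sha256(
           |               self.pool + event).digest()
           | 
           |       def read(self, n: int) -> bytes:
           |           out = bytearray()
           |           while len(out) < n:
           |               block = hashlib.sha256(
           |                   b"out" + self.pool).digest()
           |               out.extend(block)
           |               ts = time.monotonic_ns().to_bytes(8, "big")
           |               self.mix(block + ts)
           |           return bytes(out[:n])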
        
             | throw0101a wrote:
             | > _2. "Running out of randomness" is nonsensical. If you
             | couldn't guess the exact pool before, you can't suddenly
             | start guessing the pool after pulling out 200 exabytes of
             | randomness either._
             | 
             | Not entirely.
             | 
             | /dev/random and arc4random(3) under OpenBSD originally
             | used the output of RC4, which has a finite state size:
             | 
             | * https://en.wikipedia.org/wiki/RC4
             | 
             | Rekeying / mixing fresh entropy into the state
             | semi-regularly resets things. It's that occasional
             | reshuffling that really helps with forward secrecy,
             | especially if a system has been compromised at the
             | kernel level.
        
               | tptacek wrote:
               | No, Arc4random didn't reveal its internal RC4 state as it
               | ran, in the same sense that actually encrypting with RC4
               | doesn't deplete RC4's internal state.
        
               | cperciva wrote:
               | Many implementations didn't do enough mixing before
               | generating output, though.
               | 
               | Also, when you look at cache side-channel attacks, RC4
               | _definitely_ publishes its internal state.
        
               | tptacek wrote:
               | That's obviously true, but in the most unhelpful way
               | possible, where you introduce a complex additional topic
               | without explaining how it doesn't validate the previous
               | commenter's misapprehension about how "state" works in
               | this context.
        
               | cperciva wrote:
               | I wasn't entirely sure if the previous commenter was
               | confused or merely saying things in a confusing way. The
               | fact is that with a small entropy pool and a leaky
               | mechanism like RC4, you absolutely _can_ run out of
               | entropy.
        
               | ben_bai wrote:
               | That's why OpenBSD cut away the start of the RC4
               | stream (I don't remember how many bytes) to make
               | backtracking harder.
               | 
               | But the point is moot because the stream cipher used
               | was switched from RC4 to ChaCha20 about 5 years ago.
               | And there is no side-channel attack on ChaCha20, yet.
        
           | jacobush wrote:
           | That you must never use urandom for serious stuff?
        
       | pczy wrote:
       | This is the best explanation of this issue that I know of:
       | https://www.2uo.de/myths-about-urandom
        
       | csours wrote:
       | I was looking at Java properties the other day and I thought to
       | myself, "We still need to set /dev/./urandom in 2019?"
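       | 
       | For anyone who hasn't hit this: the workaround is the JVM's
       | entropy-source property, where the odd /dev/./urandom
       | spelling historically dodged the JVM's special-casing of the
       | literal path /dev/urandom. Something like this (app.jar being
       | a placeholder):
       | 
       |   java -Djava.security.egd=file:/dev/./urandom -jar app.jar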
        
         | ktpsns wrote:
         | This is a valid point -- most high-level programming
         | languages offer some kind of function that returns random
         | numbers in a given interval, such as [0,1]. See also
         | https://stackoverflow.com/questions/2572366/how-to-use-dev-r...
         | 
         | This is even true for shell scripting; see for instance
         | http://www.tldp.org/LDP/abs/html/randomvar.html
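         | 
         | In Python, for instance, the OS CSPRNG is already wrapped
         | for you, so there is rarely a reason to open the device
         | files directly. A quick illustrative sketch:
         | 
         |   import secrets, random
         | 
         |   token = secrets.token_bytes(16)      # OS-backed bytes
         |   x = random.SystemRandom().random()   # float in [0, 1)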
        
       | brohee wrote:
       | That this happens right after Thomas Pornin ridiculed the
       | blocking pool (https://research.nccgroup.com/2019/12/19/on-
       | linuxs-random-nu..., HN discussion
       | https://news.ycombinator.com/item?id=21843081) is obviously
       | purely coincidental, right? Especially as it was read and
       | commented upon by Theodore Ts'o, who at last changed his
       | mind...
        
         | ATsch wrote:
         | Many people have been ridiculing entropy tracking as
         | pointless pseudo-security voodoo for a very long time.
         | 
         | https://media.ccc.de/v/32c3-7441-the_plain_simple_reality_of...
         | 
         | This talk from 2015 springs to mind as the previous widely
         | discussed one.
         | 
         | But the NCC article might of course have been the straw that
         | healed the camel's back.
        
           | tytso wrote:
           | There was a time when some people thought that it was just
           | _fine_ to use MD5 and SHA1. Yarrow-160, a random number
           | generator devised by Kelsey, Schneier, and Ferguson, used
           | SHA1.
           | 
           | Entropy tracking was used in the original versions of PGP
           | because there were those people who had a very healthy (for
           | that time) lack of confidence in "strong cryptographic
           | algorithms" actually being strong.
           | 
           | As PBS Space Time once said, discussing Pilot Wave Theory
           | and why it's considered unorthodox when compared to the
           | Many Worlds interpretation of quantum theory, "Orthodoxy ==
           | Radicalism plus Time". There was a time when the Many
           | Worlds interpretation was considered _out_ _there_.
           | 
           | Similarly, there was a time when not trusting crypto
           | algorithms to be Forever Strong was normal, and designing
           | a network protocol like WireGuard without algorithm agility
           | would have been considered highly radical. Today, trusting
           | "strong cryptographic primitives" is considered an axiom.
           | But remember that an axiom is something that you assume to
           | be true and use as the basis of further proofs. Just as a
           | Voodoo practitioner assumes that their belief system is
           | true....
        
             | tptacek wrote:
             | PGP was designed without message authentication. The people
             | who designed PGP had a lack of understanding of
             | cryptography, full stop. To an extent that is because PGP
             | is a 1990s design, and very few people had a thorough
             | understanding at the time. But to a significant extent it
             | is also because the PGP engineering community consisted
             | largely of stubborn amateurs attempting to (re-)derive
             | cryptographic science from first principles. An appeal to
             | the healthy paranoia of PGP is not a persuasive argument.
        
               | azinman2 wrote:
               | Has it not evolved at all in any implementation,
               | particularly gnupg?
        
               | upofadown wrote:
               | It has evolved in the standard as well as the
               | implementations. It is a bit silly to claim that the
               | current thing is bad because there was a program once
               | with a similar name.
        
               | tptacek wrote:
               | No, not really. The Efail attack is a pretty good example
               | of how PGP's flawed design really just sets the system up
               | for researchers to dunk on it; the GnuPG team's belief
               | that they can't make breaking changes without
               | synchronizing with the IETF OpenPGP working group ensures
               | it'll remain like this for a long time.
               | 
               | See also: https://latacora.micro.blog/2019/07/16/the-pgp-
               | problem.html
        
               | upofadown wrote:
               | The Efail attack was almost, if not entirely, a client
               | issue where those clients were leaking information from
               | HTML emails. There were no real weaknesses in the
               | OpenPGP standard or the GnuPG implementation of that
               | standard.
               | 
               | >... the GnuPG team's belief that they can't make
               | breaking changes without synchronizing with the IETF
               | OpenPGP working group ...
               | 
               | That does not actually sound like a bad thing to me.
               | 
               | The linked rant against OpenPGP/GnuPG takes the form of
               | a semi-random list of minor issues/annoyances associated
               | with the OpenPGP standard and the GnuPG implementation,
               | mixed together in no particular order. It ends with the
               | completely absurd solution of just abandoning email
               | altogether. So you have to explain which parts of it
               | support your contention.
               | 
               | The OpenPGP standard is in reality one of the better
               | written and implemented standards in computing (which
               | isn't saying much). There may in the future be something
               | better but it is downright irresponsible to slag it
               | without coming up with any sort of alternative. It is
               | here and it works.
        
               | tptacek wrote:
               | I think it's interesting that when a pattern of
               | vulnerabilities is discovered that _exfiltrates the
               | plaintext of PGP-encrypted emails_, a pattern that simply
               | simply cannot occur with modern secure messaging
               | constructions, the immediate impulse of the PGP community
               | is to say "it's not our fault, it's not OpenPGP's fault,
               | it's not GnuPG's fault". Like, it happened, and it
               | happened to multiple implementations, including the most
               | important implementations, but it's nobody's fault; it
               | was just sort of an act of God. Like I said, interesting.
               | Not reassuring, but interesting.
        
               | upofadown wrote:
               | It is well known that end-points are the weak parts of
               | any sort of privacy system involving cryptography. So not
               | really all that interesting.
               | 
               | It was literally not GPG's fault or the fault of the
               | standard.
        
               | tptacek wrote:
               | And when a practical application of SHA-1 collision
               | generation to PGP is found, it won't be their fault
               | either. After all, the OpenPGP standard says they have to
               | support SHA-1! Blame the IETF!
               | 
               | Stuff like this happened to TLS for a decade, and then
               | the TLS working group wised up and redesigned the whole
               | protocol to foreclose on these kinds of attacks. That's
               | never going to happen with PGP, despite it having a tiny
               | fraction of the users TLS has.
        
               | upofadown wrote:
               | Support is different from utilization. GPG no longer
               | uses SHA1 for message digests and has not done so for
               | quite some time now. This is what the preferences in a
               | public key generated with gpg2 recently look like:
               | 
               |   Cipher: AES256, AES192, AES, 3DES
               |   Digest: SHA512, SHA384, SHA256, SHA224, SHA1
               |   Compression: ZLIB, BZIP2, ZIP, Uncompressed
               |   Features: MDC, Keyserver no-modify
               | 
               | So SHA1 is the last choice. Note that 3DES is there at
               | the end of the symmetric algorithm list. It ain't
               | broken either, so they still include it for backward
               | compatibility. This is a good thing; it is essential
               | in a privacy system for a store-and-forward
               | communications medium.
        
               | akerl_ wrote:
               | What portion of the implementations have to fall victim
               | to the exact same misbehavior before, in your opinion,
               | it's plausible to suggest that the issue is a foot-gun on
               | the part of the overall standard/ecosystem?
        
               | upofadown wrote:
               | The list is at the bottom of this:
               | 
               | * https://efail.de/
               | 
               | Throwing out the discontinued things (Outlook 2007) and
               | the webmail things that PGP can't be even sort of
               | secure on (Roundcube, Horde), we end up with 7 bad
               | clients out of a total of 27 clients. So 26%. To get
               | that, they allegedly had to downgrade GPG.
        
               | akerl_ wrote:
               | Sorry, to clarify, I'm not asking what percentage of
               | clients were vulnerable in this case. I'm asking what the
               | threshold is, beyond which you would consider the
               | possibility that the issue was with the broader
               | spec/ecosystem rather than the individual tools.
        
             | andrepd wrote:
             | Many-worlds is still very much not the orthodox
             | interpretation.
        
               | jabl wrote:
               | Well, less unorthodox than hidden variable
               | interpretations.
        
         | dfc wrote:
         | It's version three of this patch.
        
         | tytso wrote:
         | Hardly; the first version of this patch series was from August
         | 2019 (which is before the brouhaha caused by ext4 getting
         | optimized and causing boot-time hangs for some combinations of
         | hardware plus some versions of systemd/udev), and the second
         | version was from September 2019. In the second version, Andy
         | mentioned he wanted to make further changes, and so I waited
         | for it to be complete. I had also discussed making this change
         | with Linus in Lisbon at the Kernel Summit last year. So this
         | was a very well considered change that had been pending for a
         | while, and it predates the whole getrandom boot hang issue last
         | September. I don't like making changes like this without
         | careful consideration.
         | 
         | The strongest argument in favor of not making this change
         | was that there are some (misguided) PCI compliance labs
         | which had interpreted the PCI spec as requiring /dev/random,
         | and changing /dev/random to work like getrandom(2) might
         | cause problems for some enterprise companies that need PCI
         | compliance. However, the counter-argument is that it wasn't
         | clear the PCI compliance labs actually thought /dev/random
         | was better than getrandom(2); it was just as likely they
         | were so clueless that they hadn't even heard of
         | getrandom(2). And if they were that clueless, they probably
         | wouldn't notice that /dev/random had changed.
         | 
         | If they really _did_ want TrueRandom (whatever the hell that
         | means; can _you_ guarantee your equipment wasn't intercepted
         | by the NSA while it was in transit to the data center?) then
         | the companies probably really should be using a real
         | hardware random number generator, since on some VMs with
         | virtio-rng, /dev/random on the guest was simply getting
         | information piped from /dev/urandom on the host system ---
         | and apparently _that_ was Okey-Dokey with the PCI labs. Derp
         | derp derpity derp....
        
           | [deleted]
        
           | rrauenza wrote:
           | For anyone following along not familiar with all security
           | acronyms, in this context PCI is Payment Card Industry not
           | Peripheral Component Interconnect.
           | 
           | I was confused for a bit since we're talking about the
           | kernel...
        
       | zaarn wrote:
       | It's very amusing that the various kernel developers are
       | bashing on GnuPG, going as far as calling its behaviour a
       | "misuse. Full stop."
       | 
       | PGP/GPG has certainly fallen out of favor.
        
         | tptacek wrote:
         | In related news:
         | https://twitter.com/matthew_d_green/status/12145739830871080...
        
         | tytso wrote:
         | I actually thought that was a bit unfair. Is it a misuse to
         | use three times as much concrete as is strictly necessary?
         | That would make the Empire State Building a "misuse" of
         | concrete. Even if you aren't doing Empire State Building
         | levels of overkill, having engineering margins is a very
         | well-accepted thing. Is extracting 4096 bits from
         | /dev/random for a 4096-bit RSA key "misuse" when said key
         | only has between 200 and 300 bits of cryptographic strength?
         | Meh.... I've got more important things to worry about;
         | public key generation happens so rarely. And I do use a
         | hardware random number generator[1] as a supplement when I
         | generate new GPG keys.
         | 
         | [1] https://altusmetrum.org/ChaosKey/
        
           | zaarn wrote:
           | The problem is that extracting more than you need is what
           | ruined /dev/random for everyone.
        
         | [deleted]
        
         | grammarxcore wrote:
         | So is GnuPG bad because it reads directly from /dev/random
         | instead of using an interface like getrandom()? I'm naive
         | enough to not know reading directly from /dev/random is bad and
         | would love to know more.
        
           | ploxiln wrote:
            | It should have read just 16 or 32 bytes from /dev/random
            | in order to seed its own CSPRNG (at most once per process
            | invocation, and only when first needed).
        
           | toast0 wrote:
           | The getrandom() syscall is relatively new. Before it was
           | available, you had two choices.
           | 
           | Use a non-Linux OS with reasonable /dev/(u)random or use
           | Linux with its Sophie's choice:
           | 
            | /dev/random will give you something that's probably good,
            | but will block for good and bad reasons.
            | 
            | /dev/urandom will never block, including when the random
            | system is totally unseeded.
            | 
            | GnuPG could not use /dev/urandom, since there was no
            | indication of seeding, so it had to use /dev/random,
            | which blocks until the system is seeded and also whenever
            | the entropy count of nebulous value runs low. Most (all)
            | BSDs have /dev/urandom the same as /dev/random, where it
            | blocks until seeded and then never blocks again. This
            | behavior is available in Linux with the getrandom()
            | syscall, but perhaps GnuPG hasn't updated to use it?
            | Also, in the last few months there was some discussion of
            | changing the behavior of that syscall, which thankfully
            | didn't happen, in favor of having the kernel generate
            | some hopefully-good entropy on demand in case a caller is
            | blocked on random with an unseeded pool.
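            | 
            | As a sketch of the modern pattern (Python's os.getrandom
            | wraps the Linux syscall, so this assumes Linux and
            | Python 3.6+):
            | 
            |   import os
            | 
            |   # Blocks only until the kernel RNG is initialized,
            |   # then never again; no entropy accounting involved.
            |   seed = os.getrandom(32)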
        
             | grammarxcore wrote:
             | So the issue is the block? I make a blocking call and
             | another app attempts to make a call during the block and
             | will fail if it's not expecting to wait? Is that (one of)
             | the problem(s)?
             | 
             | Thanks for breaking that down for me!
        
               | toast0 wrote:
               | So, if the random system hasn't been properly seeded,
               | you do _need_ to block if you're using the randomness
               | for security; especially for long-term security, e.g.
               | long-lived keys.
               | 
               | The problem is that, before this patch, Linux kept
               | track of an entropy estimate for /dev/random, and if
               | the estimate got too low, read requests would block.
               | Each read reduces the estimate significantly, so
               | something that does a lot of reads makes it hard for
               | other programs to do any reads in a reasonable amount
               | of time.
               | 
               | If you knew the system was seeded, you could use
               | urandom instead, but there's not a great way to know.
               | Perhaps you could read from random the first time, and
               | urandom for future requests in the same process... but
               | that only helps in long-running processes; reading once
               | from random and using it as a seed for an in-process
               | secure random generator works almost as well. The
               | getrandom() syscall is really the way forward, but you
               | would need to keep the old logic conditionally or
               | accept loss of compatibility with older releases.
               | 
               | In summary, it's not really fair to say GnuPG is doing
               | it wrong, when they didn't have a way to do it right.
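               | 
               | A sketch of that conditional fallback (illustrative
               | Python; the helper name is made up):
               | 
               |   import os
               | 
               |   def strong_seed(n: int = 32) -> bytes:
               |       # Prefer getrandom(2), which blocks only until
               |       # the kernel pool is initialized.
               |       try:
               |           return os.getrandom(n)
               |       except (AttributeError, OSError):
               |           # Old kernel or Python: fall back to one
               |           # read from /dev/random (may block).
               |           with open("/dev/random", "rb") as f:
               |               return f.read(n)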
        
               | grammarxcore wrote:
               | Thanks! That makes sense. I appreciate you taking the
               | time to break all that down.
        
         | JdeBP wrote:
         | It is worth considering how things appear from the perspectives
         | of the application developers.
         | 
         | * https://dev.gnupg.org/T3894
        
           | zaarn wrote:
            | Well, I got as far as "just have your distro dynamically
            | edit the gcrypt conf to use urandom only after startup"
            | before I concluded that the GPG devs are being weird
            | about it. It still took them half a year to replace
            | "read(/dev/random)" with "getrandom()".
        
       ___________________________________________________________________
       (page generated 2020-01-07 23:00 UTC)