[HN Gopher] random(random(random(random())))
       ___________________________________________________________________
        
       random(random(random(random())))
        
       Author : Alifatisk
       Score  : 109 points
       Date   : 2023-05-15 19:23 UTC (3 hours ago)
        
 (HTM) web link (openprocessing.org)
 (TXT) w3m dump (openprocessing.org)
        
       | davesque wrote:
       | Since `random(x)` returns a random number in [0,x), it will
       | always output a number that is less than `x` (the initial value
        | of x is 1 in this case). Therefore, the probability that
        | repeated applications of `random(...)` to its own result have
        | little shrinking effect is vanishingly small. It's the result
        | of these
       | repeated applications that is being subtracted from the length of
       | the unit vector that is used to position each pixel. So the
       | pixels naturally accumulate around that unit length. No broken
       | crypto here.
       | 
       | It's sort of like asking, "What would happen if I multiply a
       | bunch of numbers together that are guaranteed to be less than 1
       | (and even much less than 1 much of the time)?"
        
         | gabereiser wrote:
         | Exactly, like Perlin noise, the output is not indicative of the
         | function. Works as intended. The beauty is in the fact that
         | when using pseudorandom number generation and the right output
         | function, you can simulate a lot of things. Star systems,
         | planets, stock market games, aim handicap, and itano circus.
        
         | awegio wrote:
          | In more formal terms, since it is a uniform distribution you
          | can write
          | 
          |     random(x) = random()*x
          | 
          | then you can easily see that
          | 
          |     random(random(random())) = random()*random()*random()
          | 
          | The resulting distribution is not equal to random()^3 (a
          | single call, cubed), because the three calls are independent:
          | e.g. the probability that all 3 random calls give you a number
          | < 0.5 is only 0.125, whereas a single call is < 0.5 with
          | probability 0.5.
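          | 
          | As a quick sanity check (a sketch in plain Python, with
          | random.uniform as a stand-in for Processing's random(high)),
          | both forms land on the same mean of 1/8:
          | 
          |     import random
          | 
          |     def nested(n):
          |         # n applications of random(x), starting from x = 1
          |         x = 1.0
          |         for _ in range(n):
          |             x = random.uniform(0, x)
          |         return x
          | 
          |     trials = 100_000
          |     mean_nested = sum(nested(3) for _ in range(trials)) / trials
          |     mean_product = sum(
          |         random.random() * random.random() * random.random()
          |         for _ in range(trials)) / trials
          |     print(mean_nested, mean_product)  # both approach 1/8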
        
       | muti wrote:
       | Fork wrapping the random radius in sqrt() so the first disk is
       | uniformly distributed
       | 
       | https://openprocessing.org/sketch/1929418
        
       | chkas wrote:
        | Shamelessly copied:
       | https://easylang.dev/ide/#run=textsize%203%0Afor%20i%20%3D%2...
        
       | jtsiskin wrote:
       | This would be way clearer as a Cartesian plot with the output of
       | random as X, and sample number as Y.
        
       | dathinab wrote:
       | It's a funny one.
       | 
        | At first glance it looks like random is broken, then you
        | realize: no, random(n) is a random value in [0;n), so it's
        | behaving exactly right.
        | 
        | It also shows really nicely how "naively" choosing random
        | points in a non-rectangular shape can easily lead to a
        | non-uniform distribution.
        
       | SideQuark wrote:
       | Lots of math all over the place computing the expected value :)
       | 
       | So, here's a derivation: if rand(x) returns uniformly a real in
       | [0,x) or equivalently for expected value, in [0,x], then here are
       | the first few iteration expected values from which you can see
        | the general answer.
        | 
        | 1 term:
        | 
        |     E(r) = (1/1) * Int_0^1 x_0 dx_0 = [x_0^2/2]_0^1 = 1/2
        | 
        | 2 terms:
        | 
        |     E(r(r)) = (1/1) * Int_0^1 (1/x_0) Int_0^x_0 x_1 dx_1 dx_0 = 1/4
        | 
        | 3 terms:
        | 
        |     E(r(r(r))) = (1/1) * Int_0^1 (1/x_0) Int_0^x_0 (1/x_1)
        |                  Int_0^x_1 x_2 dx_2 dx_1 dx_0 = 1/8
        | 
        | And so on. The nth term is exactly (1/2)^n.
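        | 
        | A quick Monte Carlo check of that derivation (a plain-Python
        | sketch, treating rand(x) as uniform on [0, x)):
        | 
        |     import random
        | 
        |     def iterated(n):
        |         x = 1.0
        |         for _ in range(n):
        |             x = random.uniform(0, x)
        |         return x
        | 
        |     trials = 200_000
        |     for n in range(1, 5):
        |         mean = sum(iterated(n) for _ in range(trials)) / trials
        |         print(n, mean, 0.5 ** n)  # empirical mean vs. (1/2)^n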
        
       | tbalsam wrote:
       | What language? What are the parameters for the random function
       | here (seed? upper range value in a uniform distribution? stddev
       | for a random distribution?) Why? What is this demonstrating?
       | 
       | I can grok a lot of the dimensional effects of randomness but
       | without these things specified, the picture is rather meaningless
       | to me.
        
         | maxbond wrote:
         | 1. The language is Processing
         | 
         | 2. The argument is the high end of the range (0 to n
         | [https://processing.org/reference/random_.html])
         | 
         | 3. I dunno why, it seems like it's just cool. It seems to
         | demonstrate that random(random(...)) collapses to 0, which is
         | exactly what I expected but it's pretty cool.
        
           | justinpombrio wrote:
           | Q4: What is it plotting? `random()` returns a float, right?
           | Is it plotting... r*cis(theta), where r is the random
           | invocation and theta varies uniformly across the circle,
           | normalized so that all the circles are the same size (where
            | cis(theta) = (cos(theta), sin(theta)))?
        
             | maxbond wrote:
             | Nah, it's plotting polar vectors, and it uniformly picks an
             | angle. The random(...) is just the magnitude of the vector.
        
               | justinpombrio wrote:
               | FYI, r*cis(theta) _is_ a polar vector. So it's the thing
               | I said, but with the angle chosen randomly.
        
           | [deleted]
        
           | tbalsam wrote:
           | I'm a little confused then what the value of this
           | demonstration is.
           | 
           | The expectation marches by half the distance towards the
           | outer edge each time, so it becomes a soap bubble at a rate
           | of 1/2^n
           | 
           | Which is nice and sorta cool but I'm not quite sure what the
           | "whoa" factor here is. It's like one of those zip cinch tops
           | for those bags. Sorta cool how it works on observation,
           | really cool in detail, but also sorta...meh? At the same
           | time?
           | 
           | Maybe I'm a bit jaded here? Admittedly, a continuous
           | animation of this would be an improvement on the cool factor
           | to me, personally.
        
             | maxbond wrote:
             | Well, all I can tell you is that I'm having a good time
             | discussing the nitty gritty of it in the comments here, and
             | it made the gears of my mind turn in a pleasant way. It
             | didn't make me go, "wow," but it did make me go, "hmm...."
        
               | tbalsam wrote:
               | Gotcha, yeah, that makes sense to me now.
               | 
               | Maybe the real article was the HN comments section that
               | we made along the way....
        
               | walterbell wrote:
               | _> Maybe the real article was the HN comments section
               | that we made along the way...._
               | 
               | Nominating this comment for
               | https://news.ycombinator.com/highlights
        
               | [deleted]
        
             | sobellian wrote:
             | I think the sequence tends toward the edge at exp(-n), not
             | 2^-n. The distance from the edge is a product of n I.I.D.
             | variables, so the logarithm is a sum of n I.I.D. variables,
             | and the central limit theorem gives us the result.
             | 
             | You can confirm in a python terminal (or anything else)
             | that the product of n `random()`s decreases more rapidly
             | than 2^-n.
             | 
             | So maybe there's some value in it after all :)
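              | 
              | (A plain-Python sketch of the distinction: the mean of
              | the product is 2^-n by independence, but the typical,
              | i.e. median, value tracks e^-n because E[log U] = -1.)
              | 
              |     import math, random, statistics
              | 
              |     n, trials = 100, 100_000
              | 
              |     def product(n):
              |         p = 1.0
              |         for _ in range(n):
              |             p *= random.random()
              |         return p
              | 
              |     vals = [product(n) for _ in range(trials)]
              |     # The empirical mean converges slowly (heavy right
              |     # skew); the median sits near e^-n, far below 2^-n.
              |     print(statistics.mean(vals), 2.0 ** -n)
              |     print(statistics.median(vals), math.exp(-n))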
        
               | tbalsam wrote:
               | How do you mean?
               | 
               | I'm going based on the propagation of expectations
               | through the system.
               | 
               | The expectation of a uniform distribution is half the
               | bound. Unless there's some math shenanigans going on, I
               | believe that the expected expected value of a
               | distribution drawn from distributions should be 1/2 * 1/2
               | * 1 in this case.
               | 
               | Of course it's not a bound but right now I'm having a
               | hard time determining otherwise.
               | 
               | Is there a mathematical motivation for e^-n here? That
               | seems an odd choice since the uniform distribution is 0-1
               | bounded generally. However I could understand if it
               | emerges in a few conditions.
        
               | sobellian wrote:
               | e shows up because we're doing an iterated product (of
               | random variables, but still). If you look at the central
               | limit theorem, sum(log(rand())) tends to N *
               | E[log(rand())] for large N. Well, what's E[log(rand())]?
               | -1!
               | 
               | Like I said, it's fairly easy to test this in a python
               | terminal. I encourage anyone who doubts me to try it :)
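                | 
                | (E.g., a one-liner check, plain Python:
                | 
                |     import math, random
                |     n = 10**6
                |     print(sum(math.log(random.random())
                |               for _ in range(n)) / n)
                | 
                | prints roughly -1, matching Int_0^1 ln(x) dx = -1.)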
        
               | tbalsam wrote:
               | Alright, I tried it in the terminal and the numbers
                | confirmed my earlier math: the means go .5 -> .25 ->
                | .125 as the nesting deepens. I chained the python
                | uniform function like
               | the above code in the originally linked post does and
               | averaged the results.
               | 
               | The snark to me and the other commenters is a bit
               | unnecessary in either case. I'm not really sure where
               | you're getting a natural log or an iterated product of
               | random variables in this instance.
               | 
               | Could you perhaps show where you're transforming
               | uniform(uniform(uniform(uniform(0, 1)))) into the math
               | you're showing above? I'm trying to follow along here but
               | am having difficulty connecting your line of reasoning to
               | the problem at hand.
        
               | sobellian wrote:
               | Not trying to be snarky by any means. I'm sorry if you
               | interpreted it that way.
               | 
               | I don't think the difference will show up for small N.
                | This is an asymptotics thing. Try it for N = 100,
                | that's what I did. For example:
                | 
                |     >>> np.product(np.random.random(100))
                |     5.469939152265385e-43
                |     >>> 1 / np.e**100
                |     3.7200759760208555e-44
                |     >>> 1 / 2**100
                |     7.888609052210118e-31
               | 
               | The underlying thing here is that random(random()) in
               | this case is the same as random() * random(). So
               | random(random(random(...))) is the same as random() *
               | random() * random() and then the analysis goes on. And
               | sure, random() * random() has a mean close to 1/4. But
               | the dynamics change as N becomes large.
               | 
               | Edit - and just in case you doubt whether random() *
               | random() * ... is a valid way of doing this, I also just
               | checked the laborious way of doing it:
                |     >>> def foo(n):
                |     ...     result = 1.0
                |     ...     for _ in range(n):
                |     ...         result = np.random.uniform(0, result)
                |     ...     return result
                |     ...
                |     >>> foo(100)
                |     1.4267531652344414e-46
                |     >>> foo(100)
                |     7.852496730908136e-49
                |     >>> foo(100)
                |     1.3216070221780724e-41
        
               | tbalsam wrote:
               | > I'm sorry if you interpreted it that way.
               | 
                | is probably a good marker that it is time for me to
                | take my bow in this conversation. However, for an
                | alternative
               | approach I recommend SideQuark's comment on iterative
               | random variables vs chains of multiplied random
               | variables, which have different characteristics when
               | defined as a sequence.
        
               | sobellian wrote:
               | I checked his comment, I think he's incorrect on the
               | approaches being different. Using my function `foo` that
               | does the iterative approach, we can compare the
               | distributions and they're fairly identical.
                | 
                |     >>> np.mean([foo(100) for _ in range(100000)])
                |     3.258425093913613e-33
                |     >>> np.mean([np.product(np.random.random(100))
                |     ...          for _ in range(100000)])
                |     8.814732867008917e-33
               | 
               | (There's quite a bit of variance on a log scale, so 3 vs
               | 8 is not a huge difference. I re-ran this and got varying
               | numbers with both approaches. But the iterated code is
               | very slow...)
               | 
               | Note that the mean is actually quite a bit closer to
               | 2^-100 even though the vast majority of numbers from
               | either distribution fall below 2^-100. Even so, the mean
               | for both is approximately a factor of 100 less than
               | 2^-100. Suspicious! Although I think we've both burned
               | enough time on this.
        
               | SideQuark wrote:
               | >The distance from the edge is a product of n I.I.D.
               | variables
               | 
               | It is not a product of random variables; it is an
               | iterated random variable. The output of one influences
               | those higher in the chain. Redo your python code with
               | rand(rand(rand...))) not rand() * rand() * rand()...
               | 
               | The expectation of composition of functions is not the
               | composition of expectations, so there is some work to do.
               | 
               | For uniform over [0,1] for the innermost item, it becomes
               | an iterated integral, with value (1/2)^n.
        
               | sobellian wrote:
               | I did redo it, and the distributions are identical. It's
               | just a very heavily skewed distribution, so the expected
               | value is not very intuitive. I still think even E[x]
               | decreases faster than 2^-n though. See my other comments.
        
               | Nevermark wrote:
               | Except that your logarithms can have any consistent base,
               | including 2.
               | 
               | So I don't think using a logarithm to introduce a special
               | dependence on e is warranted in this case.
        
               | sobellian wrote:
                | Well, E[log2(rand())] is not the same as
                | E[log(rand())], and therein lies the difference.
               | See my sibling comment.
               | 
               | Like I said, you can check this in a python terminal.
        
             | nimih wrote:
             | Speaking personally, a webpage with nothing beyond a cool
             | or interesting sequence of images is already in the 90th
             | percentile of good HN posts.
        
             | [deleted]
        
             | [deleted]
        
         | jlhawn wrote:
         | This is a drawing app using P5.js
         | 
         | Click the "</>" at the top to read the code.
         | 
          | There's a popular "Let's Code" YouTube channel called The
          | Coding Train that uses it. It's quite accessible.
        
         | dist-epoch wrote:
          | The documentation suggests random() without arguments is not
          | supported:
          | 
          |     random(high)
          |     random(low, high)
         | 
         | https://processing.org/reference/random_.html
        
           | TaylorAlexander wrote:
           | Oooh it's a high value, I thought it was a seed and that this
           | was demonstrating a flaw in the algorithm.
        
             | tbalsam wrote:
             | Same, at first at least.
        
       | romwell wrote:
       | It's almost criminal to post things like that without any
       | annotations.
       | 
       | The first picture plots points where radius and angle (in polar
       | coordinates) are uniformly distributed. Of course, this is _not_
       | uniform in the disc.
       | 
       | I don't have the time right now, but if I do later, I'll type up
       | the math behind what a uniform distribution in the disc looks
       | like, and what's going on in other pictures.
        
         | [deleted]
        
         | klyrs wrote:
          | the bigger crime is that this is plotting
          | 
          |     1 - Random(Random(Random(Random())))
          | 
          | and not
          | 
          |     Random(Random(Random(Random())))
        
       | fsckboy wrote:
       | just in terms of pure randomness (which is not what this is
       | testing), random(random()) doesn't make more sense than random().
       | If your random number generator is good, one is enough. If it's
       | not good, multiple times is not going to help.
       | 
       | I'm putting this in the past tense, it randumb enough times for
       | me.
        
         | the_af wrote:
         | The argument to random() in this case is the upper bound, so
         | `random()` and `random(random())` are different.
         | 
         | See: https://processing.org/reference/random_.html
        
       | alhirzel wrote:
       | Yep, just about what one would expect from randomly cutting the
       | top of your uniform distribution off multiple times!
       | 
       | What's interesting is the transforms used to sample in some new
       | space from e.g. uniform random inputs. For instance, disks [1] or
       | hemispheres [2] or an arbitrary density function [3]. Common
       | stuff in light transport simulation, and easy to get wrong.
       | 
       | [1]:
       | https://stats.stackexchange.com/questions/481543/generating-...
       | 
       | [2]: https://alexanderameye.github.io/notes/sampling-the-
       | hemisphe...
       | 
       | [3]: https://en.wikipedia.org/wiki/Inverse_transform_sampling
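        | 
        | For instance, a minimal sketch of [3] in Python (my own
        | illustration, not from the linked pages): for points uniform
        | over a unit disk the radius has CDF F(r) = r^2, so feed a
        | uniform sample through the inverse CDF, sqrt.
        | 
        |     import math, random
        | 
        |     def sample_disk_point(radius=1.0):
        |         r = radius * math.sqrt(random.random())  # inverse of F(r) = r^2
        |         theta = random.uniform(0, 2 * math.pi)
        |         return (r * math.cos(theta), r * math.sin(theta))
        | 
        |     print(sample_disk_point())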
        
       | maxbond wrote:
       | If you're wondering why the dots approach the circumference and
       | not the center, the length of the vector is `diameter * (1 -
       | random(random(...)))`
       | 
       | It's interesting that the second circle is a pretty uniform
       | distribution. `1 - random(random())` must be approximately
       | `sqrt(random())`, iirc that's how you correct for the error
       | demonstrated in the first circle
       | (https://youtube.com/watch?v=4y_nmpv-9lI).
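        | 
        | (One way to eyeball that "approximately" is to compare a few
        | empirical CDF values; a plain-Python sketch, not from the
        | sketch's code:
        | 
        |     import random
        | 
        |     n = 200_000
        |     nested = [1 - random.uniform(0, random.random())
        |               for _ in range(n)]
        |     sqrt_r = [random.random() ** 0.5 for _ in range(n)]
        | 
        |     for x in (0.25, 0.5, 0.75, 0.9):
        |         print(x,
        |               sum(s <= x for s in nested) / n,
        |               sum(s <= x for s in sqrt_r) / n)
        | 
        | The two are similar in shape but not identical; sqrt(random())
        | is the one that is exactly uniform over the disk.)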
        
         | bonzini wrote:
         | ... That means each point is at a distance of sqrt(f(x)) from
         | the circumference. And nesting random() n times is similar to
         | random()^n, which explains why the first plot has them amassed
         | in the center and the second has a more uniform distribution.
         | 
         | Why are they similar? For a rough idea, random() is a random
         | number between 0 and 1, which on average is 0.5;
         | random(random()) is a random number between 0 and on average
         | 0.5, which on average is 0.25; and so on.
        
           | ozfive wrote:
           | Oh man if my math teachers taught math with application, I
           | would have been so much better off. It's so much easier to
           | understand all of this after writing code for a while.
        
         | avodonosov wrote:
          | A better title would be "1 - Random(Random(Random(Random())))";
          | then it's more understandable why the points lean toward the
          | circumference.
        
         | [deleted]
        
         | contravariant wrote:
         | The cumulative distribution of random(random(...)) repeated k
         | times is 1-(1-x)^k, so 1 - random(random()) should have a
         | cumulative distribution of x^(2).
         | 
         | I think that should even make it exactly uniform, but somehow
         | it doesn't look like it. I may have missed a bit somewhere.
        
           | maxbond wrote:
            | Perhaps it looks nonuniform because we're not using literal
            | points (our dots have nonzero area), so when they get too
            | close they blend together and make the distribution look
            | denser than it is in that area.
        
             | contravariant wrote:
             | Could be, but it is also possible I'm missing something
             | somewhere, especially because I can't explain why
             | min(random(), random()) looks different from
             | random(random()) or random()*random().
        
               | penteract wrote:
               | An attempt to explain the difference without just
               | calculating the things:
               | 
               | random(random()) has an identical distribution to
               | random()*random() (it may even behave identically for a
               | given rng state), although this is different to
               | Math.pow(random(),2) since in that case there's 100%
                | correlation between the two factors, which makes the
                | expected value of the product bigger (1/3 instead of
                | 1/4).
               | 
                | random(random()) is also distributed equivalently to
                | 
                |     x = random()
                |     y = random()
                |     while (y >= x)
                |         y = random()
                |     return min(x, y)
                | 
                | (the last line could also read 'return y'). Comparing
                | that to min(random(), random()), we can see that if the
                | second call to random is smaller than the first, they
                | will return the same result; otherwise, the program
                | equivalent to random(random()) will return a smaller
                | value, therefore the expected value of random(random())
                | must be lower than that of min(random(), random()).
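                | 
                | (A runnable version of that comparison, as a
                | plain-Python sketch:
                | 
                |     import random, statistics
                | 
                |     def rejection():
                |         x = random.random()
                |         y = random.random()
                |         while y >= x:
                |             y = random.random()
                |         return y
                | 
                |     n = 200_000
                |     print(statistics.mean(
                |         rejection() for _ in range(n)))
                |     print(statistics.mean(
                |         random.uniform(0, random.random())
                |         for _ in range(n)))
                |     print(statistics.mean(
                |         min(random.random(), random.random())
                |         for _ in range(n)))
                | 
                | The first two means land near 1/4, the min near 1/3.)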
        
               | awegio wrote:
               | According to the internet what you wrote is the
               | distribution of min(random(), random()), but the
               | distribution of the product is different.
               | 
               | Product:
               | https://math.stackexchange.com/questions/659254/product-
               | dist...
               | 
               | Minimum:
               | https://stats.stackexchange.com/questions/220/how-is-the-
               | min...
               | 
               | I can't explain it either, but I also don't think there
               | is a reason they should look the same.
        
       | paulddraper wrote:
       | Sorry to be blunt, but what is of interest here?
       | 
       | Aesthetics?
       | 
       | P.S. In case anyone is confused, the arg to random is the upper
       | limit (not the seed), and r = diameter * (1 - value)
        
         | kibwen wrote:
         | I find it cool that this website lets you upload code for
         | visualizations, and then lets any user modify that code and re-
         | run it. It's like if Gist let you render images from code,
         | rather than from just SVG.
        
           | cheeze wrote:
           | Yeah, this was a very fun illustrative view of why this is a
           | bad idea.
        
         | edrxty wrote:
          | Yeah, the code under the </> tab is very important; at first
          | blush it looks like random isn't a uniform distribution for
          | some reason. It's just because they plotted it with polar
          | coordinates. It would be much clearer what's actually going
          | on here if an XY plot were used.
        
           | paulddraper wrote:
           | No, it's because random(random(random())) is iterating the
           | upper limit, not the seed.
        
             | qingcharles wrote:
             | This is the answer. I think the author is confused, as most
             | platforms do have the seed as the lone parameter to their
             | Random function.
        
           | maxbond wrote:
            | It is uniform, I'm pretty sure; that's what plotting in
            | polar coordinates looks like when you use a uniform
            | distribution for the length.
        
             | edrxty wrote:
             | Isn't that what I said?
        
               | maxbond wrote:
               | Oh I misunderstood, my bad. Upon rereading I see what you
               | meant.
        
         | andybak wrote:
         | Most things are interesting.
        
       | utunga wrote:
       | This would've been a lot less confusing if they'd used
       | 
       | let radius = d * random(random(random()))
       | 
       | instead of
       | 
       | let radius = d * (1 - random(random(random())))
       | 
       | But, if so, probably so much less confusing that folks wouldn't
       | be talking about it here, to be honest.
        
       | [deleted]
        
       | rjbwork wrote:
       | Just whipped this up in Linqpad real quick to see how quickly
       | random(random(...)) tends to converge to 0.
       | 
       | Fun lil script.
       | 
       | https://media.discordapp.net/attachments/519790546601115649/...
       | 
       | EDIT: Realized I cut off the full script. First two lines are:
        | 
        |     var ran = new Random();
        |     var counts = new Dictionary<int,int>();
        
       | thriftwy wrote:
       | I like using Math.min(nextInt(), nextInt()) for non-uniformly
       | distributed random data.
        
       | [deleted]
        
       | gnramires wrote:
        | Tips: The source code can be seen by clicking on '</>'. The
        | random() function takes the upper limit of a desired uniform
        | distribution (from 0 to x); the sketch shows the effect of
        | iterating that function on its own output.
        | 
        | I think this isn't so intuitive because the polar coordinates
        | already imply greater density near the center (which could be
        | corrected with a suitable map).
        
         | em-bee wrote:
         | right, if the points are distributed in a square then the
         | distribution looks more like what one would expect:
         | 
          | (i am too lazy to register to make a fork, here is the code
          | change):
          | 
          | replace:
          | 
          |     let a = random(Math.PI*2);
          |     let r = d * (1 - random());
          |     point(cos(a)*r, sin(a)*r);
          | 
          | with:
          | 
          |     let x = random(d);
          |     let y = d * (1 - random());
          |     point(x, y);
          | 
          | in every block
          | 
          | you could even simplify it to:
          | 
          |     let x = random(d);
          |     let y = random(d);
          |     point(x, y);
          | 
          | and for more fun apply the nested random() call to both axes:
          | 
          |     let x = random(random(d));
          |     let y = random(random(d));
          |     point(x, y);
        
         | acjohnson55 wrote:
          | The reason it's counterintuitive is that the randomly chosen
          | distance variable is 1 minus the nested randoms. Each random
          | call becomes the upper bound of future random calls, so the
          | more times you nest it, the closer to zero you're likely to
          | be. Subtracting that from one puts you close to the unit
          | circle.
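          | 
          | (A quick illustration of that pile-up in plain Python, not
          | from the sketch:
          | 
          |     import random
          | 
          |     def radius(n):
          |         # 1 minus n nested random() calls
          |         x = 1.0
          |         for _ in range(n):
          |             x = random.uniform(0, x)
          |         return 1 - x
          | 
          |     trials = 100_000
          |     for n in (1, 2, 4, 8):
          |         vals = [radius(n) for _ in range(trials)]
          |         near_rim = sum(v > 0.99 for v in vals) / trials
          |         print(n, near_rim)  # fraction within 0.01 of the rim
          | 
          | The fraction of points hugging the rim grows with each extra
          | nesting.)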
        
       ___________________________________________________________________
       (page generated 2023-05-15 23:00 UTC)