[HN Gopher] Float Toy
       ___________________________________________________________________
        
       Float Toy
        
       Author : agomez314
       Score  : 81 points
       Date   : 2022-08-19 08:52 UTC (1 day ago)
        
 (HTM) web link (evanw.github.io)
 (TXT) w3m dump (evanw.github.io)
        
       | TT-392 wrote:
       | I had to implement floats and doubles a while back. It was a
       | fun bit of puzzling to figure out, as a small break from other
       | work.
        
         | TT-392 wrote:
         | Then I found out about the weird, poorly defined NaNs, which
         | were kinda annoying, but fun nonetheless.
        
       | bobek wrote:
       | Also vaguely relevant https://0.30000000000000004.com/
        
       | cercatrova wrote:
       | Ah Evan Wallace, the creator of esbuild, CTO of Figma, and maker
       | of one of my favorite tools, Diskitude [0] which shows disk space
       | usage on Windows, all under 10 kilobytes.
       | 
       | [0] https://madebyevan.com/diskitude/
        
       | tialaramex wrote:
       | It's interesting today to see people act as though half (f16) is
       | a completely normal, obvious type, whereas when I was first
       | writing code nobody had this type; it was unheard of, and it was
       | this weird new special case when the 3DFX Voodoo used it
       | (hardware-accelerated 3D video for the PC; as a special case,
       | the "pass through" for 2D video was at first a physical cable).
       | The giveaway, if you're younger, is that it's sometimes called
       | _half_ precision. That's because single precision (32-bit
       | floating point) was seen as the starting point years before.
       | 
       | I remember this when somebody says f128 will never be a thing,
       | because if you'd asked me in 1990 whether f16 would ever be a
       | thing, I'd have laughed. What use is a 16-bit floating point
       | number? It's not precise enough to be much use even in the
       | narrow range it can represent. In _hindsight_ the application
       | is obvious, of course, but that's hindsight for you.
        
         | djhaskin987 wrote:
         | I still don't see the point of half precision. What
         | applications are you implying are obvious? Actually curious.
        
           | NavinF wrote:
           | It's by far the most popular data type for training neural
           | networks.
        
             | brrrrrm wrote:
             | for lack of bfloat16 support
        
               | hwers wrote:
               | What's the difference between float16 and bfloat16?
        
               | brrrrrm wrote:
               | The number of exponent bits: bfloat16 trades precision
               | for a larger dynamic range. E.g. this value would be
               | infinity in fp16: https://float.exposed/b0x5f80
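
To make the split concrete, here is a decoder sketch in pure Python (field widths per IEEE 754 binary16 and the usual bfloat16 layout) interpreting the same 16-bit pattern both ways:

```python
def decode(bits, exp_bits, frac_bits):
    # generic IEEE-style decoder: sign, biased exponent, fraction
    bias = (1 << (exp_bits - 1)) - 1
    sign = -1.0 if (bits >> (exp_bits + frac_bits)) & 1 else 1.0
    exp = (bits >> frac_bits) & ((1 << exp_bits) - 1)
    frac = bits & ((1 << frac_bits) - 1)
    if exp == (1 << exp_bits) - 1:            # all-ones exponent field
        return sign * (float('nan') if frac else float('inf'))
    if exp == 0:                              # subnormal: no implicit 1
        return sign * (frac / (1 << frac_bits)) * 2.0 ** (1 - bias)
    return sign * (1 + frac / (1 << frac_bits)) * 2.0 ** (exp - bias)

fp16 = lambda b: decode(b, 5, 10)   # 1 sign, 5 exponent, 10 fraction bits
bf16 = lambda b: decode(b, 8, 7)    # 1 sign, 8 exponent, 7 fraction bits

print(bf16(0x5F80))   # 2**64, representable in bfloat16
print(fp16(0x7C00))   # inf: fp16 tops out at 65504
```

The same 16 bits mean very different things under the two splits, which is the whole trade-off.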
        
           | TazeTSchnitzel wrote:
           | It's very popular in hardware-accelerated computer graphics.
           | It has much more range, and a bit more precision, than the
           | traditional integer 8-bits-per-channel representation of
           | colour, so it is used for High Dynamic Range framebuffers and
           | textures. It's also ubiquitous as an arithmetic type in
           | mobile GPU shaders, where it's used for things (like colours)
           | that need to be floats but don't need full 32-bit precision.
           | In many cases it doesn't just save memory bandwidth and
           | register space, but also the shader core may have higher
           | throughput for half precision.
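
Python's struct module understands the binary16 layout directly (format code 'e', available since Python 3.6), so the range-versus-precision trade-off described above is easy to poke at; a small sketch:

```python
import struct

def roundtrip_f16(x):
    # round a Python float to the nearest IEEE 754 half-precision value
    return struct.unpack('<e', struct.pack('<e', x))[0]

print(roundtrip_f16(65504.0))   # largest finite half
print(roundtrip_f16(0.1))       # ~0.09998: only ~3 decimal digits survive
print(roundtrip_f16(1000.5))    # 1000.5: still exact at this magnitude
```

With only 10 explicit fraction bits, half precision carries roughly three decimal digits, which is plenty for colour channels but not for general arithmetic.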
        
           | Stratoscope wrote:
           | There is a good discussion and examples here:
           | 
           | https://en.wikipedia.org/wiki/Half-precision_floating-
           | point_...
        
             | mysterydip wrote:
             | Makes me wonder if there's a use case for "dynamic fixed
             | point" numbers: say, for a 16-bit value, the upper 2 bits
             | are one of four values saying where the binary point sits
             | in the remaining 14 bits. Say 0 (essentially an int), two
             | spots in the middle, and 14 (all fraction). The CPU
             | arithmetic for any operation is (bitshift + math), which
             | should be an order of magnitude faster than any float
             | operation. The range isn't nearly as dynamic, but it
             | would allow for fractional values. Maybe such a system
             | would lack the precision needed for accuracy?
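
A sketch of that scheme; the four binary-point positions chosen here (0, 5, 10, 14) are illustrative, not anything standard:

```python
SHIFTS = (0, 5, 10, 14)   # hypothetical positions for the 2-bit scale field

def decode_dfp(bits):
    # upper 2 bits select the binary point; lower 14 bits hold the value
    shift = SHIFTS[bits >> 14]
    return (bits & 0x3FFF) / (1 << shift)

print(decode_dfp((0 << 14) | 100))      # 100.0 (scale 0: plain integer)
print(decode_dfp((3 << 14) | 0x2000))   # 0.5   (scale 14: all fraction)
```

Decoding is a table lookup plus a shift, which is where the hoped-for speed would come from.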
        
               | pdpi wrote:
               | What you just described is exactly floating-point
               | numbers; you're just using a different split for the
               | exponent and mantissa, and not using the "integer part
               | is zero" simplification.
        
               | mysterydip wrote:
               | hmm interesting, I never saw them that way. But now that
               | you say that, it makes them "click" a lot more. Thanks!
        
           | davvid wrote:
           | The OpenEXR file format, used a lot in graphics applications
           | (compositing, rendering), is a fairly well-known application
           | of half-floats.
           | 
           | There are some notes about the advantages of half-float
           | pixels in the OpenEXR documentation:
           | https://openexr.readthedocs.io/
           | en/latest/TechnicalIntroducti...
           | 
           | I don't think "obvious" was the best adjective, but "small
           | memory/file size footprint" is probably the quality that's
           | easiest to understand.
        
       | monkeydom wrote:
       | Also see https://float.exposed/0x44bf9400 by the great
       | ciechanowski, with a lengthy article here:
       | https://ciechanow.ski/exposing-floating-point/
        
         | quickthrower2 wrote:
         | Is that the man who did the introduction to category theory
         | for programmers? Although I never grokked it as much as I
         | would like, I appreciated that video series for trying to
         | make the subject so accessible.
        
       | quickthrower2 wrote:
       | I didn't know different binary numbers could map to NaN. That is
       | interesting.
        
         | brrrrrm wrote:
         | a popular trick is to encode information in NaNs
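
That trick is often called NaN boxing (JavaScript engines use variants of it to pack pointers and small integers into a double's NaN payload space). A minimal sketch, assuming CPython's struct, which copies raw bytes and so preserves NaN bit patterns:

```python
import struct

QNAN = 0x7FF8_0000_0000_0000   # quiet NaN, double precision, empty payload

def box(payload):
    # stash an integer (up to 51 bits) in the NaN's fraction field
    return struct.unpack('<d', struct.pack('<Q', QNAN | payload))[0]

def unbox(x):
    # recover the payload from the low 51 fraction bits
    return struct.unpack('<Q', struct.pack('<d', x))[0] & ((1 << 51) - 1)

v = box(42)
print(v != v)      # True: it is still a NaN
print(unbox(v))    # 42
```

Any bit pattern with an all-ones exponent field and a nonzero fraction is a NaN, which is why so many distinct encodings map to it.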
        
       | seanalltogether wrote:
       | One thing that took me a long time to grok, and that I wish
       | this toy showed, is how the fraction is included in the
       | calculation underneath.
       | 
       | Right now the following bits
       | 
       | 0 10001 1000000000
       | 
       | show the calculation as
       | 
       | 1 x 2^2 x 1.5 = 6
       | 
       | when it would be clearer to say
       | 
       | 1 x 2^2 x (1 + (512 / 1024)) = 6
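
Spelled out in code for the half-precision pattern above (bias 15, 10 fraction bits):

```python
bits = 0b0_10001_1000000000          # sign | exponent | fraction (binary16)
sign = bits >> 15                    # 0
exp  = (bits >> 10) & 0b11111        # 17; bias 15 gives 2**2
frac = bits & 0b1111111111           # 512

value = (-1) ** sign * 2 ** (exp - 15) * (1 + frac / 1024)
print(value)   # 6.0
```

The `1 +` term is the implicit leading bit that the fraction field never stores.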
        
       | naillo wrote:
       | Makes me think of this https://float.exposed/
        
       | amelius wrote:
       | It has a hidden UI trick: click+drag over the digits to turn them
       | all to 0 or 1.
        
       | barbito wrote:
       | During my research projects I used this website incredibly often.
       | I even cited it in one of my papers :)
        
         | askafriend wrote:
         | Fun fact, this site is from the co-founder of Figma.
        
       | wyldfire wrote:
       | Might be worth an explicit link to an example illustrating
       | subnormals [1] - they seem to be one of the less well-understood
       | features of floating-point.
       | 
       | [1] https://en.wikipedia.org/wiki/Subnormal_number
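
To make a subnormal concrete: setting only the lowest fraction bit with a zero exponent field yields the smallest positive double, a sketch assuming CPython's struct on an IEEE 754 platform:

```python
import struct

# smallest positive subnormal double: exponent field 0, fraction = 1
tiny = struct.unpack('<d', struct.pack('<Q', 1))[0]

print(tiny)             # 5e-324, i.e. 2**-1074
print(tiny / 2)         # 0.0: anything smaller underflows to zero
```

Subnormals fill the gap between zero and the smallest normal number, giving gradual rather than abrupt underflow.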
        
       ___________________________________________________________________
       (page generated 2022-08-20 23:00 UTC)