[HN Gopher] TinyRenderer - how OpenGL works: software rendering ...
       ___________________________________________________________________
        
       TinyRenderer - how OpenGL works: software rendering in 500 lines of
       code
        
       Author : graderjs
       Score  : 174 points
       Date   : 2022-03-15 14:59 UTC (8 hours ago)
        
 (HTM) web link (github.com)
 (TXT) w3m dump (github.com)
        
       | tylermw wrote:
       | I used and extended this tutorial to write the rayvertex [1][2]
       | package, which is a deferred parallel software rasterizer in R.
       | Explicitly building the rendering pipeline really helped me
       | understand the underlying data flow and rasterization techniques,
       | which in turn made me understand GLSL much better. And extending
       | the rasterizer with order-independent transparency, multicore
       | rendering, antialiasing, and other niceties made me appreciate
       | the work that goes into modern renderers.
       | 
       | Highly recommend.
       | 
       | [1] https://www.rayvertex.com/ [2]
       | https://github.com/tylermorganwall/rayvertex
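       | 
       | For anyone curious what "explicitly building the rendering
       | pipeline" looks like in practice, here is a minimal sketch of
       | the inner rasterization loop in C++ (illustrative only; the
       | names are made up and this is neither rayvertex's nor the
       | tutorial's actual code):
       | 
       |   #include <algorithm>
       |   #include <array>
       |   #include <cstddef>
       |   #include <vector>
       |   
       |   struct Vec2 { float x, y; };
       |   
       |   // Edge function: signed area of the parallelogram spanned by
       |   // (b - a) and (c - a). Its sign says which side of edge ab
       |   // the point c lies on, and it doubles as an unnormalized
       |   // barycentric weight.
       |   float edge(Vec2 a, Vec2 b, Vec2 c) {
       |       return (b.x - a.x) * (c.y - a.y)
       |            - (b.y - a.y) * (c.x - a.x);
       |   }
       |   
       |   // Fill one screen-space triangle t into a grayscale
       |   // framebuffer fb of w*h pixels with a flat shade.
       |   void rasterize(const std::array<Vec2, 3>& t,
       |                  std::vector<unsigned char>& fb,
       |                  int w, int h, unsigned char shade) {
       |       // Clamp the triangle's bounding box to the screen.
       |       int x0 = std::max(0, (int)std::min({t[0].x, t[1].x, t[2].x}));
       |       int y0 = std::max(0, (int)std::min({t[0].y, t[1].y, t[2].y}));
       |       int x1 = std::min(w - 1,
       |                         (int)std::max({t[0].x, t[1].x, t[2].x}));
       |       int y1 = std::min(h - 1,
       |                         (int)std::max({t[0].y, t[1].y, t[2].y}));
       |       if (edge(t[0], t[1], t[2]) == 0) return;  // degenerate
       |       for (int y = y0; y <= y1; ++y)
       |           for (int x = x0; x <= x1; ++x) {
       |               Vec2 p{x + 0.5f, y + 0.5f};  // sample pixel center
       |               float w0 = edge(t[1], t[2], p);
       |               float w1 = edge(t[2], t[0], p);
       |               float w2 = edge(t[0], t[1], p);
       |               // Inside if all weights share a sign (covers both
       |               // triangle windings).
       |               if ((w0 >= 0 && w1 >= 0 && w2 >= 0) ||
       |                   (w0 <= 0 && w1 <= 0 && w2 <= 0))
       |                   fb[std::size_t(y) * w + x] = shade;
       |           }
       |   }
       | 
       | Depth testing, texturing, and shading all slot into that inner
       | loop once you have the barycentric weights per pixel.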
        
       | jzer0cool wrote:
       | Pretty cool. I did something similar as an undergraduate:
       | writing implementations of OpenGL and creating a modeler and
       | renderer. I got a bit stuck with the maths for UV mapping and
       | ray tracing, and would love to look at some of this material if
       | I ever decide to rework my old project.
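       | 
       | If anyone else gets stuck on the same maths: the UV part boils
       | down to barycentric interpolation of the per-vertex texture
       | coordinates. A rough C++ sketch (illustrative only, not from my
       | old project):
       | 
       |   struct UV { float u, v; };
       |   
       |   // Given the (unnormalized) barycentric weights w0, w1, w2 of
       |   // a pixel inside a triangle, its texture coordinate is the
       |   // weighted average of the three vertex UVs a, b, c.
       |   UV interpolate_uv(float w0, float w1, float w2,
       |                     UV a, UV b, UV c) {
       |       float s = w0 + w1 + w2;  // normalize weights to sum to 1
       |       w0 /= s; w1 /= s; w2 /= s;
       |       return { w0 * a.u + w1 * b.u + w2 * c.u,
       |                w0 * a.v + w1 * b.v + w2 * c.v };
       |   }
       | 
       | With a perspective camera you interpolate u/z, v/z, and 1/z
       | instead and divide per pixel, otherwise textures warp across
       | each triangle.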
        
       | pjmlp wrote:
       | Very nice learning material, thanks for sharing.
        
       | glouwbug wrote:
       | I did one in C a while ago if anyone is interested:
       | https://github.com/glouw/gel
        
       | jokoon wrote:
       | I wonder if someday GPUs will be so powerful that we will just
       | program them with small parallelized software renderers. And I
       | don't mean shaders or CUDA, because those are awkward and not
       | easy to use.
       | 
       | Will that ever be the case, though? I never really know how
       | GPUs are made and wired, or what their constraints are.
       | 
       | GPUs have often had a lot of "fixed-function" capabilities,
       | meaning you have to do things in a certain way to exploit their
       | performance. But with Vulkan being too complex, and chips
       | getting more and more transistors, maybe a day will come when
       | GPUs can finally become easier to program, at the expense of
       | top performance, a bit like interpreted languages being slower
       | to run but faster to write in. In the software industry, it has
       | always been better to use faster computers and write less-than-
       | optimal programs, because chips are always cheaper than
       | developers.
       | 
       | I guess most game developers would gladly use a GPU at a
       | fraction of its performance if it were just easier to program.
       | Modern GPUs are so fast that it would feel okay to get only 5
       | or 10% of the speed if it meant not having to deal with Vulkan
       | or shaders.
        
         | raphlinus wrote:
         | I think your question is partly answered by the cudaraster
         | work, which is well over a decade old at this point. They
         | basically did write a software rasterizer (that ran on CUDA,
         | but is adaptable to other GPU intermediate languages). The
         | details are interesting, but the tl;dr is that it's
         | approximately 2x slower than hardware. To me, that means you
         | _could_ build a GPU out of a fully general purpose parallel
         | computer, but in practice the space is extremely competitive
         | and nobody will leave that kind of performance on the table.
         | 
         | I think this also informs recent academic work on RISC-V based
         | GPU work such as RV64X. These are mostly a lot of generic cores
         | with just a little graphics hardware added. The results are not
         | yet compelling compared with shipping GPUs, but I think it's a
         | promising approach.
         | 
         | [1]: https://research.nvidia.com/publication/high-performance-
         | sof...
        
         | azornathogron wrote:
         | Wasn't this the idea behind Intel's Larrabee hardware?
         | 
         | Didn't succeed at the time. Maybe it'll happen one day. Or
         | maybe not.
         | 
          | If Vulkan and co. are too difficult, then personally I
          | suspect it's more fruitful to build better abstractions on
          | top of the underlying constraints dictated by the need for
          | massive parallelism, rather than trying to make x86-style
          | programming paradigms fast enough for graphics-type
          | workloads.
        
         | rowanG077 wrote:
          | I don't see how shaders or CUDA are not what you're talking
          | about. You do have higher-level languages like SAC or
          | Futhark that can target GPUs, but they essentially do what
          | CUDA can, just with a different lick of paint.
        
       | GabeIsko wrote:
       | Shout out to Neon Helium Productions-
       | https://nehe.gamedev.net/tutorial/lessons_01__05/22004/
       | 
       | Man, it was impossible to find a decent OpenGL tutorial in 2008.
        
         | andai wrote:
          | The tutorial was "completely rewritten January 2000"; it
          | seems the first version was posted in 1997!
        
         | arcticbull wrote:
         | This is how I learned OpenGL too :)
        
       | h2odragon wrote:
       | This is great. The other side of "how to use OpenGL" is "what is
       | it doing in there?" and this looks like an excellent explanation
       | of that.
        
       | amne wrote:
       | This GitHub page brings back NeHeGL vibes from ~20 years ago or
       | so. Very nice, thanks.
        
       | corysama wrote:
       | More resources like this if you are interested:
       | 
       | If you want to understand how the GPU driver thinks under the
       | hood, read through
       | https://fgiesen.wordpress.com/2011/07/09/a-trip-through-the-...
       | 
       | If you want to see the OpenGL state machine in action, check out
       | https://webglfundamentals.org/webgl/lessons/resources/webgl-...
        
         | samstave wrote:
         | Would you be able to point me in the direction of
         | understanding the difference between a GPU and a CPU, from
         | ELI5 --> [whatever]?
        
           | nightfly wrote:
           | CPU = lots of branching and complicated operations, a small
           | number of cores.
           | 
           | GPU = lots of cores, not much branching, and simple
           | operations.
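           | 
           | Roughly, in code (just an illustration of the two workload
           | shapes, not how either chip actually executes anything):
           | 
           |   #include <cstddef>
           |   #include <vector>
           |   
           |   // GPU-shaped work: every element gets the same simple
           |   // math and no iteration depends on another, so the body
           |   // can be handed to thousands of small cores at once.
           |   void shade(std::vector<float>& pixels) {
           |       for (std::size_t i = 0; i < pixels.size(); ++i)
           |           pixels[i] = pixels[i] * 0.5f + 0.1f;
           |   }
           |   
           |   // CPU-shaped work: each step branches on the result of
           |   // the previous one (a binary search), which is what a
           |   // few big cores with branch predictors and deep caches
           |   // are built for, and what GPUs handle comparatively
           |   // poorly.
           |   int find(const std::vector<int>& sorted, int key) {
           |       int lo = 0, hi = (int)sorted.size() - 1;
           |       while (lo <= hi) {
           |           int mid = lo + (hi - lo) / 2;
           |           if (sorted[mid] == key) return mid;
           |           if (sorted[mid] < key) lo = mid + 1;
           |           else hi = mid - 1;
           |       }
           |       return -1;
           |   }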
        
       | ggambetta wrote:
       | Wow, this looks fantastic :) Very inspiring!
        
       | alasr wrote:
       | Very nice. After quickly skimming the wiki, I really like the
       | contents and the step-by-step approach of all the materials
       | shared.
       | 
       | Does anyone here know of similarly good-quality OpenGL
       | resources[1] for mathematical/scientific visualization[2]?
       | Currently, I've started going through the source code and
       | documentation of manim-web[3] and CindyJS[4] to learn the
       | basics; however, I would love to learn the fundamentals and be
       | able to write my own visualization library (or confidently
       | extend an existing one) if/when needed. Thanks in advance.
       | 
       | ---
       | 
       | [1] - Personally I prefer books, but I don't mind any other
       | high-quality resources similar to the one OP has shared here.
       | 
       | [2] - https://github.com/topics/scientific-visualization
       | 
       | [3] - https://github.com/manim-web/manim-web
       | 
       | [4] - https://cindyjs.org/gallery/main/
        
       ___________________________________________________________________
       (page generated 2022-03-15 23:00 UTC)