[HN Gopher] Path Tracing Quake in Blender
       ___________________________________________________________________
        
       Path Tracing Quake in Blender
        
       Author : kipi
       Score  : 125 points
       Date   : 2021-06-20 17:03 UTC (1 day ago)
        
 (HTM) web link (matthewearl.github.io)
 (TXT) w3m dump (matthewearl.github.io)
        
       | ogurechny wrote:
        | A lot of valuable work on the Blender side, but the main goal is
        | questionable, and the author explains why.
       | 
       | Pre-calculated lighting had very little to do with physical
       | correctness, it was purely an artistic placement of light sources
       | in a specific map processed with a specific tool (official or
       | third party) with specific defaults. Two adjacent rooms could be
        | lit in a very different manner because the map maker decided it
       | looked good; two maps coming from different sources could not be
       | expected to share any common lighting principles. Quirks like
       | overbright values were not even consistent among officially
       | supported renderers, and were inherited by later Quake and Source
       | engines (which would add their own quirks and light processing
        | options). In short, there was no reference for the final result
        | except the final result itself, and it was often unstable
       | (changing textures, geometry, or processing constants would shift
       | the look too much, and require fixes).
       | 
        | To make the game look as intended, you have to somehow rely on
        | the original lightmaps that are tightly coupled with the original
        | low-resolution geometry and textures. Given that people still argue
       | which of the original renderers gives the "true" picture, I have
       | my doubts about making a new one that pretends some average light
       | model is good enough for everything. Even for episode 1, hand-
       | placed lights had to be reintroduced into the system, and ad-hoc
       | fixes had to be done, but manual fixes are not an option for all
       | the existing maps.
        
       | nimbius wrote:
        | Any effort to exemplify the characteristics or dynamics of path
        | tracing is rendered immediately pointless by the gameplay's
        | insufferably sadistic nineties-era bunny-hopping, which makes any
        | visible account of the gameplay all but impossible to follow.
        | Path tracing a nauseating speed run holds almost zero value.
        
         | thomashabets2 wrote:
         | Quake demos can be "recammed", though. See
         | https://www.youtube.com/watch?v=jzcevsd5SGE.
        
       | jsmcgd wrote:
       | Please can you do the same for Thief? :P
        
         | tanepiper wrote:
         | The Thief engine and Dromed Editor were interesting - you
         | essentially carved out the level, and sometimes it wouldn't get
         | the lighting maps right - which were very costly to generate.
         | 
         | I had a lot of fun making Thief levels though.
        
       | ahefner wrote:
       | I'd love to see if the fake water surface texture could be
       | replaced with a procedural deformation faking waves so that the
       | path tracing could render caustics.
        
         | slavik81 wrote:
          | Unfortunately, Blender is not very good at caustics. In theory,
         | any path tracer can do them, but in practice, the sampling
         | needs to prioritize caustic-generating regions to be even
         | remotely efficient.
         | 
         | There might be add-ons that help, but vanilla Cycles will take
          | orders of magnitude more samples to create a good image when
          | there are caustics involved than when there are not. For that
         | reason, it's still common to fake caustics in Blender using a
         | texture or projection effect.
        
       | smoldesu wrote:
       | This was actually one of the first things I did with Blender when
       | I got it! When I was a teen and Blender was at 2.5 or 2.6, I
       | booted it up and tried to recreate the lighting in some Quake 2
       | maps that I had downloaded. Then I used the Shift + ` shortcut to
       | go into first-person and re-explore my childhood in 15-fps ray-
       | traced perfection.
        
       | abledon wrote:
       | Dungeon keeper (I & II) next!
        
       | sjm wrote:
       | Love this. There are high-res textures for Quake which would have
       | made this look even better. The Quake Revitalization Project[0]
       | has created new textures "without changing the mood, theme, or
       | atmosphere", and I believe they're packaged by default with
       | projects like nQuake.
       | 
       | [0]: http://qrp.quakeone.com/retexture/
        
         | willis936 wrote:
         | If lighting and texture swaps are tasteful to a large crowd, I
         | wonder if model swaps would be. Model swaps are more effort,
          | but they were popular in the FF7 modding community.
         | I'd be interested in seeing a fully overhauled version of
         | Quake.
        
         | dkonofalski wrote:
         | I never like projects like this. They always claim that they
         | don't change the mood, theme, or atmosphere but maybe my
         | definition of that is different. They definitely feel like they
         | change the mood and atmosphere to me.
        
       | klodolph wrote:
       | > Texture coordinates can be encoded as Blender UV maps.
       | 
        | I'll note one minor detail about the Quake map format you may
        | find interesting: Quake does not encode the texture coordinates
        | for vertexes in the map. Instead, Quake encodes the
       | coordinate _system_ for textures of each face. This is
       | transformed to screen-space during rendering.
       | 
       | This is different from modern renderers, which record the texture
       | coordinates of vertexes.
       | 
       | Quake's system is less flexible, can't be used for character
       | models, and can be inconvenient during asset creation, but it is
       | a bit faster and more convenient at run-time. When you're
       | rendering, you want to know the gradient of texture coordinates
       | in the X and Y direction on screen, which is easy to calculate
       | using Quake's system.
       | 
       | (Obviously the author knows this, but it wasn't spelled out in
       | the article.)
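        | 
        | For illustration, a rough sketch of recovering per-vertex UVs from
        | that per-face coordinate system (names are illustrative, not the
        | article's actual converter code):
        | 
        |   def face_uvs(verts, s_axis, s_off, t_axis, t_off, w, h):
        |       """verts: (x, y, z) map-space positions of one face."""
        |       uvs = []
        |       for x, y, z in verts:
        |           # project onto the face's S/T axes, add the offsets
        |           u = x * s_axis[0] + y * s_axis[1] + z * s_axis[2] + s_off
        |           v = x * t_axis[0] + y * t_axis[1] + z * t_axis[2] + t_off
        |           # normalise by texture size; flip V for Blender
        |           uvs.append((u / w, 1.0 - v / h))
        |       return uvs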
        
       | egypturnash wrote:
        | All this work and then YouTube turns it into a smeary, illegible
       | mess.
        
         | danuker wrote:
         | I suspect it is the added motion blur :(
        
         | FartyMcFarter wrote:
         | Even the screenshots look smeared compared to the game ones
         | though.
        
       | neurotrace wrote:
       | For anyone interested in seeing a side-by-side comparison:
       | https://viewsync.net/watch?v=uX0Ye7qhRd4&t=4&v=hcareEsIXHM&t...
        
       | ginko wrote:
       | The author mentions using frustum culling for performance. Won't
       | this lead to lighting artifacts with indirect illumination when
       | path tracing? Even when an object is behind the camera it could
       | still affect the lighting of what's in front of it.
        
         | kipi wrote:
          | Author here: I intersect the view frustum with the light's
         | sphere of influence (and also take into account PVS info) so it
         | still includes lights that are reachable by one bounce.
          | Technically this means I might omit two-bounce reflections, but
          | in practice this doesn't appear to be a problem.
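          | 
          | The culling test itself is just a conservative sphere-vs-frustum
          | check, something along these lines (a sketch, not the actual
          | converter code):
          | 
          |   def sphere_touches_frustum(center, radius, planes):
          |       # planes: (nx, ny, nz, d), normals pointing into the
          |       # frustum, so a point p is inside when dot(n, p) + d >= 0
          |       cx, cy, cz = center
          |       for nx, ny, nz, d in planes:
          |           if nx * cx + ny * cy + nz * cz + d < -radius:
          |               return False   # entirely outside this plane: cull
          |       return True          # overlaps: keep this light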
        
         | Jasper_ wrote:
          | Since the culling is as coarse as the BSP, you won't lose
          | objects behind you. The BSP data structure doesn't take into
          | account the camera's looking direction, just the camera's
          | position. Thus, it includes every object visible from all
          | possible looking directions at a given point, so it will
          | always include objects behind you.
        
       | willis936 wrote:
       | The motion blur is quite good here. Here is some light
       | reading/watching on the subject.
       | 
       | https://docs.blender.org/manual/en/latest/render/cycles/rend...
       | 
       | https://youtu.be/VXIrSTMgJ9s
       | 
        | In my estimation, motion blur is only beneficial as a workaround
        | for sample-and-hold displays. If there were fully continuous
        | (1000 Hz is close enough) or sub-1 ms strobed displays, then
        | motion blur would add nothing.
        
         | codeflo wrote:
          | Counterpoint: Motion blur is a simple form of low-pass
         | filtering (in the time direction). You need some kind of
         | filtering to prevent shutter artifacts (think video of a
         | spinning wheel), and that stays true even at a million fps.
        
           | willis936 wrote:
           | I don't think this is how it works. We have a discrete number
            | of rods and cones which work as a well-behaved spatial
           | sampler. Human visual system temporal sampling is smeared
           | stochastically across the rods and cones rather than being
           | clocked. If you truly displayed 1 million fps and there were
           | no shutter artifacts (as there are none in any fixed-pixel
           | displays that we are currently looking at), then the motion
           | would be life-like. The human visual system doesn't take a
            | temporally clocked set of frames and then apply a low-pass
            | filter to it, and doing that as an approximation of actual
            | perceived motion blur would look off (as many gamers lament).
           | 
           | Blurbusters has a fair amount of literature on this topic.
           | 
           | https://blurbusters.com/blur-busters-law-amazing-journey-
           | to-...
        
             | codeflo wrote:
              | This has nothing to do with biology; it's an argument from
             | signal processing, which is well-understood theory
             | (Nyquist's theorem and so on). If an object oscillates at 1
             | MHz, and you take 1 million still frames per second, it
             | will rest in the same place in every frame, and thus look
             | static. In reality, such an object would look blurred to
             | the human eye.(+) It's this kind of artifact that motion
             | blur (to be more precise, low-pass filtering) can avoid.
             | 
             | Edit: The article you linked to is very confused about some
             | basic terminology. It equates response time artifacts of an
             | LCD monitor that display sharp, digital images with motion
             | blur. That's so wildly wrong I'm not even sure where to
             | start. Maybe here: When displaying video, motion blur is a
             | property of the source, response time one of the output
             | device.
             | 
              | (+) Edit 2: To expand on this, the human visual system
              | integrates arriving photons over time, and in this way
             | implicitly behaves a lot like a low-pass filter. A low-pass
             | filter is different from a fixed-rate shutter, which is
             | what people mean when they say the eye doesn't have a fixed
             | framerate. However, there is a limited temporal resolution.
             | 
             | A more everyday example of this effect would be a dimmed
              | LED. You can pulse an LED at 1 MHz; it will look dimmed, not
             | not blinking. But when filming/rendering this LED at 1
             | million _still images_ per second, it will either be all on
             | or all off, both of which are wrong (i.e., an artifact of
             | your chosen shutter speed).
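              | 
              | A quick numerical sketch of that LED example
              | (illustrative numbers only):
              | 
              |   import numpy as np
              | 
              |   hz, fps, duty = 1_000_000, 1_000_000, 0.3
              |   frames = np.arange(1000)     # 1 ms of frames
              | 
              |   # zero-length shutter: the PWM phase is the same
              |   # every frame, so the LED looks stuck on (or off)
              |   instant = (frames * hz / fps) % 1.0 < duty
              | 
              |   # shutter open for the whole frame: average 64
              |   # sub-samples, i.e. a box filter over time
              |   sub = np.arange(64) / 64
              |   phase = ((frames[:, None] + sub) * hz / fps) % 1.0
              |   blurred = (phase < duty).mean(axis=1)
              | 
              |   print(instant[:3].astype(float))  # [1. 1. 1.]
              |   print(blurred[:3])                # [0.3125 ...]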
        
               | willis936 wrote:
               | >This has nothing to do with biology
               | 
               | >look blurred to the human eye
               | 
               | Ah but it has everything to do with biology. You are
               | proposing a far too simple model for the signal
               | processing actually at play. Unfortunately there is no
               | clock going to the rods and cones, they simply fire
               | signals off asynchronously and the timebase is
               | reconstructed in your noggin. How would you go about
               | filtering a few million samples all on their own
               | timebases that are themselves not uniformly (or even
               | periodically) sampled? It would be a truly awful
               | approximation.
        
         | danuker wrote:
         | I find the motion blur distracting :(
        
           | codeflo wrote:
            | If you notice it, it's not done correctly (which in games
            | unfortunately seems to always be the case, presumably because
            | it's faked with a blur in post-processing). Few people complain
           | about motion blur in movies, where cameras produce it through
           | physics alone. It's artificially added to CGI scenes so that
           | they _don't_ stand out and distract the viewer.
        
             | beebeepka wrote:
              | What do you mean "if" you notice? I complain about it every
             | single time. Of course, it's much worse in games. Only
              | vsync is worse.
        
         | hortense wrote:
         | To take advantage of a 1000 Hz monitor, you'd need to render
          | 1000 different images per second!
         | 
          | And if you can render 1000 images per second, then
          | generating motion blur that works on a 100 fps monitor is as
         | simple as displaying an average of 10 frames.
         | 
          | The various motion blur techniques allow simulating a higher
          | frame rate without paying the full cost of rendering all the
         | extra frames.
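          | 
          | As a sketch, the brute-force version of that averaging (assuming
          | the 1000 Hz frames are already rendered as float arrays):
          | 
          |   import numpy as np
          | 
          |   def blur_down(frames, factor=10):
          |       # frames: (n, height, width, 3) floats at 1000 fps;
          |       # returns 100 fps frames, each the mean of `factor`
          |       # consecutive inputs (a box filter over time)
          |       n = len(frames) - len(frames) % factor
          |       shape = (n // factor, factor) + frames.shape[1:]
          |       return frames[:n].reshape(shape).mean(axis=1)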
        
         | arximboldi wrote:
          | Yes. Motion blur is temporal interpolation; that is, it's a way
          | of representing what happens "in between frames". The shorter
          | the time between frames, the more subtle the effects of
          | interpolation (just as the higher the resolution of an image,
          | the less blurry it is in between pixels).
        
       | thomashabets2 wrote:
       | I did something similar here:
       | https://blog.habets.se/2015/03/Raytracing-Quake-demos.html [2015]
       | 
       | The author's write-up is better, and his target (Blender) enables
       | some things mine (POV-Ray) doesn't.
       | 
        | I also really like the idea of using emissive textures. I just used
       | point light sources.
       | 
       | I'm still rendering, and you can join if you want:
       | https://qpov.retrofitta.se/
        
       ___________________________________________________________________
       (page generated 2021-06-21 23:01 UTC)