[HN Gopher] Ray Tracing with POV-Ray: 25 scenes in 25 days (2013)
       ___________________________________________________________________
        
       Ray Tracing with POV-Ray: 25 scenes in 25 days (2013)
        
       Author : todsac
       Score  : 94 points
       Date   : 2020-05-01 15:30 UTC (7 hours ago)
        
 (HTM) web link (github.com)
 (TXT) w3m dump (github.com)
        
       | erjiang wrote:
       | Wow, the state of the art in 3D rendering has changed
       | dramatically. The state of the art in _open source_ 3D rendering
       | has changed even more dramatically.
       | 
       | Compare these screenshots from 2013 (although I think POV-Ray was
       | looking pretty dated by then) to renders that come out of
        | Blender's Cycles renderer now.
       | 
        | The big change is that everyone has moved to "physically based
        | rendering", which uses path tracing to propagate light through a
       | scene. Old-school raytracing cannot know how light indirectly
       | bounces off a wall, for example, leading to artificial-looking
       | shadows and flat lighting.
       | 
        | Anyways, anyone interested in making neat little 3D scenes like
        | the ones in this GitHub repo should try out Blender - it's
        | shockingly easy to make realistic renders compared to several
        | years ago.
       | 
       | Edit: Blender's Cycles rendering engine seems to have been
       | included with Blender since 2011. POV-Ray probably represents
       | 2000s-era tech, although I think it can do more than what's
       | demonstrated in this post.
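 
The indirect-light point above can be sketched numerically. This is a toy model, not any particular renderer's code: a surface with a given albedo sits in a uniform environment of brightness 1; old-school ray tracing gathers only the direct term, while path tracing keeps bouncing and gathers the indirect terms too.

```python
def shade_whitted(albedo, env=1.0):
    # Old-school ray tracing: one direct-lighting term, no diffuse
    # bounces, so light arriving indirectly (e.g. off a wall) is lost.
    return albedo * env

def shade_path(albedo, env=1.0, depth=0, max_depth=16):
    # Path tracing (deterministic toy version): each bounce reflects
    # `albedo` of the light arriving from the environment plus whatever
    # the next bounce contributes.
    if depth == max_depth:
        return 0.0
    return albedo * (env + shade_path(albedo, env, depth + 1, max_depth))
```

With albedo 0.5, the direct-only shader returns 0.5, while the bounce series converges to the geometric-series value 0.5 / (1 - 0.5) = 1.0: the indirect terms double the gathered light, which is exactly the energy old-school shadows and flat lighting are missing.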
        
         | ginko wrote:
         | Even in 2013 this wasn't really the state of the art anymore.
         | LuxRender came out in 2008.
        
         | selfsimilar wrote:
         | I agree the quality of the render engine is much better with
         | physical rendering, but I still love constructive solid
         | geometry and the ideal of pure curves to define the scene
         | geometry. All these meshes with their triangles! Whatever
         | happened to using NURBS or metaballs or other non-polygonal
         | modeling?
        
           | zodiac wrote:
           | You can have physically based rendering with those non-
           | triangle primitives too!
        
             | weinzierl wrote:
              | You can have it, there's just no renderer that supports
              | it. Or is there?
        
           | BubRoss wrote:
           | Polygons can be smoother and subdivided efficiently at render
           | time without artifacts, which is how they are used now. Nurbs
           | and other curved surface representations end up with huge
           | problems pragmatically when it comes to tools, workflow,
           | visualization, texture coordinates, keeping the surfaces
           | together etc. The list is long. Polygons are very simple in
           | all these areas and can be made smooth at render time so you
           | get the best of both worlds. If you were in a production
           | situation it wouldn't be long before you gave up these
           | ideals.
        
             | selfsimilar wrote:
             | Pragmatically, I think the real issue is editor tooling.
             | Polygons are much simpler to subdivide and edit and are
             | perfect for real-time interaction, at least much better for
             | high-complexity scenes. At render time, though, you need to
             | use a bunch of extra tricks to get the right smoothing -
             | bump/texture mapping, increased subdivision, etc. But some
             | of these other models might actually decrease the scene
             | complexity and increase quality if you could convert the
             | model at render-time.
        
               | BubRoss wrote:
               | It isn't just tooling.
               | 
               | > At render time, though, you need to use a bunch of
               | extra tricks to get the right smoothing
               | 
               | Nurbs or any smooth geometry needs to be subdivided too.
                | You can set levels, max polygon size, smoothness
                | constraints, subdivision based on the pixel size from the
                | camera projection, or any combination. In practice this is
               | not a problem for polygons or nurbs.
               | 
               | > - bump/texture mapping,
               | 
               | This is orthogonal to the geometry type, with the
               | exception that UV coordinates are far easier to deal with
               | with polygons.
               | 
               | > increased subdivision,
               | 
                | There isn't any increased subdivision; both geometry
                | types need to be subdivided. Blue Sky's renderer raytraced
               | nurbs directly but this isn't generally as good as just
               | tracing subdivided polygons.
               | 
               | Polygonal geometry, even subdivided, is typically not a
               | big part of memory or time in rendering in all but the
               | most pathological cases. 4k would still mean that one
               | polygon per pixel would be 8 million polygons, which is
               | going to pale in comparison to texture data typically.
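 
The one-polygon-per-pixel arithmetic, plus the "subdivide based on the pixel size from the camera projection" idea above, can be sketched as follows. This is a simple pinhole-camera approximation, not any renderer's actual dicing logic; the field of view and resolution are arbitrary example values.

```python
import math

def projected_pixels(edge_len, distance, fov_deg=60.0, image_width=3840):
    # Pinhole camera: how many pixels an edge of length `edge_len` covers
    # at `distance`, assuming it lies perpendicular to the view ray.
    world_width = 2.0 * distance * math.tan(math.radians(fov_deg) / 2.0)
    return edge_len * image_width / world_width

def dicing_level(edge_len, distance, fov_deg=60.0, image_width=3840):
    # Each subdivision level halves edge length; stop once edges project
    # to under one pixel.
    px = projected_pixels(edge_len, distance, fov_deg, image_width)
    return math.ceil(math.log2(px)) if px > 1.0 else 0

# A 4K frame (3840 x 2160) holds about 8.3 million pixels, so roughly
# one polygon per pixel means ~8 million polygons:
print(3840 * 2160)  # 8294400
```

A unit-length edge filling most of the view needs around a dozen halvings to reach sub-pixel size, while the same edge far from the camera needs none, which is the adaptive behaviour described above.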
        
               | berkut wrote:
               | > Polygonal geometry, even subdivided, is typically not a
               | big part of memory or time in rendering in all but the
               | most pathological cases. 4k would still mean that one
               | polygon per pixel would be 8 million polygons, which is
               | going to pale in comparison to texture data typically.
               | 
               | Erm, at VFX level at least, that's not really true: once
               | you have hero geometry that needs displacement (not just
               | bump/normal mapping), you effectively have to dice down
               | to micropoly level for everything in the camera frustum.
               | And with path tracing (what everyone's using these days,
               | at least in VFX), geometry caching/paging is too
               | expensive to be used in practice with incoherent rays
               | bouncing everywhere. Disney's Hyperion renderer does do
               | that, but it spends a considerable amount of time sorting
               | ray batches, and it was built to do exactly that.
               | 
               | Image textures, on the other hand, _can_ be paged fairly
               | well, and generally in shade-on-hit pathtracers (all of
               | the commercial ones), this works reasonably well with a
                | fairly limited texture cache size (~8-16 GB).
               | Mipmapped textures are used, so for most non-camera rays
               | that haven't hit tight specular BSDFs not much texture
               | data is actually needed.
               | 
               | Once things like hair/fur curves come into the picture,
               | generally geometry takes up even more memory.
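 
The mipmapping point can be illustrated with a toy level selector: a renderer reads from the mip level whose texel size roughly matches the ray footprint, so wide bounce-ray footprints touch only tiny levels and little texture data is actually paged in. This assumes a square power-of-two texture and a linear (texels-across) footprint measure; it is a sketch, not any renderer's actual filter.

```python
import math

def mip_level(texture_size, texels_per_pixel):
    # Choose the mip level whose texel size matches the footprint:
    # each level halves resolution, so level = log2(texels spanned).
    max_level = int(math.log2(texture_size))  # coarsest level is 1x1
    level = 0.0 if texels_per_pixel <= 1.0 else math.log2(texels_per_pixel)
    return min(max_level, int(round(level)))
```

A sharp camera ray spanning ~1 texel reads level 0 (full resolution); a wide bounce-ray footprint spanning 64 texels across reads level 6, a much smaller image, which is why incoherent rays after a diffuse bounce are cheap on the texture cache.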
        
               | michal_f wrote:
                | 3delight also traces SDS and nurbs analytically. In
                | offline renderers, geometry data accounts for most of the
                | memory usage. Texture RAM usage is kept under a few GB by
                | use of a page-based cache.
        
               | BubRoss wrote:
                | Multiple gigabytes of geometry is a lot. That ends up
                | working out to potentially dozens to hundreds of polygons
                | per pixel. Even so, the person I was replying to seemed to
                | wonder why everything is converted to polygons, which is
                | because of a more holistic pragmatism.
        
               | berkut wrote:
               | Now try it with displacement :)
               | 
               | Agree in general about geometry memory though: in high-
               | end VFX, displacement is pretty much always used, so you
               | have to dice down to micropoly for hero assets (unless
               | they're far away or out of frustum).
        
               | dahart wrote:
               | > Nurbs or any smooth geometry needs to be subdivided too
               | 
               | Not true; lots of smooth surfaces, including NURBS, can
               | be and are ray traced without subdividing.
               | 
               | > Polygonal geometry, even subdivided, is typically not a
               | big part of memory or time in rendering in all but the
               | most pathological cases.
               | 
               | I don't buy this either, speaking from experience using
               | multiple commercial renderers. It is true that texture is
               | larger, but not true that polygonal geometry is not a big
               | part of memory consumption. RenderMan, for example, does
               | adaptive tessellation of displacement mapped surfaces
               | because they will run out of memory with a uniform
               | displacement.
               | 
               | The balance of geometry vs texture usages is also
               | changing right now with GPU ray tracers, and geometry is
               | taking up a larger portion because it has to be resident
               | for intersection, while textures can be paged.
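 
Ray tracing a smooth surface without tessellating it is easiest to see for a quadric: a sphere's intersection with a ray is just a quadratic equation in the ray parameter. (NURBS have no closed form and are typically intersected iteratively, e.g. by Newton iteration, but the principle of hitting the exact surface rather than a mesh is the same.) A minimal sketch:

```python
import math

def ray_sphere_t(origin, direction, center, radius):
    # Solve |o + t*d - c|^2 = r^2, a quadratic in t. The sphere is hit
    # exactly; no polygons are ever generated.
    oc = tuple(o - c for o, c in zip(origin, center))
    a = sum(d * d for d in direction)
    b = 2.0 * sum(d * e for d, e in zip(direction, oc))
    c = sum(e * e for e in oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None  # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    return t if t > 0.0 else None
```

A ray fired from (0, 0, -5) down the z-axis hits a unit sphere at the origin at t = 4, with no subdivision anywhere in the pipeline.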
        
               | BubRoss wrote:
               | > I don't buy this either, speaking from experience using
               | multiple commercial renderers.
               | 
               | It of course depends on exactly what is being rendered,
               | but typically texture maps of assets for high quality cg
               | are done at roughly the expected resolution of the final
               | renders (rounded to a square power of 2). Typical assets
               | will have three or four maps applied to each group of
               | geometry, with higher quality hero assets having more
               | groups.
               | 
               | > RenderMan, for example, does adaptive tessellation of
               | displacement mapped surfaces because they will run out of
               | memory with a uniform displacement.
               | 
                | It is specifically screen-space displacement, and this has
                | been effective, but it was originally crucial in the days
                | when 8MB of memory cost the same as someone's yearly
                | salary. In PRMan, polygons are actually even less of a
                | burden on memory because of this, with micropolygon and
                | texture caches for efficiency, even with raytracing.
               | 
               | The real point here though is that nurbs don't really
               | have much of an advantage, even in memory, because
               | polygons are already lightweight and can be smoothed.
                | Subdividing of polygons is typically not going to be too
                | different from nurbs, and heavy polygonal meshes are
                | likely to be extremely difficult to replicate with nurbs.
               | 
                | Don't get too caught up in exactly what is technically
                | possible; this is about why no one is trying to go back
                | to nurbs as an ideal form of geometry. Their
                | disadvantages outweigh their advantages by a huge margin.
        
             | Aardwolf wrote:
             | Every time I see a pipe, bucket, goblet or round thing in a
             | game with otherwise very realistic rendering, I'm reminded
             | I'm just playing a rendered game, due to seeing polygonal
             | shapes rather than a true round bucket/goblet/pipe, and
             | wonder if a quadratic surface would not look better and be
             | more efficient.
             | 
             | Well, that and banding in the gradient of a sky. Those two
             | things break the otherwise so realistic rendering quite
             | commonly.
        
               | klodolph wrote:
               | I think you may have missed what the parent comment was
               | saying--that the polygons can be _made smooth_ before
               | render time. This is not a question of just faking it
               | with normals. Instead, you can actually just work with a
               | polygonal mesh and then post-process it to make it
               | actually smooth. The classic technique for this is
               | Catmull-Clark subdivision. If you think polygons on
               | screen are offensive, you can just run the algorithm
               | until the individual polygons are under the size of a
               | pixel.
               | 
               | The fact that you see polygons on an otherwise circular
               | object in a game just means that the game isn't giving
               | you a more detailed mesh when objects are close to the
               | screen. There are a lot of reasons for this, and it's
               | important to consider that you often get the best overall
               | quality in modern real-time graphics with retopologized
               | meshes. It's easy enough to make these with a given
                | quality and make lower LODs from them, but as a
                | consequence you won't see higher LODs than the
                | retopologized version. And why bother making super-high-
               | LOD models anyway? If you look closely at an object
               | there's a finite amount of texture/model/etc. detail that
               | the game can present. Might as well make the LOD for the
               | model complement the amount of detail in the texture.
               | 
               | The whole process is rather complicated these days, with
               | different workflows (even different programs) for organic
               | objects (like people, animals, demons, whatever) and hard
               | surfaces like goblets, stone tiles, architecture, etc.
               | The two _main_ things people want to do when modeling are
               | sculpt and create a sensible topology, and surfaces like
                | NURBS (or worse, Bezier curves) turned out to be a bit
               | cumbersome for both sculpting and creating meshes.
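 
The "run the algorithm until the individual polygons are under the size of a pixel" idea is easy to quantify. This toy is not Catmull-Clark itself, just edge-halving on a circle's polygonal silhouette, but it shows how quickly refinement reaches sub-pixel facets:

```python
import math

def levels_until_subpixel(radius_px, start_sides=8):
    # Approximate a circle of `radius_px` pixels with a regular polygon;
    # each level doubles the vertex count, roughly halving the chord
    # length, until every edge spans less than one pixel.
    sides, levels = start_sides, 0
    while 2.0 * radius_px * math.sin(math.pi / sides) > 1.0:
        sides *= 2
        levels += 1
    return levels
```

An 8-sided base mesh for a 500-pixel-radius rim needs 9 doublings (4096 sides) before its silhouette has no visible facets, while a 10-pixel one needs only 3, which is why real-time LOD schemes stop refining long before offline renderers do.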
        
               | tlb wrote:
               | Yes, but then any edges that should have been sharp get
               | rounded too.
        
               | holoduke wrote:
               | You can apply the algorithm on a subset of vertices.
               | Exclude all the sharp edges. An extra falloff algorithm
               | can make it even better
        
               | [deleted]
        
               | michal_f wrote:
               | You can have sharp edges and vertices with subdiv
               | surfaces.
        
             | dahart wrote:
             | This seems like a strange way to frame it. Polygons aren't
             | what get subdivided. Subdivision surfaces & NURBS are what
             | get subdivided, and those are routinely used in production
             | and have tooling. Polygons by themselves don't get any
             | smoother or provide the advantages you're talking about,
             | nor do they solve all problems of workflow, tex coords,
             | stitching, etc.
        
               | holoduke wrote:
               | You can apply subdivision on any type of geometry. Even
                | voxels. Tessellation is also a form of subdividing.
        
             | TylerE wrote:
             | That's the beauty of CSG though... it's volumetric, not
             | surface based
        
               | BubRoss wrote:
                | I'm not sure what you are trying to say here;
                | constructive solid geometry does not solve any of the
                | problems I mentioned and would have to be treated
                | specially to be raytraced (while likely being much
                | slower).
               | 
               | Converting it to polygons at a modeling or effects stage
               | is workable but rendering it directly is unlikely to be
               | widely valuable any more.
        
               | TylerE wrote:
                | There is no need to "subdivide" because the volume of
                | the object can be calculated exactly, to any arbitrary
                | level of detail.
        
               | BubRoss wrote:
               | And what do you subdivide it to? How do you trace rays
               | against it? How do you map textures on to it? How do you
               | visualize it in real time? How do you work with it in a
               | different program?
        
               | TylerE wrote:
               | You don't subdivide. It's a mathematical intersection.
        
               | BubRoss wrote:
               | Right, but the rest of the questions remain, along with
               | the usefulness of CSG in real scenarios. CSG can be
               | interesting, but it does end up being very impractical
               | for anything except for some specific effects that are
                | then turned into polygons. It is technically possible to
                | create SDFs and trace against those, I'm sure, but CSG is
                | rarely used, and baking it to an SDF instead of polygons
                | even more so.
        
               | TylerE wrote:
               | POV-Ray disagrees. It _renders CSG_. It never converts to
                | polygons. So stop saying that what a 30-year-old program
                | does is "impossible".
        
               | BubRoss wrote:
               | I didn't say it was impossible, I'm talking about why
               | these other geometry types aren't typically used instead
               | of polygons.
               | 
               | I asked 'how do you trace rays against it' because to do
               | it directly is not fast, yet you are left with all the
               | problems I stated that you skipped over. Think about what
               | it would take to directly trace lots of overlapping
               | primitives. What you gain from tracing it directly is
               | minimal and what you give up is substantial.
        
               | TylerE wrote:
               | Look at the POV-Ray source code. It does it. Again, I am
               | literally telling you how the software under discussion
               | works. and you keep insisting it is not so. It's open
               | source!
        
               | CyberDildonics wrote:
               | How would that make it easier to work with?
        
               | TylerE wrote:
               | You can do things like take shape A and then subtract
               | shape B from it and all works, no matter how complex the
               | intersection.
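 
One way to make the "subtract shape B from shape A" boolean concrete outside of POV-Ray is with the signed distance functions mentioned earlier in the thread: CSG booleans become min/max on distances. This sketches the SDF approach only; POV-Ray itself evaluates CSG by intersecting rays with each primitive and combining the hit intervals, and the sphere positions here are arbitrary example values.

```python
def sphere_sdf(p, center, radius):
    # Signed distance to a sphere: negative inside, positive outside.
    return sum((a - b) ** 2 for a, b in zip(p, center)) ** 0.5 - radius

def csg_subtract(d_a, d_b):
    # Difference A - B: a point is in the result when it is inside A
    # (d_a < 0) and outside B (d_b > 0), i.e. when max(d_a, -d_b) < 0.
    return max(d_a, -d_b)

def hollow_bowl(p):
    # A unit sphere with a slightly smaller, offset sphere carved out,
    # exact no matter how complex the intersection curve is.
    return csg_subtract(sphere_sdf(p, (0.0, 0.0, 0.0), 1.0),
                        sphere_sdf(p, (0.0, 0.3, 0.0), 0.9))
```

Points in the remaining shell report a negative distance, carved-out points a positive one, so a ray marcher or modeler can query the exact boolean surface without ever building a mesh.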
        
         | TylerE wrote:
         | That's rather unfair to POV-Ray. These are hardly
         | representative of what it's capable of.
         | 
         | Look through the old IRTC archives, for instance:
         | http://ftp.irtc.org/stills/index.html
         | 
         | People were doing stuff like this
         | (http://oz.irtc.org/ftp/pub/stills/1999-04-30/13hystri.jpg) in
         | 1999.
        
           | beering wrote:
            | Using this very nice POV-Ray render from Wikipedia:
            | https://upload.wikimedia.org/wikipedia/commons/thumb/e/ec/Gl...
           | 
           | It features a lot of effects (radiosity, HDR maps, etc.)
           | which are added on top of its basic functionality. There's
           | been a big shift in how rendering is approached, from the old
           | way of adding a pile of special effects onto your original
           | non-realistic renderer, to a newer way of simulating light as
           | it physically works and using that as the foundation of the
           | renderer.
           | 
           | And there's still a lot of in-between as well, but having
           | gone from 3ds Max's scanline renderer to 3ds Max + Mental
           | Ray, to Blender + Cycles, it feels very different to use.
           | 
           | There are still some effects that Mental Ray (and it looks
           | like POV-Ray) can do that Cycles can't. Photon-mapped
            | caustics seem to be one, although I think LuxRender is FOSS
           | and can do that.
        
             | TylerE wrote:
             | > It features a lot of effects (radiosity, HDR maps, etc.)
             | which are added on top of its basic functionality
             | 
             | Huh? POV-Ray has supported radiosity for literally decades,
             | since sometime around 1995.
        
               | anthk wrote:
                | Yep, I'm tired of these kids saying POV-Ray couldn't do
                | shit, as if POV-Ray in 1997 had the same capabilities as
                | an IRIX machine from 1987. They couldn't be more wrong
                | about that.
                | 
                | I remember seeing photorealistic images made with POV-Ray
                | in 1997 that you couldn't even render with a GPU today in
                | real time.
               | 
                | Kids today are very ignorant of 90's technology, which
                | is why they confuse the 80's and the 90's so much, thanks
                | to that shitty vaporwave culture giving them fake
                | nostalgia for something they never truly experienced.
               | 
                | Man, I was playing 720p video with a DivX codec in the
                | early 00's on a Pentium 3, and multimedia was in its
                | heyday; people showing off a CGA palette makes no sense,
                | as it was already retro back in the day. You had those in
                | the old MS-DOS games you were running under W98 or DOSEmu
                | under Linux, alongside the emulators for the ZX Spectrum
                | and MSX, for example.
               | 
                | Sorry for my rant, but I had to say it. The late 90's had
                | nothing to do with the early 90's; the technology shift
                | we saw was outstanding. From DOS on a 286 in my early
                | elementary school years, to W98 emulating Pokemon in my
                | pre-HS days and recording TV streams on a computer, all
                | of that in 5-6 years.
                | 
                | From 30 MHz and 5 1/4" floppies to ~450/600 MHz and a
                | bunch of GB in 1999. For sure POV-Ray could do a lot more
                | than these kids think.
        
               | cambalache wrote:
               | The explosion of JS and web development created a culture
               | of people totally ignorant of the hard-learned lessons
               | since the 1960s. That is why you see constant
               | reinventions of the wheel, a shitty wheel at that.
        
               | anthk wrote:
                | No, I didn't mean that. I mean people today seem unable
                | to look up reliable sources and just parrot a simple
                | opinion on software over 20 years old without looking at
                | the Hall of Fame on its homepage.
                | 
                | That, and their stubborn refusal to acknowledge the 90's
                | legacy and what we could achieve in the early 00's. For
                | example, their previous comment makes it sound as if
                | everything was invented in the late 00's/early 10's and
                | we were barely surviving with DOS and Amigas in the late
                | 90's, when, FFS, people began to _emulate_ Amigas in '99
                | with UAE.
               | 
                | Man, we had Voodoos and GeForces explode in the late
                | 90's; raytracing was done in software, but we weren't
                | making the crippled examples people are trying to show
                | off as if that were what we truly did in the 90's. Not
                | even close. Even a 286 could render those in half an hour
                | or a full one, but that was the 3D lore from _several_
                | years ago.
        
               | morsch wrote:
               | > From 30Mhz and 5 1/4 floppies to ~450/600 MHZ a bunch
               | of GB in 1999.
               | 
               | From rare instances of people accessing their local BBS
               | at 9600 Baud to accessing a worldwide communications
               | network as a matter of course, often at broadband speed.
               | 
               | The past 20 years really have been rather dull in
               | comparison.
        
               | holoduke wrote:
               | When you are inside a time or period not a lot of change
               | is felt, but once you look back you can see incredible
               | changes. You mention that last 20 years are dull. I think
               | that the last 10 years are the era of smart phone
               | revolution. A pretty big thing. Certainly belongs in the
                | top 50 most impactful inventions and adoptions in human
               | history. In 200 years from now the late 00s will be seen
               | as the start of global connectivity.
        
               | anthk wrote:
                | Not so much. PocketPCs were on par with the first
                | iPhone/Android phones, with similar 3D gaming/multimedia
                | capabilities, albeit just as expensive and not as
                | usable.
        
               | joefourier wrote:
               | I haven't felt a lot of incredible changes in the last
               | ten years. In 2010 I had an iPhone 4, and I don't think
               | there is a major qualitative difference between it and
               | the latest smartphones. The computing performance may
               | have improved since, but apart from loading increasingly
               | bloated websites faster and allowing for higher-quality
               | photographs, I haven't felt any major changes.
               | 
               | Otherwise, the changes in lifestyle since 2010 have been
                | incremental at best. 10 years ago I could buy most things
                | online, watch YouTube videos, consult Google Maps, and
                | use smartphone text, audio and video chat. Now I can watch
               | videos in 4K and the internet connection is faster, and
               | although computer graphics have indeed improved, it is
               | nothing like the leap from 1990 to 2000.
               | 
               | The only new exciting development is virtual reality,
               | which is unfortunately still fairly niche.
        
               | anthk wrote:
               | ISDN was a good boost over a 56k modem too. Not DSL
               | speeds, but bearable. With Opera and its proxy (and its
                | awesome caching options) you had pretty smooth
               | browsing, almost a clone of DSL speed and usability
               | standards.
        
               | holoduke wrote:
               | You are right, but the same can be said between 85 and
               | 95. The evolution of realtime 3d graphics was insane.
               | From some low FPS 3d line engines to full blown textured
               | 3d engines.
        
               | anthk wrote:
               | 90-96 is big enough. From the NES/Genesis/286 to the
               | Pentium MMX and the multimedia PC playing MPEG videos and
                | games like Quake. On some PCs you could even _emulate_
                | the Genesis under DOS, and a year later, the NeoGeo
               | fully, which was  "the big thing" in the early 90's. A
               | huge step in six years.
        
           | nunodonato wrote:
           | ahh good ol' IRTC.
           | 
           | I remember looking at this entry in particular:
           | http://www.irtc.org/ftp/pub/stills/2006-06-30/hideaway.jpg
           | and thinking "how the hell is that possible?"
        
         | minxomat wrote:
         | That's like saying Flatland (the movie) was representative of
         | the animation tech at the time.
         | 
         | Remember that The Third and The Seventh was done by a single
         | person in 2009, and is entirely modeled and rendered with tech
         | available back then: https://vimeo.com/7809605
        
         | the_cat_kittles wrote:
         | i dunno, i found it pretty good for making a big 3d datavis-
         | shameless plug:
         | https://somestuffforyoutolookat.ml/pages/hourly_carbon_la.ht...
        
         | anthk wrote:
         | http://hof.povray.org/mouille.html
         | 
         | Year 2000. You don't know a lot, and I guess you didn't have a
          | look at POV-Ray's Hall of Fame.
        
       | thefifthelf wrote:
       | I entered some of those irtc comps. Povray appealed to the
       | programmer and the artist in me. Writing algorithms (macros) in
       | povray to create trees or place raindrops made you really look
       | into how nature works. It's a challenge that requires a certain
       | mindset.
        
       | gorgoiler wrote:
        | (Edit: oh wow, it's really heartening to read how POV-Ray had
        | such a positive impact on so many others as well, almost 30
        | years ago!)
       | 
       | There is a special place in my heart for POVray. After BBC Basic,
       | it was the first coding I ever did all the way back in '94.
       | 
       | The thrill of changing an object from opaque to glass, and the
       | anticipation of watching the ray scan grind to a treacle-like
       | pace as it passed over any glass objects. Happier times, simpler
       | times!
       | 
       | May it live forever.
        
       | selfsimilar wrote:
       | I credit POV-Ray with getting me into programming in the early
       | 90s. A fascination with computer graphics led to Fractint, POV-
       | Ray, and dreams of someday being able to play Kai's Powertools
       | and SGI machines. I think I even had a subscription to some black
       | and white POV-ray zine. Can't remember what it was called though.
        
       | EvanAnderson wrote:
       | POV-Ray was the reason that I wanted a 486 DX back in the
       | mid-90's, and not a puny FPU-less 486 SX like my friends had.
       | 
       | I didn't do much with it, ultimately, but I really enjoyed
       | noodling around w/ POV-Ray. Rendering a bunch of TGA files and
       | then stringing them together into an animated GIF (or was it an
       | FLC?) was a major exercise.
       | 
       | I recall 15 y/o me trying to explain it to the "oldster" who my
       | father purchased the PC from (a guy who was probably in his late
       | 30s). "No-- there's no camera. It's a _virtual_ camera that I
        | place in code for the scene. UGH! You don't understand!" (To be
       | fair, this was a guy who mainly sold PCs and accounting software
       | and wrote code for dBase/Clipper...)
        
         | Synaesthesia wrote:
         | Lol I made really great FLC files of those 4D fractals,
          | quaternions, which you could animate by changing the
         | parameters. Good times
        
       | ujeezy wrote:
       | I got into POV-Ray in the late 90s when I wanted 3D graphics for
       | my Geocities page. I was in over my head in every dimension
       | (scripting, math, artistic ability), but it was incredibly
       | rewarding, and the newsgroup gave me a very positive early
       | impression of what a community can feel like on the internet.
        
       | nikodunk wrote:
       | I love POV-Ray so much. It's what originally got me into
       | programming in the late 90s/early 2000s. Found out about it from
       | a PlayStation 1 Magazine.
        
         | philsnow wrote:
         | My interest in POV-Ray helped me in my high school math
         | classes: I was really into making animations in POV-Ray, and
         | since I was taking calculus at the time, it really made
         | parametric equations "click" for me.
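         | 
         | For instance, a point circling the origin can be driven by
         | POV-Ray's clock variable, something like this (a from-memory
         | sketch, so the exact syntax may be slightly off):
         | 
         |     camera { location <0, 0, -4> look_at <0, 0, 0> }
         |     light_source { <5, 5, -5> rgb 1 }
         | 
         |     // clock runs 0..1 over the animation (e.g. +KFF30 on
         |     // the command line renders 30 frames)
         |     #declare T = 2*pi*clock;
         |     sphere {
         |       <cos(T), sin(T), 0>, 0.2  // point on the unit circle
         |       pigment { rgb <0.2, 0.6, 1> }
         |     }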
        
         | Macuyiko wrote:
         | It holds a special place in my heart as well. In 2002, my
         | parents gave me a book, "Multitool Linux" [1], which wasn't
         | spectacularly well received, containing a mix of wildly
         | different topics, but it was exactly what I needed as a teen
         | experimenting with Linux for the first time. One of the
         | chapters was about POV-Ray, and I remember being amazed by
         | this newly-discovered canvas. I spent hours looking at images
         | others had created and wondering how they'd pulled it off. I
         | think I never got much further than rendering small animations
         | on my (if I recall correctly) 400 MHz PC. Good times.
         | 
         | (As an aside, I'd completely forgotten the title of the book,
         | although vaguely remembering the cover and time frame. It's so
         | hard to find older stuff using Google when only having some
         | vague descriptions. I found it by remembering that I read a
         | book review years later and after some digging around could
         | locate the review back to https://www.linux.com/news/book-
         | review-multitool-linux/ based on the style of writing, which I
         | remembered. I must have read that book to pieces. The chapter
         | using Wireshark was also amazing to me.)
         | 
         | [1]
         | https://books.google.be/books?id=g0CPF6MEFcUC&printsec=front...
        
       | ur-whale wrote:
       | This has not aged well. Modern open-source renderers like
       | LuxCoreRender produce far more realistic images:
       | 
       | https://luxcorerender.org/gallery/
        
         | anthk wrote:
         | And POV-Ray could do that far earlier than LuxCoreRender.
         | 
         | http://hof.povray.org/
        
         | DanBC wrote:
         | I'm not sure it's fair to compare a gallery of images created
         | while learning the software with a gallery of images created by
         | people who are very familiar with their software.
        
       | trevortheblack wrote:
       | If anyone is interested in getting started with ray tracing
       | _today_, I recommend Peter Shirley's _Ray Tracing in One
       | Weekend_ series.
       | 
       | https://raytracing.github.io/
       | 
       | (I edited the later editions)
        
         | packetslave wrote:
         | also Jamis Buck's "The Ray Tracer Challenge"
        
         | grep_it wrote:
         | Real Time Rendering maintains a list of high quality free
         | resources including "Ray Tracing: In One Weekend"
         | http://www.realtimerendering.com/#intro
        
         | s800 wrote:
         | Another recommendation: DKBTrace, the predecessor to POV-Ray,
         | has easier-to-grok code, IMHO.
        
       | weinzierl wrote:
       | This is a bit off-topic, but I really love POV-Ray, and I think
       | this is a fun fact people reading this thread might like:
       | 
       | POV-Ray was actually run in space by none other than Mark
       | Shuttleworth when he was on the ISS in 2002. [1]
       | 
       | [1] http://www.povray.org/posters/
        
       | oceanghost wrote:
       | My lord, I had no idea POV-Ray was still around. POV-Ray and
       | Fractint inspired years of assembly graphics programming when I
       | was a teenager...
        
       | jordache wrote:
       | How is POV-Ray used in modern workflows these days?
       | 
       | It's extremely counterintuitive to articulate a 3D scene in
       | code. Maybe for some applications, exactness in the 3D scene via
       | code is useful.
        
         | elihu wrote:
         | As far as I know, it's not. The Turing-complete scene
         | description language and the many ways of representing
         | geometry that make POV-Ray so powerful and fun to use also
         | make it very difficult to interoperate with other tools.
         | 
         | POV-Ray is mostly used by hobbyists, students, and people who
         | need to visualize some data and want a tool that can be easily
         | scripted for their needs.
        
       | klodolph wrote:
       | I loved POV-Ray in the 1990s and early 2000s. The thing is--it's
       | ridiculous to try to make something remotely complicated or
       | organic with POV-Ray, unless you are using some modeling program
       | that can export to POV-Ray format.
       | 
       | POV-Ray scenes were dominated by procedural textures and
       | geometric primitives, for the most part. The rendering engine was
       | very strong, and supported all sorts of features like area
       | lighting, depth of field, motion blur, global illumination,
       | caustics, volumetric lighting, etc. All of these were supported
       | way back in the day before they became more common in other
       | engines, and of course, using these features made your render
       | times horrific back on early-2000s single-CPU machines.
       | 
       | The way a lot of us did modeling in POV-Ray was with a pencil and
       | some graph paper. Without a good modeling program, you were
       | setting yourself up for a ton of work. So I'd try to get the most
       | out of simple models, and make it look as good as possible with
       | lighting.
       | 
       | Funny enough, if you are used to CSG then you may need some time
       | to adapt to modern workflows. Blender supports CSG, of course,
       | but there are some caveats that you should pay attention to.
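       | 
       | For anyone who never saw that style: the bread and butter was
       | CSG on primitives, something like this (from memory, so treat
       | the exact syntax as approximate):
       | 
       |     camera { location <3, 3, -5> look_at <0, 0, 0> }
       |     light_source { <10, 10, -10> rgb 1 }
       | 
       |     // subtracting an oversized sphere from a unit cube
       |     // carves out the faces and leaves just the corners
       |     difference {
       |       box { <-1, -1, -1>, <1, 1, 1> }
       |       sphere { <0, 0, 0>, 1.3 }
       |       pigment { color rgb <0.9, 0.2, 0.2> }
       |     }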
        
         | CrLf wrote:
         | POV-Ray was my introduction to programming, before actual
         | programming. Graph paper and a lot of trial and error.
         | 
         | My longest render was 50 hours for a single 640x480 image, with
         | caustics and area lighting. 50 hours of a Pentium 100 MHz
         | buzzing near my bed.
        
           | gorgoiler wrote:
           | Pentium? Pentium! We were lucky to have half a 486 and even
           | then it were powered by ferrets, wi'owt e'en DX floating
           | point coprocessor to its name.
        
             | W-Stool wrote:
             | Monty Python voice: "You were lucky!"
        
           | holoduke wrote:
           | How long would it take now on a modern machine? :) 5, 10 min?
        
             | dan-robertson wrote:
             | Pentiums weren't that old. It would depend a lot on GPU
             | rather than CPU.
        
         | gedy wrote:
         | I really loved this aspect of POV-Ray back then. As a "normal"
         | programmer it was really nice to be able to script very complex
         | scenes procedurally. E.g. https://vimeo.com/105317159
        
       | beagle3 wrote:
       | I printed the source code of POV-Ray in 1989 when it was still
       | called DKBTrace (named after its author, David Kirk Buck) and
       | studied it carefully over a few weeks. It was my introduction to
       | the underlying implementation of (then) modern OO - Turbo C++ 1.0
       | was released in 1990. It was also my introduction to CSG.
       | 
       | Ah, the nostalgia. Thanks, David, and the entire POVRay team.
        
       ___________________________________________________________________
       (page generated 2020-05-01 23:00 UTC)