[HN Gopher] DreamWorks releases OpenMoonRay source code
       ___________________________________________________________________
        
       DreamWorks releases OpenMoonRay source code
        
       Author : dagmx
       Score  : 455 points
       Date   : 2023-03-15 16:38 UTC (6 hours ago)
        
 (HTM) web link (github.com)
 (TXT) w3m dump (github.com)
        
       | dimator wrote:
        | I'm curious: what is the incentive for DreamWorks to open-
        | source this? Surely having exclusive access to a parallel
        | renderer of this quality is a competitive advantage over other
        | studios?
        
         | StrangeATractor wrote:
         | I can imagine a few reasons why they'd do this, but some of it
         | may just be 'why not'. Studio Ghibli has done the same thing
         | with their animation software and it hasn't turned into a
          | disaster for them. Making movies, especially movies that
          | people will pay to watch, is hard, and any serious
          | competitors already have their own solutions. If people use
          | MoonRay and that
         | becomes a popular approach, competitors who don't use it are at
         | a disadvantage from a hiring perspective. Also, DreamWorks
         | controls the main repo of what may become a popular piece of
         | tooling. There's soft power to be had there.
        
         | Vt71fcAqt7 wrote:
         | They mentioned in this video[0] that they expect others, mainly
         | other studios, to contribute to the project.
         | 
         | [0]https://www.youtube.com/watch?v=Ozd4JqquG3k&t=117
        
         | Someone1234 wrote:
          | Unreal is eating everyone's lunch. If they cannot get anyone
          | else to contribute to their renderer, it will wind up getting
          | shelved for Unreal; a lot of smaller animation studios are
          | already using Unreal instead of more traditional 3D rendering
          | solutions like Maya.
        
           | dahart wrote:
           | > If they cannot get anyone else to contribute to their
           | renderer, it will wind up getting shelved for Unreal
           | 
            | Why do you think this? Nobody in film or VFX is using
            | Unreal for final rendering; Unreal is built for games, not
            | offline path tracing.
        
             | Someone1234 wrote:
              | Tons of studios are now using Unreal for final rendering,
              | including Disney, and on several blockbuster movies.
              | 
              | The fantastic thing about Unreal is that you can do
              | realtime rendering on-set (e.g. for directorial
              | choices/actor feedback) and then upscale it in post-
              | production, with cost being the only ceiling. Unreal in
              | the TV/Movie industry is already huge and only getting
              | bigger, year-on-year.
             | 
             | You've definitely seen a TV or Movie that used Unreal.
        
               | packetslave wrote:
                | _You've definitely seen a TV or Movie that used Unreal._
               | 
               | Name a major-studio movie that rendered final camera-
               | ready VFX in Unreal.
               | 
               | For TV, you can name The Mandalorian season one, sure,
               | but even then, ILM switched The Volume to their own in-
               | house real-time engine for season two.
        
               | dagmx wrote:
               | DNeg did one sequence in the latest Matrix film. But it
               | looked very obviously "real-time".
               | 
                | But yeah, otherwise I agree with your points. The
                | person you're replying to is vastly overestimating
                | Unreal use for final CG.
                | 
                | It definitely isn't being used primarily for hero
                | character work.
        
               | packetslave wrote:
               | Yup. The state of the art for real-time rendering just
               | isn't there yet for hero work. Even ILM's custom Helios
               | renderer is only used for environments and environment-
               | based lighting, as far as I've read. Assets, fx shots,
               | and characters are still rendered offline.
               | 
               | Even with real-time rendering for environments, I'm sure
               | there's plenty of post-processing "Nuke magic" to make it
               | camera-ready. It's not like they're shooting UE straight
               | to "film".
               | 
               | I _have_ seen reports of Unreal Engine being used quite
               | successfully for pre-viz, shot planning, animatics, etc.,
               | though.
        
               | short_sells_poo wrote:
                | I'm inclined to believe you, but can you name one
                | reasonably popular movie that was rendered with Unreal?
        
               | explaininjs wrote:
               | Unreal is used in TV quite often, yes. But no major
                | studios use it for theatrical releases, and I'm not aware
               | of any who plan to. (Partner is in the industry)
        
               | dahart wrote:
               | Which Disney films use Unreal for final render? Disney
               | has two separate path tracing renderers that are in
               | active development and aren't in danger of being replaced
               | by Unreal.
               | 
               | https://disneyanimation.com/technology/hyperion/
               | 
               | https://renderman.pixar.com/
               | 
               | These renderers are comparable in use case & audience to
               | MoonRay, which is why I don't think you're correct that
               | MoonRay needs external contribution to survive.
               | 
               | "Used unreal" for on-set rendering is hand-wavy and not
               | what you claimed. Final render is the goal post.
        
           | Vt71fcAqt7 wrote:
            | I'm not really sure if they are competing with Unreal.
            | Large studios will probably never use real-time rendering
            | for the final render unless it achieves the same quality.
            | DreamWorks has built a renderer specifically for render
            | farms (little use of GPUs, for example), which means they
            | are not targeting small studios at all, but rather studios
            | like Illumination Entertainment or Sony (think The Angry
            | Birds Movie).
        
           | aprdm wrote:
            | Except no one uses Unreal for movies...
        
             | naikrovek wrote:
              | Yes, Unreal is used in both movies and TV shows. Usually
              | not entire movies or shows, but lots of individual
              | scenes.
        
               | aprdm wrote:
                | Not for final shots that make it to the cinema. For TV, yes.
        
         | ronsor wrote:
         | The other large studios that compete with them (e.g. Pixar)
         | already have their own just-as-good renderers.
        
           | fuzzybear3965 wrote:
            | Even still, what's the incentive to open-source? So the
            | community can bootstrap a better solution than Pixar's?
        
             | chungy wrote:
             | Why not?
             | 
              | The tool may be good, but the output visuals are only as
              | good as the artists who use said tools. They can open
              | source the tools all they want and try to hire all the
              | talent that can use them. :)
        
             | mdorazio wrote:
             | By showing they care about open source, there's a chance
             | they'll attract developers and animators who care about
             | that.
        
               | pipo234 wrote:
                | Seems targeted at CentOS 7, support for which sunsets
                | in a little over a year. Smells a bit like
                | abandonware, hoping for adoption by unpaid volunteers.
                | 
                | Still, dumping it into the FOSS community is not the
                | worst graveyard for commercial software...
        
               | packetslave wrote:
                | re: CentOS 7, the VFX Reference Platform
                | (https://vfxplatform.com/) is probably relevant here.
                | Their latest Linux Platform Recommendations report from
                | August last year already covers migrations off of
                | CentOS 7 / RHEL 7 before the end of maintenance in
                | 2024.
               | 
                | Studios don't _want_ to upgrade fast (e.g. they're not
                | interested in running Debian unstable or CentOS's
                | streaming updates thing)... they're interested in
                | _stability_ for hundreds of artists' workstations.
               | 
               | Getting commercial Linux apps like Maya, Houdini, Nuke,
               | etc. working well at scale is hard enough without the
               | underlying OS changing all the time.
        
               | not2b wrote:
                | It's not a big deal to take something built for CentOS
                | 7 and port it to a later Red Hat (or clone) distro. It
                | appears that they released a setup for what they use,
                | which is CentOS 7.
        
               | op00to wrote:
                | Major animated movies take years to develop, and they
                | don't like to change the build process during
                | production. I used to cover a major animation studio
                | for a major Linux vendor and they did in fact use very
                | old shit.
        
               | Vt71fcAqt7 wrote:
               | They just released a feature film with this renderer,
               | grossing $462 million and widely praised for its
               | animation.
               | 
                | Large studios don't update as regularly as, say, a
                | startup. They have very specific setups, which is in
                | fact a large part of why it took them so long to
                | release MoonRay after announcing it last year. And they
                | are moving to Rocky Linux soon, IIRC.
               | 
               | >dumping it into FOSS community
               | 
               | They are not "dumping" anything. Would it have hurt to
               | look into the facts before commenting?
        
             | toyg wrote:
             | Or create a pipeline of young talent who can come from
             | university already trained on their system.
        
         | satvikpendem wrote:
         | The competitive advantage is in storytelling, not necessarily
         | visual fidelity. People will watch a somewhat worse looking
         | movie with a better story than a better looking movie with a
          | worse story. And honestly, can anyone really notice slightly
          | worse graphical quality these days, when so many animated
          | movies already look good?
         | 
         | The exception, of course, is James Cameron and his Avatar
         | series. People will absolutely watch something that looks 10x
          | better because the visual fidelity itself is the draw; it's
          | the main attraction over the story. This is not the case for
          | most movies, however.
        
           | nineteen999 wrote:
            | The rendering in the Avatar movies is at the cutting edge.
            | But quite apart from the very uninteresting storytelling,
            | there's something there that just doesn't work for me
            | visually - I don't know if it's the uncanny valley effect
            | of the giant skinny blue people with giant eyes or what,
            | but I'd definitely rather watch something creative and
            | painterly like the Puss in Boots movie, or even something
            | like The Last of Us, with CG visuals and VFX that aren't
            | necessarily top of the line, but are well integrated and
            | support a good story.
        
             | satvikpendem wrote:
             | Did you watch in IMAX 3D? I watched in both 3D and 2D and
              | the 2D simply cannot compare to the 3D. The way most 3D
              | movies work, the 3D effects are added after the fact in
              | post-production. The 3D in the Avatar movies is done
              | entirely in the shooting phase, through 3D cameras.
              | Hence, the 3D in Avatar films is much more immersive to
              | me than in something like Dr. Strange 2, which simply
              | could not compare.
        
               | NateEag wrote:
               | I remember watching Avatar in 3D and being blown away
               | (both by the dreadful screenwriting and by the amazing 3D
               | effect).
               | 
                | It taught me that the key to immersive 3D is to treat
               | the screen as a window. Things have depth beyond it, but
               | never protrude out of it.
               | 
               | I noticed only two shots in the movie where anything
               | protruded from the screen towards me, and they really
               | caught my attention.
        
         | daniel-thompson wrote:
          | > Surely having exclusive access to a parallel renderer of
          | this quality is a competitive advantage over other studios?
         | 
          | The renderer is an important part of the VFX toolkit, but
          | there are more than a few production-quality renderers out
          | there, some of them even FOSS. A studio or film's competitive
          | advantage is more around storytelling and art design.
        
         | bhouston wrote:
          | At this point every studio has its own renderer: Pixar has
          | RenderMan, Illumination has one from MacGuff, Disney has
          | Hyperion, and Animal Logic has Glimpse.
        
           | packetslave wrote:
           | and there's still plenty of Arnold and Clarisse houses out
           | there.
        
       | [deleted]
        
       | homarp wrote:
       | previous discussion https://news.ycombinator.com/item?id=32357470
        
         | app4soft wrote:
          | That was the announcement of the upcoming source release; at
          | the time there was no public source repo.
        
       | mindcrime wrote:
        | Is anybody else intrigued by the mention of _multi-machine and
        | cloud rendering via the Arras distributed computation
        | framework_?
       | 
       | Is this something new? The code seems to be included as sub-
       | modules of OMR itself, and all the repos[1][2][3] show recent
       | "Initial Commit" messages, so I'm operating on the assumption
       | that it is. If so, I wonder if this is something that might prove
       | useful in other contexts...
       | 
       | [1]: https://github.com/dreamworksanimation/arras4_core
       | 
       | [2]: https://github.com/dreamworksanimation/arras4_node
       | 
       | [3]: https://github.com/dreamworksanimation/arras_render
        
       | ur-whale wrote:
        | Man, I can't wait for this to be properly (luxrender-level)
        | integrated into Blender.
        | 
        | Especially the shaders (materials), which I feel are currently
        | the weakest part of all the open source renderers Blender
        | supports natively (Eevee, Cycles, Lux).
        
       | app4soft wrote:
        | Has anybody had success building & running it under Linux?
        | (Debian 11.x/Ubuntu 22.04.x)
        
         | zokier wrote:
          | The docs are for CentOS, but from the looks of it I don't see
          | any major roadblocks for Debian-based distros?
         | 
         | https://docs.openmoonray.org/getting-started/installation/bu...
        
         | ur-whale wrote:
         | The docker-based build worked fine for me on Ubuntu 22.04.
         | 
          | I haven't yet tried to pull the resulting binaries out of the
          | docker container to see if they still work natively on 22.04.
        
         | alhirzel wrote:
         | It appears they currently only support a Docker container build
         | and a CentOS build. I am working on a build script (PKGBUILD)
         | for Arch Linux.
        
       | ar9av wrote:
        | Quality 3D animation software is available to anyone with
        | Blender. If someone gets this renderer working as an addon
        | (which will obviously happen), artists will get a side-by-side
        | comparison of what their work looks like with both Cycles and a
        | professional studio product, for free.
        | 
        | This is a win, win, win for Blender, OSS and the community.
        
         | greenknight wrote:
          | Not just that; I am assuming there are parts of that code
          | that will be analysed by the Cycles developers to improve
          | Cycles.
        
           | gabereiser wrote:
            | This. Pixar's RenderMan has been an "option" for a while,
            | though it was out of band. The Cycles team will look at the
            | theory behind what's going on in renderers like this and
            | make the tech work inside Cycles. Maybe someone will port
            | this as another render option, but really the sauce is the
            | lighting models and parallel vectorization, which could
            | improve Cycles' already abysmally slow render times.
           | 
           | Renderman for Blender:
           | 
            | https://rmanwiki.pixar.com/pages/viewpage.action?mobileBypas...
        
       | ykl wrote:
        | For anyone who is curious, this is the paper that describes the
       | core vectorized path tracing architecture in Moonray:
       | 
       | http://www.tabellion.org/et/paper17/MoonRay.pdf
       | 
        | Extracting sufficient coherency from path tracing in order to
        | get good SIMD utilization is a surprisingly difficult problem
        | into which much research effort has been poured, and MoonRay
        | has a really interesting solution!
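        | 
        | To make the problem concrete, here's a toy sketch (not
        | MoonRay's actual code; Hit, shadeCoherently and shadeBatch are
        | made-up names) of the remedy the paper builds on: collect hits
        | into queues and sort or bucket them by material, so each SIMD
        | batch shades one material coherently instead of divergent
        | lanes each wanting a different shader:
        | 
        |     #include <algorithm>
        |     #include <cstddef>
        |     #include <cstdint>
        |     #include <vector>
        |     
        |     // One ray-hit record; a real renderer carries far more state.
        |     struct Hit {
        |         uint32_t materialId;  // which shader this hit needs
        |         float    t;           // hit distance along the ray
        |     };
        |     
        |     // Incoherent hits arrive in arbitrary order. Sorting them by
        |     // material lets each SIMD batch run one shader over uniform
        |     // lanes.
        |     void shadeCoherently(std::vector<Hit>& hits) {
        |         std::sort(hits.begin(), hits.end(),
        |                   [](const Hit& a, const Hit& b) {
        |                       return a.materialId < b.materialId;
        |                   });
        |     
        |         const std::size_t simdWidth = 8;  // AVX2; AVX-512 would be 16
        |         for (std::size_t i = 0; i < hits.size(); i += simdWidth) {
        |             // All (or nearly all) lanes in this batch now share a
        |             // material, so the vectorized shader stays coherent.
        |             // shadeBatch(&hits[i], std::min(simdWidth, hits.size() - i));
        |         }
        |     }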
        
         | bri3d wrote:
         | This paper is the first place I've found a production use of
          | Knights Landing / Xeon Phi, the Intel massively-multicore
         | Atom-with-AVX512 accelerator system, outside of HPC / science
         | use cases.
         | 
         | And for this use case, it makes perfect sense!
        
           | ace2358 wrote:
            | Is that sort of code able to 'just run' on their Arc
            | series of GPUs?
            | 
            | It feels like Intel kinda hates AVX512 on the CPU side (or
            | wants to upsell you for it), so I'm wondering if that
            | hardware turned into their GPUs.
        
             | zokier wrote:
             | I don't think Arc and Xeon Phi have much commonality at all
        
           | aprdm wrote:
            | Isn't AVX512 used in VFX for rendering, as well as in
            | TensorFlow?
        
         | pengaru wrote:
          | > Extracting sufficient coherency from path tracing in order
          | to get good SIMD utilization is a surprisingly difficult
          | problem
         | 
         | Huh, I'd have assumed SIMD would just be exploited to improve
         | quality without a perf hit, by turning individual paths into
         | ever so slightly dispersed path-packets likely to still
         | intersect the same objects. More samples per path traced...
        
           | berkut wrote:
           | If you only ever-so-slightly perturb paths, you generally
            | don't get anywhere near as much of a benefit from Monte Carlo
           | integration, especially for things like light transport at a
           | global non-local scale (might plausibly be useful for
           | splitting for scattering bounces or something in some cases).
           | 
           | So it's often worth paying the penalty of having to sort
           | rays/hitpoints into batches to intersect/process them more
           | homogeneously, at least in terms of noise variance reduction
           | per progression.
           | 
           | But very much depends on overall architecture and what you're
           | trying to achieve (i.e. interactive rendering, or batch
           | rendering might also lead to different solutions, like time
           | to first useful pixel or time to final pixel).
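            | 
            | A quick numeric illustration of that variance point, with a
            | toy 1-D integral rather than anything renderer-specific: N
            | independent samples versus N tiny perturbations of one
            | sample. The perturbed estimator is barely better than a
            | single sample:
            | 
            |     #include <cmath>
            |     #include <cstdio>
            |     #include <random>
            |     
            |     // Estimate the integral of x^2 over [0,1] (exact: 1/3).
            |     int main() {
            |         std::mt19937 rng(42);
            |         std::uniform_real_distribution<double> uni(0.0, 1.0);
            |         auto f = [](double x) { return x * x; };
            |     
            |         const int N = 64, trials = 10000;
            |         double varIndep = 0.0, varJitter = 0.0;
            |         for (int t = 0; t < trials; ++t) {
            |             double indep = 0.0;  // N independent samples
            |             for (int i = 0; i < N; ++i) indep += f(uni(rng));
            |             indep /= N;
            |     
            |             double base = uni(rng), jitter = 0.0;
            |             for (int i = 0; i < N; ++i) {
            |                 // N samples that barely perturb one base sample
            |                 double x = base + 0.001 * (uni(rng) - 0.5);
            |                 jitter += f(std::fmin(std::fmax(x, 0.0), 1.0));
            |             }
            |             jitter /= N;
            |     
            |             varIndep  += (indep  - 1.0/3) * (indep  - 1.0/3);
            |             varJitter += (jitter - 1.0/3) * (jitter - 1.0/3);
            |         }
            |         // Expect roughly an N-fold gap between the two variances.
            |         std::printf("independent: %g\n", varIndep / trials);
            |         std::printf("perturbed:   %g\n", varJitter / trials);
            |     }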
        
             | xeromal wrote:
             | [flagged]
        
         | softfalcon wrote:
          | This is exactly why I jumped into the comments. I was hoping
          | someone had some relevant implementation details that aren't
          | just a massive GitHub repo (which is still awesome, but hard
          | to digest in one sitting).
         | 
         | Thank you!
        
         | bhouston wrote:
          | Are there any comparisons to GPU-accelerated rendering? It
          | seems most people are going that direction rather than trying
          | to optimize for CPUs these days, especially via AVX
          | instructions.
        
           | jsheard wrote:
           | CPUs are still king at the scale Dreamworks/Pixar/etc operate
           | at, GPUs are faster up to a point but they hit a wall in
           | extremely large and complex scenes. They just don't have
           | enough VRAM, or the work is too divergent and batches too
           | small to keep all the threads busy. In recent years the high-
           | end renderers (including MoonRay) _have_ started supporting
           | GPU rendering alongside their traditional CPU modes, but the
           | GPU mode is meant for smaller scale work like an artist
            | iterating on a single asset, and then for larger tasks and
            | final frame rendering it's still off to the CPU farm.
           | 
           | Pixar did a presentation on bringing GPU rendering to
           | Renderman, which goes over some of the challenges:
           | https://www.youtube.com/watch?v=tiWr5aqDeck
        
             | pixelpoet wrote:
              | What's your opinion on renderers such as Redshift, which
              | explicitly target production rendering and support out-
              | of-core rendering on GPUs? See e.g.
              | https://www.maxon.net/en/redshift/features?categories=631816
              | (Disclosure: I work on this.)
        
               | virtualritz wrote:
               | As someone said above: GPUs are fine & faster as long as
               | your scene stays simple. As soon as you hit a certain
                | scene complexity ceiling, they become much slower than
                | CPU renderers.
               | 
               | I would also argue that for this specific task, i.e.
               | offline rendering such frames, the engineering overhead
               | to make stuff work on GPUs is better spent making stuff
               | faster and scale more efficiently on CPUs.[1]
               | 
               | I worked in blockbuster VFX for 15 years. It's been a
                | while but I have a network of people in that industry, many
               | working on these renderers. The above is kinda the
               | consensus whenever I talk to them.
               | 
               | [1] With the aforementioned caveat: if the stuff you work
               | on is always under that complexity ceiling targeting GPUs
               | can certainly make sense.
        
               | bhouston wrote:
                | So we just need GPUs with 128GB of RAM then? Or move
                | towards the Apple M-series design, where CPU+GPU both
                | have insanely fast access to all RAM...
        
               | aseipp wrote:
               | GPUs need RAM that can handle a lot of bandwidth, so that
               | all of the execution units can remain constantly fed. For
               | bandwidth, there is both a width and a rate of transfer
               | (often bounded by the clock speed) which combined yield
               | the overall bandwidth, i.e. a 384-bit bus at XXXX million
               | transfers/second. It will never matter how much compute
               | or RAM they have if these don't align and you can't feed
               | the cores. Modern desktop DDR has bandwidth that is too
               | low for this, in general, on desktop platforms, given the
               | compute characteristics of a modern GPU, which has
               | shitloads of compute. Despite all that, signal integrity
               | on parallel RAM interfaces has very tight tolerances. DDR
               | sockets are very carefully placed with this in mind on
               | motherboards, for instance. GDDR, which most desktop
               | class graphics cards use instead of normal DDR, has much
               | higher bandwidth (e.g. GDDR6x offers 21gbps/pin while
               | DDR5 is only around 4.8gbp/s total) but even tighter
               | interface characteristics than that. That's one reason
               | why you can't socket GDDR: the physical tolerances
               | required for the interface are extremely tight and the
               | necessary signal integrity required means a socket is out
               | of the question.
               | 
                | Here is an example: compare the RAM interfaces of an
                | Nvidia A100 (HBM2) and an Nvidia 3080 (GDDR6X), and see
                | how this impacts performance. On compute-bound
                | workloads, an A100 will absolutely destroy a 3080 in
                | terms of overall efficiency. One reason is that the
                | A100's memory interface (5120 bits wide, across its
                | HBM2 stacks) is far wider than the 3080's 320-bit bus,
                | which is absolutely vital for lots of workloads: many
                | times more data can be fed into the execution units in
                | the same clock cycle. That means you can clock the
                | overall system lower, and that means you're using less
                | power, while achieving similar (or better) performance.
                | The only way the narrower bus can compare is by pushing
                | the clocks higher (thus increasing the rate of
                | transfers/second), but that causes more heat and power
                | usage, and it scales very poorly in practice, e.g. a
                | 10% clock speed increase might result in a measly 1-2%
                | improvement.
               | 
               | So now, a bunch of things fall out of these observations.
               | You can't have extremely high-bandwidth RAM, today,
               | without very tight interface characteristics. For
               | desktops and server-class systems, CPUs don't need
               | bandwidth like GPUs, so they can get away with sockets.
               | That has some knock on benefits; CPU memory can benefit
               | from economies-of-scale on selling RAM sticks, for
               | example. Lots of people need RAM sticks so you're in a
               | good spot to buy more. And because sockets exist "in
               | three dimensions", there's a huge increase in "density
               | per square-inch" on the motherboard. If you want a many-
               | core GPU to remain fed, you need soldered RAM which
               | necessitates a fixed SKU for deployment, or you need to
               | cut down on the compute so lower-bandwidth memory can
               | feed things appropriately, negating the reason you went
               | to GPUs in the first place (more parallel compute).
               | Soldered RAM also means that the compute/memory ratios
               | are now fixed forever. One nice thing about a CPU with
               | sockets is that you can more flexibly arbitrage resources
               | over time; if you find a way to speed something up with
               | more RAM, you can just add it assuming you aren't maxed
               | out.
               | 
                | Note that Apple Silicon is designed for lower power
                | profiles; it has good perf/watt, not necessarily the
                | best overall performance in every profile. It uses
                | 256- or 512-bit LPDDR5, and even goes as high as
                | 1024-bit(!!!) on the Ultra parts. But they can't just
                | ignore the laws of physics; at extremely high
                | bandwidths and bus widths you're going to be very
                | subject to signal interface requirements. You have
                | physical limitations that prevent the bountiful RAM
                | sticks that each have multiple, juicy Samsung DDR5
                | memory chips on them. The density suffers. So Apple is
                | limited to only so much RAM; there's very little way
                | around this unless they start stacking in 3 dimensions
                | or something. That's one of the reasons they have
                | likely stuck with soldered memory for so long now; it
                | simply makes extremely high performance interfaces
                | like this possible.
               | 
                | All in all, the economies of scale for RAM sticks
                | combined with their density mean that GPUs will
                | probably continue to be worse for workloads that
                | benefit from lots of memory. You just can't meet the
                | combined physical interface and bandwidth requirements
                | at the same density levels.
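                | 
                | To make the width x rate arithmetic concrete, a tiny
                | sketch (rounded public peak specs; peak, not
                | sustained):
                | 
                |     #include <cstdio>
                |     
                |     // Peak GB/s = bus width (bits) * per-pin rate (Gbit/s) / 8.
                |     double peakGBs(int busBits, double gbpsPerPin) {
                |         return busBits * gbpsPerPin / 8.0;
                |     }
                |     
                |     int main() {
                |         std::printf("RTX 3080, 320-bit GDDR6X @ 19 Gbps: %.0f GB/s\n",
                |                     peakGBs(320, 19.0));
                |         std::printf("A100, 5120-bit HBM2 @ ~2.4 Gbps:    %.0f GB/s\n",
                |                     peakGBs(5120, 2.43));
                |         std::printf("DDR5-4800, dual channel (128-bit):  %.1f GB/s\n",
                |                     peakGBs(128, 4.8));
                |     }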
        
               | jsheard wrote:
                | It's easier said than done; there's consistently a huge
                | gulf between CPU and GPU memory limits. Even run-of-
                | the-mill consumer desktops can run 128GB of RAM, which
                | exceeds even the highest-end professional GPU's VRAM,
               | the sky is the limit with workstation and server
               | platforms. AMD EPYC can support 2TB of memory!
        
               | berkut wrote:
                | Those are _generally_ being used on much smaller
                | productions, or at least "simpler" fidelity things
                | (i.e. non-photoreal CG animation like Blizzard's
                | Overwatch).
                | 
                | So for Pixar/DreamWorks-style things (look great, but
                | not photo-real) they're usable and provide a definite
                | benefit in terms of iteration time for lookdev artists
                | and lighters, but it's not there yet in terms of high-
                | end rendering at scale.
        
             | aprdm wrote:
              | I think it's mostly a question of price currently. AMD
              | CPUs are much cheaper per pixel produced than GPUs.
        
             | polishdude20 wrote:
              | So the idea is one CPU can have hundreds of gigabytes of
              | RAM at a time, and the speed of the CPU is no problem
              | because you can scale the process over as many CPUs as
              | you want?
        
       | ShadowBanThis01 wrote:
        | It took a bit of hunting to find out what an "MCRT" renderer
        | is: a Monte Carlo ray tracer.
        
       | abdellah123 wrote:
        | I'm a software engineer with no animation experience. Can
        | someone explain what this tool is (and is not) and how it fits
        | into an animation project?
        | 
        | E.g.: can I make an animated movie using only MoonRay? What
        | other tools are needed? And what knowledge do I (we) need to
        | do that?
        
         | antegamisou wrote:
          | It's by and large mathematical software, like all renderers.
          | So it isn't interactive in the way of software that lets you
          | move a character model and sequence frames to make an
          | animation. It's a kind of 'kernel', in some sense, for
          | animation and 3D modelling software.
          | 
          | The source files contain the algorithms/computations needed
          | to solve the various equations that people in computer
          | graphics research have come up with to simulate physical and
          | optical phenomena (lighting, shadows, water reflections,
          | smoke, waves), as efficiently (fast) and usually as
          | photorealistically as possible, for a single image (static
          | scene) already created (character/landscape models, textures)
          | in a program.
          | 
          | Since there are various techniques for simulating any one
          | specific phenomenon, it's interesting to peek into the tricks
          | used by a very large animation studio.
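          | 
          | The central equation these renderers approximate is Kajiya's
          | rendering equation; the "MC" in MCRT is the Monte Carlo
          | estimator used to evaluate its integral. In standard notation
          | (nothing MoonRay-specific):
          | 
          |     L_o(x,\omega_o) = L_e(x,\omega_o)
          |       + \int_{\Omega} f_r(x,\omega_i,\omega_o)\,
          |           L_i(x,\omega_i)\, (\omega_i \cdot n)\, d\omega_i
          |     \approx L_e(x,\omega_o) + \frac{1}{N} \sum_{k=1}^{N}
          |       \frac{f_r(x,\omega_k,\omega_o)\, L_i(x,\omega_k)\,
          |             (\omega_k \cdot n)}{p(\omega_k)}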
        
         | hirako2000 wrote:
          | I have no experience with MoonRay, but it being a renderer,
          | the answer would be... no.
          | 
          | The renderer is only one piece of the entire animated movie
          | production pipeline.
          | 
          | Modeling -> Texturing ~ rigging / Animation -> post-
          | processing effects -> rendering -> video editing
          | 
          | That's a simplified view of the visual part of producing a
          | short or long CGI film.
          | 
          | It is a lot of knowledge to acquire, so a production team is
          | likely made of specialists and sub-specialists (lighting?)
          | working to a degree together.
          | 
          | The best-achieving software, especially given its
          | affordability, is likely Blender. Other tools like Cinema 4D,
          | Maya and of course 3ds Max are also pretty good all-in-one
          | products that cover the whole pipeline, although pricey.
          | 
          | Start with modeling, then texturing, then animation, etc.
          | Then dive into the slice that attracts you the most.
          | Realistically you aren't going to ship a professional-grade
          | film, so you may as well just learn what you love, and who
          | knows, perhaps one day become a professional and appear in
          | the long credit list at the end of a Disney/Pixar/DreamWorks
          | hit.
        
           | virtualritz wrote:
           | > Modeling -> Texturing ~ rigging /Animation -> post
           | processing effects -> rendering - > video editing
           | 
           | In animation (and VFX), editing comes at the beginning.
            | Throwing away frames (and all the work done to create them)
            | is simply too expensive. Handles (the extra frames at the
            | beginning and end of a shot) are usually very small. I'd
            | say <5 frames.
           | 
           | Also modeling & texturing and animation usually happen in
           | parallel. Later, animation and lighting & rendering usually
           | happen in parallel as well.
        
             | m1nz1 wrote:
             | Are there books that teach people about the sorts of
             | systems used to make animated movies? I've seen game engine
              | books and the like. _Physically Based Rendering_ is on my
              | list, but I wonder if there are other interesting reads
              | I'm missing.
        
               | reactivenz wrote:
                | Yes, classic books from the '90s like "The RenderMan
                | Companion" or "Advanced RenderMan", and then there are
                | tooling books for each tool. I used to own many Maya
                | and 3ds Max books.
        
         | thomastjeffery wrote:
         | In the most casual sense, a renderer is what "takes a picture"
         | of the scene.
         | 
         | A scene is made of objects, light sources, and a camera. The
         | renderer calculates the reflection of light on the objects'
         | surfaces from the perspective of the camera, so that it can
         | decide what color each pixel is in the resulting image.
         | 
         | Objects are made up of a few different data structures: one for
         | physical shape (usually a "mesh" of triangles); one for
         | "texture" (color mapped across the surface); and one for
         | "material" (alters the interaction of light, like adding
         | reflections or transparency).
         | 
         | People don't write the scene data by hand: they use tools to
         | construct each object, often multiple tools for each data
         | structure. Some tools focus on one feature: like ZBrush for
         | "sculpting" a mesh object shape. Other tools can handle every
         | step in the pipeline. For example, Blender can do modeling,
         | rigging, animation, texturing and material definition,
         | rendering, post-processing, and even video editing; and that's
         | leaving out probably 95% of its entire feature set.
         | 
         | If you are interested at all in exploring 3D animation, I
          | recommend downloading Blender. It's free software licensed
          | under the GPL, and runs well on every major platform. It's
         | incredibly full-featured, and the UI is excellent. Blender is
         | competitive with nearly every 3D digital art tool in existence;
         | particularly for animation and rendering.
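          | 
          | If seeing that in code helps, here is a deliberately tiny
          | sketch (a toy, nothing to do with MoonRay itself): one
          | sphere, a pinhole camera at the origin, and a per-pixel
          | visibility test printed as ASCII:
          | 
          |     #include <cmath>
          |     #include <cstdio>
          |     
          |     struct Vec { double x, y, z; };
          |     Vec sub(Vec a, Vec b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
          |     double dot(Vec a, Vec b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
          |     
          |     struct Sphere { Vec center; double radius; };
          |     
          |     // Does the ray from `origin` along unit `dir` hit the sphere?
          |     bool hits(const Sphere& s, Vec origin, Vec dir) {
          |         Vec oc = sub(origin, s.center);
          |         double b = dot(oc, dir);
          |         double c = dot(oc, oc) - s.radius * s.radius;
          |         return b*b - c >= 0.0 && -b + std::sqrt(b*b - c) > 0.0;
          |     }
          |     
          |     int main() {
          |         Sphere ball{{0, 0, -3}, 1.0};  // the whole "scene"
          |         const int W = 40, H = 20;
          |         for (int j = 0; j < H; ++j) {
          |             for (int i = 0; i < W; ++i) {
          |                 // One ray per pixel, camera looking down -z.
          |                 double u = (i + 0.5) / W * 2 - 1;
          |                 double v = 1 - (j + 0.5) / H * 2;
          |                 double len = std::sqrt(u*u + v*v + 1);
          |                 Vec dir{u/len, v/len, -1/len};
          |                 std::putchar(hits(ball, {0,0,0}, dir) ? '#' : '.');
          |             }
          |             std::putchar('\n');
          |         }
          |     }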
        
         | virtualritz wrote:
          | It's offline 3D rendering software that turns a scene
          | description into a photorealistic image. Usually such a
          | description is for a single frame of animation.
          | 
          | Offline being the opposite of realtime, i.e. a frame possibly
          | taking hours to render, whereas in a realtime renderer it
          | must take fractions of a second.
         | 
         | Maybe think of it like a physical camera in a movie. And a very
         | professional one for that. But then a camera doesn't get you
         | very far if you consider the list of people you see when
         | credits roll by. :]
         | 
         | Similarly, at the very least, you need something to feed the
         | renderer a 3D scene, frame by frame. Usually this is a DCC app
         | like Maya, Houdini etc. or something created in-house. That's
         | where you do your animation. After you created the stuff you
         | want to animate and the sets where that lives ... etc., etc.
         | 
          | MoonRay has a Hydra USD delegate. That is an API for sending
          | such 3D scenes to a renderer. There is one for Blender
          | too[1]. That would be one way to get data in there, I'd
          | reckon.
         | 
         | Hope that makes sense.
         | 
          | [1] https://github.com/GPUOpen-LibrariesAndSDKs/BlenderUSDHydraA...
        
       | ChrisMarshallNY wrote:
       | Lots of submodules. We were just talking about these, the other
       | day...
        
       | bhouston wrote:
       | Looks nice!
       | 
       | It has a Hydra render delegate so that is nice. Does Blender
       | support being a Hydra client yet? It would be nice to have it
       | supported natively in Blender itself. If it did, one could easily
       | switch renderers between this and others.
       | 
       | I understand Autodesk is going this way with its tooling.
        
         | semi-extrinsic wrote:
          | I had the same question. There is a USD addon for Blender
          | that supports Hydra, so you could probably get that to work
          | with a bit of trial and error!
         | 
         | https://github.com/GPUOpen-LibrariesAndSDKs/BlenderUSDHydraA...
        
         | dagmx wrote:
         | AMD have an addon for Blender that supports Hydra, though it is
         | a bit slow. There's ongoing work to add it to Blender itself
         | natively
         | 
         | https://projects.blender.org/BogdanNagirniak/blender/src/bra...
        
         | capableweb wrote:
         | > It would be nice to have it supported natively in Blender
         | itself. If it did, one could easily switch renderers between
         | this and others.
         | 
          | Blender in general is set up to work with different
          | renderers, especially since the work on Eevee, which is the
          | latest renderer to be added. Some of the work on integrating
          | Eevee also laid groundwork for making it easier to add more
          | renderers in the future.
          | 
          | Most probably this renderer would be added as an addon (if
          | someone in the community does it), rather than in the core of
          | Blender.
        
       | antegamisou wrote:
        | Wonder how it fares against the monopoly of Pixar's RenderMan.
        
         | ralusek wrote:
         | I don't think there is a monopoly renderer.
        
           | jsheard wrote:
            | There isn't even a monopoly within Disney; they acquired
            | Pixar 17 years ago, but Disney Animation Studios still
            | develops its own Hyperion renderer completely independently
            | of Pixar/RenderMan.
        
             | sosodev wrote:
              | Is that just because it's hard to migrate existing
              | workflows, or is there something that makes Hyperion
              | superior?
        
               | jsheard wrote:
               | Hyperion has the advantage of being exclusive to WDAS, so
               | they can tailor it to their exact workflow and
               | requirements, while Renderman is a commercial product
               | with many users outside of Pixar all with their own wants
               | and needs.
        
       | pipeline_peak wrote:
       | o boi, shrek engine
        
         | dagmx wrote:
          | Memes aside, this has actually never been used for a Shrek
          | film. The first film it was used for was How to Train Your
          | Dragon: The Hidden World.
        
           | aidenn0 wrote:
           | If it was used in any of the _Puss in Boots_ movies, then it
           | was at least used in a Shrek spinoff...
        
             | jsheard wrote:
             | _Puss in Boots: The Last Wish_ was rendered with this
        
         | isatty wrote:
         | Perfection
        
       | daniel-thompson wrote:
       | Surprised nobody has mentioned this, but it looks like it
       | implements the render kernels in ISPC^, which is a tool that
       | exposes a CUDA-like SPMD model that runs over the vector lanes in
       | the CPU.
       | 
       | ISPC is also used by Disney's Hyperion renderer, see
       | https://www.researchgate.net/publication/326662420_The_Desig...
       | 
       | Neat!
       | 
       | ^ https://ispc.github.io/
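        | 
        | For a flavor of the SPMD model: an ISPC kernel looks like
        | scalar per-element C, and the compiler maps each iteration
        | onto a SIMD lane (ISPC adds uniform/varying/foreach keywords
        | for this). A rough plain-C++ analogue of that kernel shape,
        | just as illustration, not actual ISPC:
        | 
        |     #include <cstddef>
        |     
        |     // Scalar-looking per-element code. Under SPMD, each iteration
        |     // becomes one SIMD lane: 8 at a time on AVX2, 16 on AVX-512.
        |     void scaleAndOffset(const float* in, float* out, std::size_t n,
        |                         float scale, float offset) {
        |         for (std::size_t i = 0; i < n; ++i) {
        |             out[i] = in[i] * scale + offset;  // one program instance
        |         }
        |     }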
        
         | xmcqdpt2 wrote:
         | Neat!
         | 
          | Vectorization is the best part of writing Fortran. This looks
          | like it makes it possible to write Fortran-like code in C. I
          | wonder how it compares to ifort / OpenMP?
        
       | chungy wrote:
       | The README is a bit opaque when I don't know the terminology, but
       | the website seems to clue me in that it's a raytracer:
       | https://openmoonray.org/
        
         | toxik wrote:
         | MCRT = Monte Carlo Ray Tracing
        
       ___________________________________________________________________
       (page generated 2023-03-15 23:00 UTC)