[HN Gopher] Pixar's Render Farm
       ___________________________________________________________________
        
       Pixar's Render Farm
        
       Author : brundolf
       Score  : 184 points
       Date   : 2021-01-02 19:56 UTC (3 hours ago)
        
 (HTM) web link (twitter.com)
 (TXT) w3m dump (twitter.com)
        
       | blaisio wrote:
       | In case someone is curious, the optimization they describe is
       | trivial to do in Kubernetes - just enter a resource request for
       | cpu without adding a limit.
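        | 
        | A minimal sketch of what I mean (hypothetical names and values),
        | written as the Python-dict equivalent of the pod YAML:
        | 
        |     pod_spec = {
        |         "containers": [{
        |             "name": "render-job",             # hypothetical name
        |             "image": "example/renderer:1.0",  # hypothetical image
        |             "resources": {
        |                 # guaranteed baseline: scheduler reserves 4 CPUs
        |                 "requests": {"cpu": "4", "memory": "16Gi"},
        |                 # no "cpu" under limits, so the container may
        |                 # burst into any idle CPU on the node
        |                 "limits": {"memory": "16Gi"},
        |             },
        |         }]
        |     }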
        
         | aprdm wrote:
         | It is far from trivial actually. Kubernetes and Render Farms
         | are trying to optimize for very different goals.
        
       | jason-festa wrote:
       | OH I VISITED THIS BACK DURING CARS 2 -- IT WAS OLD AS FUCK
        
       | shadowofneptune wrote:
       | It's good to know they care about optimization. I had the
       | assumption that all CGI is a rather wasteful practice where you
       | just throw more hardware at the problem.
        
         | dagmx wrote:
          | CGI is heavily about optimization. I recommend checking out
          | SIGGRAPH ACM papers, and there's a great collection by them on
          | production renderers.
          | 
          | Every second spent rendering or processing is time an artist
          | is not working on a shot. Any savings from optimization add up
          | to incredible cost savings.
        
         | ChuckNorris89 wrote:
          | Most of these studios are tech-first, since they wouldn't have
          | gotten where they are now without prioritizing tech.
        
         | mathattack wrote:
          | In the end it's still more profitable to hire performance
          | engineers than to buy more hardware. For the last decade I've
          | heard the "toss more HW at it" argument. It hasn't held up,
          | because the amount of compute and storage needed goes up too.
        
           | theptip wrote:
           | In reality it's never as simple as a single soundbite. If you
           | are a startup with $1k/mo AWS bills, throwing more hardware
           | at the problem can be orders of magnitude cheaper. If you are
           | running resource-intensive workloads then at some point
           | efficiency work becomes ROI-positive.
           | 
           | The reason the rule of thumb is to throw more hardware at the
           | problem is that most (good) engineers bias towards wanting to
           | make things performant, in my experience often beyond the
           | point where it's ROI positive. But of course you should not
           | take that rule of thumb as a universal law, rather it's
           | another reminder of a cognitive bias to keep an eye on.
        
           | Retric wrote:
            | That's very much an exaggeration. Pixar/Google etc. can't run
            | on a single desktop CPU and spend a lot of money on hardware.
            | The best estimate I have seen is that it's scale dependent. At
            | small budgets you're generally spending most of that on
            | people, but as the budget increases the ratio tends to shift
            | to ever more hardware.
        
             | cortesoft wrote:
              | It is absolutely about scale... an employee costs $x
              | regardless of how many servers they are managing, and might
              | improve performance by y%... that only becomes worth it if
              | y% of your hardware costs is greater than the $x for the
              | employee.
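              | 
              | Back-of-the-envelope version of that break-even, with all
              | numbers made up:
              | 
              |     # does a performance engineer pay for themselves?
              |     employee_cost = 250_000     # $x/year, made up
              |     hardware_spend = 5_000_000  # annual hardware bill, made up
              |     speedup_pct = 10            # y%, made up
              | 
              |     savings = hardware_spend * speedup_pct / 100
              |     print("worth it" if savings > employee_cost
              |           else "not worth it")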
        
               | Retric wrote:
                | The issue is that extra employees run out of low-hanging
                | fruit to optimize, so that y% isn't a constant. Extra
                | hardware benefits from all the existing optimized code
                | written by your team, whereas extra manpower has to
                | improve code your team has already optimized.
        
         | unsigner wrote:
          | No, artists can throw more problems at the hardware faster than
          | you can throw hardware at the problem. There are enough quality
          | sliders in the problem to make each render infinitely expensive
          | if you feel like it.
        
       | klodolph wrote:
       | My understanding (I am not an authority) is that for a long time,
        | it has taken Pixar _roughly_ the same amount of time to render
        | one frame of film. Something on the order of 24 hours. I don't
        | know what the real units are, though (core-hours? machine-hours?
       | simple wall clock?)
       | 
       | I am not surprised that they "make the film fit the box", because
       | managing compute expenditures is such a big deal!
       | 
       | (Edit: When I say "simple wall clock", I'm talking about the
       | elapsed time from start to finish for rendering one frame,
       | disregarding how many other frames might be rendering at the same
       | time. Throughput != 1/latency, and all that.)
        
         | joshspankit wrote:
          | Maybe it's society or maybe it's intrinsic human nature, but
          | there seems to be an overriding rule of "only use resources to
          | make it faster up to a point; otherwise, just make it better
          | [more impressive?]".
          | 
          | Video games, desktop apps, web apps, etc. And now it's confirmed
          | that it happens with movies at Pixar.
        
         | brundolf wrote:
         | Well it can't just be one frame total every 24 hours, because
         | an hour-long film would take 200+ years to render ;)
        
           | [deleted]
        
           | chrisseaton wrote:
           | I'm going to guess they have more than one computer rendering
           | frames at the same time.
        
             | brundolf wrote:
             | Yeah, I was just (semi-facetiously) pointing out the
             | obvious that it can't be simple wall-clock time
        
               | chrisseaton wrote:
               | Why can't it be simple wall-clock time? Each frame takes
               | 24 hours of real wall-clock time to render start to
               | finish. But they render multiple frames at the same time.
               | Doing so does not change the wall-clock time of each
               | frame.
        
               | brundolf wrote:
               | In my (hobbyist) experience, path-tracing and rendering
               | in general are enormously parallelizable. So if you can
               | render X frames in parallel such that they all finish in
               | 24 hours, that's roughly equivalent to saying you can
               | render one of those frames in 24h/X.
               | 
               | Of course I'm sure things like I/O and art-team-workflow
               | hugely complicate the story at this scale, but I still
               | doubt there's a meaningful concept of "wall-clock time
               | for one frame" that doesn't change with the number of
               | available cores.
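                | 
                | To make the throughput-vs-latency distinction concrete
                | (made-up numbers):
                | 
                |     frame_latency_h = 24   # wall clock for one frame
                |     machines = 1000        # frames in flight at once
                | 
                |     throughput = machines / frame_latency_h  # frames/hour
                |     print(f"{frame_latency_h} h per frame, but "
                |           f"{throughput:.0f} frames finished per hour")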
        
               | chrisseaton wrote:
                | _Wall-clock_ usually refers to the time actually taken, in
                | practice, with the particular configuration they use, not
                | the time that could be taken if they used a configuration
                | that minimises start-to-finish time.
        
               | dodobirdlord wrote:
                | Ray tracing is embarrassingly parallel, but it requires
                | having most if not all of the scene in memory. If you
                | have X,000 machines and X,000 frames to render in a day,
                | it almost certainly makes sense to pin each render to a
                | single machine to avoid moving a ton of data around the
                | network and in and out of memory on a bunch of machines.
                | In which case the actual wall-clock time to render a
                | frame on a single machine that is devoted to the render
                | becomes the number to care about and to talk about.
        
               | chrisseaton wrote:
                | Exactly - move the compute to the data, not the data to
                | the compute.
        
               | klodolph wrote:
               | I suspect hobbyist experience isn't relevant here. My
               | experience running workloads at large scale (similar to
               | Pixar's scale) is that as you increase scale, thinking of
               | it as "enormously parallelizable" starts to fall apart.
        
               | masklinn wrote:
               | It could still be wallclock per-frame, but you can render
               | each frame independently.
        
           | riffic wrote:
           | you solve that problem with massively parallel batch
           | processing. Look at schedulers like Platform LSF or HTCondor.
        
             | sandermvanvliet wrote:
             | Haven't heard those two in a while, played around with
             | those while I was in uni 15 years ago :-O
        
           | onelastjob wrote:
           | True. With render farms, when they say X minutes or hours per
           | frame, they mean the time it takes 1 render node to render 1
           | frame. Of course, they will have lots of render nodes working
           | on a shot at once.
        
             | [deleted]
        
           | welearnednothng wrote:
           | They almost certainly render two frames at a time. Thus
           | bringing the render time down to only 100+ years per film.
        
         | ChuckNorris89 wrote:
         | Wait, what? 24 hours per frame?!
         | 
          | At the standard 24 fps that's 24 days per second of film, which
          | works out to 473 years for the average 2-hour film. That can't
          | be right.
        
           | dagmx wrote:
           | It's definitely not 24 hours per frame outside of gargantuan
           | shots, at least by wall time. If you're going by core time,
           | then it assumes you're serial which is never the case.
           | 
           | That also doesn't include rendering multiple shots at once.
           | It's all about parallelism.
           | 
            | Finally, those frame counts for a film only assume the final
            | render. There's a whole slew of work-in-progress renders too,
            | so a given shot may be rendered 10-20 times. Often they'll
            | render every other frame to spot check, and render at lower
            | resolutions to get it back quickly.
        
           | dralley wrote:
           | 24 hours scaled to a normal computer, not 24 hours for the
           | entire farm per frame.
        
           | klodolph wrote:
           | Again, I'm not sure whether this is core-hours, machine-
           | hours, or wall clock. And to be clear, when I say "wall
           | clock", what I'm talking about is latency between when
           | someone clicks "render" and when they see the final result.
           | 
           | My experience running massive pipelines is that there's a
           | limited amount of parallelization you can do. It's not like
           | you can just slice the frame into rectangles and farm them
           | out.
        
             | capableweb wrote:
             | > It's not like you can just slice the frame into
             | rectangles and farm them out.
             | 
              | Funny thing, you sure can! Distributed rendering of single
              | frames has been a thing for a long time already.
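              | 
              | The classic form is bucket/tile rendering: split the frame
              | into rectangles and hand them to workers. A toy sketch, not
              | any particular renderer's API:
              | 
              |     from concurrent.futures import ProcessPoolExecutor
              |     from itertools import product
              | 
              |     WIDTH, HEIGHT, TILE = 1920, 1080, 64  # made-up sizes
              | 
              |     def render_tile(origin):
              |         x0, y0 = origin
              |         w = min(TILE, WIDTH - x0)
              |         h = min(TILE, HEIGHT - y0)
              |         # stand-in for real shading: a flat gray tile
              |         return [[0.5] * w for _ in range(h)]
              | 
              |     if __name__ == "__main__":
              |         tiles = list(product(range(0, WIDTH, TILE),
              |                              range(0, HEIGHT, TILE)))
              |         # farm the tiles out to worker processes
              |         with ProcessPoolExecutor() as pool:
              |             done = list(pool.map(render_tile, tiles))
              |         print(f"rendered {len(done)} tiles")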
        
           | berkut wrote:
           | In high-end VFX, 12-36 hours (wall clock) per frame is a
           | roughly accurate time frame for a final 2k frame at final
           | quality.
           | 
            | 36 is at the high end of things, and the histogram is skewed
            | more towards the lower end than towards > 30 hours, but it's
            | still relatively common.
           | 
           | Frames can be parallelised, so multiple frames in a
           | shot/sequence are rendered at once, on different machines.
        
           | noncoml wrote:
           | Maybe they mean 24 hours per frame _per core_
        
           | mattnewton wrote:
            | Not saying it's true, but I assume this is all parallelizable,
            | so 24 cores would complete that 1 second in 1 day, and
            | 3600*24 cores would complete the first hour of the film in a
            | day, etc. And each frame might have parallelizable processes
            | to get them under 1 day of wall time, but still cost 1 "day"
            | of core-hours.
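            | 
            | The arithmetic, spelled out (assuming the 24 core-hours per
            | frame figure from upthread):
            | 
            |     core_hours_per_frame = 24
            |     fps = 24
            | 
            |     def cores_needed(seconds_of_film, wall_hours=24):
            |         frames = seconds_of_film * fps
            |         return frames * core_hours_per_frame / wall_hours
            | 
            |     print(cores_needed(1))     # 1 s of film per day -> 24.0
            |     print(cores_needed(3600))  # 1 h of film per day -> 86400.0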
        
         | CyberDildonics wrote:
          | Not every place talks about frame rendering times the same way.
          | Some talk about the time it takes to render one frame of every
          | pass sequentially; some talk more about the time of the hero
          | render or the longest dependency chain, since that is the
          | latency to turn around a single frame. Core-hours are usually
          | tracked separately, because most of the time you want to know if
          | something will be done overnight or if broken frames can be
          | rendered during the day.
         | 
         | 24 hours of wall clock time is excessive and the reality is
         | that anything over 2 hours starts to get painful. If you can't
         | render reliably over night, your iterations slow down to
         | molasses and the more iterations you can do the better
         | something will look. These times are usually inflated in
         | articles. I would never accept 24 hours to turn around a
         | typical frame as being necessary. If I saw people working with
         | that, my top priority would be to figure out what is going on,
         | because with zero doubt there would be a huge amount of
         | nonsense under the hood.
        
       | tikej wrote:
        | It is always a pleasure to watch/read about something that works
        | very well in its domain. Nice that they put so much heart into
        | optimising the rendering process.
        
         | cpuguy83 wrote:
          | Indeed. I read this and instantly wanted to spend like 6 months
          | learning the system, the decisions/reasons behind whatever
          | trade-offs they make, etc.
          | 
          | I think a key is keeping the time to render a frame constant.
        
       | [deleted]
        
       | supernova87a wrote:
        | I'm curious about a few questions, for example:
        | 
        | If there's a generally static scene with just characters walking
        | through it, does the render take advantage of rendering the
        | static parts for the whole scene _once_, and then overlay and
        | recompute the small differences caused by the moving things in
        | each individual sub-frame?
        | 
        | Or, alternatively, what "class" of optimizations does something
        | like that fall into?
       | 
       | Is rendering of video games more similar to rendering for movies,
       | or for VFX?
       | 
        | What are some of the physics "cheats" that look good enough but
        | massively reduce compute intensity?
       | 
       | What are some interesting scaling laws about compute intensity /
       | time versus parameters that the film director may have to choose
       | between? "Director X, you can have <x> but that means to fit in
       | the budget, we can't do <y>"
       | 
       | Can anyone point to a nice introduction to some of the basic
       | compute-relevant techniques that rendering uses? Thanks!
        
         | raverbashing wrote:
         | > If there's a generally static scene with just characters
         | walking through it, does the render take advantage of rendering
         | the static parts for the whole scene once
         | 
         | From the detail of rendering they do, I'd say there's no such
         | thing.
         | 
         | As in: characters walking will have radiosity and shadows and
         | reflections so there's no such thing as "the background is the
         | same, only the characters are moving" because it isn't.
        
         | lattalayta wrote:
         | Here's a kind of silly but accurate view of path tracing for
         | animated features https://www.youtube.com/watch?v=frLwRLS_ZR0
         | 
         | Typically, most pathtracers use a technique called Monte Carlo
         | Estimation, which means that they continuously loop over every
         | pixel in an image, and average together the incoming light from
         | randomly traced light paths. To calculate motion blur, they
         | typically sample the scene at least twice (once at camera
         | shutter open, and again at shutter close). Adaptive sampling
         | rendering techniques will typically converge faster when there
         | is less motion blur.
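          | 
          | A stripped-down sketch of that loop (toy "scene", nothing like
          | a production integrator):
          | 
          |     import random
          | 
          |     def radiance(u, v, t):
          |         # stand-in for tracing a light path at time t
          |         return 0.5 + 0.5 * random.random()
          | 
          |     def render_pixel(x, y, samples=64):
          |         total = 0.0
          |         for _ in range(samples):
          |             t = random.random()      # jitter across the shutter
          |             u = x + random.random()  # jitter inside the pixel
          |             v = y + random.random()
          |             total += radiance(u, v, t)
          |         return total / samples       # average the random paths
          | 
          |     print(render_pixel(10, 20))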
         | 
          | One of the biggest time-saving techniques lately is machine-
          | learning-powered image denoising [1]. This allows the renderer
          | to compute significantly fewer samples, but then have a 2D
          | post-process run over the image and guess what the image might
          | look like if it had been rendered with more samples.
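          | 
          | The workflow is roughly: render at a fraction of the usual
          | sample count, then run a 2D pass over the noisy image. As a toy
          | stand-in (a real denoiser is a trained network, not a box
          | filter):
          | 
          |     def box_denoise(img, r=1):
          |         h, w = len(img), len(img[0])
          |         def clamp(v, hi):
          |             return min(max(v, 0), hi - 1)
          |         out = [[0.0] * w for _ in range(h)]
          |         for y in range(h):
          |             for x in range(w):
          |                 # average a small window, clamped at the edges
          |                 win = [img[clamp(y + dy, h)][clamp(x + dx, w)]
          |                        for dy in range(-r, r + 1)
          |                        for dx in range(-r, r + 1)]
          |                 out[y][x] = sum(win) / len(win)
          |         return out
          | 
          |     noisy = [[0.5 + ((x * y) % 7 - 3) * 0.05 for x in range(8)]
          |              for y in range(8)]
          |     print(box_denoise(noisy)[4][4])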
         | 
          | Animated movies and VFX render each frame in minutes or hours,
          | while games need to render in milliseconds. Many of the
          | techniques used in game rendering are approximations of
          | physically based light transport that look "good enough". But
          | modern animated films and VFX are much closer to simulating
          | reality, with true bounced lighting and reflections.
         | 
         | [1] https://developer.nvidia.com/optix-denoiser
        
         | tayistay wrote:
         | Illumination is global so each frame needs to be rendered
         | separately AFAIK.
        
         | dagmx wrote:
         | If you're interested in production rendering for films, there's
         | a great deep dive into all the major studio renderers
         | https://dl.acm.org/toc/tog/2018/37/3
         | 
         | As for your questions:
         | 
         | > Is rendering of video games more similar to rendering for
         | movies, or VFX?
         | 
         | This question is possibly based on an incorrect assumption that
         | feature (animated) films are rendered differently than VFX.
         | They're identical in terms of most tech stacks including
         | rendering and the process is largely similar overall.
         | 
          | Games aren't really similar to either, since they're raster
          | based rather than pathtraced. The new RTX setups are bringing
          | those worlds closer. However, older rendering setups like REYES,
          | which Pixar used up until Finding Dory, are more similar to
          | games' raster pipelines, though that's trivializing the
          | differences.
         | 
          | A good intro to rendering is reading Ray Tracing in One Weekend
          | (https://raytracing.github.io/books/RayTracingInOneWeekend.ht...),
          | and Matt Pharr's PBRT book (http://www.pbr-book.org/)
        
           | supernova87a wrote:
           | Thanks!
           | 
           | (I was also reading the OP which says "...Our world works
           | quite a bit differently than VFX in two ways..." hence my
           | curiosity)
        
             | lattalayta wrote:
             | One way that animated feature films are different than VFX
             | is schedules. Typically, an animated feature from Disney or
             | Pixar will take 4-5 years from start to finish, and
             | everything you see in the movie will need to be created and
             | rendered from scratch.
             | 
             | VFX schedules are usually significantly more compressed,
              | typically 6-12 months, so oftentimes it is cheaper and
             | faster to throw more compute power at a problem rather than
             | paying a group of highly knowledgeable rendering engineers
             | and technical artists to optimize it (although, VFX houses
             | will still employ rendering engineers and technical artists
             | that know about optimization). Pixar has a dedicated group
             | of people called Lightspeed technical artists whose sole
             | job is to optimize scenes so that they can be rendered and
             | re-rendered faster.
             | 
             | Historically, Pixar is also notorious for not doing a lot
             | of "post-work" to their rendered images (although they are
             | slowly starting to embrace it on their most recent films).
             | In other words, what you see on film is very close to what
             | was produced by the renderer. In VFX, to save time, you
             | often render different layers of the image separately and
             | then composite them later in a software package like Nuke.
             | Doing compositing later allows you to fix mistakes, or make
             | adjustments in a faster way than completely re-rendering
             | the entire frame.
        
             | dagmx wrote:
             | I suspect they mean more in approaches to renderfarm
             | utilization and core stealing.
             | 
             | A lot of VFX studios use off the shelf farm management
             | solutions that package up a job as a whole to a node.
             | 
              | I don't believe core stealing like they describe is unique
              | to Pixar, but it's also not common outside Pixar, which is
              | what they allude to afaik. It's less an animation-vs-VFX
              | comparison than a studio-vs-studio infrastructure
              | comparison.
        
           | dodobirdlord wrote:
           | > This question is possibly based on an incorrect assumption
           | that feature (animated) films are rendered differently than
           | VFX. They're identical in terms of most tech stacks including
           | rendering and the process is largely similar overall.
           | 
           | Showcased by the yearly highlights reel that the Renderman
           | team puts out.
           | 
           | https://vimeo.com/388365999
        
       | 2OEH8eoCRo0 wrote:
       | From what I understand they still seem to render at 1080p and
       | then upsample to 4k. Judging by Soul.
        
         | dodobirdlord wrote:
         | That seems extremely unlikely. The Renderman software they use
         | has no issues rendering at 4k.
        
           | 2OEH8eoCRo0 wrote:
           | The 4k streaming copy of the film has stairsteps though. Like
           | it's been upsampled. I'm sure their software can render at 4k
           | but they choose not to for whatever reason.
        
       | banana_giraffe wrote:
       | One of the things they mentioned briefly in a little documentary
       | on the making of Soul is that all of the animators work on fairly
       | dumb terminals connected to a back end instance.
       | 
       | I can appreciate that working well when people are in the office,
        | but I'm amazed that it worked out for them when people moved to
        | working from home. I have trouble getting some of my engineers to
        | have a connection stable enough for VS Code's remote mode. I
        | can't imagine trying to use a modern GUI over these connections.
        
         | mroche wrote:
         | The entire studio is VDI based (except for the Mac stations,
         | unsure about Windows), utilizing the Teradici PCoIP protocol,
         | 10Zig zero-clients, and (at the time, not sure if they've
         | started testing the graphical agent), Teradici host cards for
         | the workstations.
         | 
          | I was an intern in Pixar systems in 2019 (at Blue Sky now),
          | and we're also using a mix of PCoIP and NoMachine for home
          | users. We finally figured out a quirk with the VPN terminals we
          | sent home with people that was throttling connections, but the
          | experience after that fix is actually really good. There are a
         | few things that can cause lag (such as moving apps like
         | Chrome/Firefox), but for the most part unless your ISP is
         | introducing problems it's pretty stable. And everyone with a
         | terminal setup has two monitors, either 2*1920x1200 or
         | 1920x1200+2560x1440.
         | 
         | I have a 300Mbps/35Mbps plan (turns into a ~250/35 on VPN) and
         | it's great. We see bandwidth usage ranging from 1Mbps to ~80 on
         | average. The vast majority being sub-20. There are some
         | outliers that end up in mid-100s, but we still need to
         | investigate those.
         | 
          | We did some cross-country tests with our sister studio ILM over
          | the summer and were hitting ~70-90ms latency which, although not
          | fantastic, was still plenty workable.
        
         | pstrateman wrote:
          | I think most connections could be massively improved with a VPN
          | that supports forward error correction, but there don't seem to
          | be any that do.
         | 
         | Seems very strange to me.
        
         | lattalayta wrote:
         | That is correct. It's pretty common for a technical artist to
         | have a 24-32 core machine, with 128 GB of RAM, and a modern
          | GPU. Not to mention that the entirety of the movie is stored on
          | an NFS cluster and can approach many hundreds of terabytes. When
          | you're talking about that amount of power and data, it makes
          | more sense to connect into the on-site datacenter.
        
         | dagmx wrote:
          | A lot of studios use thin client / PCoIP boxes from Teradici
          | etc.
          | 
          | They're pretty great overall and the bandwidth requirements
          | aren't crazy high, but if you're capped it does max out your
          | data usage pretty quickly. The faster your connection, the
          | better the experience.
         | 
         | Some studios like Imageworks don't even have the backend data
         | center in the same location. So the thin clients connect to a
          | center in Washington state while the studios are in LA and
         | Vancouver.
        
       | thomashabets2 wrote:
       | I'm surprised they hit only 80-90% CPU utilization. Sure, I don't
       | know their bottlenecks, but I understood this to be way more
       | parallelizable than that.
       | 
        | I ray trace Quake demos for fun at a much, much smaller scale[0],
        | and have professionally organized much bigger installs (I feel
        | confident saying that even though I don't know Pixar's exact
        | scale).
       | 
       | But I don't know state of the art rendering. I'm sure Pixar knows
       | their workload much better than I do. I would be interested in
       | hearing why, though.
       | 
       | [0] Youtube butchers the quality in compression, but
       | https://youtu.be/0xR1ZoGhfhc . Live system at
       | https://qpov.retrofitta.se/, code at
       | https://github.com/ThomasHabets/qpov.
       | 
       | Edit: I see people are following the links. What a day to
       | overflow Go's 64bit counter for time durations on the stats page.
       | https://qpov.retrofitta.se/stats
       | 
       | I'll fix it later.
        
         | mike_d wrote:
         | Rendering may be highly parallelizable, but the custom bird
         | flock simulation they wrote may be memory constrained. This is
          | why having a solid systems team who can handle the care and
          | feeding of a job scheduler is worth more than expanding a
          | cluster.
        
         | dagmx wrote:
         | I suspect they mean core count utilization, not per core
         | utilization.
         | 
          | I.e. there's some headroom left for rush jobs and a safety net,
         | because full occupancy isn't great either.
        
         | brundolf wrote:
         | My guess would be that the core-redistribution described in the
         | OP only really works for cores on the same machine. If there's
         | a spare core being used by _none_ of the processes on that
         | machine, a process on another machine might have trouble
          | utilizing it because memory isn't shared. The cost of loading
         | (and maybe also pre-processing) all of the required assets may
         | outweigh the brief window of compute availability you're trying
         | to utilize.
        
         | [deleted]
        
       | hadrien01 wrote:
       | For those that can't stand Twitter's UI:
       | https://threadreaderapp.com/thread/1345146328058269696.html
        
         | nikolay wrote:
         | It's very painful to follow a conversation on Twitter. I'm not
         | sure why they think the way they've done things makes sense.
        
           | 3gg wrote:
           | Shouting works better than conversing to keep users "engaged"
           | and target them with ads.
        
           | lmilcin wrote:
           | It was never supposed to support conversation in the first
           | place.
           | 
            | People were supposed to shoot short, simple, single messages,
            | and other people would maybe react to them with their own
            | short, single messages.
        
             | mac01021 wrote:
             | That sounds a lot like a conversation to me.
        
             | fluffy87 wrote:
             | You are describing a chat
        
             | iambateman wrote:
             | We are way way past that point.
             | 
             | It's time for Twitter to evolve in so many ways.
        
               | Finnucane wrote:
                | Evolve or die. Twitter dying is a good option.
        
               | ksec wrote:
                | I can't even remember the last user-facing feature change
                | Twitter made.
        
               | mcintyre1994 wrote:
               | Fleets, I suppose?
        
               | mschuster91 wrote:
               | Proper support in the UI for RTs with comments.
               | 
               | But honestly I'd prefer if they spent some fucking time
               | upstaffing their user service and abuse teams. And if
               | they could finally ban Trump and his conspiracy nutcase
               | followers/network.
        
               | nikolay wrote:
               | I think they can't as their system is built with many of
               | these limitations and preconceptions and it's hard for it
               | to evolve easily.
        
               | yiyus wrote:
               | It may be difficult to evolve, but it should be possible.
        
             | nikolay wrote:
              | Yeah, I've never seen any value in Twitter. They should've
              | called it "Public IM" or limited IRC and made it function
              | well like IM. Even the poor implementation of threads in
              | Slack is way better than Twitter's perverted version.
        
           | npunt wrote:
           | As a reading medium seeing a bunch of tweets strung together
           | is not fantastic as implemented today.
           | 
           | As an authoring medium though, the character constraints
           | force you to write succinct points that keep the reader
           | engaged. You can focus your writing just on one point at a
           | time, committing to them when you tweet, and you can stop
           | anytime. If you're struggling with writing longer form pieces
           | a tweet thread is a great on-ramp to get the outline
           | together, which you can later expand into a post.
           | 
           | As a conversation medium, it's also nice to be able to focus
           | conversation specifically on a particular point, rather than
           | get jumbled together with a bunch of unrelated comments in
           | the comments section at the end of a post.
        
         | 3gg wrote:
         | > JavaScript is not available. We've detected that JavaScript
         | is disabled in this browser. Please enable JavaScript or switch
         | to a supported browser to continue using twitter.com. You can
         | see a list of supported browsers in our Help Center.
         | 
         | So much for Pixar's render farm.
        
           | buckminster wrote:
           | They changed this about a fortnight ago. Twitter worked
           | without javascript previously.
        
         | dirtyid wrote:
         | I wish there was a service that restructures twitter like a
         | reddit thread.
        
         | happytoexplain wrote:
         | Thank you. All I saw was a post with zero context, followed by
         | a reply, followed by another reply using a different reply
         | delineator (a horizontal break instead of a vertical line??),
         | followed by nothing. It just ends. It's hard to believe this is
         | real and intended.
        
           | naikrovek wrote:
           | It's amazing to me that people find twitter difficult to
           | read... I mean it's not perfect but it's not an ovaltine
           | decoder ring, either.
           | 
           | Just ... Scroll ... Down ... Click where it says "read more"
           | or "show more replies"
           | 
           | You're human; THE most adaptable creature known. Adapt!
           | 
           | I'm not saying that twitter UX is perfect, or even good. I AM
           | saying that it is usable.
        
             | kaszanka wrote:
             | I think building a more usable alternative (or in this case
             | using an existing one) is a better idea than adapting to
             | Twitter's horrendous "UX".
        
               | naikrovek wrote:
               | WHY NOT BOTH?!
        
             | rconti wrote:
             | It's very unintuitive that when you open a Thread, the
             | "back" arrow means "back to something else" and you have to
             | scroll up above "line 0" to see the context of the thing
             | being replied to. I forget this every single time I open a
             | tweet thread and try to figure out the context.
             | 
             | Once you scroll up, it sort of makes sense -- each tweet
             | with a line connecting user icons but then suddenly the
             | expanded tweet thread has the main tweet in a larger font,
             | then the "retweet/like" controls below it, THEN another
             | line of smaller-font tweets that comprise the thread. Then
             | you get some limited number and have to click "more" for
             | more.
             | 
             | The monochrome of it all reminds me of when GMail got rid
             | of the very helpful colors-for-threads and went to grey on
             | grey on grey.
             | 
             | It's not visually apparent at all.
        
               | naikrovek wrote:
               | I am not saying it is intuitive.
               | 
               | I'm saying it's usable.
               | 
               | I'm saying that complaining about it makes people look
               | like they think they're royalty who need everything _just
                | so_ or their whole day is ruined... And now they can't
               | tell the butler to take Poopsie for a walk because
               | they're so shaken by the experience.
        
               | gspr wrote:
                | Is usable all we can expect from one of the world's most
                | popular websites?
        
             | xtracto wrote:
              | Twitter was designed to have 280 characters max per
              | message. This means that for this kind of long-format text,
              | the signal-to-noise ratio of having a large number of
              | "tweets" is pretty low.
              | 
              | The amount of stuff your brain has to filter in the form of
              | user names, user tweet handles, additional tagged handles,
              | UI menus, UI buttons for replying, retweeting, liking, etc.
              | on every single snippet makes your brain work way more
              | than it should to read a page of text.
             | 
             | Just imagine if I had written this exact text in 3 separate
             | HackerNews comments, and prepended each with a 1/ 2/ 3/
             | text, in addition to all the message UI, it would have been
             | more difficult to read than a simple piece of text.
        
               | dahart wrote:
               | Fully agree with your takeaway. Adding context on the
               | character limit:
               | 
               | > Twitter was designed to have 280 characters max per
               | message.
               | 
                | Twitter was designed to use 140 characters, plus room for
                | 20 for a username, to fit in the 160-char budget of an SMS
                | message.
               | 
               | Their upping it to 280 later was capitulating to the fact
               | that nobody actually wants to send SMS sized messages
               | over the internet.
        
               | naikrovek wrote:
               | You all are perfect delicate flowers that need things to
               | be _just right_ in order to use them, then? Is that what
                | you're saying?
               | 
               | Because that's what I'm getting from you.
        
               | turminal wrote:
               | I for one am saying Twitter has many, many people working
               | on UX and the web interface is still terrible. Hardly any
               | interface made for such a broad spectrum of people gets
               | it just right for all its users, but Twitter is doing an
               | exceptionally bad job at it.
        
               | mech422 wrote:
               | no - I'm a delicate flower that refuses to use that sad
               | excuse of a 'service'...
               | 
               | Plenty of much better, more readable content on the
               | internet without submitting myself to that low quality
               | shit show with a poor ui.
        
               | NikolaNovak wrote:
                | I mean, yes? :-)
               | 
               | If I'm going to use something, it should be intuitive and
               | usable. It should be fit for its purpose, especially with
               | a myriad of single and multi purpose tools available to
               | all. This doesn't feel like something I should justify
               | too hard :-)
               | 
               | Twitter is not a necessity of life. I don't have to use
               | it. They want me to use it and if so they can/should make
               | it usable.
               | 
               | Its paradigm and user interface don't work for me
               | personally (and particularly when people try to fit an
               | article into something explicitly designed for a single
                | sentence - it feels like a misuse of a tool, like
                | hammering a screw) so I don't use it. And that's ok!
               | 
               | I don't feel they are morally obligated to make it usable
               | by me. It's a private platform and they can do as they
               | please.
               | 
                | But my wife is a store manager and taught me that
                | "feedback is a gift" - if a customer will leave the store
                | and never come back, she'd rather know why than remain
                | ignorant.
               | 
               | She may or may not choose to address it but being aware
               | and informed is better than being ignorant of the
               | reasons.
               | 
                | So at the end of it, rather than downvote, let me ask:
                | what is the actual crux of your argument? People
                | shouldn't be discriminating? They should use optional
                | things they dislike? They shouldn't share their
                | preferences and feedback? Twitter is a great tool for
                | long-format essays? Or something we all may be missing?
        
           | H8crilA wrote:
           | It is real, probably not quite intended. Or at least not
           | specifically designed.
        
       | nom wrote:
        | Oh man, I wanted this to contain much more detail :(
        | 
        | What's the hardware? How much electric energy goes into rendering
        | a frame or a whole movie? How do they provision it (as they keep
        | #cores fixed)? They only talk about cores; do they even use GPUs?
        | What's running on the machines? What did they optimize lately?
       | 
       | So many questions! Maybe someone from Pixar's systems department
       | is reading this :)?
        
         | aprdm wrote:
         | Not Pixar specifically but Modern VFX and Animation studios
         | usually have a bare metal render farm, they usually are pretty
         | beefy -- think at least 24 cores / 128 GB of RAM per node.
         | 
          | Usually in crunch time, if there aren't enough nodes in the
          | render farm, they might rent nodes and connect them to their
          | network for a period of time, or they might use the cloud, or
          | they might get budget to expand their render farm.
         | 
          | From what I've seen the cloud is extremely expensive for beefy
          | machines with GPUs, but you can see that some companies use it
          | if you google [0] [1].
          | 
          | GPUs can be used for some workflows in modern studios, but I
          | would bet the majority of it is CPUs. Those machines are
          | usually running a Linux distro and the render processes (like
          | vray / prman, etc.). Everything runs from a big NFS cluster.
         | 
         | [0] https://deadline.com/2020/09/weta-digital-pacts-with-
         | amazon-...
         | 
         | [1] https://www.itnews.com.au/news/dreamworks-animation-steps-
         | to...
        
           | tinco wrote:
           | Can confirm cloud GPU is way overpriced if you're doing 24/7
           | rendering. We run a bare metal cluster (not VFX but
           | photogrammetry) and I pitched our board on the possibilities.
           | I really did not want to run a bare metal cluster, but it
           | just does not make sense for a low margin startup to use
           | cloud processing.
           | 
            | Running 24/7, consumer grade hardware with similar (probably
            | better) performance pays for itself in three months. With
            | "industrial" grade hardware (Xeon/Epyc + Quadro) it's under
            | 12 months. We chose consumer grade bare metal.
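            | 
            | Rough shape of the math we did (all numbers made up here, not
            | our actual figures):
            | 
            |     cloud_per_hour = 3.00   # hypothetical GPU instance, $/h
            |     box_price = 6_500       # hypothetical consumer build, $
            |     power_per_hour = 0.15   # hypothetical power/hosting, $/h
            | 
            |     hours = box_price / (cloud_per_hour - power_per_hour)
            |     print(f"break-even after {hours / 24 / 30:.1f} months 24/7")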
           | 
              | One thing that was half surprising, half calculated in our
              | decision was how much less stressful running your own
              | hardware is, despite the operational overhead. When we ran
              | experimentally on the cloud, a misrender could cost us 900
           | euro, and sometimes we'd have to render 3 times or more for a
           | single client. Bringing us from healthily profitable to
           | losing money. The stress of having to get it right the first
           | time sucked.
        
             | jack2222 wrote:
              | I've had renders cost $50,000! Our CTO was less than amused.
        
         | mroche wrote:
          | Former Pixar Systems intern (2019) here. I was not part of the
          | team involved in this area, but I have some rough knowledge of
          | some of the parts.
         | 
         | > Whats the hardware?
         | 
         | It varies. They have several generations of equipment, but I
         | can say it was all Intel based, and high core count. I don't
         | know how different the render infra was to the workstation
         | infra. I think the total core count (aggregate of render,
         | workstation, and leased) was ~60K cores. And they effectively
         | need to double that over the coming years (trying to remember
         | one of the last meetings I was in) for the productions they
         | have planned.
         | 
         | > How much electric energy goes into rendering a frame or a
         | whole movie?
         | 
          | A lot. The render farm is pretty consistently running at high
          | loads, as they produce multiple shows (movies, shorts,
          | episodics) simultaneously so that there really isn't much idle
          | time. I don't have numbers, though.
         | 
         | > How do they provision it
         | 
         | Not really sure how to answer this question. But in terms of
         | rendering, to my knowledge shots are profiled by the TDs and
         | optimized for their core counts. So different sequences will
         | have different rendering requirements (memory, cores,
         | hyperthreading etc). This is all handled by the render farm
         | scheduler.
         | 
         | > What's running on the machines?
         | 
         | RHEL. And a lot of Pixar proprietary code (along with the
         | commercial applications).
         | 
         | > They only talk about cores, do they even use GPUs?
         | 
         | For rendering, not particularly. The RenderMan denoiser is
         | capable of being used on GPUs, but I can't remember if the
         | render specific nodes have any in them. The workstation systems
         | (which are also used for rendering) are all on-prem VDI.
         | 
          | Though RenderMan 24, due out in Q1 2021, will include
          | RenderMan XPU, which is a GPU (CUDA) based engine. Initially
          | it'll be more of a workstation-facing product to allow artists
          | to iterate more quickly (it'll also replace their internal CUDA
          | engine used in their proprietary look-dev tool Flow, which was
          | XPU's predecessor), but it will eventually be ready for final-
          | frame rendering. There is still some catch-up that needs to
          | happen in the hardware space, though NVLink'ed RTX 8000s do a
          | reasonable job.
         | 
         | A small quote on the hardware/engine:
         | 
         | >> In Pixar's demo scenes, XPU renders were up to 10x faster
         | than RIS on one of the studio's standard artist workstations
         | with a 24-core Intel Xeon Platinum 8268 CPU and Nvidia Quadro
         | RTX 6000 GPU.
         | 
         | If I remember correctly that was the latest generation
         | (codenamed Pegasus) initially given to the FX department.
         | Hyperthreading is usually disabled and the workstation itself
         | would be 23-cores as they reserve one for the hypervisor. Each
         | workstation server is actually two+1, one workstation per CPU
         | socket (with NUMA configs and GPU passthrough) plus a
         | background render vm that takes over at night. The next-gen
         | workstations they were negotiating with OEMs for before COVID
         | happened put my jaw on the floor.
        
         | dahart wrote:
         | > They only talk about cores, do they even use GPUs?
         | 
         | They've been working on a GPU version of RenderMan for a couple
         | of years.
         | 
         | https://renderman.pixar.com/news/renderman-xpu-development-u...
        
         | lattalayta wrote:
         | Also, renderfarms are usually referred to "in cores", because
         | it's usually heterogeneous hardware networked together over the
         | years. You may have some brand new 96 core 512 GB RAM machines
         | mixed in with some several year old 8 core 32 GB machines. When
         | a technical artist is submitting their work to be rendered on
         | the farm, they often have an idea of how expensive their task
         | will be. They will request a certain number of cores from the
         | farm and a scheduler will go through and try to optimize
         | everyone's requests across the available machines.
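          | 
          | In spirit the scheduler is doing something like this greedy
          | fit, grossly simplified compared to a real farm manager
          | (Tractor, Deadline, etc.):
          | 
          |     free = {"new-node-01": 96, "new-node-02": 96,
          |             "old-node-17": 8}              # free cores, made up
          |     requests = [("shot_042_fx", 64), ("shot_007_light", 16),
          |                 ("turntable", 8)]          # made-up jobs
          | 
          |     for job, cores in sorted(requests, key=lambda r: -r[1]):
          |         host = next((m for m, c in free.items() if c >= cores),
          |                     None)
          |         if host:
          |             free[host] -= cores
          |             print(f"{job}: {cores} cores on {host}")
          |         else:
          |             print(f"{job}: queued, nothing has {cores} free")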
        
         | daxfohl wrote:
         | And when leasing cores, who do they lease from and why?
        
       | jedimastert wrote:
       | When I was a little younger, I was looking at 3D graphics as a
       | career path, and I knew from the very beginning that if I were to
       | do it, I would work towards Pixar. I've always admired everything
       | they've done, both from an artistic and technical standpoint, and
       | how well they've meshed those two worlds in an incredible and
       | beautiful manner.
        
       | throwaway888abc wrote:
        | Is there a similar scheduler to K8s?
        
       | solinent wrote:
       | I've heard rumors that Pixar used to be able to render their
       | movies in real-time :)
        
         | klodolph wrote:
         | From what I understand, this was never the case, and not even
         | close. Toy Story took a ton of compute power to render (at the
         | time), Soul took a ton of compute power to render (at the
         | time).
        
         | HeWhoLurksLate wrote:
         | I'd like to imagine that they use some old movie or short as a
         | benchmark, that'd be neat to see the data on.
        
       | gorgoiler wrote:
       | Does their parallelism extend to rendering the movie in real
       | time? One display rendering the sole output of an entire data
       | centre.
        
       | mmcconnell1618 wrote:
       | Can anyone comment on why Pixar uses standard CPU for processing
       | instead of custom hardware or GPU? I'm wondering why they haven't
       | invested in FPGA or completely custom silicon that speeds up
       | common operations by an order of magnitude. Is each show that
       | different that no common operations are targets for hardware
       | optimization?
        
         | ArtWomb wrote:
         | There's a brief footnote about their REVES volume rasterizer
         | used in Soul World crowd characters. They simply state their
         | render farm is CPU based and thus no GPU optimizations were
         | required. At the highest, most practical level of abstraction,
         | it's all software. De-coupling the artistic pipeline from
         | underlying dependence on proprietary hardware or graphics APIs
         | is probably the only way to do it.
         | 
         | https://graphics.pixar.com/library/SoulRasterizingVolumes/pa...
         | 
         | On a personal note, I had a pretty visceral "anti-" reaction to
         | the movie Soul. I just felt it too trite in its handling of
         | themes that humankind has wrestled with since the dawn of time.
         | And jazz is probably the most cinematic of musical tastes.
         | Think of the intros to Woody Allen's Manhattan or Midnight in
         | Paris. But it felt generic here.
         | 
         | That said the physically based rendering is state of the art!
         | If you've ever taken the LIE toward the Queensborough Bridge as
         | the sun sets across the skyscraper canyons of the city you know
         | it is one of the most surreal tableaus in modern life. It's
         | just incredible to see a pixel perfect globally illuminated
         | rendering of it in an animated film, if only for the briefest
         | of seconds ;)
        
         | mhh__ wrote:
          | Relative to the price of a standard node, FPGAs aren't magic:
          | you have to find the parallelism in order to exploit it. As for
          | custom silicon, anything on a process close to modern costs
          | millions in NRE alone.
         | 
         | From a different perspective, think about supercomputers - many
         | supercomputers do indeed do relatively specific things (and I
         | would assume some do run custom hardware), but the magic is in
         | the interconnects - getting the data around effectively is
         | where the black magic is.
         | 
          | Also, if you aren't particularly time bound, why bother? FPGAs
          | require completely different types of engineers, and are
          | generally a bit of a pain to program for, even ignoring how
          | horrific some vendor tools are - your GPU code won't fail
          | timing, for example.
        
         | colordrops wrote:
         | I'm "anyone" since I know very little about the subject but I'd
         | speculate that they've done a cost-benefit analysis and figured
         | that would be overkill and tie them to proprietary hardware, so
         | that they couldn't easily adapt and take advantage of advances
         | in commodity hardware.
        
           | CyberDildonics wrote:
           | GPUs are commodity hardware, but you don't have to speculate,
           | this was answered well here:
           | 
           | https://news.ycombinator.com/item?id=25616527
        
         | boulos wrote:
         | Amusingly, Pixar did build the "Pixar Image Computer" [1] in
         | the 80s and they keep one inside their renderfarm room in
         | Emeryville (as a reminder).
         | 
         | Basically though, Pixar doesn't have the scale to make custom
         | chips (the entire Pixar and even "Disney all up" scale is
         | pretty small compared to say a single Google or Amazon
         | cluster).
         | 
          | Until _recently_ GPUs also didn't have enough memory to handle
          | production film rendering, particularly the amount of textures
          | used per frame (which even on CPUs are handled out-of-core with
          | a texture cache, rather than "read it all in up front
          | somehow"). I think the recent HBM-based GPUs will make this a
          | more likely scenario, especially when/if OptiX/RTX gains a
          | serious texture cache for this kind of usage. Even still,
          | however, _those_ GPUs are extremely expensive. For folks that
          | can squeeze into the 16 GiB per card of the NVIDIA T4, it's
          | _just_ about right.
         | 
         | tl;dr: The economics don't work out. You'll probably start
         | seeing more and more studios using GPUs (particularly with RTX)
         | for shot work, especially in VFX or shorts or simpler films,
         | but until the memory per card (here now!) and $/GPU (nope) is
         | competitive it'll be a tradeoff.
         | 
         | [1] https://en.wikipedia.org/wiki/Pixar_Image_Computer
        
           | brundolf wrote:
           | That wikipedia article could be its own story!
        
         | corysama wrote:
         | Not an ILMer, but I was at LucasArts over a decade ago. Back
         | then, us silly gamedevs would argue with ILM that they needed
         | to transition from CPU to GPU based rendering. They always
         | pushed back that their bottleneck was I/O for the massive
          | texture sets their scenes throw around. At the time RenderMan
         | was still mostly rasterization based. Transitioning that multi-
         | decade code and hardware tradition over to GPU would be a huge
         | project that I think they just wanted to put off as long as
         | possible.
         | 
         | But, very soon after I left Lucas, ILM started pushing ray
         | tracing a lot harder. Getting good quality results per ray is
         | very difficult. Much easier to throw hardware at the problem
         | and just cast a whole lot more rays. So, they moved over to
         | being heavily GPU-based around that time. I do not know the
         | specifics.
        
         | jlouis wrote:
          | Probably because CPU times fall within acceptable windows. That
          | would be my guess. You can go faster with FPGAs or custom
          | silicon, but it also has a very high cost, on the order of 10 to
          | 100 times as expensive. You can get a lot of hardware for that.
        
         | berkut wrote:
          | Because the expense is not really worth it - even GPU rendering
          | (while around 3-4x faster than CPU rendering) is memory
          | constrained compared to CPU rendering, and as soon as you try
         | and go out-of-core on the GPU, you're back at CPU speeds, so
         | there's usually no point doing GPU rendering for entire scenes
         | (which can take > 48 GB of RAM for all geometry, accel
         | structures, textures, etc) given the often large memory
         | requirements.
         | 
         | High end VFX/CG usually tessellates geometry down to
         | micropolygons, so you roughly have 1 quad (or two triangles)
         | per pixel in terms of geometry density, which means you can
         | often have > 150,000,000 polys in a scene, along with
         | per-vertex primvars to control shading, and many textures
         | (which _can_ be paged fairly well with shade-on-hit).
         | 
         | Using ray tracing pretty much means having all that in memory
         | at once, so that intersection / traversal is fast (paging of
         | geo and accel structures generally performs badly; it's been
         | tried in the past).
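         | 
         | Back-of-envelope, with per-element sizes that are purely my
         | own assumptions (not measured from any particular renderer),
         | to show why that doesn't fit on a 24 GB card:
         | 
         |   quads   = 150_000_000  # ~1 quad/pixel plus depth complexity
         |   verts   = quads * 4    # worst case, no vertex sharing
         |   v_bytes = 12 + 12 + 8  # position + normal + a few primvars
         |   geo_gib = verts * v_bytes / 2**30  # ~18 GiB
         |   bvh_gib = quads * 64 / 2**30       # ~9 GiB at ~64 B/prim
         |   print(geo_gib, bvh_gib)  # tens of GiB before any textures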
         | 
         | Doing lookdev on individual assets (i.e. turntables) is one
         | place where GPU rendering can be used, as the memory
         | requirements are much smaller, but only if the look you get is
         | identical to the one you get using CPU rendering, which isn't
         | always the case (some of the algorithms are hard to get working
         | correctly on GPUs, e.g. volumetrics).
         | 
         | RenderMan (the renderer Pixar uses, and develops in Seattle)
         | isn't really GPU-ready yet (they're aiming to release XPU this
         | year, I think).
        
           | dahart wrote:
           | > Because the expense is not really worth it
           | 
           | I disagree with this takeaway. But full disclosure, I'm
           | biased: I work on OptiX. There is a reason Pixar, Arnold,
           | V-Ray and most other major industry renderers are moving to
           | the GPU: the trends are clear, and it has recently become
           | 'worth it'. Many renderers are reporting speedups of 2-10x
           | for production scale scene rendering. (Here's a good
           | example: https://www.youtube.com/watch?v=ZlmRuR5MKmU)
           | There definitely are tradeoffs, and you've accurately pointed
           | out several of them - memory constraints, paging,
           | micropolygons, etc. Yes, it does take a lot of engineering to
           | make the best use of the GPU, but the scale of scenes in
           | production with GPUs today is already well past being limited
           | to turntables, and the writing is on the wall - the trend is
           | clearly moving toward GPU farms.
        
             | berkut wrote:
             | I should also point out that ray traversal / intersection
             | is generally only around 40% of the total cost of rendering
             | extremely large scenes, and that's predominantly where GPUs
             | are currently much faster than CPUs.
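             | 
             | A quick Amdahl's-law style estimate makes the point; the
             | 10x here is an assumed speedup for the traced portion
             | only, just for illustration:
             | 
             |   traced  = 0.40  # share of time in traversal/isect
             |   speedup = 10.0  # assumed GPU gain on that share alone
             |   overall = 1 / ((1 - traced) + traced / speedup)
             |   print(f"{overall:.2f}x overall")  # ~1.56x, not 10x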
             | 
             | (I'm aware of the OSL batching/GPU work that's taking
             | place, but it remains to be seen how well that's going to
             | work).
             | 
             | From what I've heard from friends in the industry (at other
             | companies) who are using GPU versions of Arnold, the
             | numbers are nowhere near as good as the upper numbers
             | you're claiming when rendering at final fidelity (i.e. with
             | AOVs and Deep output), so again, the use-cases - at least
             | for high-end VFX with GPU - are still mostly lookdev and
             | iterative lighting-blocking workflows, from what I
             | understand. That is still an advantage and provides clear
             | benefits in terms of iteration time over CPU renderers, but
             | it's not a complete win, and so far, only the smaller
             | studios have started dipping their toes in the water.
             | 
             | Also, the advent of AMD Epyc has finally thrown some
             | competitiveness back to CPU rendering: it's now possible
             | to get a machine with 2x as many cores for close to half
             | the price, which has given CPU rendering a further shot in
             | the arm.
        
             | berkut wrote:
             | I write a production renderer for a living :)
             | 
             | So I'm well aware of the tradeoffs. As I mentioned, for
             | lookdev and small scenes, GPUs do make sense currently (if
             | you're willing to pay the penalty of getting code to work
             | on both CPU and GPU, and GPU dev is not exactly trivial in
             | terms of debugging / building compared to CPU dev).
             | 
             | But until GPUs exist with > 64 GB RAM, for rendering
             | large-scale scenes it's just not worth it given the extra
             | burdens (increased development costs, heterogeneous sets of
             | machines in the farm, extra debugging, support), so for
             | high-end scale, we're likely 3-4 years away yet.
        
               | foota wrote:
               | Given that current consumer GPUs are at 24 GB, I think
               | 3-4 years is likely overly pessimistic.
        
               | berkut wrote:
               | They've been at 24 GB for two years though - and they
               | cost an arm and a leg compared to a CPU with a similar
               | amount of RAM.
               | 
               | It's not just about them existing, they need to be cost
               | effective.
        
               | lhoff wrote:
               | Not anymore. The new Ampere-based Quadros and Teslas just
               | launched with up to 48 GB of RAM. A special datacenter
               | version with 80 GB has also already been announced:
               | https://www.nvidia.com/en-us/data-center/a100/
               | 
               | They are really expensive though. But chassis and
               | rackspace aren't free either. If one beefy node with a
               | couple of GPUs can replace half a rack of CPU-only nodes,
               | the GPUs are totally worth it.
               | 
               | I'm not too familiar with 3D rendering, but in other
               | workloads the GPU speedup is so huge that if it's possible
               | to offload to the GPU, it makes sense to do so from an
               | economic perspective.
        
               | dahart wrote:
               | I used to write a production renderer for a living; now I
               | work with a lot of people who write production renderers
               | for both CPU and GPU. I'm not sure what line you're
               | drawing exactly ... if you mean that it will take 3 or 4
               | years before the industry will be able to stop using CPUs
               | for production rendering, then I totally agree with you.
               | If you mean that it will take 3 or 4 years before
               | industry can use GPUs for any production rendering, then
               | that statement would be about 8 years too late. I'm
               | pretty sure that's not what you meant, so it's somewhere
               | in between there, meaning some scenes are doable on the
               | GPU today and some aren't. It's worth it now in some
               | cases, and not worth it in other cases.
               | 
               | The trend is pretty clear, though. The size of scenes
               | that can be done on the GPU today is large and growing
               | fast, both because of improving engineering and because
               | of increasing GPU memory speed & size. It's just a fact
               | that a lot of commercial work is already done on the GPU,
               | and that most serious commercial renderers already
               | support GPU rendering.
               | 
               | It's fair to point out that the largest production scenes
               | are still difficult and will remain so for a while. There
               | are decent examples out there of what's being done in
               | production with GPUs already:
               | 
               | https://www.chaosgroup.com/vray-gpu#showcase
               | 
               | https://www.redshift3d.com/gallery
               | 
               | https://www.arnoldrenderer.com/gallery/
        
           | ArtWomb wrote:
           | Nice to have an industry insider perspective on here ;)
           | 
           | Can you speak to any competitive advantages a vfx-centric gpu
           | cloud provider may have over commodity AWS? Even the
           | RenderMan XPU looks to be OSL / Intel AVX-512 SIMD based.
           | Thanks!
           | 
           | Supercharging Pixar's RenderMan XPU(tm) with Intel(r) AVX-512
           | 
           | https://www.youtube.com/watch?v=-WqrP50nvN4
        
             | lattalayta wrote:
             | One potential difference is that the input data required to
             | render a single frame of a high end animated or VFX movie
             | might be several hundred gigabytes (even terabytes for
             | heavy water simulations or hair) - caches, textures,
             | geometry, animation & simulation data, scene description.
             | Oftentimes a VFX-centric cloud provider will have some
             | robust system in place for uploading and caching data
             | across the many nodes that need it.
             | (https://www.microsoft.com/en-us/avere)
             | 
             | And GPU rendering has been gaining momentum over the past
             | few years, but the biggest bottleneck until recently was
             | available VRAM. Big budget VFX scenes can often take
             | 40-120 GB of memory to keep everything accessible during
             | the raytrace process, and unless a renderer supports
             | out-of-core memory access, the speedup you may have gained
             | from the GPU gets thrown out the window from swapping data.
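             | 
             | Rough bandwidth arithmetic behind that (the numbers are
             | generation-dependent and assumed here only for scale):
             | 
             |   vram_gbs = 600.0  # on-card GPU bandwidth, GB/s (rough)
             |   pcie_gbs = 16.0   # PCIe 3.0 x16 transfers, GB/s
             |   scene_gb = 80.0   # working set spilling out of VRAM
             |   print(scene_gb / vram_gbs)  # ~0.13 s per pass in VRAM
             |   print(scene_gb / pcie_gbs)  # ~5 s paging over the bus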
        
               | lattalayta wrote:
               | Oh, and also, security. After the Sony hack several years
               | ago, many film studios have severe restrictions on what
               | they'll allow off-site. For upcoming unreleased movies,
               | many studios are overly protective of their IP and want
               | to mitigate the chance of a leak as much as possible.
               | Oftentimes, complying with those restrictions and
               | auditing the entire process is enough to make on-site
               | rendering more attractive.
        
               | pja wrote:
               | As a specific example, Disney released the data for
               | rendering a single shot from Moana a couple of years ago.
               | You can download it here:
               | https://www.disneyanimation.com/data-
               | sets/?drawer=/resources...
               | 
               | Uncompressed, it's 93 GB of render data, plus 130 GB of
               | animation data if you want to render the entire shot
               | instead of a single frame.
               | 
               | From what I've seen elsewhere, that's not unusual at all
               | for a modern high end animated scene.
        
               | berkut wrote:
               | To reinforce this, here is some discussion of average
               | machine memory size at Disney and Weta two years ago:
               | 
               | https://twitter.com/yiningkarlli/status/10144180385677967
               | 38
        
         | brundolf wrote:
         | In addition to what others have said, I remember reading
         | somewhere that CPUs give more reliably accurate results, and
         | that that's part of why they're still preferred for pre-
         | rendered content
        
           | dahart wrote:
           | > I remember reading somewhere that CPUs give more reliably
           | accurate results
           | 
           | This is no longer true, and hasn't been for around a decade.
           | This is a left-over memory of when GPUs weren't using IEEE
           | 754 compatible floating point. That changed a long time ago,
           | and today all GPUs are absolutely up to par with the IEEE
           | standards. GPUs even took the lead for a while with the FMA
           | instruction that was more accurate than what CPUs had, and
           | Intel and others have since added FMA instructions to their
           | CPUs.
        
           | enos_feedler wrote:
           | I believe this to be historically true as GPUs often
           | "cheated" with floating point math to optimize hardware
           | pipelines for game rasterization where only looks matter.
           | This probably stopped being true as GPGPU took hold over
           | the last decade.
        
             | brundolf wrote:
             | Ah, that makes sense
        
         | dahart wrote:
         | > Can anyone comment on why Pixar uses standard CPU for
         | processing instead of custom hardware or GPU?
         | 
         | A GPU enabled version of RenderMan is just coming out now. I
         | imagine their farm usage after this could change.
         | 
         | https://gfxspeak.com/2020/09/11/animation-studios-renderman/
         | 
         | I'm purely speculating, but I think the main reason they
         | haven't been using GPUs until now is that RenderMan is very
         | full featured, extremely scalable on CPUs, has a lot of legacy
         | features, and it takes a metric ton of engineering to port and
         | re-architect well established CPU based software over to the
         | GPU.
        
         | aprdm wrote:
         | FPGAs are really expensive at the scale of a modern studio
         | render farm; we're talking around 40~100k cores per datacenter.
         | Because 40~100k cores isn't Google scale either, it also
         | doesn't seem to make sense to invest in custom silicon.
         | 
         | There's a huge I/O bottleneck as well, as you're reading huge
         | textures (I've seen textures as big as 1 TB) and constantly
         | writing the renderer's output to disk.
         | 
         | Other than that, most of the tooling that modern studios use is
         | off-the-shelf - for example, Autodesk Maya for modelling or
         | SideFX Houdini for simulations. If you had a custom
         | architecture, you would have to ensure that every piece of
         | software you use is optimized for / works with that.
         | 
         | There are studios using GPUs for some workflows but most of it
         | is CPUs.
        
           | dahart wrote:
           | > There are studios using GPUs for some workflows but most of
           | it is CPUs.
           | 
           | This is probably true today, but leaves the wrong impression
           | IMHO. The clear trend is moving toward GPUs, and surprisingly
           | quickly. Maya & Houdini have released GPU simulators and
           | renderers. RenderMan is releasing a GPU renderer this year.
           | Most other third party renderers have already gone or are
           | moving to the GPU for path tracing - Arnold, Vray, Redshift,
           | Clarisse, etc., etc.
        
           | nightfly wrote:
           | I'm assuming these 1 TiB textures are procedurally generated
           | or composites? Where do textures this large come up?
        
             | aprdm wrote:
             | Can be either. You usually have digital artists creating
             | them.
             | 
             | https://en.wikipedia.org/wiki/Texture_artist
        
               | CyberDildonics wrote:
               | Texture artists aren't painting 1 terabyte textures dude.
        
             | CyberDildonics wrote:
             | I would take that with a huge grain of salt. Typically the
             | only thing that would be a full terabyte is a full
             | resolution water simulation for an entire shot. I'm
             | unconvinced that is actually necessary, but it does happen.
             | 
             | An entire movie at 2k, uncompressed floating point rgb
             | would be about 4 terabytes.
        
             | lattalayta wrote:
             | 1 terabyte sounds like an outlier, but typically texture
             | maps are used as inputs to shading calculations. So it's
             | not uncommon for hero assets in large-scale VFX movies to
             | have 12-20 different sets of texture files that represent
             | different portions of a shading model. For assets that are
             | large (think the Star Destroyer from Star Wars, or the
             | Helicarrier from the Avengers), it may take 40-50 4K-16K
             | images to adequately cover the entire model such that if
             | you were to render it from any angle, you wouldn't see the
             | pixelation. And these textures are often stored as 16 bit
             | TIFFs or equivalent, and they are pre-mipmapped so the
             | renderer can choose the most optimal resolution at
             | rendertime.
             | 
             | So that comes out to 12-20 sets * 40-50 16-bit mipmapped
             | images, which can end up being several hundred gigabytes of
             | image data. Then at rendertime, only the textures that are
             | needed to render what's visible in the camera are loaded
             | into memory and utilized, which typically ends up being
             | 40-80 GB of texture memory.
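             | 
             | Plugging mid-range numbers into that (the per-texel and
             | mip figures are assumptions, just to show the order of
             | magnitude):
             | 
             |   sets   = 16           # 12-20 channels per asset
             |   maps   = 45           # 40-50 images per channel
             |   texels = 8192 * 8192  # middle of the 4K-16K range
             |   bpt    = 3 * 2        # 16-bit RGB, bytes per texel
             |   mips   = 4 / 3        # full mip chain adds ~1/3
             |   gib = sets * maps * texels * bpt * mips / 2**30
             |   print(f"~{gib:.0f} GiB for one hero asset")  # ~360 GiB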
             | 
             | Large-scale terrains and environments typically make more
             | use of procedural textures, and they may be cached
             | temporarily in memory while the rendering process happens,
             | to speed up calculations.
        
         | lattalayta wrote:
         | Pixar renders their movies with their commercially available
         | software, RenderMan. In the past they have partnered with Intel
         | [1] and Nvidia [2] on optimizations.
         | 
         | I'd imagine another reason is that Pixar uses off-the-shelf
         | Digital Content Creation apps (DCCs) like Houdini and Maya in
         | addition to their proprietary software, so while they could
         | optimize some portions of their pipeline, it's probably better
         | to develop for more general computing tasks. They also mention
         | the ability to "ramp up" and "ramp down" as compute use changes
         | over the course of a show.
         | 
         | [1] https://devmesh.intel.com/projects/supercharging-pixar-s-
         | ren...
         | 
         | [2] https://nvidianews.nvidia.com/news/pixar-animation-
         | studios-l...
        
       | bluedino wrote:
       | I like the picture of the 100+ SPARCstation render farm for the
       | first _Toy Story_
       | 
       | https://mobile.twitter.com/benedictevans/status/766822192197...
        
         | erk__ wrote:
         | This reminds me of one of the first FreeBSD press releases. [0]
         | 
         | > FreeBSD Used to Generate Spectacular Special Effects
         | 
         | > Manex Visual Effects used 32 Dell Precision 410 Dual P-II/450
         | Processor systems running FreeBSD as the core CG Render Farm.
         | 
         | [0]: https://www.freebsd.org/news/press-rel-1.html
        
         | lattalayta wrote:
         | I always liked the neon sign they have outside their current
         | renderfarm
         | 
         | https://www.slashfilm.com/cool-stuff-a-look-at-pixar-and-luc...
        
       ___________________________________________________________________
       (page generated 2021-01-02 23:00 UTC)