[HN Gopher] The model for coins in Super Mario Odyssey is simple...
       ___________________________________________________________________
        
       The model for coins in Super Mario Odyssey is simpler than in Super
       Mario Galaxy
        
       Author : danso
       Score  : 327 points
       Date   : 2023-03-16 08:08 UTC (1 day ago)
        
 (HTM) web link (twitter.com)
 (TXT) w3m dump (twitter.com)
        
       | mortenjorck wrote:
       | Perhaps it would satisfy the pedants who have come out in force
       | if the headline read "The _in-game_ model for the coins in Super
       | Mario Odyssey _has a mesh that is_ 65% simpler than in SM
       | Galaxy."
       | 
       | Does that pass muster? Are we now free to appreciate how the
       | clever use of normal maps allowed Nintendo's artists to
       | counterintuitively decrease poly count across generations of
       | Mario titles?
        
         | make3 wrote:
         | It's funny that something that is now a super basic approach
         | that has been used literally everywhere for 20 years is
         | presented as some fancy Nintendo performance hack
        
           | chc wrote:
           | Mario Galaxy was not even 20 years ago, and that was the
           | point of comparison, so I think you're stating your case a
           | bit more strongly than is fair.
        
         | humanistbot wrote:
         | > the pedants who have come out in force
         | 
         | The only reason this made the top of the front page is because
         | this was written in a way that seems intentionally deceptive in
         | order to make you click. I'd argue the author is being a pedant
         | even more, and did it first. Had they used a more honest title,
         | people wouldn't be upset. But then it wouldn't have made the
         | front page.
         | 
         | I hate this trend in journalism that can be summed up as: "I'm
         | going to frame something in a confusing, counterintuitive way,
         | then explain it in a straightforward way, in order to make you
         | think I'm smart and make you think you've learned something
          | really complex." I see it everywhere else on the Internet. HN
          | is supposed to filter this out.
        
           | nr2x wrote:
           | It's not journalism, it's a tweet. From a Mario fan account.
           | 
           | Maybe take the rest of the day off? Work through the anger?
        
             | joemi wrote:
             | Aren't tweets viewed as journalism these days? Or perhaps
             | better phrased: Can't tweets be considered journalism? As
              | much as I am loath to admit it, I don't think that
             | something being on Twitter excludes it from being
             | journalism. That said, I don't know that this tweet counts
             | as journalism. But I also don't know that it doesn't count.
        
               | nr2x wrote:
               | Tweets by a professional journalist with employer name in
               | bio are a grey zone, but tweets from a super Mario fan
               | account are firmly not journalism.
        
               | joemi wrote:
               | That sure sounds like it's discrediting citizen
               | journalism or amateur journalism.
        
             | plonk wrote:
             | > Maybe take the rest of the day off? Work through the
             | anger?
             | 
             | This is nice by Twitter standards but it's still an ad
             | hominem, I hope this doesn't become common on HN. It never
             | leads to anything interesting and overall makes
             | conversations worse.
        
               | chc wrote:
               | I would say the same about middlebrow dismissals like the
               | one that comment was responding to, but apparently if you
               | make a comment like yours toward them it's a bad thing.
        
               | plonk wrote:
               | The parent is too harsh with that tweet but it's not an
               | ad hominem. You're allowed to hate things. I also hate
               | the same thing they're talking about, I just don't think
               | the OP deserves to be described this way.
        
               | chc wrote:
               | How is it ad hominem to say that somebody's comment is
               | excessively harsh to somebody, but not ad hominem to say
               | that somebody's comment is attacking somebody? They're
               | extremely similar sentiments, and I personally wouldn't
               | categorize either as ad hominem for the same reason -- a
               | person is distinct from the tone of their comment.
        
             | humanistbot wrote:
             | > Maybe take the rest of the day off? Work through the
             | anger?
             | 
             | This is an insulting and inappropriate response. I'm not
             | going to engage with the rest of your comment if this is
             | how you're going to act.
        
               | nr2x wrote:
               | It's from a place of love my friend, a Tweet is but a
               | Tweet. Your inner peace is more important. Take a nice
               | walk. :-)
        
           | libraryatnight wrote:
           | It's video game trivia on a twitter fan account.
        
         | cnity wrote:
         | I also find terminology games boring and pointless, but there's
         | something quite interesting that lies at (I think) the heart of
         | the debate: people tend to view a mesh as something like the
         | underlying concrete object, and the textures as a mere skin
         | pasted onto the outside.
         | 
         | This is evidenced in the way people seem to consider fragment
         | shaders to be a kind of "illusion" or falsehood.
         | 
         | But of course, a spoiler: it's "illusions" all the way down.
         | This is not a pipe. A value in a normal map is just as "real"
         | as a face in a mesh (that is, not at all, or entirely so,
         | depending on your perspective).
        
           | rideontime wrote:
           | Good insight. My intro to 3D gaming was the N64. For those of
           | us of a certain age, polygons are "real" and anything newer
           | is "fake."
        
           | ShamelessC wrote:
           | Right but this is entirely clarified from reading the image
           | posted. Instead people are reacting based on the title here.
           | 
           | Not to imply people don't read the articles here. I would
           | never imply that...
        
           | mortenjorck wrote:
           | _> people tend to view a mesh as something like the
           | underlying concrete object, and the textures as a mere skin
           | pasted onto the outside_
           | 
           | This is interesting to consider, as it never occurred to me
           | that anyone would think _otherwise._
           | 
           | I wonder if it's generational. Those who grew up with games
           | whose triangle (or quad, in the case of a certain Sega
           | system) budgets were constrained to the point of clearly
           | distinct mesh faces probably have a different mental model
           | from those for whom baked normals and LOD meshes have always
           | been a continuum.
        
         | Aeolun wrote:
         | In the current state of game development I'm not sure if we can
         | call using normal maps very 'clever' any more. It's just par
         | for the course.
         | 
         | It's fun anecdata though, that's for sure.
        
         | [deleted]
        
       | pcwalton wrote:
       | As pointed out by Matias Goldberg [1], neither model is
       | particularly optimized, however. They produce too many sliver
       | triangles. The max-area approach [2] was brought up as an
       | alternative that can significantly improve shading performance.
       | I'd like to see benchmarks against constrained Delaunay
       | triangulations too, which also tend to reduce sliver triangles
       | and have some potential advantages over the greedy max-area
       | approach because they distribute the large areas over many big
       | triangles instead of leaving a few slivers around the edges.
       | 
       | (Of course, none of this would be likely to improve FPS in the
       | game, but it's an interesting computational geometry problem
       | nonetheless.)
       | 
       | [1]:
       | https://twitter.com/matiasgoldberg/status/163650418707999539...
       | 
       | [2]: https://www.humus.name/index.php?page=Comments&ID=228
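       | 
       | For the curious, a minimal Python sketch of the two index layouts
       | being compared (function names and structure are mine, not from
       | either link):
       | 
       |   # Two ways to triangulate a disc with n rim vertices (0..n-1).
       | 
       |   def fan(n):
       |       # Classic triangle fan from vertex 0: long, thin "sliver"
       |       # triangles pile up near the far side of the rim.
       |       return [(0, i, i + 1) for i in range(1, n - 1)]
       | 
       |   def max_area(n):
       |       # Greedy max-area style: connect every other rim vertex,
       |       # halving the ring each pass. Big central triangles,
       |       # slivers only near the edges.
       |       ring, tris = list(range(n)), []
       |       while len(ring) > 2:
       |           nxt = []
       |           for i in range(0, len(ring) - 1, 2):
       |               tris.append((ring[i], ring[i + 1],
       |                            ring[(i + 2) % len(ring)]))
       |               nxt.append(ring[i])
       |           if len(ring) % 2 == 1:
       |               nxt.append(ring[-1])
       |           ring = nxt
       |       return tris
       | 
       |   print(len(fan(96)), len(max_area(96)))  # 94 triangles each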
        
       | Animats wrote:
       | I saw that yesterday in a graphics forum.
       | 
       | Question: is there an existing level of detail generation system,
       | preferably open source, which can take a high-detail mesh, vertex
       | normals, and a normal map, and emit a lower level of detail mesh,
       | with vertex normals and a new normal map which looks the same?
       | 
       | (LOD generators tend to be either expensive, like Simplygon, or
       | academic and brittle, blowing up on some meshes.)
        
         | Jasper_ wrote:
         | The traditional process is building a high-poly mesh and a low-
         | poly mesh by hand, and using your 3DCG tool to bake from one
         | onto the other. For instance, in Maya, this is done using the
         | "Transfer Maps" tool [0]. The high-poly mesh can come from a
         | tool like Z-Brush.
         | 
          | For various production reasons, we want to build the low-poly
          | mesh by hand. Managing topology is key to maintaining good
          | deformation, and you as the artist probably know all the areas
         | that will need more vertex density. As things make their way
         | down the rigging/animation pipeline, automated tools really
         | just don't cut it.
         | 
         | [0]
         | https://help.autodesk.com/view/MAYAUL/2022/ENU/?guid=GUID-B0...
        
       | samsaga2 wrote:
        | The model is simpler, but the rendering pipeline is substantially
        | more complex. You need a high-poly model, extract its normal map,
        | bake it down onto a simpler model, and then render that simpler
        | model with a texture and the precalculated normal map through a
        | vertex/fragment shader.
        
       | charlieyu1 wrote:
       | I'm actually curious, can we use native circles/curves/spheres
       | for rendering instead of polygons in a 3D engine? Is it at least
       | technically feasible?
        
         | Arch485 wrote:
          | It's completely doable, but most modeling software is built to
          | produce triangle meshes, and most graphics cards are optimized
          | to render triangle meshes.
        
         | feiss wrote:
         | You can use SDFs (signed distance functions) to express your
         | objects in a mathematical form. It has its pros and cons. One
         | of the bigger cons is that most tools use the polygonal
         | approach, so you'd have to build _a lot_ from scratch.
         | 
          | Lots of SDF examples on ShaderToy[1], and many introductory
          | videos on YouTube (I especially recommend Inigo Quilez's
          | channel[2] --co-creator of [1]--, and The Art of Code [3])
         | 
         | [1] https://www.shadertoy.com/
         | 
         | [2] https://www.youtube.com/channel/UCdmAhiG8HQDlz8uyekw4ENw
         | 
         | [3] https://youtu.be/PGtv-dBi2wE
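          | 
          | The core idea fits in a few lines of Python (fragment shader
          | code in practice; coin_sdf/normal are my names):
          | 
          |   import math
          | 
          |   def coin_sdf(p):
          |       # Signed distance to a "coin": radius 1, half-height 0.1.
          |       x, y, z = p
          |       dr = math.hypot(x, z) - 1.0  # distance to the rim
          |       dh = abs(y) - 0.1            # distance to the flat faces
          |       return max(dr, dh)           # crude intersection
          | 
          |   def normal(p, eps=1e-4):
          |       # The surface normal falls out of the SDF's gradient.
          |       d, (x, y, z) = coin_sdf, p
          |       n = (d((x + eps, y, z)) - d((x - eps, y, z)),
          |            d((x, y + eps, z)) - d((x, y - eps, z)),
          |            d((x, y, z + eps)) - d((x, y, z - eps)))
          |       l = math.sqrt(sum(c * c for c in n))
          |       return tuple(c / l for c in n)
          | 
          |   print(coin_sdf((0.0, 0.5, 0.0)))  # 0.4: above the face
          |   print(normal((0.0, 0.11, 0.0)))   # ~(0, 1, 0)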
        
       | chungy wrote:
        | Coins appear very seldom in Mario Galaxy, whereas they are in
       | abundance in Odyssey. I have doubts the Wii would have coped with
       | the Galaxy coin model if so many appeared all at once. :)
        
       | h0l0cube wrote:
       | This is completely incorrect. The complexity has just been moved
       | into fragment shaders and texture assets. I'm quite sure the
        | cumulative size of the asset is much larger even though it has
        | fewer polygons.
        
         | kmeisthax wrote:
          | Yes, but the end result still allows more detail with less
          | processing power. You're going to do texture look-ups for each
          | pixel of the coin _anyway_, so you might as well also grab a
          | normal vector while you're there if you can cut your polycount
          | down to a third.
        
           | h0l0cube wrote:
           | > more detail with less processing power
           | 
           | Unarguably it's also _more_ processing power than its
           | predecessor which appeared to be untextured. Far more FLOPs
           | are executed at the fragment level than at the vertex level,
           | and that will vary with how much the model takes up on the
            | screen. But more bang for buck than geometry alone? Can't
            | disagree with that.
        
           | ReactiveJelly wrote:
           | The high-poly model might have looked okay with a blurrier
           | texture, though.
           | 
           | It's hard to say without realistic benchmarks on Wii and
           | Switch hardware.
        
             | MetaWhirledPeas wrote:
             | Isn't texture performance mostly a function of RAM though?
             | Don't polygons require more CPU/GPU horsepower?
        
               | Aeolun wrote:
                | Only because current GPUs have whole banks of processors
               | specifically for texture processing, while you have only
               | a single CPU in your machine. The equation might change a
               | bit if you don't have all those extra units.
        
         | maverwa wrote:
         | Well, yes it shifts the details away from the actual poly model
         | and into other assets, but this is what you want a lot of the
         | time. The poly count factors in a lot of steps that do not care
         | about normal maps or stuff like that. Depth pass, culling,
          | transforming between coordinate systems, all care about the
          | poly count, not about color. Not sure how many of these things
          | apply to Odyssey, but it's probably safe to assume this was an
          | intentional choice over going with more polys. It shifts the
          | details to the very end of the pipeline instead of considering
          | them at every step, kind of.
        
           | h0l0cube wrote:
           | > but this is what you want a lot of the time
           | 
           | Don't disagree with that, just the claim that the model is
           | simpler.
           | 
           | > Depth pass, culling, transforming between coordination
           | systems, all care about the poly count
           | 
           | At a guess a dynamic asset like this would be frustum or
           | occlusion culled by bounding box, which doesn't scale with
           | geometry count. Depth pass happens at draw time. But yes,
           | there would be more vertices to transform. Moving geometric
           | details to shaders is a trade-off between either being fill-
           | rate or vertex bound.
        
             | djmips wrote:
             | The Switch tends to be vertex bound and texture reads in
             | the fragment shader are efficient. But YMMV, always
             | profile.
        
             | maverwa wrote:
              | I guess the wording threw me off. For me, the "model"
              | refers to just the geometry, which is what the post states
              | is smaller (= fewer triangles). But if you include all the
              | assets for the coin in that, yes, it's not simpler/smaller.
        
         | philistine wrote:
         | That's exactly what the second tweet addresses. Most devs would
         | be aware of the difference, but the average game player would
         | have no clue. I'm not the average player and I didn't know how
         | much normal maps could _replace_.
        
         | make3 wrote:
         | they just present this as a cool optimization & aesthetic
         | trick, which it is
        
         | Waterluvian wrote:
          | It seems objectively correct, though. The model _is_ simpler.
          | They might take the model and do more processor-intensive work
          | with it, transforming it, tessellating it, etc. before render
          | (like all models). But the model itself is simpler.
        
           | makapuf wrote:
            | Except that you can't separate the model and normal maps,
            | since they're baked versions of the real model. At most, you
            | can say you have a smaller mesh (but also shaders fed with
            | complex 2D data).
        
             | Dylan16807 wrote:
             | Sure you can separate them. And normal maps don't have to
             | be baked from a "real" model.
        
           | raincole wrote:
           | Depending on what you mean by "model". I would probably call
           | it "mesh".
        
         | [deleted]
        
       | ant6n wrote:
       | I was expecting an article on the changes of the monetary system
       | in the Mario Universe.
        
       | rgj wrote:
        | In 1993 I had an exam for my MSc in Computer Science, course "3D
        | graphics".
       | 
       | To my despair, the exam turned out to consist of only a single
       | question.
       | 
        | Question: write code to approximate a sphere using triangular
        | planes so the model can be used in rendering a scene.
       | 
       | I didn't get to that specific chapter, so I had no idea.
       | 
       | My answer consisted of a single sentence:
       | 
       | I won't do that, this is useless, let's just use the sphere, it's
       | way more efficient and more detailed.
       | 
       | And I turned it in.
       | 
       | I got an A+.
        
       | [deleted]
        
       | bitwize wrote:
       | Shaders can compensate for a lot of geometry. The humans in _Doom
       | 3_ look like formless mudmen on the lowest graphic settings, but
       | with all the intended shaders applied, detail emerges.
        
       | tinco wrote:
       | Were normal maps new in 2007? I feel I learned about normal maps
       | in around 2005, as a high schooler just hobbying around in Maya.
       | Did they maybe get a head start in PC games, or in raytracing?
        | Maybe my memory is just mixing things up. Mario Galaxy did feel
        | like a state-of-the-art game when it came out, although I
        | wouldn't associate the cartoony style with normal maps; maybe
        | that's part of why it doesn't have them.
        
         | FartyMcFarter wrote:
         | Super Mario Galaxy ran on the Wii, which had a GPU that is very
         | different from modern GPUs (and very different from even the
         | GPUs of other consoles in that generation i.e. Xbox 360 and
         | PS3). I'm not sure whether using normal maps was
         | feasible/efficient on it.
        
           | masklinn wrote:
           | According to the wiki, the PS2 was the only console of the
           | 6th gen to lack hardware normal mapping (and that's including
           | the Dreamcast), and all the 7th gen supported it.
        
           | nwallin wrote:
           | The Wii's GPU did indeed have a fixed function pipeline, but
           | that fixed function pipeline did support normal mapping.
           | ("dot3 bump mapping" as was the style at the time.) However,
           | the Wii's GPU only had 1MB of texture cache. So it was
           | generally preferable to do more work with geometry than with
           | textures.
           | 
           | Some exec from Nintendo is on record as saying something
            | along the lines of texture mapping looking bad and needing
            | to be avoided. My conspiracy theory is that it was a conscious
           | decision at Nintendo to limit the ability of its consoles to
           | do stuff with textures. The N64 was even worse, a lot worse:
           | it only had 4kB of texture cache.
        
           | dogma1138 wrote:
           | The GPU on the Wii supports normal mapping and bump mapping,
           | even the OG Xbox and the Dreamcast and most importantly the
           | Gamecube did.
           | 
           | https://www.nintendoworldreport.com/guide/1786/gamecube-
           | faq-...
           | 
            | Quite often it's not even a question of the GPU itself but of
            | the development pipeline and the tooling used, as well as, of
            | course, whether the engine itself supports it.
           | 
            | Also, at the end of the day your GPU has a finite amount of
            | compute resources, many of which are shared across various
            | stages of the rendering pipeline, even back in the 6th and
            | 7th gen of consoles when fixed-function units were far more
            | common.
           | 
           | In fact in many cases even back then if you used more
           | traditional "fixed function pipelines" the driver would still
           | convert things into shaders e.g. hardware T&L even on the Wii
           | was probably done by a vertex shader.
           | 
            | Many teams, especially in companies that focus far more on
            | gameplay than visuals, opted for simpler pipelines that have
            | fewer variables to manage.
           | 
           | It's much easier to manage the frame budget if you only need
           | to care about how many polygons and textures you use, as if
           | the performance is too low you can reduce the complexity of
           | models or number/size of textures.
           | 
            | Shaders and more advanced techniques require far more fine-
            | grained optimization, especially once you get into deferred
            | or multi-pass rendering.
           | 
           | So this isn't hardware, it's that the team behind it either
           | didn't want or didn't need to leverage the use of normal
           | mapping to achieve their design goals.
        
             | monocasa wrote:
             | > e.g. hardware T&L even on the Wii was probably done by a
             | vertex shader.
             | 
             | Nope, no vertex shaders on the Wii, just a hardware T&L
             | unit with no code upload feature.
        
         | masklinn wrote:
         | Normal maps were not new in 2007. I wondered if the Wii might
         | have lacked support, but apparently the PS2 was the last home
         | console to lack normal mapping hardware (you had to use the
         | vector units to simulate them), and the Dreamcast was the first
         | to have it though games made little use of it (that bloomed
         | with the Xbox).
         | 
         | It's still possible EAD Tokyo didn't put much stock in the
         | technique though, or that their workflow was not set up for
         | that at this point, or that hardware limitations otherwise made
         | rendering coins using normal mapping worse than fully modelling
         | them.
        
           | Jasper_ wrote:
           | There are tests of normal map assets found in Super Mario
           | Galaxy, but they're unused. The GameCube "supported" limited
           | normal mapping through its indirect unit, assuming the
           | lighting response is baked. The BLMAP tech seen in Skyward
            | Sword can generate lighting response textures at runtime and
           | could support normal mapping, but I don't think the studio
           | used it outside of a few special effects.
           | 
           | The bumpmapping hardware most wikis mention as "supporting
           | normal mapping" is pretty much irrelevant.
        
         | sorenjan wrote:
         | The first time I saw them used in gaming was when John Carmack
         | showed off Doom 3 running on Geforce 3 at a Macworld keynote in
         | 2001. It was very impressive and felt like a huge leap into the
         | future.
         | 
         | https://www.youtube.com/watch?v=80guchXqz14
        
         | favaq wrote:
         | I remember Doom 3 (2004) using them extensively
        
           | cyxxon wrote:
            | And on the other hand, Unreal 2 from 2003 was not using them
            | IIRC. I felt back then that it was a missed opportunity that
            | immediately made it look a bit dated next to the other cool-
            | looking stuff coming out around that time.
        
           | mastax wrote:
           | I remember The Chronicles of Riddick: Escape from Butcher Bay
           | (2004) used them quite a lot. I remember the effect was so
           | obvious that I noticed it despite not knowing what it was or
           | what it was called. Good looking game.
        
           | Arrath wrote:
           | I recall them being hyped up quite a bit in press in
           | prerelease articles about Halo 2 (2004) and TESIV: Oblivion
           | (2006)
        
             | silveroriole wrote:
             | Vividly remember my mind being BLOWN by the normal mapping
             | on the prison wall bricks in an Oblivion trailer! Simpler
             | times :)
        
         | matja wrote:
          | GL_ARB_texture_env_dot3 was implemented on the GeForce 3,
          | launched in 2001, but the GeForce 256, launched in 1999, had
          | dot3 support.
        
           | kijiki wrote:
           | Matrox G400 (also 1999) had dot3 bump mapping as well. IIRC,
           | they advertised it as a first in the PC 3d accelerator space,
           | so I'd guess they shipped earlier in '99.
           | 
           | The G400 was generally lackluster at 3d performance, so the
           | feature didn't really get used by games.
        
         | lancesells wrote:
          | I was messing around with normal maps around ~'98 in Maya. Toon
          | shaders and x-ray shaders were what I was using them for.
        
         | capableweb wrote:
          | I don't think they were. Maybe they were new in gaming, but for
          | the 3D/animation scene they were not new at that point. I
          | remember using them earlier than that, and also using bump and
          | displacement maps, which are related concepts.
        
       | thedrbrian wrote:
       | Here's a stupid question from someone who doesn't do this for a
       | living.
       | 
       | Which one is most efficient?
       | 
        | As an MEng I'd think the one with more polygons would be, as
        | you're just rendering simple shapes, only more of them. The
        | other, the simple disc, is more power-hungry because it requires
        | some special bits on a graphics card or whatever.
        
         | Arch485 wrote:
          | It largely depends, but in general normal maps are faster to
          | render. GPUs can do that kind of per-pixel work very cheaply in
          | parallel, whereas extra triangles add cost at several stages of
          | the pipeline.
        
       | V__ wrote:
        | I assume both examples would run at indistinguishable speeds on
       | modern hardware. How about more complex scenes? Are poly
       | calculations more expensive than map ones?
        
         | bluescrn wrote:
         | Per-pixel lighting (with a normal map) is generally more
         | expensive than per-vertex lighting (without). But when using
          | normal maps, you've usually got fewer vertices to deal with,
          | which helps.
         | 
         | And modern GPUs are plenty fast enough to do quite a bit of
         | work per pixel, at least until the pixel count gets silly
         | (resolutions of 4K and beyond), hence all the work going into
         | AI upscaling systems.
         | 
         | (But the first coin image is from Super Mario Galaxy, a Wii
         | game, and the Wii had a rather more limited GPU with a fixed-
         | function pipeline rather than fully-programmable shaders)
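          | 
          | (A back-of-the-envelope sketch in Python of that trade-off;
          | the cost weights are made up, only the scaling matters:)
          | 
          |   def per_vertex_cost(n_vertices):
          |       # One normal transform + one N.L dot per vertex; the
          |       # result interpolates across the triangle for free.
          |       return n_vertices * 2
          | 
          |   def per_pixel_cost(n_vertices, covered_pixels):
          |       # Fewer vertices, but one normal-map fetch + one dot
          |       # product for every pixel the mesh covers on screen.
          |       return n_vertices + covered_pixels * 2
          | 
          |   # A distant, coin-sized object: per-pixel work stays tiny.
          |   print(per_vertex_cost(268), per_pixel_cost(96, 100))
          |   # But fill cost scales with coverage, not mesh detail:
          |   print(per_pixel_cost(96, 3840 * 2160))  # coin filling 4K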
        
           | mewse wrote:
           | The Wii (like the GameCube) was weird. It was more than
           | fixed-function, but less than fully-programmable; you'd
           | basically assemble a simple "pixel shader" by clicking
           | together up to 16 operations from a hardcoded list, using up
           | to 8 textures. ('TEV Stages' is the term to Google if you
           | want the gory technical details)
           | 
           | If you were coming to it from the PS2 (like I was), it was
           | super-freeing and amazing and so much more flexible and
           | powerful than anything you were used to. But if you were
           | coming to it from the Xbox (like many of my co-workers were)
           | it was more like suddenly being handcuffed.
        
             | royjacobs wrote:
             | This reminds me of very early pixel shaders, which could
             | only consist of about 6 assembly instructions.
        
         | capableweb wrote:
         | It would reduce the number of draw calls and increase the
         | performance of rendering, as it's simpler to render textures
         | than polygons. Although I think you wouldn't be able to use the
         | same method for everything, as you'd use more vram when using
         | textures for the effect rather than polygons. But for the case
         | of something like coins in a Mario game, I guess it makes
         | sense.
         | 
         | As always, what is the right approach depends. If you want to
         | run the game in a very memory constrained environment (early
         | game consoles), the approach of using polygons might make more
         | sense. But nowadays we have plenty of vram and if the entity is
         | used a lot, 1 texture for 100s of entities makes it a
         | worthwhile tradeoff.
         | 
         | Edit: thank you for the corrections, I'm not heavily into game
         | programming/rendering but am familiar with 3D/animation
         | software, but the internals of rendering is really not my
         | specialty so I appreciate being corrected. Cunningham's Law to
         | the rescue yet again :)
        
           | codeflo wrote:
           | > It would reduce the number of draw calls
           | 
           | That hasn't been true for many years.
           | 
           | > as you'd use more vram when using textures
           | 
           | For the equivalent amount of detail, not really. Vertices are
           | expensive, normal maps are encoded very compactly.
        
             | masklinn wrote:
             | > That hasn't been true for many years.
             | 
             | This is a game from 2007, 16 years ago (and its development
             | would have started in 2005), so that it has not been true
             | for many years is not a useful assertion even if true.
             | 
             | Or worse, it confirms that it used to be an issue, and thus
             | may well have affected a game created "many years ago".
        
           | cypress66 wrote:
            | The number of draw calls is the same. The number of tris does
            | not affect the draw call count.
        
         | [deleted]
        
         | codeflo wrote:
         | There are lots of subtleties about static vs. dynamic geometry,
         | instancing and so forth, but to summarize, yes: In general,
         | normal maps are cheaper than the equivalent additional geometry
         | would be, by a lot. They are also higher quality for the same
         | computational budget, because you get high quality anti-
         | aliasing essentially for free.
         | 
         | Now, you are right that this might not matter much if you only
         | consider coins in particular. But _everything_ in a level wants
         | to have additional surface details. The ability to have normal
         | maps everywhere was a huge part of the jump in visual quality a
         | couple of console generations ago.
        
           | DubiousPusher wrote:
           | If I had to make a guess, I would say that this change was
           | made for reasons totally unrelated to performance.
           | 
           | First, modern GPUs are capable of handling millions of
           | triangles with ease. Games are almost never vertex bound.
            | They are usually pixel bound or bound by locking. In other
            | words, too many or too costly pixel operations are in use, or
            | there is some poor coordination between the CPU and GPU where
            | the CPU gets stuck waiting for the GPU, or the GPU is
            | underutilized at certain times and then overburdened at other
            | times.
           | 
           | Adding two maps adds two texture fetches and possibly some
           | blending calculations. It's very hard to weigh the comparable
           | impact because different situations have a huge impact on
           | which code gets executed.
           | 
            | For a model with many triangles, the worst-case scenario is
           | seeing the side of the model with the most faces. This will
           | cause the most triangles to pass the winding test, occlusion
           | tests, frustum tests, etc. This will be relatively the same
           | regardless of how close the coin is to the camera.
           | 
           | For the normal mapped test, the worst case scenario is when
           | the coin fills the view regardless of its orientation. This
            | is because the triangles will then be rasterized across the
           | entire screen resulting in a coin material calculation and
           | therefore the additional fetches for every pixel of the
           | screen.
           | 
           | Also, when it comes to quality, normal maps have a tendency
           | to break down at views of high angular incidence. This is
           | because though the lighting behaves as expected, it becomes
           | clear that parts of the model aren't sticking up and
           | occluding the coin. This means such maps are a poor solution
           | for things like walls on long hallways where the view is
            | expected to be almost parallel to the wall surfaces most
           | of the time.
           | 
            | There is a solution called Parallax Occlusion Mapping, though
            | it is expensive and I don't see it in a lot of games.
        
         | jeffbee wrote:
         | The Switch does not contain modern hardware, though. It has,
         | viewed from the year 2023, a ten-year-old low-end smartphone
         | CPU and what amounts to a GeForce 830M. This is consistent with
         | Nintendo's past strategy. The Wii, when it was launched, was
         | similar to a 10-year-old PowerBook G3, in a white box.
        
           | kijiki wrote:
           | > The Wii, when it was launched, was similar to a 10-year-old
           | PowerBook G3, in a white box.
           | 
           | True for the CPU, but quite unfair to the Wii's ArtX GPU,
           | which was substantially faster and more featureful than the
           | ATI r128 in the PB G3.
        
         | dorkwood wrote:
         | I think another benefit of using normal maps that no one is
         | mentioning is that you get significantly less aliasing due to
         | mipmapping and interpolation in the fragment shader. You can
         | model all the same detail into the mesh, but it will often
         | still look worse than the normal map approach. So I don't think
         | it's necessarily a purely performance-oriented decision.
        
           | wlesieutre wrote:
           | Normal mapping is cool but parallax occlusion mapping is some
           | real wizardry:
           | 
           | https://en.wikipedia.org/wiki/Parallax_occlusion_mapping
           | 
           | https://www.youtube.com/watch?v=0k5IatsFXHk
           | 
           | You'd have a bad time including all of that in the actual
           | model.
        
       | layer8 wrote:
       | Shouldn't that be "simpler by a factor of 2.8" instead of "65%
       | simpler"? The number of triangles is reduced by a factor of 2.8
       | (268/96). If "simplicity" is the inverse of the number of
       | triangles used, then a 65% increase in simplicity would translate
       | to a reduction of the number of triangles to 1/1.65 = 61%,
       | whereas it's really 1/2.8 = 36% (which I guess is where the 65%
       | was derived from as 1 - 35%).
       | 
       | Just nitpicking about the confusing use of percentages here.
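        | 
        | The arithmetic, with the triangle counts from the tweet:
        | 
        |   galaxy, odyssey = 268, 96
        |   print(galaxy / odyssey)      # 2.79: a factor of ~2.8
        |   print(odyssey / galaxy)      # 0.358: reduced *to* ~36%
        |   print(1 - odyssey / galaxy)  # 0.642: reduced *by* ~64%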
        
       | causality0 wrote:
       | I always thought the "yellow" style coins from games like Mario
       | 64 were much more aesthetically pleasing than the more realistic
       | gold coins of games like Odyssey.
        
         | edflsafoiewq wrote:
         | Me too. Incidentally, those are just billboards (1 quad) with a
         | grayscale+alpha texture. Vertex colors are used to tint them
         | yellow, red, or blue.
        
           | causality0 wrote:
           | As far as I know the N64 didn't support quads, just
           | triangles.
        
             | edflsafoiewq wrote:
             | That's correct, 1 quad just means 2 tris.
        
       | LarsDu88 wrote:
        | This is news to people? This tech has been around since before
        | DOOM3 (2004, nearly 20 years ago!)
        | 
        | As someone developing a VR game on the side who does all the
        | models in Blender myself, modeling in high res then baking to
        | normal maps is a time-consuming pain in the ass.
        
       | gravitronic wrote:
       | Now I understand the point of a normal map. Thanks!
        
       | dragontamer wrote:
       | For more information, see this GIF from Wikipedia:
       | https://en.wikipedia.org/wiki/Normal_mapping#/media/File:Ren...
       | 
        | The shape on the left is "true" polygon geometry. The four shapes
        | on the right use the "fake" technique of normal mapping.
       | 
       | https://en.wikipedia.org/wiki/Normal_mapping
       | 
       | In particular:
       | https://en.wikipedia.org/wiki/Normal_mapping#/media/File:Nor...
       | 
       | This png makes it more obvious how to "Bake" a proper mesh into a
       | normal map.
       | 
       | ----------
       | 
        | Normal mapping looks pretty good, but it's obviously not as good
        | as a proper mesh. Still though, normal mapping is far, far, far
        | cheaper to calculate. In most cases, normal mapping is "good
        | enough".
        
         | zwkrt wrote:
         | TIL about Normal Mapping, which to my layman eyes kind of looks
         | like computing a hologram on the flat surface of an object. In
         | the coin example in TFA, even though I now 'know' that the coin
          | is a cylinder, the normal map gives it a very convincing coin
          | shape. Cool!
        
           | a_e_k wrote:
            | If you think that's cool, the next level, which really could
            | be considered to behave like a hologram, is "parallax
            | mapping" and its variant "parallax occlusion mapping".
           | 
           | Wikipedia has a little video showing the effect of parallax
           | mapping in action:
           | https://en.wikipedia.org/wiki/Parallax_mapping
           | 
           | Another good example: https://doc.babylonjs.com/features/feat
           | uresDeepDive/material...
           | 
           | And there's a decent explanation here:
           | https://learnopengl.com/Advanced-Lighting/Parallax-Mapping
           | 
           | Some game examples:
           | http://wiki.polycount.com/wiki/Parallax_Map
           | 
           | Some nice game examples, specifically with looking into
           | windows: http://simonschreibt.de/gat/windows-ac-row-ininite/
           | 
           | Basically, in terms of levels of realism via maps, the
           | progression goes
           | 
            | 1. Bump mapping: the shader reads a heightfield and estimates
            | the gradients to compute an adjustment to the normals.
            | Provides some bumpiness, but tends to look a little flat.
           | 
           | 2. Normal mapping: basically a variant of bump mapping -- the
           | shader reads the adjustment to the normals directly from a
           | two- or three-channel texture.
           | 
            | 3. Parallax mapping: the shader offsets the lookups in the
            | texture map by a combination of the heightmap height and the
            | view direction (see the sketch after this list). Small bumps
            | will appear to shift correctly as the camera moves around,
            | but the polygon edges and silhouettes usually give the
            | illusion away.
           | 
           | 4. Parallax occlusion mapping: like parallax mapping, but
           | done in a loop where the shader steps across the heightfield
           | looking for where a ray going under the surface would
           | intersect that heightfield. Handles much deeper bumps, but
           | polygon edges and silhouettes still tend to be a giveaway.
           | 
            | 5. Displacement mapping: the heightfield map (or vector
           | displacement map) gets turned into actual geometry that gets
           | rendered somewhere further on in the pipeline. Pretty much
           | perfect, but _very_ expensive. Ubiquitous in film (feature
           | animation and VFX) rendering.
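            | 
            | As promised, a minimal Python version of the parallax-
            | mapping offset (names are mine; in practice this is
            | fragment shader code):
            | 
            |   def parallax_uv(u, v, view_ts, height, scale=0.05):
            |       # view_ts: normalized view direction in tangent
            |       # space, z pointing out of the surface.
            |       vx, vy, vz = view_ts
            |       h = height(u, v)  # heightfield sample in [0, 1]
            |       # Shift the lookup toward the viewer; the shift
            |       # grows where the surface is "high" and the view
            |       # is glancing.
            |       return (u + vx / vz * h * scale,
            |               v + vy / vz * h * scale)
            | 
            |   # Hypothetical heightfield: a bump at the center.
            |   def bump(u, v):
            |       d2 = (u - 0.5) ** 2 + (v - 0.5) ** 2
            |       return max(0.0, 1.0 - 8.0 * d2)
            | 
            |   uv = (0.5, 0.5)
            |   print(parallax_uv(*uv, (0.0, 0.0, 1.0), bump))  # head-on
            |   print(parallax_uv(*uv, (0.7, 0.0, 0.7), bump))  # glancing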
        
           | dragontamer wrote:
           | So "classic" rendering, as per DirectX9 (and earlier) is
           | Vertex Shader -> Hardware stuff -> Pixel Shader -> more
           | Hardware stuff -> Screen. (Of course we're in DirectX12U
            | these days, but stuff from 15 years ago is easier to
            | understand, so let's stick with DX9 for this post)
           | 
           | The "hardware stuff" is automatic and hard coded. Modern
           | pipelines added more steps / complications (Geometry shaders,
           | Tesselators, etc. etc.), but the Vertex Shader / Pixel Shader
           | steps have remained key to modern graphics since the early
           | 00s.
           | 
           | -------------
           | 
           | "Vertex Shader" is a program run on every vertex at the start
           | of the pipeline. This is commonly used to implement wind-
           | effects (ex: moving your vertex left-and-right randomly to
           | simulate wind), among other kinds of effects. You literally
           | move the vertex from its original position to a new one, in a
           | fully customizable way.
           | 
           | "Pixel Shader" is a program run on every pixel after the
            | vertexes were figured out (and redundant ones removed). It's
           | one of the last steps as the GPU is calculating the final
           | color of that particular pixel. Normal mapping is just one
           | example of the many kinds of techniques implemented in the
           | Pixel Shading step.
           | 
           | -------------
           | 
           | So "Pixel Shaders" are the kinds of programs that "compute a
            | hologram on a flat surface". And it's the job of a video game
           | programmer to write pixel shaders to create the many effects
           | you see in video games.
           | 
           | Similarly, Vertex Shaders are the many kinds of programs
           | (wind and other effects) that move vertices around at the
           | start of the pipeline.
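            | 
            | A toy Python version of the two stages described above
            | (pure illustration; the real stages run on the GPU,
            | massively in parallel):
            | 
            |   import math
            | 
            |   def vertex_shader(pos, time):
            |       # Wind effect: wobble each vertex sideways.
            |       x, y, z = pos
            |       return (x + 0.1 * math.sin(time + y), y, z)
            | 
            |   def pixel_shader(uv, sample_normal, light):
            |       # Normal mapping: fetch a normal for this pixel
            |       # and light it with a simple dot product.
            |       n = sample_normal(uv)
            |       return max(0.0, sum(a * b for a, b in zip(n, light)))
            | 
            |   pts = [(0, 0, 0), (0, 1, 0)]
            |   print([vertex_shader(p, 0.5) for p in pts])
            |   flat_map = lambda uv: (0.0, 0.0, 1.0)  # stand-in "map"
            |   print(pixel_shader((0.5, 0.5), flat_map, (0.0, 0.0, 1.0)))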
        
         | bendmorris wrote:
         | There are more texture mapping approaches which can be added on
         | top of normal mapping to make the illusion even better, such as
         | parallax [1] or displacement [2] mapping.
         | 
         | [1] https://en.wikipedia.org/wiki/Parallax_mapping
         | 
         | [2] https://en.wikipedia.org/wiki/Displacement_mapping
        
           | dragontamer wrote:
            | Displacement mapping actually changes the mesh though, so it's
           | no longer a simple mapping that can be implemented in the
           | pixel-shader step alone.
           | 
           | ------
           | 
            | IMO, it's important to understand the limitations of pixel
            | shading, because it's an important step in the modern graphics
           | pipeline.
        
       | xg15 wrote:
       | So far I found most purely normal/depth map based effects
        | disappointing: yes, they look amazing when you look at the
       | frontally and sufficiently far away. But especially if you look
       | from the side, the effect falls flat because occlusion and
       | silhouettes are still following the model, not the textures.
       | 
       | Not sure if this has changed in recent years though.
        
       | darkerside wrote:
       | As a layman, I notice that the new model makes use of circles
       | while the old model uses straight edge polygons.
        
         | stefncb wrote:
         | They're both straight edged, as you can see in the wireframe
         | view. It's just that with modern programmable GPUs you get to
         | fake the roundness.
         | 
          | Essentially, in the lighting calculations, instead of using the
          | real normal of the polygon face, you interpolate the vertex
          | normals across the face, making it look smooth.
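          | 
          | A tiny sketch of that in Python (simple Lambert lighting;
          | helper names are mine):
          | 
          |   import math
          | 
          |   def normalize(v):
          |       l = math.sqrt(sum(c * c for c in v))
          |       return tuple(c / l for c in v)
          | 
          |   def lambert(n, light):
          |       return max(0.0, sum(a * b for a, b in zip(n, light)))
          | 
          |   def lerp(a, b, t):
          |       return tuple(x + (y - x) * t for x, y in zip(a, b))
          | 
          |   # Two vertex normals on a "round" edge, light from above.
          |   n0, n1 = normalize((1, 1, 0)), normalize((-1, 1, 0))
          |   light = (0.0, 1.0, 0.0)
          | 
          |   # Flat shading: one face normal, one brightness everywhere.
          |   print(lambert((0.0, 1.0, 0.0), light))
          |   # Smooth shading: the interpolated normal (and brightness)
          |   # varies continuously between the vertices.
          |   for t in (0.0, 0.5, 1.0):
          |       print(lambert(normalize(lerp(n0, n1, t)), light))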
        
       | polytely wrote:
       | One of the quote tweets linked to this article about the most
       | efficient way to model cylinders, which is pretty interesting:
       | https://www.humus.name/index.php?page=Comments&ID=228
        
         | Aeolun wrote:
          | That's a surprising amount of difference in performance. Wonder
          | if it would still hold true on current GPUs.
        
           | kame3d wrote:
           | I was also curious so I just implemented and tried. For
           | circles with 49152 vertices I get (RTX 2070 Super):
           | 
           | - max area: ~2600 fps
           | 
           | - fan: ~860 fps
           | 
           | So it still holds true.
        
       | mewse wrote:
       | "Simpler" is maybe a bit of a stretch, since what's happened is
       | that a lot of the data which previously existed in vertices has
       | been moved to substantially higher-detail storage in texture
       | maps. If you weren't using the texture maps, you'd have to have a
       | lot more vertices to compensate and get an acceptable look.
       | 
       | In modern games, you often can't really separate the model from
       | the textures the way that you used to. Hell, I have a couple
       | models I render in my game which take their geometry _entirely_
        | from textures and don't use traditional model vertices to carry
       | their mesh at all. These days you kind of have to consider models
       | and textures to be basically the same thing; just different data
       | sources that get fed into your shader and accessed in different
       | ways, and you can often move data back and forth between them
       | depending on the exact effect you want to get. But I wouldn't
       | call my no-vertex-data models "simpler" than a version which was
       | drawn from a static mesh; all that complexity is still there,
       | it's just being carried in a texture instead of in a vertex
       | buffer object.
        
         | rhn_mk1 wrote:
         | How do you even create an object without any vertex data? The
         | texture has to be stretched over _something_. Is this something
         | just the center point of the object?
        
           | tskool3 wrote:
           | [dead]
        
           | pixelesque wrote:
           | You can use various forms of projection mapping, i.e.
           | triplanar projection or surface gradient projection...
           | 
           | That way you only need point positions in the mesh, and no UV
           | or normals.
           | 
            | However, I think (at least in games: in VFX we go the other
            | way and have many, many "primvars" as vertex/point data) most
            | real-time setups for hero objects end up having points and
            | UVs, and then normals usually come from a texture.
        
           | mewse wrote:
           | You can put anything in a texture these days; they're just
            | generic POD arrays now and you can do pretty much whatever
           | you want with them; they definitely aren't always images that
           | get stretched over explicit meshes! In my game, for example,
           | I have one texture which contains the xyz positions and rgb
           | colors of lights. And another which contains connectivity
           | data for a road network. Neither one is an image that gets
           | displayed or stretched over things, but they're used as data
           | within various shaders.
           | 
           | The most "pure" way to do a "no mesh" render would probably
           | be to use geometry shaders to emit your geometric primitives.
           | (You do need to give it at least one "starting" primitive to
           | make the draw call, but you're free to entirely ignore that
           | primitive). It'd be trivial to embed some mesh data into a
           | "texture", for example, just by reading vertex xyz positions
           | into the rgb channels of a 1D texture that's
           | `number_of_triangles*3` elements wide, and your geometry
           | shader could walk along that texture and output those
           | vertices and triangles and have them get rendered, without
           | ever accessing a vertex buffer object.
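            | 
            | A sketch of that packing with numpy (shapes and names are
            | mine; a real renderer would upload this as a float texture
            | and texelFetch from it):
            | 
            |   import numpy as np
            | 
            |   # Three triangles' worth of xyz positions, laid out so
            |   # vertex c of triangle t lives at texel t*3 + c.
            |   triangles = np.array([
            |       [[0, 0, 0], [1, 0, 0], [0, 1, 0]],
            |       [[1, 0, 0], [1, 1, 0], [0, 1, 0]],
            |       [[0, 0, 1], [1, 0, 1], [0, 1, 1]],
            |   ], dtype=np.float32)
            | 
            |   # "Texture": 1D rgb float image, width = num_tris * 3.
            |   texture = triangles.reshape(-1, 3)
            | 
            |   def fetch_vertex(tri, corner):
            |       # What the geometry shader's texel fetch would do.
            |       return texture[tri * 3 + corner]
            | 
            |   print(texture.shape)       # (9, 3): nine rgb texels
            |   print(fetch_vertex(1, 2))  # corner 2 of triangle 1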
        
             | user3939382 wrote:
             | Wow this has advanced a lot. I'm a programmer but in a
             | totally different space. The last time I learned about
             | this, many years ago, textures were basically bitmap files.
        
               | tenebrisalietum wrote:
               | Textures still are, but I think what OP was saying is
               | they can be used for purposes other than painting the
               | sides of polygons. Shaders are basically programs that
                | run massively in parallel, per pixel to be rendered, if
                | I understand right.
        
             | sdwr wrote:
              | What's your game?
        
             | gcr wrote:
              | This is how Linden Lab used to represent user-created
              | models in Second Life! You could just upload a .jpeg of
              | your model geometry. Not sure if that happened in the
              | vertex shader - probably not - but it's a similar idea.
        
         | Jasper_ wrote:
         | The distinction is that textures are dense and have rigid
         | topology (a grid) while a vertex mesh can be sparse, and has
         | arbitrary topology. The distinction is extra clear in a
         | voxel/SDF approach, where you see harsh grid boundaries because
         | detail is undersampled, but you can't crank up the resolution
         | without oversampling elsewhere. The rigid topology of textures
         | can really hurt you.
        
           | water-your-self wrote:
           | > because detail is undersampled, but you can't crank up the
           | resolution without oversampling elsewhere.
           | 
           | I thought octrees solved this?
        
           | h0l0cube wrote:
           | > The rigid topology of textures can really hurt you.
           | 
           | Technically you could distort the UV map if you need variable
           | resolution across a model.
        
             | wildpeaks wrote:
             | Rizom UV can be used to generate UV maps containing
              | multiple resolutions (texel density): https://www.rizom-
             | lab.com/how-to-use-texel-density-in-rizomu...
             | 
             | It's a very specialized tool (only about UV maps) but goes
             | in depth. It's so handy, I wouldn't be surprised if Epic
             | buys that company sooner or later.
        
             | fxtentacle wrote:
             | You can even automatically generate the optimal UV
              | allocation by balancing resolution-to-area distortion
             | against polygon edge distortion and optimizing with a
             | constrained least-squares solver.
        
               | Arrath wrote:
               | I'm half convinced you pulled this line from a ST:TNG
               | script.
        
               | justinator wrote:
               | All that's needed is for Data to say, "Captain, I've
               | figured a way out:"
        
               | xg15 wrote:
               | Ensign Crusher might have a point here!
        
               | LolWolf wrote:
              | And, lucky for everyone involved, we can solve huge
              | constrained least-squares problems in milliseconds on
              | modern machines!
        
           | carterschonwald wrote:
            | Could you expand on the problem with SDF approaches? I've
            | always thought or assumed that if there are high-frequency
            | features in a signed distance function rep, you could use
            | gradient/derivative info to do adaptive sampling?
        
             | Jasper_ wrote:
             | I'm referring to a discrete SDF, where you have a 2D/3D
             | grid storing distance samples, e.g. font textures, or mesh
             | distance field textures [0]. You can use a tree of textures
             | to get adaptive sampling, but a single texture gives you a
             | fixed resolution grid.
             | 
             | [0] See the "Quality" section here:
             | https://docs.unrealengine.com/4.27/en-
             | US/BuildingWorlds/Ligh...
        
               | carterschonwald wrote:
               | Gotcha. Nyquist and friends strike again!
        
         | aidenn0 wrote:
         | With normal maps are there issues with improper occlusion when
         | viewing at an angle, or are the textures "smart" enough to
         | avoid that?
        
           | Synaesthesia wrote:
           | When you view it from the side you can tell it's still a flat
           | polygon without depth, but mostly it's a really convincing
           | effect.
        
         | thiht wrote:
         | > "Simpler" is maybe a bit of a stretch
         | 
         | It's not. The full rendering might not be simpler, but the
         | model _is_ simpler.
        
           | ajross wrote:
            | Seems unlikely. Neither looks like more than a few dozen
           | vertices. The version with the normal map is likely going to
           | be significantly _larger_ in GPU resources due to the texture
           | storage.
           | 
            | That said... how is this news, exactly? Bump/normal mapping
            | was a novel, newfangled technique back in the late 1990s...
            | In its original form, it actually predates programmable GPUs.
        
           | jstimpfle wrote:
           | The _mesh_ is simpler.
        
             | qikInNdOutReply wrote:
              | Eh, the low-poly mesh is simpler, but the high-fidelity
              | mesh used to bake out the normal texture is actually more
              | complex than ever.
        
               | jstimpfle wrote:
               | I would guess the textures for this model have been
               | directly created from a CSG description, no need for a
               | higher fidelity mesh.
        
               | klodolph wrote:
               | The CSG approaches are somewhat more rare. The tools for
               | working with meshes are just so much better. When I've
               | seen people use CSG these days, it's often used as a tool
               | to create a mesh.
        
               | gcr wrote:
              | Sorry for nitpicking, but that's actually a misconception -
              | the GPU doesn't generate intermediate vertices as a
               | result of normal maps! all of that happens in the
               | fragment shader, which only changes how light behaves at
               | the fragments being rendered. Look close and you'll see
               | distortion.
               | 
               | a vertex shader could change the geometry or emit
               | intermediate vertices from a displacement map, but my
               | understanding is that that also happens via fragment
               | shader tricks!
               | 
               | perhaps you mean the artist's highly detailed mesh used
               | to bake normal maps but those aren't typically shipped
               | out to the user's game engine to save space, it's just an
               | artifact of the artist's production process
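               | 
               | (A minimal sketch of that per-fragment lighting trick,
               | in Python rather than a real fragment shader; the TBN
               | names and numbers are illustrative, not from any actual
               | engine:)
               | 
               |   import numpy as np
               |   
               |   def shade(normal, tangent, bitangent, texel, light):
               |       # Decode the [0,1] texel to a [-1,1] tangent-
               |       # space normal, as normal map textures store it.
               |       n_ts = np.asarray(texel) * 2.0 - 1.0
               |       # Rotate into world space with the TBN basis;
               |       # the geometry itself is never touched.
               |       n = (tangent * n_ts[0] + bitangent * n_ts[1]
               |            + normal * n_ts[2])
               |       n /= np.linalg.norm(n)
               |       return max(np.dot(n, light), 0.0)  # Lambert term
               |   
               |   # A "flat" texel (0.5, 0.5, 1.0) decodes back to the
               |   # unperturbed surface normal -> full brightness.
               |   print(shade(np.array([0., 0., 1.]),
               |               np.array([1., 0., 0.]),
               |               np.array([0., 1., 0.]),
               |               (0.5, 0.5, 1.0),
               |               np.array([0., 0., 1.])))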
        
               | wccrawford wrote:
               | They literally said "bake" so yeah, they mean the
               | original mesh.
        
               | woah wrote:
               | Could you do it by lighting a clay model with some
               | colored lights?
        
         | polishdude20 wrote:
         | Doesn't the bump map use more data than the extra vertices?
        
           | tinus_hn wrote:
           | If you render a bump map, the graphics system actually
           | creates geometry and draws it. This is a normal map, which
           | just fiddles with the lighting to make it approximately match
           | what you would get if the details were actually there.
           | 
           | The normal map probably takes up slightly more memory than
           | all the geometry that is in the more complex model, but the
           | more complex model causes the gpu to render a lot more
           | triangles which is a lot more work. Looking up values in the
           | normal map is something the gpu is very good at.
        
             | ygra wrote:
             | > If you render a bump map, the graphics system actually
             | creates geometry and draws it.
             | 
             | That sounds more like a displacement map.
        
         | dudeinjapan wrote:
         | In terms of the asset's size in bytes, the Odyssey coin has
         | got to be way larger.
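         | 
         | (Back-of-envelope, with purely illustrative numbers - the
         | actual asset sizes aren't public:)
         | 
         |   normal_map = 256 * 256 * 4  # RGBA8 texels ~ 256 KiB
         |   mesh = 1000 * 32            # ~1000 verts * 32 B ~ 31 KiB
         |   print(normal_map / mesh)    # texture ~8x larger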
        
         | brundolf wrote:
         | But a normal map renders much more cheaply than the equivalent
         | geometry, right? I assume that's why they exist at all. "All
         | that complexity is still there" feels beside the point
        
           | ladberg wrote:
           | Maybe, maybe not. It depends on what your constraints are.
           | The normal map probably takes up much more memory than the
           | mesh, and the shaders needed to render it are likely more
           | complicated (though they would likely already be included
           | in the engine, and thus incur no additional cost).
        
           | BearOso wrote:
           | Probably not. You save on vertex transforms, but all the work
           | is put into the pixel shaders, which do a bunch of
           | calculations per fragment. If this is full parallax mapping,
           | it's definitely more total work this way. Whether it's faster
           | in the end depends on where you have CPU/GPU time to spare
           | and what your GPU is good at.
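           | 
           | (For reference, the core of simple parallax mapping is just
           | a per-fragment UV offset - a rough Python sketch, with the
           | scale factor and sample values made up:)
           | 
           |   def parallax_uv(u, v, view_ts, height, scale=0.05):
           |       # Shift the texture lookup along the tangent-space
           |       # view direction, proportional to sampled height:
           |       # one heightfield fetch instead of extra triangles.
           |       u += view_ts[0] / view_ts[2] * height * scale
           |       v += view_ts[1] / view_ts[2] * height * scale
           |       return u, v
           |   
           |   print(parallax_uv(0.5, 0.5, (0.3, 0.1, 0.9), height=0.8))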
        
             | [deleted]
        
             | chc wrote:
             | If parallax mapping is more expensive than just rendering
             | the equivalent polygons, doesn't that imply that the
             | Spider-Man games are being wasteful with their texture-
             | based building interiors? I thought it was supposed to be
             | an optimization and rendering actual objects would be
             | wasteful. Have I got that wrong?
        
               | wildpeaks wrote:
               | It's not wasteful, because Interior Mapping replaces
               | many objects (which would each have their own vertices
               | and textures) with a single square and texture, so you
               | can have a complex room with many objects at the cost
               | of a single object.
               | 
               | Example: https://andrewgotow.com/2018/09/09/interior-
               | mapping-part-2/
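               | 
               | (The trick boils down to one ray/plane intersection per
               | fragment - a rough Python sketch, with the room pitch
               | and sample ray invented:)
               | 
               |   import numpy as np
               |   
               |   def interior_hit(pos, view, room=1.0):
               |       # Nearest virtual wall/floor/ceiling plane that
               |       # the ray entering the facade at `pos` will hit.
               |       d = np.where(np.abs(view) < 1e-8, 1e-8, view)
               |       planes = (np.floor(pos / room) + (d > 0)) * room
               |       t = (planes - pos) / d
               |       # Hit point; index it into a room texture.
               |       return pos + t.min() * view
               |   
               |   print(interior_hit(np.array([0.3, 0.7, 0.0]),
               |                      np.array([0.2, -0.4, 0.9])))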
        
         | Aeolun wrote:
         | How do you determine the position/size of something that is
         | entirely a texture/shader? Does the shader just make
         | something up based on size?
        
           | vikingerik wrote:
           | It still has a position and size. It's just a simple
           | definition, something like only the center point of the
           | object and one size value, rather than a mesh with hundreds
           | or thousands of vertex coordinates.
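           | 
           | (So the entire per-coin asset could be as small as this - a
           | hypothetical layout, not Nintendo's actual format:)
           | 
           |   from dataclasses import dataclass
           |   
           |   @dataclass
           |   class CoinInstance:
           |       x: float; y: float; z: float  # centre position
           |       radius: float                 # one size value
           |   
           |   # 4 floats, versus hundreds of vertices at 3+ floats
           |   # apiece for a fully modelled coin.
           |   coin = CoinInstance(12.0, 3.5, -7.0, 0.5)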
        
         | bowmessage wrote:
         | Your point about models taking their geometry entirely from
         | textures reminds me of 'sculpties' from Second Life.
        
         | flangola7 wrote:
         | Textures are an image stretched over a wireframe. I don't see
         | how you can have that without a wireframe.
        
           | gcr wrote:
           | Inigo Quilez does some incredible stuff with signed distance
           | fields! Those don't use textures in the traditional sense,
           | but see https://www.youtube.com/watch?v=8--5LwHRhjk
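           | 
           | (The basic "sphere tracing" loop behind that style of
           | rendering is tiny - a Python sketch using a crude coin SDF,
           | after Quilez's capped-cylinder distance formula; sizes and
           | the test ray are made up:)
           | 
           |   import numpy as np
           |   
           |   def sd_coin(p, r=0.5, h=0.05):
           |       # Signed distance to a capped cylinder (the coin).
           |       d = np.array([np.hypot(p[0], p[2]) - r,
           |                     abs(p[1]) - h])
           |       return (min(max(d[0], d[1]), 0.0)
           |               + np.linalg.norm(np.maximum(d, 0.0)))
           |   
           |   def sphere_trace(origin, direction, eps=1e-4):
           |       # March by the distance bound until we hit (or miss).
           |       t = 0.0
           |       for _ in range(64):
           |           dist = sd_coin(origin + t * direction)
           |           if dist < eps:
           |               return t   # hit at ray parameter t
           |           t += dist
           |           if t > 100.0:
           |               break
           |       return None        # miss
           |   
           |   print(sphere_trace(np.array([0.0, 0.0, -2.0]),
           |                      np.array([0.0, 0.0, 1.0])))  # ~1.5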
        
         | Buttons840 wrote:
         | > Hell, I have a couple models I render in my game which take
         | their geometry entirely from textures and don't use traditional
         | model vertices
         | 
         | I've briefly looked at rendering APIs, newer ones like Vulkan
         | or WGPU. I got the impression that ultimately there is only
         | data and shaders. Is there any truth to this? Of course, you
         | want your data to be useful for the parallel calculations
         | done by shaders.
        
           | klodolph wrote:
           | I'm not exactly sure what you're getting at here, but "only
           | data and shaders" is not correct. We still have big fixed-
           | function chunks like the perspective divide, clipping,
           | rasterization, depth test, etc. The pipeline is very
           | flexible and programmable, but games aren't using pure
           | GPGPU approaches to render the frame (although they might
           | use GPGPU for certain tasks).
        
       | lmpdev wrote:
       | I wonder if there's an opportunity down the line, as engines
       | get faster, for NURBS + modern texture rendering?
       | 
       | Especially for geometrically simple shapes like this coin.
       | 
       | After using Rhino 3D for a while, meshes feel "dirty".
       | 
       | I know NURBS would be much slower to render, but I think it
       | could be a potential future look for games.
       | 
       | NB: I'm not a game developer or 3D modeller.
        
         | pezezin wrote:
         | GPUs have implemented tessellation shaders for a long time
         | now, and they can be used to implement what you describe. The
         | Xbox 360 already had them, but they were too slow at the
         | time, so they weren't used much. If I'm not mistaken, later
         | generations have used them extensively.
        
         | vvilliamperez wrote:
         | NURBS are not efficient for realtime rendering.
         | 
         | There's no reason to encode all that curve data if all you
         | care about in the end is a finite set of pixels rendered on
         | the screen.
         | 
         | You can achieve the same level of detail without having to
         | recalculate the surfaces. It's a lot easier for the computer
         | to morph a shape by applying some math function (that's
         | essentially arithmetic) to all the polygons' vertices.
         | 
         | That said, I've seen some 3D artists work in NURBS, then
         | render out to polygons.
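         | 
         | (That workflow is essentially: evaluate the exact curve once,
         | offline, then ship only the sampled polyline/mesh. A Python
         | sketch with a Bezier curve - NURBS's simpler cousin - and
         | made-up control points:)
         | 
         |   import numpy as np
         |   
         |   def bezier(ctrl, t):
         |       # De Casteljau evaluation: repeated linear
         |       # interpolation between control points.
         |       pts = np.array(ctrl, dtype=float)
         |       while len(pts) > 1:
         |           pts = (1 - t) * pts[:-1] + t * pts[1:]
         |       return pts[0]
         |   
         |   ctrl = [(0, 0), (0, 1), (1, 1), (1, 0)]
         |   # 16 samples "rendered out to polygons" (a polyline here).
         |   polyline = [bezier(ctrl, t) for t in np.linspace(0, 1, 16)]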
        
         | throwaway17_17 wrote:
         | One of the largest problems with moving to anything other
         | than triangles as the rendering primitive is the decades of
         | industry investment: in content-production pipelines, and in
         | hardware (on the player end) designed and optimized to push
         | triangles. There would be huge issues with backwards
         | compatibility for a GPU that defaulted to anything other than
         | triangles, not to mention the graphics APIs designed for
         | those triangle pushers.
        
       | butz wrote:
       | Could we "3D golf" to make coins looking the same, but using even
       | less vertices?
        
         | cdev_gl wrote:
         | Absolutely. The minimal mesh is just 3 triangles rendering a
         | 3D texture that contains depth information that gets
         | raytraced in a shader. Or you could send a single point
         | vertex over and have a geometry shader generate the geometry
         | vertices for you.
         | 
         | Or you could reduce the texture complexity to a 2D texture
         | without raytracing if you defined the coin mathematically in
         | the shader. Heck, you could get rid of the texture entirely
         | and just do everything with math in the shader.
         | 
         | 3D graphics used to require rigid hardware distinctions
         | between vertices, textures, etc., but in modern pipelines
         | it's pretty much all data, and the programmer decides what to
         | do with it.
        
       | pyrolistical wrote:
       | Could they have stripped the Odyssey coin down further to
       | reduce the vertex buffer?
        
         | wlesieutre wrote:
         | The details within the object can look great, but you'll see
         | the low vertex count in the shape of its silhouette.
        
       ___________________________________________________________________
       (page generated 2023-03-17 23:00 UTC)