[HN Gopher] Vector graphics on GPU
       ___________________________________________________________________
        
       Vector graphics on GPU
        
       Author : mpweiher
       Score  : 122 points
       Date   : 2022-08-08 10:37 UTC (2 days ago)
        
 (HTM) web link (gasiulis.name)
 (TXT) w3m dump (gasiulis.name)
        
       | Pulcinella wrote:
        | I'm glad that more and more people seem to be looking into
        | rendering vector graphics on the GPU.
        | 
        | Has anyone done any image comparisons between CPU and GPU
        | rendering? I would be worried about potential quality and
        | rendering issues in a GPU-rendered image vs a CPU-rendered
        | reference image.
        
         | jstimpfle wrote:
         | There's nothing to worry about. You can do the same things on
         | the GPU as on the CPU. The tricky part is finding a good way to
         | distribute the work on many small cores.
        
         | moonchild wrote:
          | The interesting primitives are add, mul, fma, and sqrt. All of
          | these are mandated by IEEE 754 to be correctly rounded. While
          | GPUs have been found to cut corners in the past, I wouldn't
          | worry too much about it.
        
         | TobTobXX wrote:
         | Shouldn't a GPU render (given a correct algorithm
         | implementation) be more correct in environments where zooming
          | and sub-pixel movements are common (e.g. browsers)? The GPU
          | runs the mathematical computations every frame for the exact
          | pixel dimensions, while the CPU may often use techniques like
          | upscaling.
        
           | Pulcinella wrote:
           | I would at least think it would be faster/smoother during
           | operations like zooming.
           | 
           | My concern was about precision of math operations on the GPU
           | and potential differences between GPU vendors (or even
           | different models of GPUs from the same vendor).
        
         | [deleted]
        
       | slabity wrote:
       | I don't want to be the guy that doesn't read the entire article,
       | but the first sentence surprised me quite a bit:
       | 
       | > Despite vector graphics being used in every computer with a
       | screen connected to it, rendering of vector shapes and text is
       | still mostly a task for the CPU.
       | 
       | Do modern vector libraries really not use the GPU? One of the
       | very first things I did when learning Vulkan was to use a
       | fragment shader to draw a circle inside a square polygon. I
       | always assumed that we've been using the GPU for pretty much any
       | sort of vector rasterization, whether it was bezier curves or
       | font rendering.
        
         | dragontamer wrote:
          | 3D vector graphics are not as full-featured as 2D vector
          | graphics.
          | 
          | 2D vector graphics include things like "bones" and "tweening",
          | which are CPU algorithms. (Much like how bone processing in
          | the 3D world is also CPU-side processing.)
          | 
          | ---------
          | 
          | Consider the creation of a Bezier curve, in 2D or 3D. Do you
          | expect this to be a CPU algorithm, or a GPU algorithm? Answer:
          | clearly a CPU algorithm.
          | 
          | GPU algorithms are generally triangle-only, or close to it
          | (ex: quads) as far as geometry. Sure, there are geometry
          | shaders, but I don't think it's common practice to take a
          | Bezier curve definition, write a tessellation shader for it,
          | and output (in parallel) a set of vertices. (And if someone is
          | doing that, I'm interested in hearing / learning more about
          | it. It seems like a parallelizable algorithm to me, but the
          | devil is always in the details...)
        
           | jstimpfle wrote:
            | GPUs have evolved away from being strictly triangle
            | rasterizers. There are compute shaders that can do
            | general-purpose computing. The approach described here could
            | in theory be set up by "drawing" a single quad - the whole
            | screen - and it doesn't even need compute shaders: it can be
            | implemented using conventional vertex/fragment shaders with
            | global buffer access (in OpenGL, UBOs or SSBOs).
            | 
            | There is a well-known paper that describes an approach for
            | drawing Bezier curves by "drawing" a single triangle. Check
            | out Loop-Blinn from 2005.
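            | 
            | As a minimal sketch (my own toy Rust transcription, not the
            | paper's code): each control point of a quadratic segment is
            | assigned the canonical coordinates (0,0), (0.5,0), (1,1),
            | the rasterizer interpolates them across the triangle, and a
            | fragment lies inside the curved region when u^2 - v < 0.
            | 
            |   // Implicit form of the canonical quadratic v = u^2.
            |   fn inside_quadratic(u: f32, v: f32) -> bool {
            |       u * u - v < 0.0
            |   }
            | 
            |   fn main() {
            |       assert!(inside_quadratic(0.5, 0.5)); // concave side
            |       assert!(!inside_quadratic(0.9, 0.1)); // convex side
            |   }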
        
             | Lichtso wrote:
             | Ah yes, the https://en.wikipedia.org/wiki/Implicit_curve
             | approach to filling curves. I have implemented a GPU vector
             | renderer using that:
             | https://github.com/Lichtso/contrast_renderer Here the
             | implicit curve filling extracted as a shader toy:
             | https://www.shadertoy.com/view/WlcBRn
             | 
              | It can even do rational cubic Bezier curves,
              | resolution-independently. And to my knowledge it is the
              | only renderer capable of that so far.
        
               | jstimpfle wrote:
                | Your project sounds very impressive. I would like to try
                | it out, but unfortunately I'm unlikely to be able to get
                | it running by building the Rust myself. If I understand
                | correctly it should be able to run on the web - do you
                | have a demo somewhere? Or a video?
        
               | Lichtso wrote:
               | There is a WASM based web demo on GitHub:
               | https://lichtso.github.io/contrast_renderer/
               | 
               | You will need to try different nightly browsers (I think
               | Chrome works ATM), because the WebGPU API changes and
               | breaks all the time. Also don't forget to enable WebGPU,
               | you can check that here: https://wgpu.rs/examples-gpu/
               | 
                | The WASM port is highly experimental: it currently does
                | not use interval handlers, so for animations to run you
                | need to constantly move the mouse to provide
                | frame-triggering events. In WebGPU, MSAA is limited to 4
                | samples ATM, so anti-aliasing will look kind of bad in
                | browsers. And the keyboard mapping is not configured, so
                | typing in text fields produces gibberish.
        
             | dragontamer wrote:
              | Seeing that this is from Microsoft Research, and that
              | Microsoft's font renderer has always looked nicer (and is
              | known to be GPU-rendered to boot), this makes a lot of
              | sense.
              | 
              | Still, my point stands that this is relatively uncommon
              | even in the realm of 3D programmers. Unity / Unreal Engine
              | don't seem to do GPU-side Bezier curve processing, even
              | though Microsoft published the algorithm back in 2005.
        
               | jstimpfle wrote:
                | What font renderer do you mean? I don't know about the
                | internals of Microsoft renderers, but vector graphics
                | and font rasterization are generally distinct
                | disciplines. This has started to change with
                | higher-resolution displays, but font rasterization has
                | traditionally been a black art involving things like
                | grid snapping, stem darkening, etc. Probably (though I
                | can't back this up) the most-used font rasterization
                | technologies are still Microsoft ClearType (are there
                | implementations that use the GPU??) and FreeType
                | (strictly a CPU rasterizer). I don't know about macOS,
                | but I heard they don't do any of the advanced stuff and
                | have less sharp fonts on low-DPI displays.
               | 
                | I would also like to know where Loop-Blinn is used in
                | practice. I once did an implementation of quadratic
                | Beziers using it, but I'm not up to doing the cubic
                | version; it's very complex.
        
               | dragontamer wrote:
               | Microsoft's DirectWrite font renderer, which has been the
               | default since Windows 7 IIRC:
               | https://docs.microsoft.com/en-
               | us/windows/win32/directwrite/d...
               | 
                | It's a black box. But Microsoft is very clear that it's
                | "hardware accelerated", whatever that means (i.e. I
                | think it means they have GPU shaders handling a lot of
                | the details).
                | 
                | GDI etc. are legacy. You were supposed to start
                | migrating towards Direct2D and DirectWrite over a decade
                | ago. ClearType itself moved to DirectWrite (though it
                | still has a GDI renderer for legacy purposes).
               | 
               | https://docs.microsoft.com/en-
               | us/windows/win32/directwrite/i...
        
           | slabity wrote:
           | I'm not really experienced when it comes to GPU programming,
           | so forgive me if I'm wrong with this, but some of the things
           | you say don't make a lot of sense to me:
           | 
            | > 2D vector graphics include things like "bones" and
            | "tweening", which are CPU algorithms. (Much like how bone
            | processing in the 3D world is also CPU-side processing.)
           | 
           | Changing the position of bones does seem like something you
           | would do on a CPU (or at least setting the indices of bone
           | positions in a pre-loaded animation), but as far as I'm
           | aware, 99% of the work for this sort of thing is done in a
           | vertex shader as it's just matrix math to change vertex
           | positions.
           | 
            | > Consider the creation of a Bezier curve, in 2D or 3D. Do
            | you expect this to be a CPU algorithm, or a GPU algorithm?
            | Answer: clearly a CPU algorithm.
            | 
            | Why is it clearly a CPU algorithm? If you throw the Bezier
            | data into a uniform buffer, you can use a compute shader
            | that writes to an image and just checks whether each pixel
            | falls within the bounds of the curve. You don't need to use
            | the graphics pipeline at all if you're not using vertices.
            | Or just throw a quad on the screen and jump straight to the
            | fragment shader, like I did with my circle.
        
         | Jasper_ wrote:
         | Skia mostly uses the CPU -- it can draw some very basic stuff
         | on the GPU, but text and curves are a CPU fallback. Quartz 2D
         | is full CPU. cairo never got an acceptable GPU path. Direct2D
         | is the tessellate-to-triangle approach. If you name a random
         | vector graphics library, chances are 99% of the time it will be
         | using the CPU.
        
           | jahewson wrote:
           | Skia can tessellate curved paths.
        
         | amelius wrote:
         | Yes, it's like the article is trying to ignore why GPUs were
         | invented in the first place.
        
         | TazeTSchnitzel wrote:
         | There's definitely a lot of code out there that still does this
         | only on the CPU, but the optimized implementations used in
         | modern OSes, browsers and games won't.
        
         | ygra wrote:
          | A few of the previous approaches are mentioned in _Other work_
          | near the end. And from reading a few articles on the topic I
          | got the impression that, yes, while drawing a single shape in
          | a shader seems almost trivial, vector graphics in general
          | means mostly what PostScript/PDF/SVG are capable of these
          | days. That means you don't just need filled shapes, but also
          | strokes (and stroking is in itself a quite complicated
          | problem), including dashed lines, line caps, etc. Gradients,
          | image fills and blending modes are probably on the more
          | trivial end, since I think those can all be easily solved in
          | shaders.
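          | 
          | Gradients really are the shader-friendly part. A toy sketch
          | (mine, not from any particular library) of evaluating a
          | two-stop linear gradient per pixel:
          | 
          |   // Project the pixel onto the gradient axis p0 -> p1,
          |   // clamp to [0, 1] ("pad" spreading in SVG terms), then
          |   // lerp the two stop colors channel by channel.
          |   fn linear_gradient(px: f32, py: f32,
          |                      p0: (f32, f32), p1: (f32, f32),
          |                      c0: [f32; 4], c1: [f32; 4]) -> [f32; 4] {
          |       let (dx, dy) = (p1.0 - p0.0, p1.1 - p0.1);
          |       let t = ((px - p0.0) * dx + (py - p0.1) * dy)
          |           / (dx * dx + dy * dy);
          |       let t = t.clamp(0.0, 1.0);
          |       std::array::from_fn(|i| c0[i] + (c1[i] - c0[i]) * t)
          |   }
          | 
          | Every pixel is independent, which is exactly what fragment
          | shaders want; strokes have no such simple per-pixel form.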
        
           | bXVsbGVy wrote:
            | In SVG, strokes have a separate specification of their own
            | [1].
            | 
            | The specification has images that highlight some of the
            | challenges.
           | 
           | [1] https://svgwg.org/specs/strokes/
        
         | Guzba wrote:
         | SVG paths can be arbitrarily complex. This article really
          | doesn't discuss any of the actual hard cases. For example,
          | imagine the character S rotated 1 degree and added to the path
          | on top of itself through a full rotation. This is one path
          | composed of 360 shapes. The begin and end fill sections
          | (winding order changes) coincide in the same pixels at
          | arbitrary angles (and the order of the hits is not
          | automatically sorted!), but the final color cannot be arrived
          | at correctly if you do not process all of the shapes at the
          | same time. If you do them one at a time, you'll blend tiny
          | (perhaps rounded to zero) bits of color and end up with a mess
          | that looks nothing like what it should. These are often called
          | conflation artifacts IIRC.
         | 
          | There's way more to this than drawing circles and rectangles,
          | and these hard cases are why much of path / vector graphics
          | filling still ends up being better on the CPU, where you can
          | accumulate, sort, etc., which takes a lot of the work away.
          | The CPU basically works per-Y whereas this GPU approach is
          | per-pixel, so perhaps they're almost equal if the GPU has the
          | square of the CPU's power. Obviously this isn't quite right,
          | but it gives you an idea.
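          | 
          | To make "accumulate, sort" concrete, a rough sketch (my own
          | simplification, assuming curves were already flattened to line
          | segments) of one scanline of a nonzero-winding fill that
          | considers all subpaths at once:
          | 
          |   // `crossings` holds (x, direction) for every edge of every
          |   // subpath crossing this row; direction is +1 or -1.
          |   fn fill_row(mut crossings: Vec<(f32, i32)>,
          |               row: &mut [f32]) {
          |       crossings.sort_by(|a, b| a.0.total_cmp(&b.0));
          |       let mut winding = 0;
          |       for pair in crossings.windows(2) {
          |           winding += pair[0].1;
          |           if winding != 0 { // nonzero rule: span is inside
          |               let x0 = pair[0].0.max(0.0) as usize;
          |               let x1 = (pair[1].0.max(0.0) as usize)
          |                   .min(row.len());
          |               if x0 < x1 {
          |                   // Real code also tracks fractional
          |                   // coverage at span ends for anti-aliasing.
          |                   for px in &mut row[x0..x1] { *px = 1.0; }
          |               }
          |           }
          |       }
          |   }
          | 
          | Blending each of the 360 shapes separately instead would throw
          | away the winding information this loop depends on.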
         | 
         | Video discussing path filling on CPU (super sampling and
          | trapezoid): https://youtu.be/Did21OYIrGI?t=318 - We don't talk
          | about the complex cases, but this at least may help explain
          | the simple stuff on CPU for those curious.
        
           | Scene_Cast2 wrote:
            | Is it practical to ignore / approximate / offload the
            | complex edge cases?
        
             | bsder wrote:
             | No, mostly it is not practical to offload the edge cases.
             | 
             | The reason for this is that the single 2D application that
             | people most want to speed up is font rendering. And font
             | rendering is also the place where the edge cases are really
             | common.
             | 
             | Rendering everything else (geometric shapes) is trivial by
             | comparison.
        
             | Guzba wrote:
             | I want to say yes, but it depends on what you're actually
             | doing as a final goal.
             | 
             | Detecting when a path is antagonistic to most GPU
             | approaches takes time, as does preparing the data however
             | it needs to be prepared on the CPU before being uploaded to
             | the GPU. If you can just fill the whole thing on CPU in
             | that time, you wasted your time even thinking about the
             | GPU.
             | 
             | If you can identify a simple case quickly, it's probably
             | totally a good idea to get the path done on the GPU unless
             | you need to bring the pixels back to the CPU, maybe for
             | writing to disk. The upload and then download can be way
             | slower than just, again, filling on CPU.
             | 
             | If you're filling on GPU and then using on GPU (maybe as a
             | web renderer or something), GPU is probably a big win.
             | Except, this may not actually matter. If there is no need
             | to re-render the path after the first time, it would be
             | dumb to keep re-rendering on the GPU each frame / screen
             | paint. Instead you'd want to put it into a texture.
              | Well... if you're only rendering once and putting it into
              | a texture, this whole conversation is maybe pointless?
              | Then whatever is simple is probably the best idea. Anyway,
              | there's lots to 2D graphics that goes underappreciated!
        
       | pbsurf wrote:
        | A GPU vector graphics library I released a few years ago:
       | https://github.com/styluslabs/nanovgXC - basically a new backend
       | for nanovg that supports arbitrary paths. Coverage calculation
       | (for analytic antialiasing) is explained a bit here:
       | https://github.com/styluslabs/nanovgXC/blob/master/src/nanov...
        
       | tasty_freeze wrote:
        | 8 or 9 years ago I needed to rasterize SVG for a program I had
        | written back then and looked into GPU vs CPU, but a software
        | rasterizer ended up being fast enough for my needs and was
        | simpler, so I didn't dig any further.
       | 
       | At the time I looked at an nvidia rendering extension, which was
       | described in this 2012 paper:
       | 
       | https://developer.nvidia.com/gpu-accelerated-path-rendering
       | 
       | In addition to the paper, the linked page has links to a number
       | of youtube demos. That was 10 years ago, so I have no idea if
       | that is still a good way to do it or if it has been superseded.
        
       | jayd16 wrote:
        | Is there no built-in GPU path draw command? It seems like it
        | would be similar (although not identical) to what the GPU does
        | for vertex visibility.
        | 
        | Especially when you consider what tile-based renderers do to
        | determine whether a triangle fully covers a tile (allowing
        | rejection of any other draw onto that tile), it seems like GPUs
        | could have built-in support for 'inside a path or outside a
        | path.' Even just approximating with triangles as a pre-pass
        | seems faster than the row-based method in the post.
       | 
       | Are arbitrary paths just too complex for this kind of
       | optimization?
        
         | pyrolistical wrote:
          | Depends on your definition of a path. If what you mean is
          | something like an SVG path, then consider that the path is
          | defined as a series of commands that describe the shape. See
          | https://developer.mozilla.org/en-US/docs/Web/SVG/Tutorial/Pa...
          | 
          | From my understanding there is no closed-form solution for
          | arbitrary paths defined in that way. So the only way to figure
          | out what the shape looks like, and whether a point is inside
          | or outside, is to run all the commands that form the path.
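          | 
          | "Running the commands" mostly means flattening. A sketch
          | (mine, with a fixed step count; real renderers pick it
          | adaptively from the curve's flatness) for one quadratic Bezier
          | command:
          | 
          |   // B(t) = (1-t)^2 P0 + 2t(1-t) P1 + t^2 P2, sampled into
          |   // line segments that inside/outside tests can then use.
          |   fn flatten_quadratic(p0: (f32, f32), p1: (f32, f32),
          |                        p2: (f32, f32), steps: u32)
          |                        -> Vec<(f32, f32)> {
          |       (0..=steps).map(|i| {
          |           let t = i as f32 / steps as f32;
          |           let s = 1.0 - t;
          |           (s * s * p0.0 + 2.0 * s * t * p1.0 + t * t * p2.0,
          |            s * s * p0.1 + 2.0 * s * t * p1.1 + t * t * p2.1)
          |       }).collect()
          |   }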
        
         | moonchild wrote:
         | NV_path_rendering https://developer.nvidia.com/nv-path-
         | rendering
        
         | interroboink wrote:
         | > ... seems faster than the row based method in the post.
         | 
         | But the row-based method in the post is _not_ what they
         | describe doing on the GPU version of the algorithm. The row-
         | based method is their initial CPU-style version.
         | 
         | The GPU version handles each pixel in isolation, checking it
         | against the relevant shape(s).
         | 
         | At least, if I understand things correctly (:
         | 
         | As far as I can tell, the approach described here is probably
         | similar to what a built-in "draw path" command would do.
         | Checking if something is inside a triangle is just extremely
         | easy (no concavity, for instance) and common, and more complex
         | operations are left up to shader developers -- why burn that
         | special-case stuff into silicon?
        
       | bane wrote:
       | The demoscene has been doing this for a while, I wonder if the
       | basic techniques (doing it in a shader) are the same?
       | 
       | https://youtu.be/-ZxPhDC-r3w
        
       | UncleEntity wrote:
        | There's an OpenGL-ish GPU graphics library (whose name I can't
        | currently remember) that's in Mesa, but not built by default in
        | most distros, and IIRC it's also supported on the Raspberry Pi.
        | 
        | I played with it a bit, wrote a Python wrapper for it, borked a
        | Fedora install trying to get real GPU support, fun times all
        | around. Seems nobody cares about an accelerated vector graphics
        | library.
        
         | genpfault wrote:
         | https://en.wikipedia.org/wiki/OpenVG
         | 
         | Mesa removed support in 2015:
         | 
         | https://docs.mesa3d.org/relnotes/10.6.0.html
         | 
         | > Removed OpenVG support.
        
       | Lichtso wrote:
       | A post about vector graphics and the word "stroke" appears zero
       | times ...
       | 
       | > Much better approach for vector graphics is analytic anti-
       | aliasing. And it turns out, it is not just almost as fast as a
       | rasterizer with no anti-aliasing, but also has much better
       | quality than what can be practically achieved with supersampling.
       | 
       | > Go over all segments in shape and compute area and cover values
       | continuously adding them to alpha value.
       | 
        | This approach is called "coverage to alpha". The author will be
        | surprised to learn about the problem of coverage-to-alpha
        | conflation artifacts. E.g. draw two shapes of exactly the same
        | geometry over each other, but with different colors. The
        | correct result includes only the color of the last shape, but
        | with "coverage to alpha" you will get a bit of color bleeding
        | around the edges (the conflation artifacts). I think Guzba also
        | gives other examples in this thread.
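        | 
        | To put numbers on it, a hypothetical edge pixel half-covered by
        | each of two identical shapes, composited with a plain lerp
        | (Rust, my own toy example):
        | 
        |   fn over(dst: [f32; 3], src: [f32; 3], cov: f32) -> [f32; 3] {
        |       std::array::from_fn(|i| {
        |           src[i] * cov + dst[i] * (1.0 - cov)
        |       })
        |   }
        | 
        |   fn main() {
        |       let (white, red, blue) =
        |           ([1., 1., 1.], [1., 0., 0.], [0., 0., 1.]);
        |       // Shape by shape: red bleeds into the edge pixel.
        |       let naive = over(over(white, red, 0.5), blue, 0.5);
        |       // Correct: only topmost shape and background matter.
        |       let right = over(white, blue, 0.5);
        |       println!("{naive:?} vs {right:?}");
        |       // [0.5, 0.25, 0.75] vs [0.5, 0.5, 1.0]
        |   }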
       | 
        | Also, they did not mention other hard problems like stroke
        | offsetting, nested clipping, group opacity, etc.
       | 
       | Here, IMO a good blog post about the hard problems of getting
       | alpha blending and coverage right: https://ciechanow.ski/alpha-
       | compositing/
        
       | jfmc wrote:
        | Didn't Chrome (and probably others) add GPU-accelerated CSS and
        | SVG (i.e., vector graphics) 10 years ago?
       | https://www.tomshardware.com/news/google-chrome-browser-gpu-...
        
         | jahewson wrote:
          | Not exactly - the article you link to is about SVG/CSS
          | _filters_, not path drawing. Modern Chrome (Skia) supports
          | accelerated path drawing, but only some of the work is
          | offloaded to the GPU. In even older Chrome the GPU was used
          | for compositing bitmaps of already-rendered layers.
        
         | pier25 wrote:
         | Yeah I was also under the impression that Skia had GPU
         | acceleration.
         | 
         | Same with the FF Rust renderer (sorry don't remember the name).
        
           | jstimpfle wrote:
           | Pathfinder?
        
         | bXVsbGVy wrote:
          | Based on a quick test I just ran, it seems Chrome does and
          | Firefox doesn't.
        
       | jstimpfle wrote:
        | What I don't understand: why is there this "cover table" with
        | precomputed per-cell-row coverage? I.e. why is the cover table
        | computed per cell row when the segments are specialized per
        | cell? There is a paper, "ravg.pdf", that gets by with basically
        | one "cover" integer per tile, plus some artificial fixup
        | segments that I believe are needed even in the presence of such
        | a cover table. I'm probably missing something; someone who is
        | deeper into the topic, please enlighten me?
        
       | moonchild wrote:
       | > a very good performance optimization is not to try to reject
       | segments on the X axis in the shader. Rejecting segments which
       | are below or above current pixel boundaries is fine. Rejecting
       | segments which are to the right of the current pixel will most
       | likely increase shader execution time.
       | 
        | Thread groups are generally rectangular IME - NV is 8x4, others
        | 8x8. So it doesn't make sense to distinguish X from Y in this
        | respect. But yes, you do want a strategy for dealing with
        | 'branch mispredictions'. Buffering works, and is applicable to
        | the CPU too.
        
       | pcwalton wrote:
       | This is essentially how Pathfinder works, with 16x16 tiles
       | instead of 32x32. It also has a pure-GPU mode that does tile
       | setup on GPU instead of CPU, which is a nice performance boost,
       | though a lot more complex. Note that if you're doing the setup on
       | CPU, the work can be parallelized across cores and I highly
       | recommend this for large scenes. The main difference, which isn't
       | large, is that Pathfinder draws the tiles directly instead of
       | drawing a large canvas-size quad and doing the lookup in the
       | fragment shader.
       | 
       | When I originally checked, Slug works in a similar way but
       | doesn't do tiling, so it has to process more edges per scanline
       | than Pathfinder or piet-gpu. Slug has gone through a lot of
       | versions since then though and I wouldn't be surprised if they
       | added tiling later.
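        | 
        | For readers unfamiliar with the tiling step, the data flow is
        | roughly this (my own crude sketch, not Pathfinder's actual code;
        | real implementations walk tiles exactly and propagate per-tile
        | backdrop/cover values):
        | 
        |   const TILE: f32 = 16.0;
        | 
        |   // Append each segment's index to every 16x16 tile its
        |   // bounding box touches; the per-tile lists then keep the
        |   // per-pixel work small.
        |   fn bin_segments(segs: &[((f32, f32), (f32, f32))],
        |                   tiles_x: usize, tiles_y: usize)
        |                   -> Vec<Vec<usize>> {
        |       let mut bins = vec![Vec::new(); tiles_x * tiles_y];
        |       for (i, &((x0, y0), (x1, y1)))
        |               in segs.iter().enumerate() {
        |           let tx0 = (x0.min(x1) / TILE).max(0.0) as usize;
        |           let ty0 = (y0.min(y1) / TILE).max(0.0) as usize;
        |           let tx1 = ((x0.max(x1) / TILE) as usize)
        |               .min(tiles_x - 1);
        |           let ty1 = ((y0.max(y1) / TILE) as usize)
        |               .min(tiles_y - 1);
        |           for ty in ty0..=ty1 {
        |               for tx in tx0..=tx1 {
        |                   bins[ty * tiles_x + tx].push(i);
        |               }
        |           }
        |       }
        |       bins
        |   }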
        
         | coffeeaddict1 wrote:
          | Would you recommend Pathfinder for real-world use? Of course,
          | I know that you're no longer working on it, but I would like
          | to know if there are any significant bugs/drawbacks in using
          | it. For context, I'm coding a simple vector graphics app that
          | needs to resize and render quite complex 2D polycurves in real
          | time. So far, the only thing I found working was Skia, which
          | is good but not fast enough to do the stuff I need in real
          | time (at least on low-end devices).
        
         | moonchild wrote:
          | I thought Pathfinder was scanline-oriented, using techniques
          | similar to http://kunzhou.net/zjugaps/pathrendering/
         | 
         | Tiling doesn't work too well under domain transforms--3d
         | environments, dynamic zoom, etc. That's why I am betting on
         | high-quality space partitioning. Slug space partitioning is not
         | amazing; I believe it still processes O(n) curves per fragment
         | in a horizontal band.
        
           | jstimpfle wrote:
           | You are literally talking to the Pathfinder author.
           | 
            | I just implemented basically the same tiling mechanism, and
            | it works OK for me. I can already do about 1000 line
            | segments, covering ~10000 tiles (counting tiles touched by
            | different polygons multiple times), in about 1ms on CPU and
            | GPU each, meaning the work is already quite well divided for
            | my test cases. This is end-to-end without any caching. For
            | static scenes, doing world-space partitioning is an idea
            | that I considered, but for now I'm trying to optimize more
            | in order to see what the limits of this basic approach are.
        
             | pcwalton wrote:
             | > You are literally talking to the Pathfinder author.
             | 
             | No worries, I think they know :)
             | 
             | And yeah, I considered doing fancier partitioning, but
             | really toward the end (before work stopped due to layoffs)
             | I was pushing on shipping, which meant deprioritizing fancy
             | partitioning in favor of implementing all the bells and
             | whistles of SVG. I certainly believe that you could get
             | better results with more sophisticated acceleration
             | structures, especially in cases like texturing 3D meshes
             | with vector textures.
        
       | jszymborski wrote:
       | OT, but anyone know the font used in the figures? They're awfully
       | neat!
        
       | flakiness wrote:
       | Does this mean it is mostly done in the fragment shader and there
       | is no tessellation, like how Bezier patches are rendered in 3D
       | land? That's quite different from what I thought I knew.
        
       | grishka wrote:
       | A possibly dumb question. GPUs are really, really good at
       | rendering triangles. Millions of triangles per second good. Why
       | not convert a vector path into a fine enough mesh of
        | triangles/vertices and make the GPU do all the rasterization
        | from start to finish instead of doing it yourself in a pixel
        | shader?
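        | 
        | Naive sketch of what I mean, for the convex case only: flatten
        | the curves to a polyline, then fan triangles out from the first
        | vertex.
        | 
        |   // Only valid for convex outlines; concave or
        |   // self-intersecting paths need real triangulation
        |   // (ear clipping etc.) or a stencil-then-cover pass.
        |   fn fan_triangulate(poly: &[(f32, f32)])
        |                      -> Vec<[(f32, f32); 3]> {
        |       (1..poly.len().saturating_sub(1))
        |           .map(|i| [poly[0], poly[i], poly[i + 1]])
        |           .collect()
        |   }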
        
         | jakearmitage wrote:
         | Kind of what Slug does: http://sluglibrary.com/
        
         | bsder wrote:
         | Mark Kilgard from NVIDIA has been beating this drum for a
         | couple decades. It's not simple.
         | 
         | His latest paper is about how to handle stroking of cubic
         | splines: https://arxiv.org/abs/2007.00308
         | 
         | He gives it as a talk, but you have to sign up with NVIDIA:
         | https://developer.nvidia.com/siggraph/2020/video/sig03-vid
        
         | Jasper_ wrote:
         | You can do that, except now you've moved the bulk of the work
         | from the GPU to the CPU -- triangulation is tricky to
         | parallelize. And GPUs are best at rendering large triangles --
         | small triangles are much trickier since you risk overdraw
         | issues.
         | 
         | Also, typical GPU triangle antialiasing like MSAAx16 only gives
         | you 16 sample levels, which is far from the quality we want out
         | of fonts and 2D shapes. We don't have textures inside the
         | triangles in 2D like we do in 3D, so the quality of the
         | silhouette matters far more.
         | 
         | That said, this is what Direct2D does for everything except
         | text.
        
       | pyrolistical wrote:
       | This seems like a silly way to do vector graphics in a shader.
       | 
        | What I've done in the past is represent the shape as a
        | https://en.wikipedia.org/wiki/Signed_distance_function to allow
        | each pixel to figure out if it is inside or outside the shape.
        | This avoids the need to figure out the winding.
        | 
        | Anti-aliasing is implemented as a linear interpolation for
        | values near zero. This also allows you to control the
        | "thickness" of the shape boundary. The edge becomes more blurry
        | if you increase the lerp length.
       | 
       | Shader toy demo https://www.shadertoy.com/view/sldyRj
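        | 
        | In CPU-side pseudo-shader form (my own minimal sketch of the
        | same idea, for a circle):
        | 
        |   // Signed distance: negative inside, positive outside.
        |   fn sd_circle(px: f32, py: f32,
        |                cx: f32, cy: f32, r: f32) -> f32 {
        |       ((px - cx).powi(2) + (py - cy).powi(2)).sqrt() - r
        |   }
        | 
        |   // Lerp alpha across an `aa`-wide band around distance 0;
        |   // widening `aa` blurs the edge, as described above.
        |   fn coverage(dist: f32, aa: f32) -> f32 {
        |       (0.5 - dist / aa).clamp(0.0, 1.0)
        |   }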
        
         | Guzba wrote:
          | SDF is cool but not a generally good solution for GPU vector
          | graphics. It only works for moderate scaling up before looking
          | bad; the CPU prep figuring out the data the GPU needs takes
          | far longer than just rasterizing on the CPU would; etc. It's
          | great as a model for games where there are many renders as the
          | world position changes, but that's about it.
        
           | johndough wrote:
           | > It only works for moderate scaling up before looking bad
           | 
           | That problem has been solved by multi-channel signed distance
           | fields https://github.com/Chlumsky/msdfgen
        
         | jstimpfle wrote:
          | I don't know the details of using SDFs (especially MSDFs!)
          | for doing vector graphics, but my understanding is that
          | essentially it's a precomputation that _already_ involves a
          | rasterization.
          | 
          | I would like to know why you think the described approach is
          | silly? It doesn't involve a final rasterization, merely a
          | prefiltering of segments.
        
           | pyrolistical wrote:
            | SDFs are not precomputed. The SDF is implemented in the
            | shader code itself.
            | 
            | Added a shader toy link to my OP.
        
             | Lichtso wrote:
             | What about SDFs of cubic bezier curves and rational bezier
             | curves? Because these appear in vector graphics and I think
             | there is no analytic solution for them (yet?).
        
               | johndough wrote:
               | You can approximate the Bezier curves with circular arcs
               | and store those in the "SDF" instead.
               | https://github.com/behdad/glyphy
        
               | moonchild wrote:
               | > yet?
               | 
                | SDF of a cubic bezier involves solving a quintic (the
                | nearest-point condition is a degree-5 polynomial), so
                | it's not analytic. There are approximations, of course,
                | but for an outline, using an SDF is just silly. (For a
                | stroke, you don't really have much choice - though it's
                | common to approximate strokes using outlines.) I'll add
                | that SDF AA is not as good as analytic AA.
        
               | pcwalton wrote:
               | IIRC Loop-Blinn gives you a pretty good distance
               | approximation using the 2D generalization of Newton's
               | method to avoid having to solve the quintic. (Though I've
               | never actually seen anybody implement Loop-Blinn for
               | cubics because the edge cases around cusps/loops are
               | annoying. Every Loop-Blinn implementation I've actually
               | seen just approximates cubics with quadratics.)
               | 
               | (Fun fact: Firefox uses the same 2D Newton's method trick
               | that Loop-Blinn does for antialiasing of elliptical
               | border radii in WebRender--I implemented it a few years
               | back.) :)
        
               | moonchild wrote:
               | Out of curiosity, where have you seen loop-blinn
               | implemented? I was under the impression that it was
               | generally a no-go due to the patent (which, incidentally,
               | expires in 2024).
        
               | Lichtso wrote:
               | > Though I've never actually seen anybody implement Loop-
               | Blinn for cubics
               | 
               | I implemented them (implicit formulation of rational
               | cubic bezier curves) in my renderer (see one of my other
               | replies in this thread). Here is an extract of the
               | relevant code in a shader toy:
               | https://www.shadertoy.com/view/WlcBRn
               | 
               | Even in Jim Blinn's book "Notation, Notation, Notation"
               | he leaves some of the crucial equations as an exercise to
               | the reader. I remember spending 2 or 3 weeks reading and
               | trying to understand everything he wrote to derive these
               | equations he hinted at myself.
        
             | jstimpfle wrote:
              | What I was referring to is SDFs/MSDFs as in
              | https://github.com/Chlumsky/msdfgen .
              | 
              | Of course you can implement smooth circles directly in a
              | shader the way you describe, but note that there are
              | vector outline shapes that are not circles...
             | 
             | Check out the outlines you can do using SVG - paths
             | composed of straight line segments as well as cubic bezier
             | curves and various arcs. Also color gradients, stroking...
        
         | johndough wrote:
         | Signed distance fields only work well for relatively simple
         | graphics.
         | 
         | If you have highly detailed characters like Chinese or emojis,
          | you need a larger resolution to faithfully represent every
         | detail. The problem is that SDFs are sampled uniformly over the
         | pixel grid. If the character is locally complex, a high
         | resolution is required to display it, but if the character has
         | simple flat regions, memory is wasted. One way to get around
         | excessive memory requirements is to store the characters in
         | their default vector forms and only render the subset of
         | required characters on demand, but then you might as well
         | render them at the required pixel resolution and do away with
         | the additional complexity of SDF rendering.
         | 
         | SDFs are still useful though if you have to render graphics at
         | many different resolutions, for example on signs in computer
         | games, as seen in the original Valve paper
         | https://steamcdn-a.akamaihd.net/apps/valve/2007/SIGGRAPH2007...
        
       | samlittlewood wrote:
       | The best GPU vector rendering library I have seen is
       | https://sluglibrary.com/. The principal use case is fonts, but it
       | appears that the underlying mechanism can be used for any vector
       | graphics.
        
       | xawsi wrote:
        | I don't get it. Is this really a problem in 2022? WPF has been
        | doing this since 2006!
       | https://en.wikipedia.org/wiki/Windows_Presentation_Foundatio...
        
         | stefanfisk wrote:
         | From what I remember, WPF basically does tessellation on the
         | CPU and then sends draw lists to the GPU.
         | 
         | I could certainly be wrong though.
        
       ___________________________________________________________________
       (page generated 2022-08-10 23:00 UTC)