[HN Gopher] WebGPU - All of the cores, none of the canvas
       ___________________________________________________________________
        
       WebGPU - All of the cores, none of the canvas
        
       Author : jasim
       Score  : 277 points
       Date   : 2022-03-08 14:53 UTC (8 hours ago)
        
 (HTM) web link (surma.dev)
 (TXT) w3m dump (surma.dev)
        
       | brrrrrm wrote:
       | great writeup!
       | 
       | For those interested, I was only able to get the demos running in
       | Chrome Canary with "Unsafe WebGPU" enabled.
        
       | bob1029 wrote:
       | I've busted my ass on webgl a few times, and I am not really
       | seeing how WebGPU is a substantially better API looking at the
       | samples in the wild:
       | 
       | https://webkit.org/demos/webgpu/scripts/hello-triangle.js
       | 
       | The advantages of WebGPU vs WebGL probably make sense to experts
       | in this area, but I still find most of it to be completely
       | impenetrable at first glance as a relative novice.
        
         | whatever_dude wrote:
          | WebGPU is better in many dimensions, but also worse in others
          | people might care about. More to the point, in this case,
          | WebGPU is in many ways _less_ approachable for novices.
         | 
         | One of the expected advantages of WebGPU is exposing the inner
         | workings of the hardware's GPU support in a more explicit
         | manner. This leads to really verbose code. This is similar to
         | API movements seen in DirectX, Vulkan, and Metal.
         | 
          | WebGL was never exactly easy to read either, but you could get
          | started more easily. It tried being something in-between,
          | though, and ended up never being the best at either providing
          | explicit behavior or being friendly.
         | 
         | The idea of WebGPU going forward is that it will make it easier
         | for other libraries to use it in a more deterministic fashion.
         | If you know how the metal works, you'll be able to optimize the
         | sh*t out of it. But for novices, you'll end up using some
         | library or engine that abstracts away a lot of the hardcore
         | functionality like "let's get an adapter that supports this
         | specific texture extension" or "let's reuse a binding group
         | layout to save 2ns when binding uniforms". In the same way most
         | people on the web use Three.js (instead of WebGL), you'll see
         | libraries like Babylon or Three.js making use of WebGPU without
         | really having to know about all that stuff. And that's the
         | right solution.
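To make the verbosity point concrete, here is a hedged JavaScript sketch (descriptor shape and flag value per the W3C WebGPU spec; the helper function name is my own):

```javascript
// In a browser the setup dance starts with (async):
//   const adapter = await navigator.gpu.requestAdapter();
//   const device  = await adapter.requestDevice();
// After that, every resource a shader touches must be spelled out in an
// explicit descriptor. A bind group layout for one uniform buffer and
// one storage buffer looks roughly like this:
const COMPUTE_STAGE = 4; // GPUShaderStage.COMPUTE per the spec

function uniformAndStorageLayoutDescriptor() {
  return {
    entries: [
      { binding: 0, visibility: COMPUTE_STAGE, buffer: { type: "uniform" } },
      { binding: 1, visibility: COMPUTE_STAGE, buffer: { type: "storage" } },
    ],
  };
}

// device.createBindGroupLayout(uniformAndStorageLayoutDescriptor())
// would produce the reusable layout object the comment alludes to.
const desc = uniformAndStorageLayoutDescriptor();
```

Caching and reusing that layout across pipelines is exactly the kind of micro-management most application authors will leave to a library.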
        
         | mrpinc wrote:
          | I think one of the big changes is allowing async GPU uploads.
          | Currently those happen on the main thread and lock up the UI.
        
       | shadowgovt wrote:
       | Is this API going to have a permissions lock, like access to the
       | video camera or microphone APIs do?
       | 
       | If it doesn't, I'm not looking forward to the future where every
       | website I visit tries to pull a big chunk of my graphics card to
       | mine cryptocurrencies without my explicit authorization.
        
         | skywal_l wrote:
         | It's already the case. Any web page can use WebGL1/2 for
         | general computation. As the author said:
         | 
         | > Using GPUs for calculations of any kind is often called
         | General-Purpose GPU or GPGPU, and WebGL 1 is not great at this.
         | If you wanted to process arbitrary data on the GPU, you have to
         | encode it as a texture, decode it in a shader, do your
         | calculations and then re-encode the result as a texture. WebGL
         | 2 made this a lot easier with Transform Feedback
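A rough CPU-side sketch (mine, not from the article) of the detour the quote describes: in WebGL 1, arbitrary data had to be packed into texture bytes and unpacked again around every computation.

```javascript
// Pack float32 values into RGBA8 texture bytes (one texel per value),
// and decode the "result texture" afterwards.
function encodeAsTexels(values) {
  const bytes = new Uint8Array(values.length * 4); // 4 bytes = 1 RGBA texel
  const view = new DataView(bytes.buffer);
  values.forEach((v, i) => view.setFloat32(i * 4, v, true));
  return bytes;
}

function decodeTexels(bytes) {
  const view = new DataView(bytes.buffer);
  const out = [];
  for (let i = 0; i < bytes.length; i += 4) out.push(view.getFloat32(i, true));
  return out;
}

// This round trip is pure overhead, which WebGL 2's transform feedback
// reduced and WebGPU's storage buffers eliminate.
const roundTrip = decodeTexels(encodeAsTexels([1.5, -2.25]));
```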
        
           | shadowgovt wrote:
           | True, but this API will make that use case simple and
           | obvious. Given the tradeoffs, probably worth it for user
           | agents to permission-gate it.
        
             | skywal_l wrote:
             | You can always disable hardware acceleration in your
             | browser.
        
               | shadowgovt wrote:
                | That's a big red lever that disables all sites' ability
                | to use WebGL as well, isn't it? That's the kind of
                | control I imagine one would want at site-by-site
                | granularity.
        
               | techdragon wrote:
               | I don't want to disable it, I want websites to have to
               | ask permission before they get access to even more
               | hardware resources.
        
             | afavour wrote:
             | I think the problem is presenting this to users in terms
             | they'll all understand. "This site wants to do a special
             | kind of processing"? "Use more resources"? I can't think of
             | many good answers. Anything involving the term "GPU" is
             | immediately going to confuse most.
        
         | TurningCanadian wrote:
         | Couldn't you make the same argument about being able to perform
         | computation on the CPU?
        
           | shadowgovt wrote:
           | You definitely could (and, indeed, I can imagine beneficial
           | use cases for a "renice" feature in a browser like Chrome
           | with its Task Manager).
           | 
            | But the need is less pressing. CPUs hash too slowly for
            | salami-slicing user CPU time at scale to pay off. That trick
            | is much likelier to get you a solved block in the blockchain
            | if you can tap the GPUs of a couple million users.
        
         | skohan wrote:
         | While I understand why it's necessary, I feel that the UX of
         | security needs a serious overhaul. It seems like an increasing
         | share of the time I spend using technology involves jumping
         | through hoops.
        
           | shadowgovt wrote:
           | There is a push-and-pull because the ideal of how the web
           | should work slammed head-on into the reality of how the worst
           | actors will exploit the ideal.
           | 
           | Ideally, the web is a fabulous location-destroying
           | architecture. The common protocol for both communications and
           | content rendering really flattened the universe of data
           | sources; doesn't matter if you're on CNN, Fox, Wikipedia,
           | Google, or Hacker News, they all "feel" the same. Common top-
           | level UX metaphor (underneath which customizations can be
           | built), common language under-the-hood; the web really became
           | a sort of global operating system for transactional
           | communications.
           | 
           | In practice, letting arbitrary actors run arbitrary code on
           | your machine at the speed of the web can be arbitrarily bad,
           | so we tacked on a permissions architecture based on data
           | source (delegating to the domain name service plus secret
           | certificates to tell us who the data source is). And because
           | the proper level of paranoia for strangers online is "zero
           | trust by default," every domain you visit is a new trust
           | relationship with its own security story to wrangle.
           | 
           | So these two features (flat experience where domain doesn't
           | matter and zero-trust security model where domain matters a
           | bunch) are directly at odds with each other. Sucks but it's
           | the best we've got right now (how to improve? Hypothetically,
           | we could add a meta-trust layer... "Here's an allow-list of
           | sites that are trusted, so sayeth Goodsites.com". But
           | nobody's written that spec yet).
           | 
           | GDPR-compliant cookies, as a concrete example, are a huge
            | pain-in-the-ass because we retrofitted them onto sites'
            | internal code instead of adding the concept of
           | "required" vs "optional" cookies to the cookie jar design,
           | which would have allowed user agents to put optional cookies
           | behind a trust barrier like your microphone or video. But
           | cookies are a legacy web feature and making changes to the
           | implementation is hard (and, of course, there's the human
            | element... I'm not 100% sure the people who hold the reins
           | of the W3C are on the same page with the people who hold the
           | power to create and enforce the GDPR vis-a-vis goals).
        
       | faeyanpiraat wrote:
       | Question for someone with gpu programming experience: is it
       | feasible to analyze time series data (eg.: run a trading
       | algorithm through a series of stock prices) in parallel (with the
       | algorithms receiving different parameters in each core)?
       | 
       | If not then what is the limiting factor?
        
         | modeless wrote:
         | Yes, in fact GPUs are perfect for this. Here is a Monte Carlo
         | financial simulation I built in WebGL 2.
         | https://james.darpinian.com/money/ (Warning, programmer UI) It
         | performs one million Monte Carlo runs instantly every time you
         | move a slider. Each run is one execution of a pixel shader and
         | they all happen in parallel.
         | 
         | WebGPU will make this kind of thing more accessible. It's kind
         | of a pain to do it in WebGL, but even so it's still totally
         | possible and worthwhile because you can easily get 20 times the
         | performance in many cases and handily beat native CPU code even
         | on the web.
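The structure of such a simulation, a pure map over run indices with a per-run seed, can be sketched on the CPU like this (a toy model of mine, not the linked demo's actual shader; `mulberry32` is a common small PRNG):

```javascript
// Each pixel-shader invocation is one independent Monte Carlo run, so
// the whole simulation is a pure map over run indices.
function mulberry32(seed) { // small deterministic PRNG, one per invocation
  return function () {
    seed = (seed + 0x6d2b79f5) | 0;
    let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

function oneRun(index, years, meanReturn) {
  const rand = mulberry32(index); // the run index doubles as the seed
  let balance = 1.0;
  for (let y = 0; y < years; y++) {
    balance *= 1 + meanReturn + (rand() - 0.5) * 0.3; // toy return model
  }
  return balance;
}

// On the GPU all runs execute in parallel; on the CPU we just map.
const results = Array.from({ length: 1000 }, (_, i) => oneRun(i, 30, 0.05));
const successRate = results.filter((b) => b > 1).length / results.length;
```

Because every run is independent, moving this to a shader is mostly a matter of porting `oneRun` and letting the GPU supply the index.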
        
           | neogodless wrote:
           | Hmm the UI is super unresponsive for me. As in, I try to move
           | a slider, I can't tell where it's moving to... then I let go
           | and it's somewhere I didn't expect.
           | 
           | Latest Firefox, Windows 10, Radeon RX 6700XT 16GB.
        
             | emi2k01 wrote:
             | I'm on Windows 11 with a Radeon RX570 8gb and it runs fine
             | under Edge.
             | 
             | On Firefox it's way laggier but still usable.
        
             | modeless wrote:
             | Do other WebGL 2 demos work well for you? Does it work in
             | Chromium?
             | 
             | I haven't tested the code extensively so I'm sure it's
             | broken in some configurations. It's just a proof of concept
             | demo for now.
        
               | neogodless wrote:
               | Ah sorry - disregard. I had a background process gobbling
               | up GPU, and now I can use it in Firefox or Edge.
               | 
               | I do find it confusing, as I can put in numbers that get
               | me well below a 3% withdrawal rate, but still have only
               | 40% success rate. But I'm probably misunderstanding some
               | of the mortgage sliders.
        
               | modeless wrote:
               | Yeah, it's confusing for sure. One day hopefully I'll
               | have time to put a real UI on it. Right now it's assuming
               | you buy a house with a mortgage (and property tax and
               | yearly maintenance costs) at the same time as you retire.
               | Put the house price to zero to eliminate all that. Also
               | the expenses slider is before tax, so the tax rate slider
               | influences how large your withdrawals actually are. The
               | total withdrawal per year is displayed near the bottom.
        
         | skohan wrote:
         | GPUs are largely best suited for providing high throughput for
         | doing similar computations across wide datasets. So if you can
         | break down your algorithm into a series of steps which are
         | largely independent and have limited flow-of-control it might
         | be well suited to the task. If you need to have a lot of random
          | access or branching logic, it may not work so well. But it's
          | often possible to restructure an algorithm designed for the
          | CPU to perform better on a GPU.
         | 
         | But how many stocks even are there? You might not even have
         | enough parallel operations to saturate a modern GPU.
         | 
         | Out of curiosity, why use WebGPU for this? If you're really
         | trying to do something high performance, why not reach for
         | something like CUDA?
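One common restructuring, sketched in JavaScript with a toy call-option payoff (my example): a divergent per-element `if` becomes a branchless select, which is what WGSL's `select()` or GLSL's `mix()` express.

```javascript
// Divergent branches serialize on a GPU, so per-element `if`s are often
// rewritten as selects that every thread executes uniformly.
function payoffBranchy(price, strike) {
  if (price > strike) return price - strike;
  return 0;
}

function payoffBranchless(price, strike) {
  // The ternary here stands in for a hardware select, not a real branch:
  // every lane computes (price - strike) and keeps or discards it.
  const inTheMoney = price > strike ? 1 : 0;
  return inTheMoney * (price - strike);
}
```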
        
         | rowanG077 wrote:
         | Probably possible, but you would really need some more concrete
         | information.
         | 
         | For some easy experimentation with GPUs I would advise looking
         | at Futhark. It's super easy to setup and get started.
         | 
         | https://futhark-lang.org/
        
         | raphlinus wrote:
         | This depends on the algorithm. If it has lots of branching and
         | control flow, then it's unlikely to run well. If it's based on
         | something like FFT or any sort of linear algebra really, then
         | it should run really well.
         | 
         | WebGPU is accessible enough maybe you should try it! You'll
         | learn a lot either way, and it can be fun.
        
           | amelius wrote:
           | Isn't there something like SciPy which you can run on WebGPU?
        
             | raphlinus wrote:
             | At this point, there are very few tools actually running on
             | WebGPU, as it's too new. One to watch for sure is IREE,
             | which has a WebGPU backend planned and in the works.
        
               | galangalalgol wrote:
               | There is wonnx a rust based webgpu enabled onnx runtime
               | for ml stuff.
        
             | westurner wrote:
             | https://wgpu-py.readthedocs.io/en/stable/guide.html :
             | 
             | > _As WebGPU spec is being developed, a reference
              | implementation is also being built. It's written in Rust,
             | and is likely going to power the WebGPU implementation in
             | Firefox. This reference implementation, called wgpu-native,
             | also exposes a C-api, which means that it can be wrapped in
             | Python. And this is what wgpu-py does._
             | 
             | > _So in short, wgpu-py is a Python wrapper of wgpu-native,
             | which is a wrapper for Vulkan, Metal and DX12, which are
             | low-level API's to talk to the GPU hardware._
             | 
              | So it should be possible to WebGPU-accelerate SciPy, much
              | as NumPy can be CUDA-accelerated natively or via third-
              | party packages.
             | 
             | edit: Intel MKL, https://Rapids.ai,
             | 
             | > _Seamlessly scale from GPU workstations to multi-GPU
             | servers and multi-node clusters with Dask._
             | 
             | Where can WebGPU + IDK WebRTC/WebSockets + Workers provide
             | value for multi-GPU applications that already have
             | efficient distributed messaging protocols?
             | 
              | "Considerable slowdown in Firefox once notebook gets a bit
              | larger" https://github.com/jupyterlab/jupyterlab/issues/1639#issueco...
              | Re: the differences between the W3C Service Workers API,
              | Web Locks API, and the W3C Web Workers API; "4 Ways to
              | Communicate Across Browser Tabs in Realtime" may be
              | helpful.
             | 
             | Pyodide compiles CPython and the SciPy stack to WASM. The
             | WASM build would probably benefit from WebGPU acceleration?
        
           | mwcampbell wrote:
           | Since you mentioned FFT, I wondered whether GPU-based
           | computation would be useful for audio DSP. But then, lots of
           | audio DSP operations run perfectly well even on a low-end
           | CPU. Also, as one tries to reduce latency, the buffer size
           | decreases, thus reducing the potential to exploit
           | parallelism, and I'm guessing that going to the GPU and back
           | adds latency. Still, I guess GPU-based audio DSP could be
           | useful for batch processing.
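The buffer-size arithmetic behind that intuition is simple (numbers are illustrative, the helper is mine):

```javascript
// The audio buffer processed per callback bounds both latency and the
// amount of parallel work available to offload.
function bufferLatencyMs(frames, sampleRate) {
  return (frames / sampleRate) * 1000;
}

// A typical low-latency buffer, 128 frames at 48 kHz, is under 3 ms:
const latency = bufferLatencyMs(128, 48000); // ~2.67 ms
// So a GPU round trip costing even a millisecond or two eats most of the
// budget, and 128 samples is very little parallelism to offer a GPU.
```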
        
             | raphlinus wrote:
             | Yes. Probably the single most effective use of the GPU for
             | audio is convolutional reverb, for which a number of
             | plugins exist. However, the problem is that GPUs are
             | optimized for throughput rather than latency, and it gets
             | worse when multiple applications (including the display
             | compositor) are contending for the same GPU resource - it's
             | not uncommon for dispatches to have to wait multiple
             | milliseconds just to be scheduled.
             | 
             | I think there's potential for interesting things in the
             | future. There's nothing inherently preventing a more
             | latency-optimized GPU implementation, and I personally
             | would love to see that for a number of reasons. That would
             | unlock vastly more computational power for audio
             | applications.
             | 
             | There's also machine learning, of course. You'll definitely
             | be seeing (and hearing) more of that in the future.
        
               | misterdata wrote:
               | Already on it! https://github.com/webonnx/wonnx - runs
               | ONNX neural networks on top of WebGPU (in the browser, or
               | using wgpu on top of Vulkan/DX12/..)
        
         | WithinReason wrote:
         | Depends what you do the analysis with. The state of the art is
         | a neural network, and the best there are transformers which do
         | parallelize (as opposed to the previous generation, LSTMs which
         | don't parallelize as well)
         | 
         | The limiting factor is usually parallelism, to utilize a GPU
         | well you need to be running something on the order of 10000
         | threads in parallel.
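For the original parameter-sweep question, the arithmetic is simple (my illustrative numbers): the available parallelism is the product of series and parameter sets, and it needs to clear that ~10,000-thread bar.

```javascript
// Each (series, parameter set) pair is an independent work item,
// i.e. one potential GPU thread.
function parallelWorkItems(numSeries, numParamSets) {
  return numSeries * numParamSets;
}

// 500 instruments x 100 parameter combinations = 50,000 work items,
// comfortably enough to keep a modern GPU busy.
const workItems = parallelWorkItems(500, 100);
```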
        
       | assbuttbuttass wrote:
       | Am I understanding this right, that websites can now run a crypto
       | miner in the background when I visit them?
        
         | wlesieutre wrote:
         | Already a thing, it's just been CPU based and now can be GPU
         | based. Firefox (and probably other browsers) have systems to
         | attempt to block them.
         | https://blog.mozilla.org/futurereleases/2019/04/09/protectio...
         | 
         | The people sneaking this shit into webpages don't care if it
         | burns $1 of electricity and destroys your battery to generate
         | $0.0000001 of Monero because all of those costs are paid by
         | someone else.
        
           | prox wrote:
           | Is there a toggle extension like I have for JS?
        
         | formerly_proven wrote:
         | always has been
        
         | brrrrrm wrote:
         | If you're using Chrome Canary with the "Unsafe WebGPU" flag
         | turned on, yes. And it'll be a lot faster/more power hungry
          | than before. Otherwise, nothing beyond the normal CPU-based
          | miners has been enabled.
         | 
         | Adaptation is the cost of progress. Browser vendors will almost
         | certainly implement a way to prevent unauthorized use of such
         | hardware (either by prompt like webcams or automatic detection)
         | before this stuff becomes publicly available.
        
         | secondcoming wrote:
         | We'll also have to start measuring websites by how much power
         | they consume.
        
         | modeless wrote:
         | No. Crypto miners have always been possible and WebGPU does not
         | make them faster than they would be in WebGL 2 which has been
         | available for a long time already.
        
           | brrrrrm wrote:
           | I'm not up to date on the latest mining algorithms but is it
           | true that they're all embarrassingly parallel to the point
           | the WebGPU makes no difference? Fast inter-thread
           | communication is the big addition here.
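What that inter-thread communication enables, emulated sequentially on the CPU (my sketch; on the GPU `shared` would be `var<workgroup>` memory, each thread would add one pair per stride, and `workgroupBarrier()` would separate the strides):

```javascript
// Tree reduction inside one workgroup. This simple version assumes the
// input length is a power of two.
function workgroupReduce(values) {
  const shared = values.slice(); // stand-in for workgroup-shared memory
  for (let stride = shared.length >> 1; stride > 0; stride >>= 1) {
    // On the GPU these additions within a stride happen in parallel.
    for (let i = 0; i < stride; i++) shared[i] += shared[i + stride];
  }
  return shared[0];
}

const sum = workgroupReduce([1, 2, 3, 4, 5, 6, 7, 8]); // 36
```

WebGL fragment shaders have no equivalent of that shared array, which is why reductions there take multiple full render passes.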
        
             | modeless wrote:
             | I guess it depends on the specific cryptocurrency. I don't
             | think the big ones would benefit. Maybe some niche ones
             | could see some kind of benefit, but making mining somewhat
             | more efficient isn't going to change the incentives for
             | drive by miners that much because efficiency doesn't matter
             | when it's someone else's electricity you're wasting.
        
         | davemp wrote:
         | Yup, though you'll probably notice your fans spin up (or coil
         | whine if it's my PC).
        
       | dataangel wrote:
       | If I'm already familiar with Rust, how is WebGPU as an
       | introduction to graphics programming if you aren't already
       | familiar with any of the backends that it wraps? Should I try
        | just learning Vulkan first? Is the abstraction layer just going
       | to introduce additional confusion for a newbie?
        
         | hutzlibu wrote:
         | "Is the abstraction layer just going to introduce additional
         | confusion for a newbie?"
         | 
          | As a newbie you probably don't want to deal with WebGPU
          | directly, but rather use (or wait for) a framework that takes
          | care of the details.
        
           | rektide wrote:
            | depends on your objective, doesnt it? if you want to learn
            | how things work & what the fundamentals are, im not sure
            | that a big engine or framework "taking care of the details"
            | is going to be as illuminating.
            | 
            | also, from the article, some positive reinforcement about
            | trying WebGPU:
           | 
           | > _I got my feet wet and found that I didn't find WebGPU
           | significantly more boilerplate-y than WebGL, but actually to
           | be an API I am much more comfortable with._
           | 
           | sometimes, there aint nothing like going to the source,
           | finding out for yourself. is it the fastest way to get a job
           | done? perhaps not. but the school of lifelong learning &
            | struggles has its upsides, can be a path towards a deeper
           | mastery.
        
             | hutzlibu wrote:
             | "depends on your objective, doesnt it? "
             | 
              | Sure, that's why I said "probably".
             | 
             | Anyone who wants the most performance and raw power needs
             | to get to the source.
             | 
              | But most people using WebGPU are probably fine with a
              | framework that hides all the details, like Three.js did
              | for WebGL.
        
           | qbasic_forever wrote:
           | Exactly. It's going to be much more productive for someone
           | totally new to 3D graphics to play with three.js and learn
           | all about models, meshes, textures, shaders, coordinate
           | spaces, cameras, etc. If you're starting with webgpu or even
           | webgl you're going to get mired in eccentricities of buffer
           | management, byte formats, etc. which are important plumbing
           | but just that--plumbing.
        
         | raphlinus wrote:
         | I think one of WebGPU's greatest strengths is learning, as you
         | can get started pretty easily, and you actually learn a modern
         | approach to GPU. Vulkan requires a significant amount of
         | boilerplate just to establish a connection to the GPU and set
         | up basic resources like the ability to allocate buffers. In
         | WebGPU, the latter is one or two lines of code.
         | 
         | That said, there are some really nice learning resources for
         | Vulkan as well, so if you're motivated and determined, it's
         | also not a bad way to learn modern GPU.
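For scale, the buffer-allocation step mentioned above really is about one line (flag values are from the W3C spec; the helper function is my own):

```javascript
// A storage buffer in WebGPU is a single descriptor-driven call.
// GPUBufferUsage.STORAGE = 0x80, GPUBufferUsage.COPY_SRC = 0x04.
function storageBufferDescriptor(byteLength) {
  return { size: byteLength, usage: 0x80 | 0x04 };
}

// In a browser:
//   const buf = device.createBuffer(storageBufferDescriptor(1024));
const example = storageBufferDescriptor(1024);
```

In Vulkan, the comparable step also involves choosing a memory heap, binding the memory, and managing its lifetime by hand.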
        
           | danielvaughn wrote:
           | So I just went through this process and I have to slightly
           | disagree. I'm coming from literally no knowledge of low-level
           | graphics programming. My most relevant experience is writing
           | against Canvas2D, which is not relevant at all.
           | 
           | I first tried WebGPU, and was so overwhelmed with detail that
           | it was nearly impossible to connect the dots. So I went back
           | and decided to learn WebGL first. I know that it's very
           | different, but the whole state machine thing makes it so much
              | easier to understand, at a _conceptual_ level, what you're
           | trying to do.
           | 
           | It didn't even take that long, just a couple of weekends and
           | I was already familiar with the basics. I'm not going to
           | waste time trying to become super proficient, but it gives me
           | enough context to go back and try WebGPU again.
        
             | chompychop wrote:
             | Could you share what resources you used to learn the
             | essentials of WebGL enough to tackle WebGPU?
        
               | danielvaughn wrote:
               | I read so many different articles that it's difficult to
               | remember exactly which ones. One _very_ good resource is
               | this person: https://www.youtube.com/watch?v=9WW5-0N1DsI
               | 
               | The video is more about shaders in Unity, but she
               | explains the context behind _what shaders are_ so much
                | more clearly than anyone else I've come across.
        
           | skohan wrote:
           | Is WebGPU ready? It's been a bit, but last time I looked it
           | was pretty unstable and not well supported.
        
             | zamadatix wrote:
             | It's still in origin trials on Chrome and behind flags in
             | Firefox and Safari. That said it's pretty stable at this
             | point, for reference Chrome plans on enabling it for all
             | sites and users by default towards the end of May.
        
         | jms55 wrote:
         | Start with WebGPU. There's a great resource here:
         | https://sotrh.github.io/learn-wgpu/.
        
         | whatever_dude wrote:
         | Unless you live in windows ONLY, start with wgpu (a Rust
         | library that supports WebGPU). It'll work with
         | Metal/Vulkan/Web.
         | 
          | WebGPU is very similar to Vulkan, nearly 1:1 in concepts. A
          | lot of the newer native APIs (Metal, DirectX 12) are similar,
          | in fact, so learning WebGPU will leave you well prepared for
          | any of those.
        
       | j-pb wrote:
        | WebGPU is a textbook case of Embrace, Extend, Extinguish
        | performed by Apple to kill Khronos and SPIR-V, and everybody is
        | cheering it on. Crazy.
        
         | cma wrote:
         | The whole purpose of Vulkan was an open API that could work on
         | all vendors. I'm not sure why they based WebGPU on Metal over
         | the Mozilla proposal. Now Apple can control the future
          | evolution of it by saying "Metal 1.X does it this way." With a
          | Vulkan-based WebGPU, Khronos could do the same, but their
         | "vulkan 1.X does it this way" would have already been the
         | result of a cross-vendor standards committee instead of from a
         | private Apple initiative with many conflicting goals around
         | lock-in.
        
           | Tojiro wrote:
           | WebGPU is not based on Metal. Apple did an experiment several
           | years back in Safari which they called WebGPU at the time. It
           | was effectively a thin wrapper around Metal, and so that's
           | caused some confusion, but the actual W3C WebGPU spec has
           | nothing in common with that experiment. It's a carefully
           | selected set of features that are feasible to run effectively
           | on Vulkan, Metal, and D3D12 and structured to feel like a
           | web-native API.
        
           | Jasper_ wrote:
           | The modern WebGPU spec was more or less based on Google's NXT
           | proposal, rather than Apple's WebGPU prototype.
        
         | flohofwoe wrote:
         | WebGPU has almost nothing in common with Apple's proposal
         | (which was essentially "WebMetal", while WebGPU is basically
         | the common subset of Vulkan, D3D12 and Metal).
        
         | dassurma wrote:
         | The WebGPU spec is currently edited by two Google employees and
         | one Mozilla employee. I am not sure how to make sense of your
         | accusation.
        
           | j-pb wrote:
            | The author list and W3C document are pretty much irrelevant;
            | the actual working group is what counts.
           | 
           | After thinking about it some more, I've come to the
           | conclusion that I've actually not been strong enough in my
           | initial accusation, and that Apple is trying to smother any
           | threat that WebGPU poses to its native App supremacy.
           | 
           | The story is essentially:
           | 
           | Apple wants SPIRV dead. Mozilla wants SPIRV alive. Google
           | wants SPIRV alive.
           | 
            | Google developed a human-readable text version of SPIRV,
            | which is isomorphic to the binary version, akin to WASM <->
            | WAT.
           | 
            | Mozilla wants SPIRV, which Apple fully rejects (because of
            | some legal dispute with Khronos, where Apple probably tries
            | to get a patent for something that Khronos has as prior art
            | or something), but they are open to Google's proposal of a
            | textual intermediary, which is isomorphic.
           | 
           | Mozilla and Google agree, because they'll all compile down to
           | SPIRV anyways, except for Apple which will target Metal.
           | 
           | This decision is captured in the W3C proposal which to this
           | day contains the Isomorphism property.
           | 
           | That's the Embrace, WGSL is born.
           | 
            | Apple convinces Mozilla and Google that WGSL should be even
            | more human-readable, since user-readable formats are the
            | backbone of the open web. But for more convenience WGSL will
            | need more features. Extend.
           | 
           | Apple convinces Mozilla and Google that now that the language
           | has grown anyways they should make it much more compatible
           | with other shading languages in general, SPIRV will not be
           | the only target, so while it should be compilable to SPIRV,
           | that's by no means a primary goal. Isomorphism is abandoned.
           | Google tries a last ditch effort to keep the isomorphism with
           | a proposal literally called: "Preserving developer trust via
           | careful conversion between SPIR-V and WGSL".
           | 
           | The proposal is rejected, the working group decides that the
           | isomorphism property will no longer hold (I can only assume
           | malevolence that they haven't removed it from the spec yet,
           | probably to keep dev support.). Extinguished.
           | 
            | https://docs.google.com/presentation/d/1ybmnmmzsZZTA0Y9awTDU...
           | 
            | Why is that isomorphism so important? Because it's nearly
            | impossible to develop truly high-performance shaders with a
            | load of compiler optimisations between your hardware and
            | your tooling. Apple is successfully giving us GLSL in new
            | clothes, and developers are cheering them on, because they
            | just read the W3C spec and ignore the realities of the
            | standard.
           | 
           | ---
           | 
           | Apple: "Having an isolated focus on SPIR-V is not a good
           | perspective, we have MSL etc. to consider too"
           | 
           | ...
           | 
           | Apple: "Extreme A is no interaction with SPIRV?, extreme B is
           | very tightly coupled to SPIRV, we should find a middle point"
           | 
           | ...
           | 
            | Google: "We take all targets into consideration. We can
            | allow developers to do runtime probing and maybe branch
            | according to that. Optimization occurs in existence of
            | uncertainty. Dealing with fuzziness is a fact of life and we
            | should give the tools to developers to determine in
            | runtime."
           | 
           | From https://docs.google.com/document/d/15Pi1fYzr5F-aP92mosOL
           | tRL4...
           | 
           | See also:
           | 
           | https://github.com/gpuweb/gpuweb/pull/599
           | 
           | https://github.com/gpuweb/gpuweb/issues/582
           | 
           | https://github.com/gpuweb/gpuweb/issues/566
           | 
           | http://kvark.github.io/spirv/2021/05/01/spirv-horrors.html
           | 
           | https://github.com/gpuweb/gpuweb/issues/847#issuecomment-642.
           | ..
           | 
           | https://news.ycombinator.com/item?id=24858172
        
             | Jasper_ wrote:
             | This is a very inaccurate summary of what happened. And I
             | say this as someone who does not particularly like WGSL.
             | Mozilla, Google and Apple are all working to help design
             | WGSL.
        
           | zamadatix wrote:
           | The WebGPU Shading Language spec is edited by Google and
           | Apple, Mozilla were the ones proposing straight SPIRV and are
           | not among the editors in this case.
           | 
           | Still doesn't make any sense but it has nothing to do with
           | who the editors are.
        
       | eole666 wrote:
        | For those looking for a complete 3D engine that already
        | supports WebGPU, with a WebGL fallback if needed, there is
        | BabylonJs (you'll need the 5.0 version, still in release
        | candidate state):
       | https://doc.babylonjs.com/advanced_topics/webGPU/webGPUStatu...
       | 
        | ThreeJs is also working on its WebGPU renderer.
        
         | astlouis44 wrote:
         | For any developers interested, our team at Wonder Interactive
         | is working on WebGPU support for Unreal Engine 4 and Unreal
         | Engine 5.
         | 
         | Games and real-time 3D applications in the browser will form
         | the basis of the open metaverse.
         | 
         | You can register here on our website for early access -
         | https://theimmersiveweb.com/
        
         | kylebarron wrote:
         | luma.gl is also working on a new version to support WebGPU:
         | https://luma.gl/docs/v9-api
        
         | rektide wrote:
          | I'm excited for the Bevy engine too. It uses wgpu, which has
          | many backends, including WebGPU.
         | 
         | https://bevyengine.org
         | 
         | https://hn.algolia.com/?q=bevy
        
           | slimsag wrote:
           | I've only heard great things about Bevy, and their community
           | is awesome. Highly recommend looking into it if you like
           | Rust! They also just had a huge gamejam with something like
           | ~70 submissions. Impressive stuff!
           | 
           | I'm super excited about WebGPU and think it will be a winning
           | strategy for game engines going forward.
           | 
            | I'm working on a (very early stages) game engine in Zig
            | called Mach[0] and am using Google Chrome's implementation
            | of WebGPU as the native graphics abstraction layer. Zig's
            | C++ compiler builds it all, so you get cross-compilation
            | out of the box. It's quite fun!
           | 
           | [0] https://devlog.hexops.com/2021/mach-engine-the-future-of-
           | gra...
        
       | superkuh wrote:
        | WebGPU: all the bare metal crashes/exploits and none of the
        | broad support. Browsers should not be operating systems, and
        | the move to make them so will end up making them just as
        | exploitable as your host OS, only much slower due to the
        | additional abstraction layers.
        
         | flohofwoe wrote:
         | WebGPU has the same sandboxing in place as WebGL, e.g. despite
         | the name there is no "direct GPU access" (minus any
         | implementation bugs of course, which will hopefully be ironed
         | out quickly).
        
         | modeless wrote:
         | WebGPU is not significantly more "bare metal" than WebGL is.
         | There are still going to be several layers of abstraction
         | between user code and the GPU in all cases. No operating system
         | implements WebGPU or WGSL directly so there's always a
         | translation and verification layer in the browser, and then
         | there's the OS API layer, and then the driver layer (and
         | probably multiple layers inside each of those layers). In fact,
         | on operating systems that implement OpenGL, WebGL is actually
         | closer to the OS/driver interface than WebGPU is.
         | 
         | WebGL has been around for a long time and the feared exploits
         | never materialized. It's been no worse than other parts of the
         | browser and better than many.
        
         | dman wrote:
         | That ship has long sailed.
        
         | skywal_l wrote:
          | One benefit is that web browsers have to respect a standard
          | set of APIs which abstract the hardware and make them all
          | interoperable (except for the usual incompatibilities,
          | obviously).
         | 
         | Now POSIX was arguably successful, but it was limited in scope
         | (not including the UI aspect of things). Also: Windows...
         | 
          | Additionally, the absence of that big global state which is
          | the filesystem makes the development of portable applications
          | much easier. There have always been limitations to what
          | browsers can do, but personally I don't regret iTunes,
          | Outlook, or all those desktop apps advantageously replaced by
          | SPAs. I can log in from any computer/smartphone and I get my
          | apps. No installation process, no configuration, none of the
          | usual missing/corrupted libraries/DLLs, etc.
         | 
         | And the problem of data ownership is not a technical problem.
         | If we didn't have browsers, software companies would have moved
         | those data away from you and have desktop app access them
         | remotely anyway.
         | 
          | I get that abstraction levels suck, but only if the
          | cost/benefit ratio is bad. Java applets had awful UX AND they
          | were slow. Now SPAs can be quite responsive and some of them
          | are pretty cool to use.
        
         | no_time wrote:
         | >WebGPU, all the bare metal crashes/exploits
         | 
         | It's a goldmine. Think of all the future jailbreak entrypoints
         | this will make possible.
        
         | TazeTSchnitzel wrote:
         | There's very little that GPUs can do that CPUs can't do so far
         | as exploitation is concerned. The GPU driver runs in a
         | sandboxed userland process just like the browser engine does,
         | and the GPU respects process isolation just like the CPU does.
         | There is no "bare metal" here!
         | 
         | Now, sure, there must be plenty of memory-safety issues in GPU
         | drivers, but why find an exploit in a driver only some users
         | have installed, when you can find an exploit in the browser
         | everyone has installed? The GPU driver exploit doesn't give you
         | higher privileges.
        
       | pjmlp wrote:
        | With the "fun" of rewriting all WebGL shaders in WGSL.
        
         | dassurma wrote:
         | You can use the SPIR-V compiler toolchain to just transpile
         | from GLSL to WGSL :)
        
           | flohofwoe wrote:
           | Minor nitpick: this translation happens in separate
           | libraries/tools which just make use of the SPIRVTools library
           | for some SPIRV processing tasks, but not the actual
           | translation to WGSL.
           | 
           | Tint (Google, implemented in C++):
           | https://dawn.googlesource.com/tint/
           | 
           | Naga (Mozilla(?), implemented mostly in Rust):
           | https://github.com/gfx-rs/naga
           | 
           | Both are a bit similar to SPIRV-Cross, except that they also
           | support WGSL.
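            | 
            | If you want to try the route dassurma describes, the
            | pipeline looks roughly like this (a sketch; it assumes
            | glslangValidator from the Vulkan SDK and Naga's CLI are
            | installed, and flags may differ between versions):

```shell
# Hypothetical sketch: GLSL -> SPIR-V -> WGSL.
# Assumes glslangValidator (Vulkan SDK) and naga-cli are on PATH;
# exact flags may differ between versions.
glslangValidator -V shader.frag -o shader.spv   # compile GLSL to Vulkan SPIR-V
naga shader.spv shader.wgsl                     # Naga infers formats from file extensions
```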
        
           | pjmlp wrote:
            | Yes, I know. Even more fun dealing with additional kludges,
            | because 3D on the web versus native doesn't have enough of
            | them already.
        
       | lmeyerov wrote:
       | If anyone wants to play with webgl/webgl2/webgpu/high-perf js,
       | we're looking for a graphics engineer for all sorts of fun viz
       | scaling projects (graphs used by teams tackling misinfo, global
       | supply chains, money laundering, genomics, etc.):
       | https://www.graphistry.com/careers
       | 
        | Agreed with other folks here: we see opportunities to advance
        | over what we did with WebGL (OpenGL ES 2.0, from 10-20 years
        | ago)... but as far as we can tell, it's still a puzzle to work
        | around missing capabilities relative to what we do with modern
        | OpenCL/CUDA on the server. So definitely an adventure!
        
       | bacan wrote:
        | Another thing to turn off in Firefox. I already have WebRTC,
        | WebGL, WASM, ServiceWorkers, etc. disabled. Most sites are so
        | much faster without all this junk.
        
       | dmitriid wrote:
        | So now there are three incompatible graphics APIs in the
        | browser, each incomplete in different ways. Each of them is a
        | huge chunk of code that needs to be maintained and updated.
        
         | modeless wrote:
         | If I was developing a new browser today, my graphics
         | abstraction would be WebGPU and all other graphics would be
         | built on it. WebGL, Canvas 2D, SVG, and even DOM rendering
         | would be unprivileged libraries on top and only WebGPU would be
         | able to touch the actual hardware. I wouldn't be surprised if
         | some browsers moved that way in the long term.
        
           | Narishma wrote:
           | Doesn't WebGPU require a DX12-level GPU? If so, that would
           | make your browser incompatible with a ton of systems out
           | there.
        
             | modeless wrote:
             | Writing a new browser would take many years and those
             | systems would be obsolete :) But there's also an intention
             | to produce a more compatible restricted subset of WebGPU at
             | some point.
        
           | raphlinus wrote:
           | I find your ideas intriguing and I wish to subscribe to your
           | newsletter.
           | 
           | More seriously, yes, if you were starting a browser
           | completely from scratch, this would probably be a good
           | approach. There are some subtleties, but probably nothing
           | that couldn't be solved with some elbow grease.
           | 
           | If anyone is seriously interested in exploring this
           | direction, one thing that would be very interesting is a port
           | of piet-gpu to run on top of WebGPU (it's currently running
           | on Vulkan, Metal, and DX12). I've done quite a bit of pre-
           | work to understand what would be involved, but don't
           | personally have the bandwidth to code up the WebGPU port. I
           | would be more than happy to work with somebody who does.
        
         | qbasic_forever wrote:
          | There are at least five now: DOM, canvas, SVG, WebGL, WebGPU.
          | You could maybe make an argument that fonts are an entirely
          | separate rendering engine with quirks and issues too. CSS
          | Houdini might even be another new one.
        
         | undefined_void wrote:
         | yes
        
       | azangru wrote:
       | Will there be an http203 episode about this? ;-)
        
       | beaugunderson wrote:
       | One request for the author: Please don't hide the scroll bars on
       | websites you make like this!
        
         | dassurma wrote:
         | I am not! I have no styles that should affect scrollbar
         | visibility. What browser & OS are you on?
        
           | Ruphin wrote:
           | Hi Surma. Thanks a lot for this writeup. I have been trying
           | to get into WebGPU but good introductory material was sorely
           | lacking some time ago.
           | 
            | I have one question: I had trouble finding a combination of
            | browser/OS/GPU that would allow WebGPU access at the time.
            | What setup did you use, and do you have any recommendations?
            | I'm particularly looking for options on Linux. (Sorry for
            | hijacking this thread.)
        
       | seumars wrote:
        | This is off-topic, but I instantly started reading the article
        | in Surma's voice. I mean this as a compliment. Surma and Jake's
        | HTTP203 on YouTube is superb content.
        
       ___________________________________________________________________
       (page generated 2022-03-08 23:00 UTC)