[HN Gopher] Chrome ships WebGPU
       ___________________________________________________________________
        
       Chrome ships WebGPU
        
       Author : itsuka
       Score  : 771 points
       Date   : 2023-04-06 08:22 UTC (14 hours ago)
        
 (HTM) web link (developer.chrome.com)
 (TXT) w3m dump (developer.chrome.com)
        
       | pjmlp wrote:
       | Get ready to rewrite all your shaders in WGSL.
        
         | bschwindHN wrote:
         | You're in almost every thread about WGPU, with negative
         | opinions about it. What would your ideal graphics API be, and
         | why isn't it coming to fruition, do you think?
        
           | kevingadd wrote:
           | Not the person you're replying to, but a start would be to
           | use an existing proven shader format like SPIRV or DXIL
           | instead of making up an entirely new shader ecosystem in
           | order to satisfy the whims of a single browser vendor and
           | waste everyone's time.
        
           | pjmlp wrote:
           | > What would your ideal graphics API be
           | 
           | Same capabilities as native APIs.
           | 
           | > , and why isn't it coming to fruition, do you think?
           | 
           | Politics and lack of tooling.
           | 
           | Why the negativity?
           | 
           | Consider that after 10 years, there is no Web game that can
           | match AAA releases for Android and iOS written in OpenGL ES
           | 2.0 (already lowering the bar here to WebGL 1.0).
           | 
           | And SpectorJS is the best GPU debugger we ever got.
           | 
           | Meanwhile in 2010,
           | 
           | https://www.youtube.com/watch?v=UQiUP2Hd60Y
        
         | KrugerDunnings wrote:
          | With Naga and SPIR-V it is possible to "import" routines
          | written in GLSL from WGSL.
        
       | hexhowells wrote:
       | The MDN docs can also be found here:
       | https://developer.mozilla.org/en-US/docs/Web/API/WebGPU_API
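        | 
        | For anyone who wants to poke at it, the API entry point is
        | small. A minimal sketch in JS (inside an async context, error
        | handling omitted):
        | 
        |     // Feature-detect, then request an adapter and device.
        |     if (!navigator.gpu) throw new Error("WebGPU not supported");
        |     const adapter = await navigator.gpu.requestAdapter();
        |     const device = await adapter.requestDevice();
        |     // The device is what buffers/pipelines are created from.
        |     console.log(device.limits);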
        
       | erk__ wrote:
        | I think Firefox aims to have this in Firefox 113; WebGPU was
        | just enabled by default in Nightly last week.
       | https://bugzilla.mozilla.org/show_bug.cgi?id=1746245
        
         | rockdoe wrote:
         | "Firefox's implementation is based on the wgpu Rust crate,
         | which is developed on GitHub and widely used outside Firefox."
        
           | tormeh wrote:
           | Wgpu is the foundation for the Bevy game engine (on native,
           | web is still WebGL), and is also used by some games and UI
           | frameworks.
        
         | throwaway12245 wrote:
         | Any firefox nightly webgpu demos?
        
       | AbuAssar wrote:
       | can this be used in fingerprinting?
        
         | NiekvdMaas wrote:
         | Short answer: yes
         | 
         | https://gpuweb.github.io/gpuweb/#privacy-considerations
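          | 
          | E.g. the adapter alone already exposes device-specific bits
          | (a sketch; the exact surface varies per browser and GPU):
          | 
          |     const adapter = await navigator.gpu.requestAdapter();
          |     // Supported features and limits differ per GPU/driver,
          |     // which is what makes them a fingerprinting surface.
          |     console.log([...adapter.features]);
          |     console.log(adapter.limits.maxTextureDimension2D);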
        
         | switch007 wrote:
          | They wouldn't have merged it if not.
        
         | Scarjit wrote:
         | Yes, checkout the "Privacy considerations" part of the spec:
         | https://www.w3.org/TR/webgpu/#privacy-considerations
        
         | codedokode wrote:
         | This probably will be the main use of the technology.
        
       | rock_hard wrote:
        | This is an exciting day! I have been dreaming about WebGPU
        | shipping in Chrome/Edge for as long as I can remember.
        | 
        | Now hoping Safari won't take another decade to ship proper
        | support... because until then there are only very limited use
        | cases :(
        
       | j-pb wrote:
        | I lost all hope for WebGPU after they decided to roll their own
        | ad-hoc shader language that kinda looks like, but is totally
        | not, Rust.
       | 
       | At least with WebGL you had C.
       | 
       | Without SPIR-V support this spec is just another tire on the fire
       | that is the modern web.
        
         | orra wrote:
         | No SPIR-V was the cost of getting Apple on board.
        
           | flohofwoe wrote:
            | ...it's not quite that simple:
           | 
           | http://kvark.github.io/spirv/2021/05/01/spirv-horrors.html
           | 
           | WebGPU would most likely have to create its own subset of
           | SPIRV anyway to fulfill the additional safety and validation
           | requirements of the web platform.
        
           | madeofpalk wrote:
           | Getting Apple on board?
           | 
            | Wasn't Apple the originator of the proposal?
           | https://webkit.org/blog/7380/next-generation-3d-graphics-
           | on-... https://webkit.org/wp-content/uploads/webgpu-api-
           | proposal.ht...
        
             | flohofwoe wrote:
             | Apple has a Khronos allergy, and both SPIRV and GLSL are
             | Khronos standards.
             | 
             | Also the original 3D API proposal from Apple was
             | essentially a 1:1 Javascript shim for Metal, which looked
             | quite different from WebGPU.
             | 
             | Apple also originally proposed a custom shading language
             | which looked like - but wasn't quite - HLSL. Compared to
             | that, WGSL is the saner solution (because translation from
             | and to SPIRV is relatively straightforward).
        
         | hutzlibu wrote:
         | "ad-hoc shader language that kinda looks like, but is totally
         | not rust."
         | 
         | But is that language actually bad, or is it just not your
         | favourite language?
         | 
         | What don't you like about it?
        
           | enbugger wrote:
           | Because it is too different from other C-like DSLs GLSL,
           | HLSL. GLSL was doing its job perfectly and there was no
           | reason to invent another one. With wgpu, I now need to spend
           | time on searching through its reference pages for things that
           | I already know how to do in shaders. Now all countless WebGL
           | tutorials need to be migrated to completely new syntax. This
           | could be much easier by just making a superset of GLSL. And I
           | had't got any sane answer on "why it should look like Rust?"
        
           | mkl95 wrote:
            | The language is called WGSL. It's not ad-hoc (I mean, all
            | DSLs are to some extent?) and it has a pretty easy learning
            | curve if you know GLSL. I don't get what the fuss is about.
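            | 
            | For a taste of the syntax, here's a trivial fragment
            | shader (a sketch), passed to the JS API as a string:
            | 
            |     const module = device.createShaderModule({ code: `
            |       @fragment
            |       fn fs_main() -> @location(0) vec4<f32> {
            |         return vec4<f32>(1.0, 0.0, 0.0, 1.0); // opaque red
            |       }
            |     `});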
        
           | illiarian wrote:
           | No idea what the comment about Rust means, but we already
           | have several shader languages. There was literally no reason
           | to invent another incompatible one
        
             | xchkr1337 wrote:
              | Most current shader languages are very close to C in
              | terms of syntax and behavior, and they inherit some of
              | the worst aspects of C as a language. I guess they could
              | have gone with SPIR-V, but generally a compilation step
              | shouldn't be required in web standards.
        
           | gardaani wrote:
           | JavaScript is _the_ programming language for the Web. WGSL
           | syntax should have been based on JavaScript syntax. It would
           | have made writing shaders easier for millions of existing Web
           | developers. Now they have to learn a new syntax.
        
           | adrian17 wrote:
           | > What don't you like about it?
           | 
           | AFAIK: It's a mostly-but-not-100% textual equivalent of
           | SPIR-V that, syntax-wise, is a weird hodgepodge of all the
           | other shader languages. At the same time, it's rare it'll be
           | actually written by hand by humans - it'll mostly be
           | generated from another language at application build time,
           | and then it'll have to be compiled back into bytecode by the
           | browser, which feels like really redundant work.
           | 
            | Further, it's often said that the main reason the language
            | exists in the first place is because Apple vetoed SPIR-V
            | due to their legal disputes with Khronos. I'm assuming
            | that's why the comment above called it "ad-hoc".
        
         | xchkr1337 wrote:
         | Syntax-wise GLSL is a mess and having a new language to work
         | with is like a breath of fresh air.
        
         | sirwhinesalot wrote:
         | Ugh... why... (And I say this despite rust being my favorite
         | language ATM)
        
         | flohofwoe wrote:
         | > At least with WebGL you had C.
         | 
          | Huh? WebGL uses two (mutually incompatible) versions of
          | GLSL, not C.
         | 
         | The topic of the WebGPU shading language has been discussed to
         | death already, with no new insights brought to the discussion
         | for a very long time.
         | 
         | If you have a SPIRV based shader pipeline, you can simply
         | translate SPIRV to WGSL in an offline compilation step (and to
         | get to SPIRV in the first place, you need an offline
         | compilation step anyway).
         | 
         | If you use one of the native WebGPU libraries outside the
         | browser, you can load SPIRV directly.
        
         | TazeTSchnitzel wrote:
         | > At least with WebGL you had C.
         | 
         | WebGL's shading language was not C.
        
       | victor96 wrote:
       | This is super exciting for us as we develop interactive browser
       | UIs. I hope future versions of WebGPU will be backwards
       | compatible to all those with Chrome 113.
       | 
       | We need an iOS like pace of updating for browsers!
        
         | paulryanrogers wrote:
         | > We need an iOS like pace of updating for browsers!
         | 
          | How do you mean? Browsers often update every 6 weeks. iOS
          | releases with new behavior are annual.
        
           | victor96 wrote:
            | The difficulty is the number of people still running old
            | versions. As soon as a new iOS version comes out, a super
            | high percentage of people switch, whereas with browsers
            | you have to account for the update cycles of every browser
            | on the market.
        
       | amrb wrote:
       | Seriously I'm looking forward to running ML inference on the web!
        
         | fulafel wrote:
          | People have been doing it for a long time with WebGL; see
          | e.g. https://github.com/tensorflow/tfjs and
         | https://cloudblogs.microsoft.com/opensource/2021/09/02/onnx-...
        
           | paulgb wrote:
           | It will be interesting to see the performance differential.
           | Tensorflow.js provides a benchmark tool[1]. When I ran
           | them[2] on an M1 MacBook Pro, WebGPU (in Chrome Canary) was
           | usually 2x as fast as WebGL on large models, sometimes 3x.
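            | 
            | Switching tfjs backends to compare for yourself is a
            | one-liner (a sketch, assuming the WebGPU backend package
            | is available):
            | 
            |     import * as tf from '@tensorflow/tfjs';
            |     import '@tensorflow/tfjs-backend-webgpu';
            | 
            |     await tf.setBackend('webgpu'); // or 'webgl'
            |     await tf.ready();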
           | 
           | [1] https://tfjs-benchmarks.web.app/local-benchmark/
           | 
           | [2] https://digest.browsertech.com/archive/browsertech-
           | digest-wh...
        
       | switch007 wrote:
       | How does this help web fingerprinting / Google's revenue?
        
       | [deleted]
        
       | bitL wrote:
       | Is there a PyTorch port running on WebGPU somewhere? So that I
       | could add local processing for ML pipelines into a webapp,
       | bypassing cloud.
        
       | markdog12 wrote:
       | Note that this has not shipped to stable, it's currently in beta.
       | 
       | I read an unqualified "shipped" as "shipped to stable", but maybe
       | that's just me.
        
       | iamsanteri wrote:
        | Wow, this could be huge for enabling more efficient rendering
        | of even basic animations and effects on the web, making it a
        | smooth experience.
        
         | illiarian wrote:
         | If your entire site/app is in WebGPU then yes I guess?
         | 
         | Otherwise it does nothing for "smooth animations on the web"
        
         | kevingadd wrote:
         | Not really. If you want smooth and efficient rendering of basic
         | animations/effects, you should be using CSS, because then the
         | browser will natively rasterize, scroll and composite
         | everything in parallel using hardware acceleration. It's _far_
         | more efficient than rendering basic elements with WebGL and
         | will probably be better than WebGPU in most cases.
        
         | jug wrote:
          | Dear god no, use CSS for that. It's already hardware
          | accelerated in regular browsers. You're talking about
          | website presentation, and that's the purpose of CSS.
        
       | yagiznizipli wrote:
       | I think it's time to support WebGPU on Node.js too. Happy that it
       | is finally on a stable version of Chromium.
        
       | Ciantic wrote:
       | This is a comment from Aras Pranckevicius [1]:
       | 
       | > WebGL was getting really old by now. I do wonder whether WebGPU
       | is a bit late too though (e.g. right now Vulkan decides that PSOs
       | maybe are not a great idea lol)
       | 
       | > As in, WebGPU is very much a "modern graphics API design" as it
       | was 8 years ago by now. Better late than never, but... What's
       | "modern" now seems to be moving towards like: bindless everything
       | (like 3rd iteration of what "bindless" means), mesh shaders,
       | raytracing, flexible pipeline state. All of which are not in
       | WebGPU.
       | 
        | I'm not that versed in the details, but it would be
        | interesting to hear what the advantages of this modern
        | bindless way of doing things are.
       | 
       | [1]: https://mastodon.gamedev.place/@aras/110151390138920647
        
         | dist-epoch wrote:
         | Think about bindless as raw pointers vs handles.
         | 
          | In bindless (pointers) you say "at this GPU memory location I
          | have a texture with these params".
          | 
          | In non-bindless you say "API, create a texture with these
          | params and give me a handle I will later use to access it".
          | 
          | Bindless gives you more flexibility, but it's also harder to
          | use, since it's now your responsibility to make sure those
          | pointers point at the right stuff.
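          | 
          | WebGPU 1.0 itself is firmly in the handle camp. A sketch of
          | what binding looks like there (JS; the pipeline, sampler,
          | texture and pass objects are assumed to exist):
          | 
          |     // Every resource the shader touches is declared up front
          |     const bindGroup = device.createBindGroup({
          |       layout: pipeline.getBindGroupLayout(0),
          |       entries: [
          |         { binding: 0, resource: sampler },
          |         { binding: 1, resource: texture.createView() },
          |       ],
          |     });
          |     pass.setBindGroup(0, bindGroup);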
        
           | shadowgovt wrote:
           | How badly can you wreck state in bindless? Badly enough to
           | see the pointers of another process or detect a lot of high-
           | detail information on what computer is running the program?
           | 
           | If so, that'd be a non-starter for a web API. Web APIs have
           | to be, first and foremost, secure and protect the user's
           | anonymity.
        
             | brookst wrote:
             | All of this is in the context of a browser. If a
             | misbehaving web app uses pointers for memory from another
             | process, that should be blocked by all of the same things
             | that prevent non-privileged apps from doing the same thing.
        
               | shadowgovt wrote:
               | Agreed, as long as they sandbox properly (because it's
               | also important that you can't use the API to find out
               | information from another tab).
        
             | dist-epoch wrote:
             | On Windows GPU memory space is virtualized by the OS, so it
             | has the same kinds of access controls as regular system
             | memory.
             | 
             | Linux/Mac also support GPU virtualized memory, but I'm not
             | sure if it's always enabled.
        
             | dcow wrote:
             | "The web" should _not_ first and foremost protect
             | anonymity. It should do what humans need it to do ideally
             | while keeping users private and secure. If there's a
             | concern, my browser should ask me if I'm willing to share
             | potentially sensitive information with a product or
             | service. I fucking hate this weird angsty idea that the web
             | is only designed for anonymous blobs and trolls.
        
               | sclarisse wrote:
                | Letting advertisers identify you through some web-
                | accessible GPU interface so they can track your every
               | move and sell the data to all comers ... won't help you
               | fight anonymous online trolls.
        
               | dcow wrote:
               | So let me opt in to it rather than neuter it.
        
           | kevingadd wrote:
           | It's a bit more complex than that. In classical OpenGL (and
           | thus WebGL) "bindless" is more significant: You had to bind
           | resources to numbered stages like TEXTURE2 in order to
           | render, so every object with a unique texture required you to
           | make a bunch of API calls to switch the textures around.
           | People rightly rejected that, which led to bindless rendering
            | in OpenGL. Even then, however, you still had to _create_
            | textures; the distinction is that you no longer had to
            | make a billion API calls per object in order to bind them.
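            | 
            | In WebGL terms that's the familiar per-object dance (JS
            | sketch; diffuseTex and diffuseLoc assumed):
            | 
            |     gl.activeTexture(gl.TEXTURE2);
            |     gl.bindTexture(gl.TEXTURE_2D, diffuseTex);
            |     gl.uniform1i(diffuseLoc, 2); // sampler -> unit 2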
           | 
           | Critically however, things like vertex buffers and
           | fragment/vertex shaders are also device state in OpenGL, and
           | bindless textures don't fix that. A fully bindless model
           | would allow you to simply hand the driver a bundle of handles
           | like 'please render vertices from these two vertex buffers,
           | using this shader, and these uniforms+textures' - whether or
           | not you have to allocate texture handles first or can provide
           | the GPU raw texture data is a separate question.
        
         | flohofwoe wrote:
         | Aras is right, but the elephant in the room is still shitty
         | mobile GPUs.
         | 
          | Most of those new and fancy techniques don't work on mobile
          | GPUs, and probably won't for the foreseeable future. Vulkan
          | should actually have been two APIs: one for desktop GPUs and
          | one for mobile GPUs - and those new extensions are doing
          | exactly that, splitting Vulkan into two more or less
          | separate APIs: one that sucks (for mobile GPUs) and one
          | that's pretty decent (but only works on desktop GPUs).
         | 
         | WebGPU cannot afford such a split. It must work equally well on
         | desktop and mobile from the same code base (with mobile being
         | actually much more important than desktop).
        
           | tourgen wrote:
           | [dead]
        
           | miohtama wrote:
            | I think it's unrealistic expectation management to say
            | that desktop and mobile must or should be equal. There are
            | plenty of web application use cases one would like to run
            | on a desktop that are irrelevant for mobile, for many
            | other reasons as well. E.g. think of editing spreadsheets.
        
             | jayd16 wrote:
             | This is an odd analogy. We should reduce the API space for
             | mobile so devs don't make mobile spreadsheets? I
             | mean...what is this arguing exactly? UX is different, sure,
             | but how does that translate into something this low level?
        
             | nightpool wrote:
             | I edit spreadsheets regularly on mobile. Why should I be
             | prevented from doing so based on my GPU's capabilities?
        
             | slimsag wrote:
             | WebGPU says the baseline should be what is supported on
             | both desktop+mobile, and that extensions (in the future)
             | should enable the desktop-only use cases.
             | 
             | Others seemingly argue that mobile should be ignored
             | entirely, that WebGPU shouldn't work there, or that it
             | should only work on bleeding-edge mobile hardware.
        
           | pier25 wrote:
           | > _with mobile being actually much more important than
           | desktop_
           | 
           | How so?
           | 
           | I always thought the more common use case for GPU
           | acceleration on the web for mobile were 2D games (Candy crush
           | etc). Even on low end devices these are already plenty fast
           | with something like Pixi, no?
        
             | flohofwoe wrote:
             | In general, WebGL has more CPU overhead under the hood than
             | WebGPU, so the same rendering workload may be more energy
             | efficient when implemented with WebGPU, even if the GPU is
             | essentially doing the same work.
        
               | pier25 wrote:
               | Thanks. Good point.
        
             | pdpi wrote:
             | We live in a bubble where we don't notice it, but desktop
             | as a platform is... not dying exactly, but maybe returning
             | to 90s levels of popularity. Common enough, but something
             | tech-minded people use, and not necessarily for everybody.
             | Mobile is rapidly becoming the ubiquitous computing
             | paradigm we all thought desktop computers would be. In that
             | world, WebGPU is much more important on mobile than on
             | desktop.
        
               | Baeocystin wrote:
               | I genuinely think personal computing has been severely
               | hamstrung over the past decade+ due to the race to be
               | all-encompassing. Not everything has to be for everyone.
               | It's ok to focus on tools that only appeal to other
               | people in tech. It really is.
        
           | samstave wrote:
           | >> _" shitty mobile GPUs."_
           | 
            | Uh, no; it's power and heat management - so battery and
            | fire risk - that limits SFF. It would be good for mobile
            | devices to have external GPU/battery attachments via a
            | universal connector... this would boost the efficacy of
            | devices... but you may not always need the boost provided
            | by the umbilical - when you do need it, just put it
            | outside the machine and connect it when needed...
        
           | jayd16 wrote:
           | Can you explain what the split is supposed to be? I'm fairly
           | confused because mobile GPUs (tile based) are creeping into
           | the desktop space. The Apple Silicon macs are closer to tile
           | based mobile GPUs than traditional cards.
           | 
           | What APIs are supposed to be separate, why, and what side of
           | the fence is the M1 supposed to land on?
        
             | flohofwoe wrote:
             | These are good posts to answer your question I think:
             | 
             | - https://www.yosoygames.com.ar/wp/2023/04/vulkan-why-faq/
             | 
             | - https://www.gfxstrand.net/faith/blog/2022/08/descriptors-
             | are...
             | 
             | In places where Vulkan feels unnecessarily restrictive, the
             | reason is mostly some specific mobile GPU vendor which has
             | some random restrictions baked into their hardware
             | architecture.
             | 
             | AFAIK it's mostly not about tiled renderers but about
             | resource binding and shader compilation (e.g. shader
             | compilation may produce different outputs based on some
             | render states, and the details differ between GPU vendors,
             | or bound resources may have all sorts of restrictions, like
             | alignment, max size or how shader code can access them).
             | 
              | Apple's mobile GPUs are pretty much the cream of the crop and
             | mostly don't suffer from those restrictions (and any
             | remaining restrictions are part of the Metal programming
             | model anyway, but even on Metal there are quite a few
             | differences between iOS and macOS, which even carried over
             | to ARM Macs - although I don't know if these are just
             | backward compatibility requirements to make code written
             | for Intel Macs also work on ARM Macs).
             | 
             | It's mostly on Android where all the problems lurk though.
        
               | jayd16 wrote:
                | Ah ok, so it's not so much the mobile architecture as
                | the realities of embedded GPUs and unchanging drivers
                | compared to the more uniform Nvidia/AMD desktop
                | drivers.
               | 
               | This is a real problem but I'm not sure splitting the API
               | is a solution. If a cheap mobile GPU has broken
               | functionality or misreports capabilities, I'm not sure
               | the API can really protect you.
        
         | mschuetz wrote:
         | Yeah, WebGPU unfortunately ended up becoming an outdated mobile
         | phone graphics API on arrival. Still better than WebGL, but not
         | quite what I would have liked it to be.
        
       | 0xDEF wrote:
       | Chrome and Firefox have supported WebGL since 2011 and
       | WebAssembly since 2017.
       | 
        | What is the reason we don't have early-2010s-quality AAA game
        | experiences running in the browser?
        
         | flohofwoe wrote:
         | One technical reason is that AAA games would require a complete
         | rethinking of their asset loading strategy, since they'd
         | basically have to use the internet as a very slow and very
         | unreliable hard disc to stream their assets from (which
         | basically means you can't stream the kind of high resolution
         | assets expected of AAA games at all, so you'll have to find a
        | simplified graphics style that looks explicitly 'non-AAA').
         | 
         | You don't want to wait minutes or even hours to download all
         | assets before the game can start (and then again next time
         | because the browser can't cache so much data).
         | 
         | TL;DR: the web platform is different enough from native
         | platforms that ports of bleeding edge games (even from 10 years
         | ago) are not feasible. You'd have to design the entire game
         | around the web platform limitations, which are mainly asset
         | streaming limitations.
         | 
         | But that doesn't happen because there's no working monetisation
         | strategy for 'high profile games' on the web platform, the
         | whole business side is just way too risky (outside some niches
         | which mainly focus on casual F2P games).
         | 
         | The 3D API is only a very small part of the entire problem
         | space (and by far not the most important).
         | 
        | In the end it's mostly about the missing 'business
        | opportunity'. If there were money in (non-trivial) web games,
        | the games would come.
        
         | mike_hearn wrote:
         | Here are some reasons:
         | 
         | AAA is by definition games that aim at the top end of what can
         | be done in performance and graphical quality. Browsers
         | prioritize other things. Put another way you can't be both AAA
         | and in the browser, because if you tried, other people would
         | come along and simply do better than you outside and you
         | wouldn't be AAA anymore.
         | 
         | Specifically, browsers insist on very strong levels of
         | sandboxing at the cost of high overhead, and they don't want
         | you to run native code either, so you lose both performance and
         | compatibility with most existing game libraries/engines. They
         | also insist on everything being standardized and run through
         | the design-by-committee meat grinder. Whilst Microsoft are
         | polishing up the latest version of Direct3D browser makers are
         | still trying to standardize the version from five years ago.
         | 
         | Browsers are optimized for lots of tiny files, but game
         | toolchains tend to produce a small number of big files. For
         | example browsers aren't good at resuming interrupted downloads
         | or pinning data into the disk cache.
         | 
         | PC gamers have unified around Steam, which offers various
         | advantages that raw web doesn't. Steam is intended for native
         | apps.
         | 
         | Many games need to be portable to consoles because that's where
         | the revenue is (bigger audience, less piracy). Consoles don't
         | run web apps.
         | 
         | Browsers not only don't make it easy to implement anti-cheat
         | but actively make it as difficult as possible.
         | 
         | Debugging tools for native code in the browser aren't as good
         | as outside.
         | 
         | And so on and so on. That's not a full list, it's just off the
         | top of my head. Other types of apps the web ignores: CLI apps,
         | servers, anything to do with new hardware, OS extensions ...
         | the list goes on. Really, we must ask why we'd ever think it'd
         | make sense to ship AAA games as web pages. If you want the
          | benefits the web brings without the problems, then you'd
         | want a new platform that tries to learn from all of this and be
         | a bit more generalized. I wrote up a design doc a month ago
         | that tries to do that, see what you think:
         | 
         | https://docs.google.com/document/d/1oDBw4fWyRNug3_f5mXWdlgDI...
        
         | kevingadd wrote:
         | Lots of reasons. Here are a few in no particular order from
         | someone who's shipped games on the web, on consoles, and on PC:
         | 
         | * Deploying large software (i.e. games with all their textures
         | and sounds and models) to the browser is a pain in the ass.
         | Your content will get dumped out of the cache, the user's
         | connection may be spotty, and the browser tab might use up too
         | much RAM and get killed. Console and PC game distribution has
         | an install stage because you need one and that simply is not
         | possible in the web model [1]
         | 
         | * Browsers provide bad latency and stability characteristics.
         | They will drop frames frequently due to garbage collection or
         | activity in other tabs. The amount of multiprocess
         | communication, buffering, etc involved in running a webapp also
         | adds input and rendering latency. This makes games just feel
         | sluggish and janky. If your only option for releasing your game
         | is the web, you'll pick the web, but if players could get a
         | smoother experience on Steam or PlayStation instead, you'd be a
         | fool not to release there. The worst scenario is mobile, where
         | in some cases the input delay on touches is upwards of 100ms.
         | 
         | * Browsers have subpar support for user input, especially on
         | phones. For native games users can pick up an input device of
         | their choice and begin playing immediately (unless it's an
         | ancient PC game that doesn't support hotplug - this is more
         | common on Linux for reasons that aren't obvious to me). In the
         | browser, gamepad input doesn't report until you _press a
          | button_ - moving the analog stick to move a menu cursor isn't
         | good enough - which is a weird and jarring experience.
         | Fullscreen is required for certain types of input as well,
         | which means people who prefer to game in a window on their
         | desktop are out of luck. Apple gets bonus points for just
         | intentionally making all of this stuff worse on iOS to force
         | you into the App Store for that sweet 30% cut.
         | 
         | * AAA game experiences are expensive and more importantly time-
         | consuming to develop. There are studios that started building
         | AAA web game experiences a long time ago, and over the course
         | of years most or all of them flamed out. Game development is
         | hard so these failures aren't exclusively the fault of the web
         | platform, but the web platform certainly didn't help. See
         | https://www.gamedeveloper.com/business/rts-studio-artillery-...
         | for one example - they started out building an AAA web game,
         | then pivoted to native because they couldn't get around all the
         | problems with web games, and then eventually shut down.
         | 
         | * Browsers have limited access to resources. I gestured at this
         | in the first bullet point, but if you run in a browser tab you
         | have less address space, less compute throughput, less VRAM,
         | and less bandwidth at your disposal than you would in a native
         | game. For "AAA" experiences this is a big problem, but for
         | simpler games this is not really an issue. For large scale
         | titles this can be the difference between 30fps and 60fps, or
         | "all the textures are blurry" and "it looks crisp".
         | 
         | [1]: There are some newer APIs that alleviate some of the
         | issues I listed, but not all of them
        
         | pjmlp wrote:
          | That is always my example of why WebGL is only useful for
          | e-commerce stores, Shadertoy, and little else.
          | 
          | We still don't have any debugger of the quality of
          | RenderDoc, Instruments, or PIX, and there is nothing with
          | the quality of Infinity Blade, the game Unreal and Apple
          | used to demo the iPhone's GL ES 3.0 capabilities.
         | 
         | Streaming like XBox Cloud seems to be the only path for "AAA
         | game experiences running in the browser".
        
         | mschuetz wrote:
         | One reason is that WebGL isn't a 2010s technology, but more of
         | a 2005 technology (it doesn't even have compute shaders).
         | WebGPU will finally bring the Web to the state of 2010.
        
         | yread wrote:
         | There is Doom 3
         | 
         | https://wasm.continuation-labs.com/d3demo/
         | 
         | Released in 2004, but still quite impressive
        
           | kllrnohj wrote:
           | From the project page:
           | 
           | > Performance is decent with around 30-40 FPS on a modern
           | desktop system (ranges from 20 FPS in Edge, 40 FPS in
           | Firefox, to 50 FPS in Chrome)
           | 
            | Achieving 2004 levels of performance & quality with nearly
            | 20 years of hardware improvements is hardly impressive -
            | it's really rather pathetic if anything. (I did get better
            | performance in the opening area than the project page
            | claims, but I didn't play very long to find out if it
            | drops later on.)
           | 
           | But also note that it's not actually Doom 3 proper, but
           | includes changes from other ports as well as a completely
           | different renderer. There's sadly no side-by-side original
           | vs. port screenshots to compare what the differences are.
        
       | Mindwipe wrote:
       | I wonder if you could utilise WebVR and this to build a
       | reasonably performant VR application on a Mac.
       | 
       | Of course, that would require there being Quest drivers and
       | software to get any traction...
        
       | macawfish wrote:
       | Still waiting on Linux support! I'd love to start working with
       | this but gave up after a dozen+ tries to get it consistently
       | running on Linux.
        
       | tormeh wrote:
       | Anyone know the reason Google created Dawn instead of going with
       | Wgpu? Attachment to C++, or NIH syndrome?
        
         | kllrnohj wrote:
          | Why does Mozilla use SpiderMonkey instead of V8? Why did
          | Apple create the B3 JIT instead of just using TurboFan?
         | 
         | Competing implementations are a cornerstone of standards.
         | Indeed in many domains it's a requirement for a spec to have
         | multiple compliant implementations to be considered complete at
         | all.
        
           | tormeh wrote:
           | The older I get the more I disagree with this POV. All other
           | things equal, a single open source implementation is superior
           | to several ones. Several implementations lead to duplication
           | of effort, both for those developing them, and more
           | importantly for those developing for those implementations.
           | Software has to be tested separately for each implementation,
           | often with vendor-specific hacks.
        
         | pjmlp wrote:
         | Dawn was there first?
        
           | tormeh wrote:
           | Can't fault that logic.
        
       | infrawhispers wrote:
        | This is really nice. I may need to change the ONNX backends
        | in my demo here[1] from wasm to webgpu.
       | 
       | [1] https://anansi.pages.dev/
        
       | HeckFeck wrote:
       | The browser truly is the new OS, for better or for worse.
        
         | surgical_fire wrote:
         | History is indeed one long defeat.
        
         | toyg wrote:
          | They couldn't secure our OSes to run untrusted code safely,
          | so they built an OS on top of an OS (yo-dawg meme here).
         | 
         | It wouldn't even be so terrible, if it didn't tie us down to a
         | crappy language (JS).
        
           | pid-1 wrote:
           | I think the real reason the web was built was because Google,
           | etc... Decided they need a distribution platform that could
           | not be locked by OS vendors, as that would be a theat to
           | their business.
        
             | toyg wrote:
             | That's a cynical take. DHTML predates Google. Demand was
             | already there before the giants appeared.
             | 
             | What really happened was that developers figured out that
             | the deployment story via web was massively simpler and less
             | burdensome. Producing good installers was hard, people in
             | offices often couldn't install anything, and then you had
             | to deal with DLL Hell... whereas the browser was always
             | there already. So a series of unfortunate events was set in
             | motion that ended up with what we have today.
        
           | bagacrap wrote:
           | Probably better than being forced to write your app in 4
           | different languages to hit all the different devices out
           | there.
           | 
           | Also I don't hate typescript and there's always wasm.
        
           | mike_hearn wrote:
           | I guess you're being downvoted because of the swipe at JS,
           | but that's pretty much what's happened yes. Desktop OS
           | vendors dropped the ball on sandboxing and internet
           | distribution of software so badly that we ended up evolving a
           | document format to do it instead. The advantage being that
           | because it never claimed to be an app platform features could
           | be added almost arbitrarily slowly to ensure they were locked
           | down really tight, and because of a pre-existing social
           | expectation that documents (magazines, newspapers) contain
           | adverts but apps don't. So ad money can fund sandboxing
           | efforts and if it lags five years behind the unsandboxed
           | versions, well, it's not like Microsoft or Apple are doing
           | the work.
        
       | Ellie_Palms wrote:
       | [flagged]
        
       | osigurdson wrote:
       | Suppose you have a massive 3D model stored in the cloud, which
       | weighs in at 100GB and requires most of the computation to be
        | handled on the server side. In this scenario, would utilizing
        | something like WebGPU be beneficial, given that it would
        | primarily be responsible for the final 2D projection?
        
         | debacle wrote:
         | You'd probably implement something that does culling server-
         | side, and then pass the culled model to the client.
        
           | osigurdson wrote:
           | Would that work with vtk?
        
         | mschuetz wrote:
          | You'd use level-of-detail structures, like Google Earth does.
        
       | d--b wrote:
       | Chrome _will_ ship WebGPU in the next update...
       | 
       | Current is 112, WebGPU is 113.
       | 
        | Seriously Google, can't you just wait until you actually ship
        | the stuff before you say you shipped it...
        
         | selectnull wrote:
         | Even worse: "This initial release of WebGPU is available on
         | ChromeOS, macOS, and Windows. Support for other platforms is
         | coming later this year."
         | 
         | I guess Linux support will come right after they ship Google
         | Drive client.
        
           | pjmlp wrote:
           | Not even Google cares about "The Year of Desktop Linux",
           | despite their heavy use of it.
        
         | xbmcuser wrote:
          | They count the beta channel as a release channel, so if it
          | is in the beta channel it is "released"; from here, if no
          | bugs are found, it will be pushed to stable.
        
           | rado wrote:
           | Doesn't seem to be in beta, which is still 112.
        
       | grishka wrote:
       | Can this be disabled when it is released?
        
         | jeroenhd wrote:
         | Firefox has dom.webgpu.enabled for browser control. Chrome has
         | flags for it right now, but those often disappear once a
         | feature is introduced. You can probably disable hardware
          | acceleration to get rid of WebGPU, though.
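          | 
          | Sites should be feature-detecting it anyway, so disabling it
          | should just look like an unsupported browser (a sketch):
          | 
          |     const adapter = 'gpu' in navigator
          |       ? await navigator.gpu.requestAdapter()
          |       : null;
          |     if (!adapter) {
          |       // WebGPU disabled/unavailable - fall back to WebGL
          |     }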
        
       | slimsag wrote:
       | This is very welcome and a long time coming!
       | 
       | If you're eager to learn WebGPU, consider checking out Mach[0]
       | which lets you develop with it natively using Zig today very
       | easily. We aim to be a competitor-in-spirit to
       | Unity/Unreal/Godot, but extremely modular. As part of that we
       | have Mach core which just provides Window+Input+WebGPU and some
       | ~19 standalone WebGPU examples[1].
       | 
       | Currently we only support native desktop platforms; but we're
       | working towards browser support. WebGPU is very nice because it
       | lets us target desktop+wasm+mobile for truly-cross-platform games
       | & native applications.
       | 
       | [0] https://github.com/hexops/mach
       | 
       | [1] https://github.com/hexops/mach-examples/tree/main/core
        
         | ArtWomb wrote:
         | Zig is the "language I'm learning next" ;)
        
         | johnfn wrote:
         | The Mach project led me to this, uhm, _interesting_ article:
         | 
         | https://devlog.hexops.com/2021/i-write-code-100-hours-a-week...
         | 
         | What a maniac!
        
           | lightbendover wrote:
           | 2 years isn't long enough to really experience burnout. As
           | soon as rewards slow down, it will seep in if nothing else
           | changes.
        
           | mlsu wrote:
           | I appreciate the candor of this article. And I'm not making
           | any judgements.
           | 
           | But that calendar. That calendar is _wild_.
        
         | bobajeff wrote:
         | I'm actually keeping a close eye on mach after seeing your talk
         | about gkurve. That has made GPU accelerated 2D graphics look
         | much more approachable to me.
         | 
         | I plan to experiment with that after I get a better
          | understanding of the WebGPU C API.
        
       | 2OEH8eoCRo0 wrote:
       | I'm very curious about isolation. Nvidia doesn't allow virtual
       | GPUs on their consumer card drivers so this isolation feels like
       | it can easily be abused. Will there be more support for vGPUs in
       | the future? I hope Nvidia and others are pushed to include better
       | isolation and vGPU support so that WebGPU doesn't need to do all
        | this security isolation itself. Your browser could
       | theoretically request a vGPU instance to work on.
        
         | kevingadd wrote:
         | In general it would be great if browser GPU processes could
         | operate on a vGPU, so the web as a whole would be isolated from
         | the rest of your system. Right now that's not the case, so
         | you're relying on drivers for that browser vs apps isolation,
         | and relying on the browser to isolate tabs from each other as
         | well. Both have failed in the past.
        
       | notorandit wrote:
       | For Linux too?
        
       | Alifatisk wrote:
        | Soon there will be no need to install software on the
        | computer. A modern browser will cover it all.
        | 
        | What scares me is browsers getting bloated with all kinds of
        | features while webapps get bigger and bigger in size for no
        | reason.
       | 
       | Note, I am all for this feature getting widely adopted.
        
         | codewiz wrote:
         | Adding broadly useful features to the web platform is a net
         | gain if it removes bloat from N web apps that most users run.
         | 
         | It might take multiple years for something like WebGPU to start
         | paying off, and even longer for the deprecation of older APIs
         | for 3D rendering, video compression and compute.
        
         | EthicalSimilar wrote:
          | For the general population, is this a bad thing?
          | 
          | - less friction for users
          | 
          | - less hassle dealing with installer bloat
          | 
          | - ability to just "close" a webpage instead of having to
          | remove many files in obscure locations if you want to
          | uninstall something
          | 
          | - easier syncing of state across multiple devices with
          | browser session sync
          | 
          | - granular permissions per "app" such as file access, camera
          | access, etc.
          | 
          | - lower barrier to entry for developers wanting to ship
          | cross-platform software without having to bundle electron /
          | tauri / whatever
         | 
         | Not to say there aren't downsides.
         | 
         | - no longer "owning" your software (although debatable if this
         | were ever the case)
         | 
         | - potentially being tied into vendor-specific browser
         | implementations
        
         | oaiey wrote:
          | Emacs has a similar story to tell. Effectively, the browser
          | is nowadays an operating system running a VM. Computer
          | history has had these kinds of OS models for 50+ years now.
        
         | tuyiown wrote:
          | The real problem is more that bad and malicious code gets
          | easier and easier to deploy, with the browser getting more
          | complex to mitigate this.
          | 
          | The good news is that the level of trust needed to run code
          | natively is very high, and in the age of highly connected
          | computers, if this weren't done in the browser, it would
          | have been needed at the OS level anyway.
          | 
          | So maybe the browser looks like a sad future as an OS
          | replacement, but at least it has collected issues and
          | solutions for mitigating arbitrary code loaded from the
          | network.
          | 
          | Whatever happens after, this history will be kept. (It has
          | already started on current OSes with sandboxed software and
          | on-demand permissions.)
        
         | pjmlp wrote:
         | Except for being less capable.
         | 
          | WebGL 2.0 is basically PS3 / Xbox 360 kinds of graphics, and
          | WebGPU would be PS4, and that is about it.
          | 
          | All the other cool things - Mesh Shaders, Ray Tracing,
          | Nanite... forget about it; with luck, in the next 10 years,
          | if it follows WebGL 2.0's rate of improvement.
        
       | Jhsto wrote:
        | For anyone figuring out how to run WebGPU on a remote computer
        | (over WebRTC), see this: https://github.com/periferia-
        | labs/laskin.live
        | 
        | Not sure if it works anymore (I made it 3 years ago), but it
        | will be interesting to see if there will be similar products
        | for LLMs and so on.
        
       | codewiz wrote:
       | "This initial release of WebGPU is available on ChromeOS, macOS,
       | and Windows."
       | 
       | Not yet available on Linux, perhaps because the Vulkan backend
       | can't be enabled yet:
       | https://bugs.chromium.org/p/dawn/issues/detail?id=1593
        
       | raphlinus wrote:
       | This is a huge milestone. It's also part of a much larger
       | journey. In my work on developing Vello, an advanced 2D renderer,
       | I have come to believe WebGPU is a game changer. We're going to
       | have reasonably modern infrastructure that runs everywhere: web,
       | Windows, mac, Linux, ChromeOS, iOS, and Android. You're going to
       | see textbooks(*), tutorials, benchmark suites, tons of sample
       | code and projects to learn from.
       | 
       | WebGPU 1.0 is a lowest common denominator product. As 'FL33TW00D
       | points out, matrix multiplication performance is much lower than
       | you'd hope from native. However, it is _possible_ to run machine
       | learning workloads, and getting that performance back is merely
       | an engineering challenge. A few extensions are needed, in
       | particular cooperative matrix multiply (also known as tensor
       | cores, WMMA, or simd_matrix). That in turn depends on subgroups,
       | which have some complex portability concerns[1].
       | 
       | Bindless is another thing everybody wants. The wgpu team is
       | working on a native extension[2], which will inform web
       | standardization as well. I am confident this will happen.
       | 
       | The future looks bright. If you are learning GPU, I now highly
       | recommend WebGPU, as it lets you learn modern techniques
       | (including compute), and those skills will transfer to native
       | APIs including Vulkan, Metal, and D3D12.
       | 
       | Disclosure: I work at Google and have been involved in WebGPU
       | development, but on a different team and as one who has been
       | quite critical of aspects of WebGPU.
       | 
       | (*): If you're writing a serious, high quality textbook on
       | compute with WebGPU, then I will collaborate on a chapter on
       | prefix sums / scan.
       | 
       | [1]: https://github.com/gpuweb/gpuweb/issues/3950
       | 
       | [2]:
        | https://docs.rs/wgpu/latest/wgpu/struct.Features.html#associ...
        
         | eachro wrote:
          | Suppose you're an ML practitioner. Would you still recommend
          | learning WebGPU over, say, spending more time on CUDA?
        
           | raphlinus wrote:
           | This depends entirely on your goals. If you're researching
           | the actual machine learning algorithms, then use a framework
           | like TensorFlow or Torch, which provides all the tensor
           | operations and abstracts away the hardware. If you're trying
           | to get maximum performance on hardware today, stick with
           | Nvidia and use CUDA. If you're interested in deploying across
           | a range of hardware, or want to get your hands dirty with the
           | actual implementation of algorithms (such as wonnx), then
           | WebGPU is the way to go.
        
       | neoyagami wrote:
        | I hope this is disabled by default; the amount of
        | fingerprinting it will generate kinda scares me.
        
       | ReptileMan wrote:
        | If Apple puts it in Safari, this is an App Store killer.
        
         | jckahn wrote:
         | How so? The App Store's value proposition is discoverability. A
         | web API can't do much to compete with that.
        
         | kevingadd wrote:
         | Games in iOS safari are still at a big disadvantage even if
         | they have access to WebGPU, because Apple intentionally
         | undermines input and fullscreen APIs there to keep games in the
         | app store.
        
         | illiarian wrote:
         | Apple is literally one of the originators of this API.
         | 
         | And of course this will do nothing to the app store. Much like
         | WebGL did nothing.
        
       | amrb wrote:
       | https://webgpu.github.io/webgpu-samples/samples/gameOfLife
        
       | ianpurton wrote:
       | Is it practical to run machine learning algorithms in parallel
       | with this?
       | 
       | I could imagine people loading a webpage to take part in a
       | massive open source training exercise by donating their Gpu time.
        
         | why_only_15 wrote:
         | Looks like no -- there appears to be no tensor core or similar
         | support and this SGEMM (fp32 matrix multiply) benchmark gets
         | awful results (my laptop gets 330 gflops on this when it should
         | be capable of 13000 gflops fp32 and probably 100000 gflops
         | fp16).
         | 
         | https://github.com/milhidaka/webgpu-blas
        
         | bhouston wrote:
          | Google's main TensorFlow library for the browser runs
          | fastest with its WebGL2 backend as compared to CPU, so I
          | suspect running it on WebGPU is possible and maybe
          | preferred. That said, there is a WebNN API that should
          | eventually expose neural-network accelerator chips, which
          | should be faster and more efficient than GPUs at some point.
        
       | FL33TW00D wrote:
       | This is very exciting! (I had suspected it would slip to 114)
       | 
       | WebGPU implementations are still pretty immature, but certainly
       | enough to get started with. I've been implementing a Rust +
       | WebGPU ML runtime for the past few months and have enjoyed
       | writing WGSL.
       | 
       | I recently got a 250M parameter LLM running in the browser
       | without much optimisation and it performs pretty well!
       | (https://twitter.com/fleetwood___/status/1638469392794091520)
       | 
       | That said, matmuls are still pretty handicapped in the browser
       | (especially considering the bounds checking enforced in the
       | browser). From my benchmarking I've struggled to hit 50% of
       | theoretical FLOPS, which is cut down to 30% when the bounds
       | checking comes in. (Benchmarks here:
       | https://github.com/FL33TW00D/wgpu-mm)
       | 
       | I look forward to accessing shader cores as they mentioned in the
       | post.
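        | 
        | For context, a matmul kernel like the ones benchmarked above
        | is driven by roughly this shape of JS (a sketch; the pipeline
        | and the bind group holding the A/B/C buffers are assumed):
        | 
        |     const encoder = device.createCommandEncoder();
        |     const pass = encoder.beginComputePass();
        |     pass.setPipeline(matmulPipeline); // WGSL compute shader
        |     pass.setBindGroup(0, bindGroup);  // A, B and C buffers
        |     pass.dispatchWorkgroups(tilesX, tilesY);
        |     pass.end();
        |     device.queue.submit([encoder.finish()]);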
        
         | antimora wrote:
         | Oh great!
         | 
          | I am one of the contributors to Burn (a Rust deep learning
          | framework). We have a plan to add a WebGPU backend
         | (https://github.com/burn-rs/burn/issues/243).
         | 
         | Here is more about the framework Burn: https://burn-
         | rs.github.io/
        
         | jimmySixDOF wrote:
          | I guess all the AI-type use cases are in the front seat
          | here, but the performance boost to the Immersive Web
          | (WebXR) and the general move towards webpages as 3D UI/UX
          | for applications - that is where I hope to see expansion
          | into new creative territory.
        
         | winter_blue wrote:
          | Does the Rust code compile to WebAssembly? I'm guessing that
          | means WebGPU is fully accessible from WebAssembly, and one
          | can go a zero-JS route?
        
         | tehsauce wrote:
         | It's better to compare against an ML framework than to maximum
         | theoretical flops because sometimes it's not possible to reach.
         | These models are often limited by memory bandwidth rather than
         | flop capability.
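         | 
         | A back-of-envelope sketch of why (all numbers assumed, not
         | measured): an fp32 matrix-vector product in a decode step
         | touches every weight once, so time goes to moving bytes, not
         | to math.
         | 
         |     // hypothetical layer width and hardware figures
         |     const d = 4096;
         |     const bytes = d * d * 4;      // fp32 weight matrix
         |     const flops = 2 * d * d;      // one multiply-add per weight
         |     const bandwidth = 400e9;      // ~400 GB/s (assumed)
         |     const peak = 13e12;           // ~13 TFLOPS fp32 (assumed)
         |     console.log((bytes / bandwidth) / (flops / peak)); // ~65
         | 
         | A ratio far above 1 means the kernel is bandwidth-bound: the
         | FLOP units could sit idle most of the time and you'd still see
         | the same tokens/sec.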
        
           | pklausler wrote:
           | Something more than 30 years ago, I had the privilege of
           | working as a young compiler writer for [a supercomputer
           | designer]'s penultimate start-up. I once naively asked him
           | why the machine he was working on couldn't have more memory
           | bandwidth, since the floating-point functional units were
           | sometimes starved for operand data and it was hard to hit the
           | peak Flop/sec figures. And his response has stuck with me
           | ever since; basically, it makes better sense for the memory
           | to be fully utilized, not the floating-point units, because
           | the memory paths were way more expensive than the floating-
           | point units. And this was something you could actually
            | physically _see_ through the transparent top of the system's
           | case. I guess the lesson would be: Don't let a constraint
           | that would be fairly cheap to overdesign be the limiting
           | factor in a system's performance.
        
         | MuffinFlavored wrote:
         | what would it take to go python -> wasm -> webgpu for the
         | entire existing Python ML ecosystem (all of the libraries
         | around neural networks, torch, yada yada)?
        
           | korijn wrote:
           | FYI you can already use webgpu directly in python, see
           | https://github.com/pygfx/wgpu-py for webgpu wrappers and
            | https://github.com/pygfx/pygfx for a higher-level graphics
            | library
        
             | MuffinFlavored wrote:
              | I think I meant to ask: "how good is WebGPU's machine
              | learning support for things like neural networks, as
              | opposed to graphical 2D/3D operations?"
        
               | korijn wrote:
               | Well, you could implement that on top of the wrappers as
               | well I guess. Anyway!
        
               | MuffinFlavored wrote:
               | As in, I could, but a library/libraries doing so do not
               | yet exist?
        
               | korijn wrote:
               | Not that I know of!
        
           | grlass wrote:
           | The Apache TVM machine learning compiler has a WASM and
           | WebGPU backend, and can import from most DNN frameworks.
           | Here's a project running Stable Diffusion with webgpu and TVM
           | [1].
           | 
            | Questions remain around pre- and post-processing code in
            | folks' Python stacks, e.g. with NumPy and OpenCV. There are
            | some NumPy-to-JS transpilers out there, but those aren't
            | feature-complete or fully integrated.
           | 
           | [1] https://github.com/mlc-ai/web-stable-diffusion
        
         | brrrrrm wrote:
         | > WebGPU ML runtime
         | 
         | oh cool! will this be numpy-like or will it have autograd as
         | well? We're looking around for a web backend for shumai[1] and
         | the former is really all we need :)
         | 
         | [1]: https://github.com/facebookresearch/shumai
        
           | FL33TW00D wrote:
           | Autograd would be a whole new adventure, just inference for
           | now :).
        
         | singularity2001 wrote:
         | How would WebGL matmul fare in comparison?
        
           | FL33TW00D wrote:
           | This blog post breaks it down pretty well:
            | https://pixelscommander.com/javascript/webgpu-computations-p...
        
         | misterdata wrote:
         | Looking forward to your WebGPU ML runtime! Also, why not
         | contribute back to WONNX? (https://github.com/webonnx/wonnx)
        
           | FL33TW00D wrote:
           | Hi Tommy!
           | 
           | I sent you an email a few weeks back - would be great to
           | chat!
           | 
            | WONNX is a seriously impressive project. There are a few
            | reasons I didn't just contribute back to WONNX:
           | 
           | 1. WONNX does not parse the ONNX model into an IR, which I
           | think is essential to have the freedom to transform the model
           | as required.
           | 
           | 2. When I started, WONNX didn't seem focused on symbolic
           | dimensions (but I've seen you shipping the shape inference
           | recently!).
           | 
           | 3. The code quality has to be much higher when it's open
           | source! I wanted to hack on this without anyone to please but
           | myself.
        
             | antimora wrote:
              | I'm presently working on enhancing Burn's
              | (https://burn-rs.github.io/) capabilities by implementing
              | ONNX model import
              | (https://github.com/burn-rs/burn/issues/204).
             | This will enable users to generate model source code during
             | build time and load weights at runtime.
             | 
             | In my opinion, ONNX is more complex than necessary.
             | Therefore, I opted to convert it to an intermediate
             | representation (IR) first, which is then used to generate
             | source code. A key advantage of this approach is the ease
             | of merging nodes into corresponding operations, since ONNX
             | and Burn don't share the same set of operators.
        
               | misterdata wrote:
               | Actually WONNX also transforms to an IR first (early
               | versions did not and simply translated the graph 1:1 to
                | GPU shader invocations in topologically sorted order of
               | the graph). In WONNX the IR nodes are (initially) simply
               | (copy-on-write references to) the ONNX nodes. This IR is
               | then optimized in various ways, including the fusion of
               | ONNX ops (e.g. Conv+ReLU->ConvReLU). The newly inserted
               | node still embeds an ONNX node structure to describe it
               | but uses an internal operator.
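                | 
                | A toy sketch of that kind of pass (nothing like the
                | real WONNX code; names are made up): walk the sorted
                | node list and merge a Conv whose output feeds directly
                | into a ReLU into one internal ConvReLU node.
                | 
                |     function fuseConvRelu(nodes) {
                |       const out = [];
                |       for (let i = 0; i < nodes.length; i++) {
                |         const a = nodes[i], b = nodes[i + 1];
                |         if (a.op === "Conv" && b && b.op === "Relu" &&
                |             b.inputs.length === 1 &&
                |             b.inputs[0] === a.output) {
                |           out.push({ op: "ConvReLU",
                |                      inputs: a.inputs,
                |                      output: b.output });
                |           i++; // skip the ReLU we just consumed
                |         } else {
                |           out.push(a);
                |         }
                |       }
                |       return out;
                |     }
                | 
                | A real pass also has to check that nothing else
                | consumes the Conv's intermediate output before fusing.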
        
               | FL33TW00D wrote:
               | Looks great!
               | 
               | ONNX is 100% more complex than necessary. Another format
               | of interest is NNEF: https://www.khronos.org/nnef
        
               | misterdata wrote:
               | Also see the recently introduced StableHLO and its
                | serialization format:
                | https://github.com/openxla/stablehlo/blob/main/docs/bytecode...
        
       | misterdata wrote:
       | This makes running larger machine learning models in the browser
       | feasible - see e.g. https://github.com/webonnx/wonnx (I believe
       | Microsoft's ONNXRuntime.js will also soon gain a WebGPU
       | backend).
        
       | bhouston wrote:
       | And just 2 weeks ago I launched this WebGPU features and limits
       | tracking website:
       | 
       | https://web3dsurvey.com
       | 
       | It is modelled after the long-defunct webglstats website.
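       | 
       | For the curious: the raw material is a couple of WebGPU calls
       | (real API; error handling omitted):
       | 
       |     // in an async context
       |     const adapter = await navigator.gpu.requestAdapter();
       |     if (adapter) {
       |       // e.g. "shader-f16", "timestamp-query", ...
       |       console.log([...adapter.features]);
       |       // numeric limits vary per device/driver
       |       console.log(adapter.limits.maxBufferSize);
       |       console.log(adapter.limits.maxComputeWorkgroupSizeX);
       |     }
       | 
       | Aggregated over many visitors, that's the feature/limit
       | population data the site reports.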
        
         | mourner wrote:
         | I've been looking for a replacement to WebGL Stats for a long
         | time -- thank you so much for making it! This is indispensable.
        
         | Kelteseth wrote:
         | Nice! I have linked your site to the Godot WebGPU support
         | proposal issue:
         | https://github.com/godotengine/godot-proposals/issues/6646
        
       | teruakohatu wrote:
       | In case you confused this with webgl as I did:
       | 
       | > WebGPU is a new API for the web, which exposes modern hardware
       | capabilities and allows rendering and computation operations on a
       | GPU, similar to Direct3D 12, Metal, and Vulkan. Unlike the WebGL
       | family of APIs, WebGPU offers access to more advanced GPU
       | features and provides first-class support for general
       | computations on the GPU.
        
         | marktangotango wrote:
         | So someone can put javascript in a page to compute equihash,
         | autolykos, cuckoo cycle, etc? Is there a way to limit this?
        
           | dagmx wrote:
           | They could already do that though?
        
             | marktangotango wrote:
             | But not using the client side gpu!
        
               | JayStavis wrote:
                | I believe WebGL "cryptojacking", as it's called, is
                | indeed a thing. Not sure on its prevalence, though, or
                | to what extent this introduction makes it more viable
                | for malicious actors.
               | 
               | I'm not sure if lots of hashing algos are gpu-ready or
               | optimized either.
        
         | mike_hock wrote:
         | And to prevent device fingerprinting, all the operations are
         | specified to deterministically produce the same bit-exact
         | results on all hardware, and the feature set is fixed without
         | any support for extensions, right?
         | 
         | Or is this yet another information leak anti-feature that we
         | need to disable?
        
           | nashashmi wrote:
           | Fingerprinting is a very difficult and unreliable way of
           | identifying users. You would not bank on fingerprinting to
           | protect your money. You cannot bank on it to protect user
            | info. You can only hope that you are targeting the right
            | person.
        
           | NiekvdMaas wrote:
           | I'm afraid it's the latter:
           | https://gpuweb.github.io/gpuweb/#privacy-considerations
        
           | fsloth wrote:
           | "..all the operations are specified to deterministically
           | produce the same bit-exact results on all hardware..."
           | 
           | You have to block floating point calculations as well if that
           | is your intent.
        
             | kevingadd wrote:
              | The animals already fled the barn on that one: WebAssembly
             | floating point is not specified to be bit-exact, so you can
             | use WASM FP as a fingerprinting measure (theoretically - I
             | don't know under which configurations it would actually
             | vary.)
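              | 
              | For illustration, the same class of probe in plain JS
              | (whether the outputs really differ on a given pair of
              | configurations is exactly the open question above --
              | ECMAScript allows Math.tan/sin etc. to be implementation-
              | approximated):
              | 
              |     const probes = [Math.tan(-1e300), Math.sin(1e10)];
              |     const bits = new BigUint64Array(
              |       new Float64Array(probes).buffer);
              |     console.log([...bits].map(b => b.toString(16)));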
        
           | simion314 wrote:
            | As long as you can keep it off or turn it off, I think this
            | is a good option to have. I too would prefer to have the Web
            | split into 2 parts, documents and apps; then I could have a
            | browser that optimizes for JS and GPU speed, and a simple,
            | safe browser for reading Wikipedia and articles.
           | 
           | I am sure there will be browsers that will not support this
           | or keep it off so at worst you need to give up on Chrome and
           | use a privacy friendly browser.
        
           | phh wrote:
           | Someone mean would say that this is not a bug, but a feature
           | for the people who are paying for Chrome.
        
             | sebzim4500 wrote:
             | Google are the people paying for Chrome, they do not
             | benefit in any way from this kind of fingerprinting. To the
             | contrary, it decreases the value of their browser monopoly.
        
               | illiarian wrote:
               | > Google are the people paying for Chrome, they do not
               | benefit in any way from this kind of fingerprinting.
               | 
               | The largest ad company in the world 80% of whose money
               | comes from online advertising does not benefit from
               | tracking...
        
               | jefftk wrote:
               | Google doesn't benefit because Google has committed not
               | to fingerprint for ad targeting, but their competitors
               | do.
        
               | illiarian wrote:
               | You've got it backwards.
        
               | jefftk wrote:
               | Google Ads, 2020-07-31:
               | 
               |  _What is not acceptable is the use of opaque or hidden
               | techniques that transfer data about individual users and
               | allow them to be tracked in a covert manner, such as
               | fingerprinting. We believe that any attempts to track
               | people or obtain information that could identify them,
               | without their knowledge and permission, should be
               | blocked. We'll continue to take a strong position against
               | these practices._ -- https://blog.google/products/ads-
               | commerce/improving-user-pri...
               | 
               | Google Ads, 2021-03-03:
               | 
               |  _Today, we're making explicit that once third-party
               | cookies are phased out, we will not build alternate
               | identifiers to track individuals as they browse across
               | the web, nor will we use them in our products._ --
               | https://blog.google/products/ads-commerce/a-more-privacy-
               | fir...
               | 
               | (I used to work on ads at Google, speaking only for
               | myself)
        
               | foldr wrote:
               | Even supposing that Google do benefit from it in that
               | manner, there would be far simpler ways for them to make
               | fingerprinting easier. It's extremely unlikely that this
               | is a significant motivation for adding WebGPU. Not to
               | mention that a lot of the fingerprinting you can
               | potentially do with WebGPU can already be done with
               | WebGL.
        
               | illiarian wrote:
               | Google has many hands in many pots. It's not that they
               | are necessarily looking for easier ways to do
               | fingerprinting. But they sure as hell wouldn't put up a
               | fight to make it harder.
        
               | foldr wrote:
               | Then why does Chrome contain loads of features to make
               | fingerprinting harder?
        
               | illiarian wrote:
               | Chrome has to walk a fine line between what it does for
                | privacy and what it says it does. So you have the
                | protection against fingerprinting and at the same time
                | you have the FLoC fiasco.
        
               | foldr wrote:
               | The simplest explanation is that the Chrome developers
               | genuinely want to protect privacy and also genuinely want
               | to add features. Every browser has to make that trade
               | off. There are plenty of fingerprinting vulnerabilities
               | in Firefox and Safari too.
               | 
               | The reasoning here seems to be something like "Google is
               | evil; X is an evil reason for doing Y; therefore Google
               | must have done Y because of X". It's not a great
               | argument.
        
               | illiarian wrote:
               | I can only quote Johnathan Nightingale, former executive
               | of Mozilla, from his thread on how Google was sabotaging
               | Firefox [1]:
               | 
               | "The question is not whether individual sidewalk labs
               | people have pure motives. I know some of them, just like
               | I know plenty on the Chrome team. They're great people.
               | But focus on the behaviour of the organism as a whole. At
               | the macro level, google/alphabet is very intentional."
               | 
               | [1] Thread:
               | https://twitter.com/johnath/status/1116871231792455686
        
               | foldr wrote:
               | That whole Twitter thread says nothing about
               | fingerprinting or privacy. The first comment is close to
               | gibberish, but seems to be mostly about some kind of
               | Google office development project in Toronto.
               | 
               | You are literally following the parody argument schema
               | that I mentioned in my previous comment. You make some
               | vague insinuations that Google is evil, then attribute
               | everything it does to non-specific evil motivations. Even
               | if Google _is_ evil, this kind of reasoning is completely
               | unconvincing.
        
               | illiarian wrote:
               | > That whole Twitter thread says nothing about
               | fingerprinting or privacy.
               | 
               | I should've been more clear. In this case I was
               | responding to this: "The reasoning here seems to be
               | something like "Google is evil; X is an evil reason for
               | doing Y; therefore Google must have done Y because of X".
               | It's not a great argument."
               | 
               | > You are literally following the parody argument schema
               | that I mentioned in my previous comment.
               | 
               | Because you have to look at the behaviour of the organism
               | as a whole. If the shoe fits etc.
        
               | sebzim4500 wrote:
               | Of course they do, but they want to allow fingerprinting
               | in ways that only Google gets the data (i.e. spying on
               | chrome users)
        
               | JoshTriplett wrote:
               | They don't benefit from fingerprinting, because the
               | browser has all sorts of easier to track mechanisms
               | available by default. Fingerprinting is for browsers that
               | don't actively _enable_ tracking.
        
           | miohtama wrote:
            | The easiest option to prevent fingerprinting is to disable
            | WebGPU. Or even better, as many already do today, use one of
            | the privacy-focused web browsers instead of Chrome.
            | 
            | Meanwhile there is a large audience that will benefit from
            | WebGPU features, e.g. gamers, and this audience numbers in
            | the hundreds of millions.
        
           | pl90087 wrote:
           | I wish all information-leaky browser features were turned off
           | by default and I could easily turn them on on demand when
           | needed. Like, the browser could detect that a webpage
           | accesses one of them and tells me that I am currently
           | experiencing a degraded experience which I could improve by
           | turning _this_ slider on.
        
             | jeroenhd wrote:
             | I've set up my Firefox with resistFingerprinting but
             | without an auto deny on canvas access.
             | 
             | It's sickening to see how often web pages still profile
             | you, but the setting seems to work.
             | 
             | Similarly, on Android there's a Chromium fork called
             | Bromite that shows JIT, WebRTC, and WebGL as separate
              | permissions, denied by default. I only use it when
              | websites don't work right on Firefox, but websites
              | seem to function fine without all those permissions
              | enabled by default.
             | 
             | Competent websites will tell you the necessary settings
             | ("WebGL is not available") so making the websites work
             | isn't much trouble. I'd much rather see those error
             | messages than getting a "turn on canvas fingerprinting for
             | a better experience" popup from my browser every time I try
             | to visit blogs or news websites.
        
               | pl90087 wrote:
               | Right. But I don't want to have to dig into settings
                | hierarchies for those knobs. The threshold for that is
                | too high and almost nobody will bother to do that.
               | Something easier with simple sliders would be much
               | better.
        
             | tormeh wrote:
             | I believe the LibreWolf browser does this. It's basically
             | Firefox with all the fingerprintable features turned off.
        
           | indy wrote:
           | The trade-off for extracting maximum performance from a
           | user's hardware is that it becomes much easier to
           | fingerprint. Judging by the history of the web this is a
           | trade-off that probably isn't worth making.
        
           | Timja wrote:
           | There is no way to escape fingerprinting.
           | 
           | Just one example: A script which runs many different types of
           | computations. Each computation will take a certain amount of
           | time depending on your hardware and software. So you will get
            | a fingerprint like this:
            | 
            |     computation 1: **
            |     computation 2: ****
            |     computation 3: **********
            |     computation 4: **
            |     computation 5: **************
            |     computation 6: ************
            |     computation 7: *********
            |     etc
           | 
            | There is no way to avoid this. You can make the fingerprint
            | noisier by doing random waits. But that's all.
        
             | littlestymaar wrote:
             | You only get fingerprinting from your method if the
             | variation of the "fingerprint" between two different runs
             | by the same user is lower than the difference you get
             | between two different users. This is far from obvious since
             | it depends a lot on the workload running on the machine at
             | the time.
             | 
             | I'm not aware of a single fingerprinting tool that
              | primarily uses this kind of timing attack rather than more
             | traditional fingerprinting methods.
        
               | Timja wrote:
               | Not sure if the workload makes a difference.
               | 
               | We would have to make examples of what Computation1 is
                | and what Computation2 is to predict whether certain
               | types of workloads will impact the ratio of their
               | performance.
               | 
                | Example:
                | 
                |     s = performance.now();
                |     r = 0;
                |     for (i = 0; i < 1000000; i++) r += 1;
                |     t1 = performance.now() - s;
                | 
                |     s = performance.now();
                |     r = 0;
                |     for (i = 0; i < 1000000; i++)
                |       r += "bladibla".match(/bla/)[0].length;
                |     t2 = performance.now() - s;
                | 
                |     console.log("Ratio: " + t2/t1);
               | 
               | For me, the ratio is consistently larger in Chrome than
               | in Firefox. Which workload would reverse that?
        
               | littlestymaar wrote:
                | Fingerprinting in the usual sense of the term isn't about
                | distinguishing Chrome from Firefox, it's about
                | distinguishing _user A_ from _user B_, ... _user X_
                | reliably in order to be able to track the user across
                | websites and navigation sessions.
               | 
               | Your example is unlikely to get you far.
               | 
               | Edit: in a quick test, I got a range between 8 and 49 in
               | Chrome, and between 1.27 and 51 (!) on Firefox, on the
                | same computer; the results are very noisy.
        
               | Timja wrote:
                | Chrome and Firefox here are an example of "two users who
                | use exactly the same hardware but different software".
                | 
                | To distinguish between users of a larger set, you do
                | more such tests and add them all together, each test
                | adding a few bits of information.
               | 
               | To make the above code more reliable, you can measure the
               | ratio multiple times:
               | 
               | https://jsfiddle.net/dov1zqtL/
               | 
               | I get 9-10 in Firefox and 3-4 in Chrome very reliably
               | when measuring it 10 times.
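                | 
                | Roughly the idea (a simplified sketch, not necessarily
                | the fiddle verbatim, and ignoring timer-resolution
                | clamping):
                | 
                |     function ratioOnce() {
                |       let s = performance.now(), r = 0;
                |       for (let i = 0; i < 1000000; i++) r += 1;
                |       const t1 = performance.now() - s;
                |       s = performance.now(); r = 0;
                |       for (let i = 0; i < 1000000; i++)
                |         r += "bladibla".match(/bla/)[0].length;
                |       return (performance.now() - s) / t1;
                |     }
                |     const runs = Array.from({ length: 10 }, ratioOnce)
                |       .sort((a, b) => a - b);
                |     console.log(runs[5]); // median-ish of 10 runs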
        
               | littlestymaar wrote:
               | > Chrome and Firefox here are an example for "Two users
               | who use exactly the same hardware but different
               | software".
               | 
               | But it's also the most pathological example one can think
               | of, yet the results are extremely noisy (while being very
                | costly, which means you won't be able to run a large
                | number of such tests without dramatically affecting the
               | user's ability to just browse your website).
        
             | codedokode wrote:
             | Just put WebGL/WebGPU behind permission and the problem is
             | solved. I don't understand why highly paid Google and
             | Firefox developers cannot understand such a simple idea.
        
               | WhyNotHugo wrote:
               | They probably can understand these concepts, but privacy
               | and anonymity are not their main priorities.
        
               | nashashmi wrote:
                | They are paid highly enough not to work on it, and smart
                | enough to thwart suggestions like this with the
                | "permission overload" issue.
                | 
                | But more frankly, fingerprinting is a whack-a-mole issue,
               | and if it were a real security problem, it would slow
               | feature advancements.
               | 
               | And fingerprinting is too unreliable for any real world
               | use.
        
               | illiarian wrote:
               | Just put WebGL/WebGPU behind permission and the problem
               | is solved.
               | 
               | Just put WebUSB behind permission and the problem is
               | solved.
               | 
               | Just put WebHID behind permission and the problem is
               | solved.
               | 
               | Just put WebMIDI behind permission and the problem is
               | solved.
               | 
               | Just put Filesystem Access behind permission and the
               | problem is solved.
               | 
               | Just put Sensors behind permission and the problem is
               | solved.
               | 
               | Just put Location behind permission and the problem is
               | solved.
               | 
               | Just put Camera behind permission and the problem is
               | solved.
               | 
               | Just put ...
               | 
               | I don't understand why highly paid Google and Firefox
               | developers cannot understand such a simple idea.
        
               | rockdoe wrote:
               | I can't tell whether you're kidding or not, but this is
               | exactly the path Firefox was advocating:
                | https://blog.karimratib.me/2022/04/23/firefox-webmidi.html
               | 
               | The page implies it no longer requires permissions, but I
               | just tested and you definitely get a permissions popup,
               | just a different one.
               | 
                | WebHID, WebUSB and Filesystem Access are, IIRC,
               | "considered harmful" so they won't get implemented. And
               | Sensor support was removed after sites started abusing
               | battery APIs.
        
               | illiarian wrote:
               | > I can't tell whether you're kidding or not,
               | 
                | I'm not. It's a bit sarcastic (?), listing a subset of
               | APIs that browsers implement (or push forward against
               | objections like hardware APIs) and that all require some
               | sort of permission.
               | 
               | > but this is exactly the path Firefox was advocating
               | 
               | Originally? Perhaps. Since then Firefox's stance is very
               | much "we can't just pile on more and more permissions for
               | every API because we can't properly explain to the user
               | what the hell is going on, and permission fatigue is a
               | thing"
        
               | ctenb wrote:
               | Yes please
        
               | shadowgovt wrote:
               | Everything except WebGL and WebGPU allows the system to
               | change more state than what is rendered on a screen.
               | 
               | Users already expect browsers to change screen contents.
               | That's why WebGPU / WebGL aren't behind a permission
               | block (any moreso than "show images" should be... Hey,
               | remember back in the day when that was a thing?).
        
               | justinclift wrote:
               | > why highly paid Google ... developers
               | 
               | "Completely co-incidentally", it's in Google's best
               | interest to be able to fingerprint everyone.
               | 
               | So, changing it to actually be privacy friendly while
               | they have the lion's share of the market doesn't seem
               | like it's going to happen without some major external
               | intervention. :/
        
               | maigret wrote:
               | It's running on Chrome. Google doesn't need
               | fingerprinting. By making it harder for others to
                | fingerprint, it actually cements Google's position in the
                | ad market.
        
               | justinclift wrote:
               | > It's running on Chrome. Google doesn't need
               | fingerprinting.
               | 
               | Are you saying that because you reckon everyone using a
               | Chromium based browser logs into a Google account?
        
               | elpocko wrote:
                | I've done this since forever, but I have to give explicit
                | permission to load and run JS, which solves a lot of
                | other problems as well. Letting any site just willy-nilly
                | load code from wherever and run it on your machine is
               | insane, and it's well worth the effort to manually
               | whitelist every site.
        
               | miohtama wrote:
               | Just don't use Chrome. There are plenty of alternative
               | web browsers you can choose that are more privacy
               | oriented. You are not Chrome's customer unless you pay
                | for it - or you have a 100% money-back guarantee.
                | Demanding features in a free product is never going to go
                | anywhere.
        
               | merlish wrote:
               | For a user to correctly answer a permissions dialog, they
               | need to learn programming and read all the source code of
               | the application. To say nothing of the negative effects
               | of permission dialog fatigue.
               | 
               | In practice, no-one who answers a web permissions dialog
               | truly knows if they have made the correct answer.
               | 
               | Asking the user a question they realistically can't
               | answer correctly is not a solution. It's giving up on the
               | problem.
        
               | tgsovlerkhgsel wrote:
               | I think browsers should distinguish more aggressively
               | between "web application", "web site", and "user hostile
               | web site".
               | 
               | Many APIs should be gated behind being a web application.
               | This itself could be a permission dialog already, with a
               | big warning that this enables tracking and "no reputable
               | web site will ask for it unless it is clear why this
               | permission is needed - in doubt, choose no".
               | 
               | Collect opt-in telemetry. Web sites that claim to be a
               | web application but keep getting denied can then be
               | reclassified as hostile web sites, at which point they
               | not only lose the ability to annoy users with web app
               | permission prompts, but also other privileges that web
               | sites don't need.
        
               | bagacrap wrote:
               | Clearly if we knew how to perfectly identify user hostile
               | websites we'd not need permissions dialogs at all.
               | 
               | Distinguishing between site and app, e.g. via an
               | installation process, is equivalent to a permissions
               | dialog, except that you're now advocating for one giant
               | permission dialog instead of fine-grained ones, which
               | seems like a step backwards.
        
               | tgsovlerkhgsel wrote:
               | Yes, if we knew how to do it perfectly, we wouldn't need
               | them. But we can identify _some_ known-good and known-bad
               | cases with high confidence. My proposal mainly addresses
               | the  "fatigue" aspect: it allows apps to use some of the
               | more powerful features without letting every web site use
               | them, and it prevents random web sites from declaring
               | themselves an app and spamming users with the permission
               | request just so they can abuse the users more.
               | 
               | The new permission dialog wouldn't grant all of the
               | finer-grained permissions - it would be a prerequisite to
               | requesting them in the first place.
        
               | beebeepka wrote:
               | Do you have something specific in mind with your opening
               | paragraph?
               | 
                | Because defining what is a web site and what's an app
                | strikes me as a particularly impractical idea. You
               | correctly point out that yes, there are a number of
               | powerful APIs that should be behind permissions. But
               | there are a number of permissions already, so we need to
               | start bundling them and also figure out how to present
               | all this to the regular user.
               | 
               | Frankly, I wouldn't know where to begin with all this.
        
               | tgsovlerkhgsel wrote:
               | News sites are a particular category that I expect to
               | spam people with permission prompts, as they did when
               | notifications became a thing. Without the deterrent of
               | possibly landing in the naughty box, they'd all do it.
               | With it, I still expect some of them to try until they
               | land in the box.
        
               | codedokode wrote:
               | They don't need to learn programming. Just write that
               | this technology can be used for displaying 3D graphics
               | and fingerprinting and let user decide whether they take
               | the risk.
        
               | pmontra wrote:
               | Most of them will say, "I need to see this site, who
               | cares about fingerprints." Some will notice that they're
               | on their screen anyway, a few will know what it's all
               | about.
               | 
               | Maybe "it can be used to display 3D graphics and to track
               | you", but I expect that most people will shrug and go on.
        
               | bcrosby95 wrote:
               | You could maybe display the request in the canvas instead
               | of a popup. If the user can't see it, they'll never say
               | yes.
        
               | kevingadd wrote:
               | They're going to be confused if you say "display 3D
               | graphics", because canvas and WebGL will still work. The
               | website will just be laggier and burn their battery
               | faster. That's not going to make sense to them.
               | 
               | "Fingerprinting" is a better approach to the messaging,
               | but is also going to be confusing since if you take that
               | approach, almost all modern permissions are
               | fingerprinting permissions, so now you have the problem
               | of "okay, this website requires fingerprinting class A
               | but not fingerprinting class B" and we expect an ordinary
               | user to understand that somehow?
        
               | deelly wrote:
               | > In practice, no-one who answers a web permissions
               | dialog truly knows if they have made the correct answer.
               | 
                | Counterpoint: if a webpage with the latest news (for
                | example) immediately asks me to allow notifications,
                | webcam access, and location, I definitely know the
                | correct answer to those dialogs.
        
               | kevingadd wrote:
               | "Do you want to allow example.com to send you
               | notifications" is way more understandable to a layperson
               | than "do you want to allow access to WebGPU" or "do you
               | want to allow access to your graphics card". Especially
               | because they would still have access to canvas and WebGL.
               | 
               | Permission prompts are a HUGE user education issue and
               | also a fatigue issue. Rendering is widely used on
               | websites so if users get the prompt constantly they're
               | going to tune it out.
        
               | ericflo wrote:
               | Look to the cookie fatigue fiasco for how that might turn
               | out. This simple idea is not always the right one.
        
               | 7to2 wrote:
               | > [ ] Always choose this option.
        
               | epolanski wrote:
               | Why fiasco?
        
               | 7to2 wrote:
               | It's not that they don't understand it, it's that they
               | don't want the average user to have a convenient way to
               | control this setting. Prompting the user for permission
               | would give the user a very convenient way to keep it
               | disabled for most websites. It's as simple as that.
               | 
               | Think about it this way: Which is more tedious: going
                | into the settings and enabling and disabling WebGPU every
                | time you need it, or a popup? Which way would see you
                | keeping it enabled?
                | 
                | It's tyranny of the default with an extra twist :)
        
               | kevingadd wrote:
               | Saturating the user with permissions requests for every
               | single website they visit is a dead-end idea. We have
               | decades of browser development and UI design history to
               | show that if you saturate the user with nag prompts that
               | don't mean anything to them, they will just mechanically
               | click yes or no (whichever option makes the website
               | work).
        
               | codedokode wrote:
               | Permission popups can be replaced with an additional
               | permission toolbar or with a button in the address bar
               | user needs to click. This way they won't be annoying and
               | won't require a click to dismiss.
        
               | cyral wrote:
               | Like the site settings page on Chrome, which is in the
               | address bar (clicking the lock icon)? You can set the
               | permissions (including defaults) for like 50 of these
               | APIs.
        
               | codedokode wrote:
               | You can display only permissions that a page requests,
               | starting from most important ones.
               | 
               | For example, toolbar could look like:
               | 
               | Enable: [ location ] [ camera ] [ fingerprinting via
               | canvas ] ...
        
             | fulafel wrote:
             | It's possible to have the runtime execute the computations
             | in fixed time across platforms.
        
               | jeffparsons wrote:
               | Sure. And nobody actually wants that, because it would be
               | so restrictive in practice that you might as well just
               | limit yourself to plain text.
               | 
               | The horse bolted long ago; there's little sense in trying
               | to prevent future web platform features from enabling
               | fingerprinting, because the existing surface that enables
               | it is way too big to do anything meaningful about it.
               | 
               | Here are a couple of more constructive things to do:
               | 
               | - Campaign to make fingerprinting illegal in as many
               | jurisdictions as possible. This addresses the big
               | "legitimate" companies.
               | 
               | - Use some combination of allow-listing, deny-listing,
               | and "grey-listing" to lock down what untrusted websites
               | can do with your browser. I'm sure I've seen extensions
               | and Pi-hole type products for this. You could even stop
               | your browser from sending anything to untrusted sites
               | except simple GET requests to pages that show up on
               | Google. (I.e. make it harder for them to smuggle
               | information back to the server.)
               | 
               | - Support projects like the Internet Archive that enable
               | viewing large parts of the web without ever making a
               | request to the original server.
        
               | paulgb wrote:
               | This would essentially mean that every computation would
               | have to run as slow as the slowest supported hardware. It
               | would completely undermine the entire point of supporting
               | hardware acceleration.
               | 
               | I'm sympathetic to the privacy concerns but this isn't a
               | solution worth considering.
        
               | codedokode wrote:
                | The solution is to put unnecessary features like WebGL,
               | programmatic Audio API, reading bits from canvas and
               | WebRTC behind a permission.
        
               | azangru wrote:
               | Who decides what's unnecessary?
        
               | codedokode wrote:
               | Everything that can be used for fingerprinting should be
               | behind a permission. Almost all sites I use (like Google,
               | Hacker News or Youtube) need none of those technologies.
        
               | mastax wrote:
               | So CSS should be behind a permission?
        
               | codedokode wrote:
               | CSS should not leak fingerprinting information. After all
               | this is just a set of rules to lay out blocks on the
               | page.
        
               | remexre wrote:
               | https://css-tricks.com/css-based-fingerprinting/
        
               | yamtaddle wrote:
               | Main thing that ought to be behind a permission is
               | letting Javascript initiate connections or modify
               | anything that might be sent in a request. Should be
               | possible, but ought to require asking first.
               | 
               | If the data can't be exfiltrated, who cares if they can
               | fingerprint?
               | 
               | Letting JS communicate with servers without the user's
               | explicit consent was the original sin of web dev, that
               | ruined everything. Turned it from a user-controlled
               | experience to one giant spyware service.
        
               | johntb86 wrote:
               | If javascript can modify the set of URLs the page can
               | access (e.g. put an image tag on the page or tweak what
               | images need to be downloaded using CSS) then it can
               | signal information to the server. Without those basic
               | capabilities, what's the point of using javascript?
        
               | kevingadd wrote:
               | No video driver is actually going to implement fixed-time
               | rendering. So you'd have to implement it in user-space,
               | and it would be even slower than WebGL. Nobody wants
               | that. You're basically just saying the feature shouldn't
               | ship in an indirect way (which is a valid opinion you
               | should just express directly.)
        
               | fulafel wrote:
               | I don't mean to prescribe the way to stop fingerprinting,
               | just throwing out a trivial existence proof, and maybe a
                | starting point for thinking, that it's not impossible as
                | was suggested.
               | 
               | Also, WebGPU seems to conceptually support software
               | rendering ("fallback adapter"), where fixed time
               | rendering would seem to be possible even without getting
               | cooperation from HW drivers. Being slower than WebGL
               | might still be an acceptable tradeoff at least if the
               | alternative WebGL API avenue of fingerprinting could be
               | plugged.
        
               | foldr wrote:
               | Could you explain what techniques would make this
               | possible? I can see how it's possible in principle, if
               | you, say, compile JS down to bytecode and then have the
               | interpreter time the execution of every instruction. I
               | don't immediately see a way to do it that's compatible
               | with any kind of efficient execution model.
        
               | fulafel wrote:
               | The rest would be optimization while keeping the timing
                | sidechannel constraint in mind; it's hard to say what the
               | performance possibilities are. For example not all
               | computations have externally observable side effects, so
               | those parts could be executed conventionally if the
               | runtime could guarantee it. Or the program-visible clock
               | APIs might be keeping virtual time that makes it seem
               | from timing POV that operations are slower than they are,
               | combined with network API checkpoints that halt execution
               | until virtual time catches up with real time. Etc. Seems
                | like an interesting research area.
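                | 
                | A toy of the virtual-clock half of that (an assumed
                | design, just to make the idea concrete): the page only
                | ever sees a clock that charges a fixed cost per
                | observation, however fast the hardware really ran.
                | 
                |     let virtualNow = 0;
                |     const TICK = 0.1; // fixed ms charged per reading
                |     const realNow = performance.now.bind(performance);
                |     performance.now = () => (virtualNow += TICK);
                | 
                |     // at a network boundary, stall until the real
                |     // clock has caught up with the virtual one
                |     function checkpointDelayMs() {
                |       return Math.max(0, virtualNow - realNow());
                |     }
                | 
                | A real design would also need to cover Date.now, rAF
                | timestamps, workers, shared-memory counters, etc.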
        
               | foldr wrote:
               | >not all computations have externally observable side
               | effects
               | 
               | You can time any computation. So they all have that side
               | effect.
               | 
               | Also, from Javascript you can execute tons of C++ code
               | (e.g. via DOM manipulation). There's no way all of that
               | native code can be guaranteed to run with consistent
               | timing across platforms.
        
               | fulafel wrote:
                | Depends on who you mean by "you". In the context of
                | fingerprinting resistance, the timing would have to be
               | done by code in certain limited ways using browser APIs
               | or side channels that transmit information outside the JS
               | runtime.
               | 
               | Computations that call into native APIs can be put in the
               | "has observable side effects" category (but in more fine
               | grained treatment, some could have more specific
               | handling).
        
               | foldr wrote:
               | I'm not sure what you mean. All you need to do is this:
                | 
                |     function computation() { ... }
                | 
                |     before = performance.now();
                |     computation();
                |     t = performance.now() - before;
               | 
               | (Obviously there will be noise, and you need to average a
               | bunch of runs to get reliable results.)
        
               | fulafel wrote:
               | In this case the runtime would not be able to guarantee
               | that the timing has no externally observable side effects
               | (at least if you do something with t). It would then run
               | in the fixed execution speed mode.
        
               | hexo wrote:
                | The runtime doesn't have full control but could introduce
                | a lot of noise in timing and performance. Could it help?
        
               | fulafel wrote:
               | It's hard to reason about how much noise is guaranteed to
               | be enough, because it depends on how much measurement the
               | adversary has a chance to do, there could be collusion
                | between several sites, etc. To allow timing API usage I'd
               | be more inclined toward the virtual time thing I
               | mentioned upthread.
        
               | foldr wrote:
               | Lots of code accesses the current time. So I think you'd
               | end up just running 90% of realistic code in the fixed
               | execution speed mode, which wouldn't be sufficiently
               | performant.
        
           | nextaccountic wrote:
           | > all the operations are specified to deterministically
           | produce the same bit-exact results on all hardware,
           | 
              | I want this so badly. A compiler flag perhaps, that enables
              | running the same program with bit-for-bit identical output
              | on any platform, perhaps by doing the same thing as a
              | reference platform (any will do), even if it has a
              | performance penalty.
        
             | pjc50 wrote:
             | I'm surprised people accept non-bit-identical output. Intel
             | did a lot of damage here with their wacky 80-bit floating
             | point implementation, but really it should be the norm for
             | all languages.
        
               | geysersam wrote:
               | Why would I want bit-identical output? Genuinely curious.
               | 
               | I see there's some increase in confidence perhaps,
               | although the result can still be deterministically
               | wrong...
        
               | pjc50 wrote:
               | It's very hard to do tests of the form assert(result ==
               | expected) if they're not identical every time.
               | 
               | And it can waste a horrendous amount of time if something
               | is non-bit-identical only on a customer machine and not
               | when you try to reproduce it ...
        
               | geysersam wrote:
               | It's not that hard. You'll just have to decide what level
               | of accuracy you want to have.
               | 
               | Asserting == with floating point numbers is basically a
               | kind of rounding anyway.
        
               | dahart wrote:
               | Trying to reproduce is a good point, but at the same time
               | it's usually a pretty bad idea to do tests of the form
               | assert(result == expected) with a floating point result
               | though. You're just asking for trouble in all but the
               | simplest of cases. Tests with floating point should
               | typically allow for LSB rounding differences, or use an
               | epsilon or explicit tolerance knob.
               | 
               | There's absolutely no guarantee that a computation will
               | be bit-identical even if the hardware primitives are,
               | unless you use exactly the same instructions in exactly
               | the same order. Order of operations matters, therefore
               | valid code optimizations can change your results. Plus
               | you'll rule out hardware that can produce _more_ accurate
               | results than other hardware if we demand everything be
                | bit-identical always; it will hold us back or even cause
                | regressions. FMA units are an example: they produce
                | different results than separate MUL and ADD
                | instructions, and the FMA is preferred, but hardware
               | without FMA cannot match it. There are more options for
               | similar kinds of hardware improvements in the future.
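                | 
                | Order of operations alone already breaks bit-equality,
                | before FMA or hardware differences even come into it:
                | 
                |     console.log((0.1 + 0.2) + 0.3); // 0.6000000000000001
                |     console.log(0.1 + (0.2 + 0.3)); // 0.6
                | 
                |     // hence tolerance-based checks instead of ===
                |     const close = (a, b, eps = 1e-12) =>
                |       Math.abs(a - b) <= eps;
                |     console.log(close((0.1 + 0.2) + 0.3, 0.6)); // true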
        
           | yazzku wrote:
           | It's the latter.
        
           | bhouston wrote:
           | I track population frequency of WebGPU extensions/limits
           | here: https://web3dsurvey.com. The situation currently is
            | much better than with WebGL1/WebGL2, but there is still a
            | lot of surface area.
        
             | codedokode wrote:
              | Interesting. The data shows that WEBGL_debug_renderer_info
              | [1], which allows sites to know the name of your graphics
              | card, is supported in almost 100% of browsers. Seems that
             | better fingerprinting support is really a priority among
             | all browser vendors.
             | 
              | [1]
              | https://web3dsurvey.com/webgl/extensions/WEBGL_debug_rendere...
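              | 
              | And the whole "fingerprint" is two calls (real WebGL API;
              | sketch without error handling):
              | 
              |     const gl = document.createElement("canvas")
              |       .getContext("webgl");
              |     const ext =
              |       gl.getExtension("WEBGL_debug_renderer_info");
              |     if (ext) {
              |       console.log(
              |         gl.getParameter(ext.UNMASKED_VENDOR_WEBGL),
              |         gl.getParameter(ext.UNMASKED_RENDERER_WEBGL));
              |     }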
        
               | kevingadd wrote:
               | This is sadly a requirement for the time-honored game
               | development tradition of "work around all the bugs in end
               | user drivers", which also applied to WebGL while it was
               | still immature.
               | 
               | At this point there's probably no excuse for continuing
               | to expose that info though, since everyone* just uses
               | ANGLE or intentionally offers a bad experience anyway.
        
       | notorandit wrote:
       | For Linux too? Not now!
        
       | BulgarianIdiot wrote:
       | Now I want to see someone figure out a way to performantly
       | distribute a trillion parameter model on a million smartphones
       | looking at a web page.
        
         | sebzim4500 wrote:
         | The bandwidth would end up costing way more than the compute
         | would.
        
           | BulgarianIdiot wrote:
            | Depends how you do it. LLMs are huge, but their input and
            | output are minuscule bits of text. If you find a way to put
           | "narrow paths" in the hidden layers, basically to subdivide a
           | model into smaller interconnected models, then the bandwidth
           | will be similarly massively reduced.
           | 
           | This is not without precedent, look up how your brain
           | hemispheres and the regions within are connected.
        
       | pier25 wrote:
       | Don't get me wrong, this is supremely cool... but I wish the W3C
       | and browsers solved more real world problems.
       | 
       | Just think how many JS kBs and CPU cycles would be saved globally
       | if browsers could do data binding and mutate the DOM (eg:
       | morphdom, vdom, etc) _natively_. And the emissions that come with
       | it.
       | 
       | Edit:
       | 
       | For example, just consider how many billions of users are
       | downloading and executing JS implementations of a VDOM every
       | single day.
        
         | shadowgovt wrote:
         | What would that mean? And what version of "mutate the DOM via
         | databinding" would win, because there are at least three
         | different approaches used in various JS libraries?
         | 
          | Accessing the GPU in this way is something that _can't_ be
         | done without browser-level API support. You're describing a
         | problem already solved in JS. Different category entirely.
        
           | pier25 wrote:
           | > _What would that mean?_
           | 
           | Honestly no idea, but any native implementation would be more
           | useful than a userland JS implementation.
           | 
            | > _Accessing the GPU in this way is something that can't
            | be done without browser-level API support._
           | 
           | That's true and it will open the door to many use cases. But
           | still, mutating the DOM as efficiently as possible without a
           | userland JS implementation is orders of magnitude more common
           | and relevant to the web as it is today.
        
         | lima wrote:
          | > _Just think how many JS kBs and CPU cycles would be saved
          | globally if browsers could do data binding and mutate the
          | DOM (eg: morphdom, vdom, etc) natively. And the emissions
          | that come with it._
         | 
         | Browsers _are_ solving these real-world problems. With modern
         | JS engines, frameworks are nearly as efficient as a native
         | implementation would be.
         | 
         | And with web components, shadow DOM and template literals, all
         | you need is a very thin convenience layer like Lit/lit-html[1]
         | to build clean and modern web applications without VDOM or
         | other legacy technology.
         | 
         | [1]: https://lit.dev
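          | 
          | A minimal sketch with lit-html's real API (the greeting
          | template is illustrative); tagged templates re-render by
          | updating only the changed bindings:
          | 
          |   import { html, render } from 'lit-html';
          | 
          |   // the static parts of the template are reused between
          |   // renders; only the ${name} binding is updated
          |   const greeting = (name: string) =>
          |     html`<p>Hello, ${name}!</p>`;
          |   render(greeting('WebGL'), document.body);
          |   render(greeting('WebGPU'), document.body); // one text node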
        
           | pier25 wrote:
           | > _frameworks are nearly as efficient as a native
           | implementation would be_
           | 
            | I have a very hard time believing that a C++
            | implementation of eg a VDOM would not be significantly
            | more efficient than a JS one. I'm just doing some
            | benchmarks, and even the fastest solutions like Inferno
            | get seriously bottlenecked when trying to mutate about
            | 2000 DOM elements per frame.
           | 
           | And even if the performance was similar, what about the
           | downloaded kBs? How many billions of users download React,
           | Angular, Vue, etc, every single day? Probably multiple times.
        
             | lima wrote:
             | VDOMs were created precisely _because_ the native DOM was
             | too slow.
             | 
              | This is no longer the case, and a tiny web component
              | framework like Lit significantly outperforms[1] React
              | while relying entirely on the browser DOM and template
              | literals for re-rendering... so what you're asking for
              | has already happened :-)
             | 
             | But even the big frameworks are really fast thanks to
             | modern JIT JS engines.
             | 
             | [1]: https://www.thisdot.co/blog/showcase-react-vs-lit-
             | element-re...
        
               | pier25 wrote:
               | Yes, I know Lit exists, and yes it's fast. But I think
               | you're missing the point.
               | 
               | I'm not talking about one JS implementation vs another or
               | if JS solutions are fast enough from the end user
               | perspective.
               | 
               | The fastest JS solution (VDOM, Svelte, Lit, whatever)
               | will still be bottlenecked by the browser. If JS could
               | send a single function call to mutate/morph a chunk of
               | the DOM, and that was solved natively, we'd see massive
                | efficiency wins. When you consider the massive scale
                | of the web with billions of users, this surely will
                | have an impact on the emissions from the electricity
                | needed to run all that JS.
               | 
                | And (again) even if there were no perf improvements
                | possible, you're avoiding the JS size argument. Lit
                | can still be improved. Svelte generally produces
                | smaller bundles than Lit. Eg see:
               | 
               | https://krausest.github.io/js-framework-
               | benchmark/current.ht...
               | 
               | But, again, if all the DOM mutation was solved natively
               | there would be huge savings in the amount of JS shipped
               | globally.
        
               | epolanski wrote:
                | But DOM manipulation happens in the browser and is
                | implemented in C++.
        
           | skybrian wrote:
           | Who is using web components? It seems like frameworks that
           | don't use them are a lot more popular?
        
       | waldrews wrote:
       | How much easier does this make it for every ad and background
       | animation to heat up my device and eat up battery time?
        
       | TheAceOfHearts wrote:
       | Can this be used to mine cryptocoins by malicious actors?
        
         | jiripospisil wrote:
         | The spec touches on this, but the short answer is "yes".
         | 
         | https://gpuweb.github.io/gpuweb/#security-abuse-of-capabilit...
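          | 
          | The capability itself is a handful of calls. A minimal
          | compute sketch, assuming a WebGPU-capable browser (the
          | doubling kernel is a stand-in for any number-crunching
          | loop, hashing included):
          | 
          |   const adapter = await navigator.gpu.requestAdapter();
          |   const device = await adapter!.requestDevice();
          |   // trivial WGSL kernel: double each element of a buffer
          |   const module = device.createShaderModule({ code: `
          |     @group(0) @binding(0)
          |     var<storage, read_write> data: array<u32>;
          |     @compute @workgroup_size(64)
          |     fn main(@builtin(global_invocation_id) id: vec3<u32>) {
          |       data[id.x] = data[id.x] * 2u;
          |     }` });
          |   const pipeline = device.createComputePipeline({
          |     layout: 'auto',
          |     compute: { module, entryPoint: 'main' },
          |   });
          |   const buffer = device.createBuffer({
          |     size: 256 * 4,
          |     usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_SRC,
          |   });
          |   const bindGroup = device.createBindGroup({
          |     layout: pipeline.getBindGroupLayout(0),
          |     entries: [{ binding: 0, resource: { buffer } }],
          |   });
          |   const encoder = device.createCommandEncoder();
          |   const pass = encoder.beginComputePass();
          |   pass.setPipeline(pipeline);
          |   pass.setBindGroup(0, bindGroup);
          |   pass.dispatchWorkgroups(4); // 4 x 64 invocations
          |   pass.end();
          |   device.queue.submit([encoder.finish()]);
          | 
          | No permission prompt is involved at any point, which is
          | the crux of the abuse concern.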
        
           | codedokode wrote:
            | If developers were a bit smarter and put this behind a
            | permission, then the answer would be no, and WebGPU
            | couldn't be used for fingerprinting either.
        
         | nailer wrote:
          | Yes, but keep in mind that most of the active chains are
          | using proof of stake now.
        
           | jefftk wrote:
           | What matters is the ROI, not what fraction of active chains
           | use what tech, no?
        
             | nailer wrote:
             | That's true. But active chains tend to be higher value and
             | Bitcoin mining rewards have halved a few times over the
             | last few years or so.
        
         | dezmou wrote:
         | yes https://github.com/dezmou/SHA256-WebGPU
        
         | echelon wrote:
         | I doubt there's as much effort being put into crypto these
         | days.
         | 
         | The folks hustling on crypto are now in the AI space.
        
           | nwienert wrote:
           | > storyteller.ai
        
             | echelon wrote:
             | If anything I'd argue my vantage point gives me a better
             | perspective on this.
        
           | sebzim4500 wrote:
            | Crypto prices are still quite high; I don't see why the
            | incentives to steal GPU cycles to mine crypto are any
            | lower now than they were during the blockchain summer.
        
             | kevingadd wrote:
             | The prices don't matter if the quantity of coins you can
              | mine is a tiny fraction, which is the problem.
        
       | puttycat wrote:
       | So, making it easier to directly harvest unsuspecting GPUs for
       | mining?
        
       | tormeh wrote:
        | This is HUGE news. WebGPU solves the incentives problem where
        | every actor tries to lock you into their graphics/compute
        | ecosystem, by abstracting over all of them. It's already the
        | best way to code cross-platform graphics outside the browser.
        | This release in Chrome ought to bring lots more developer
        | mindshare to it, which is an awesome thing.
        
         | pjmlp wrote:
          | There are lots of ways to write cross-platform graphics code
          | outside the browser; they have been available for decades,
          | and they're called middleware.
          | 
          | WebGPU outside of the browser not only suffers the pain of
          | being yet another incompatible shading language, it also
          | offers the additional possibility of having code
          | incompatible with browsers' WebGPU support, due to the use
          | of extensions not available in the browsers.
        
           | flohofwoe wrote:
           | AFAIK the WebGPU implementations outside the browser can all
           | load SPIRV directly, no need to translate to WGSL first.
        
             | pjmlp wrote:
              | Yes, and that is one of the reasons why I wrote _it
              | also offers the additional possibility of having code
              | incompatible with browsers' WebGPU support_.
        
               | flohofwoe wrote:
               | These differences are smaller than the differences
               | between desktop GL and WebGL though. It's just another
               | option in the low-level vs high-level 3D API zoo. As far
               | as wrapper APIs go, the native WebGPU implementation
               | libraries are a pretty good option, because being also
               | used in browsers there's a ton of testing and driver bug
               | workarounds going into them which other 3D API wrapper
               | projects simply can't afford to do.
        
               | pjmlp wrote:
               | Not at all.
               | 
                | First of all, if one depends on SPIR-V being present,
                | then a SPIR-V to WGSL compiler needs to be present
                | when deploying to the Web part of WebGPU.
               | 
               | Secondly, it will be yet another extension spaghetti that
               | plagues any API that Khronos has some relation to.
        
               | flohofwoe wrote:
               | It already made sense to have a SPIRV- and SPIRVCross-
               | based shader pipeline in cross-platform engines with
               | WebGL support though (because you'd want to author
               | shaders in one shading language only, and then translate
               | them to various target languages, like some desktop GLSL
               | version, then separate versions for GLES2/WebGL and
               | GLES3/WebGL2, and finally HLSL and MSL, WGSL is just
               | another output option in such an asset pipeline).
        
               | pjmlp wrote:
                | And we all know how well it works when features don't
                | map across languages; the WebGPU meeting minutes are
                | full of such examples, which is why threejs is
                | adopting a graphical node editor to ease the
                | transition between GLSL and WGSL.
        
         | gostsamo wrote:
         | or it will be yet another standard
        
           | pl90087 wrote:
           | [flagged]
        
           | tormeh wrote:
           | Not really. I don't see how any of the vendors could break it
           | without being extremely overt about it. And it has no direct
           | competition. Nothing else runs on any OS and any GPU, except
           | WebGL, which is abandoned.
        
             | kllrnohj wrote:
             | Vulkan & GLES are both the obvious competition here, I
             | don't know why you seem to be ignoring them?
        
               | tormeh wrote:
               | Vulkan doesn't run on Apple platforms. And Apple refuses
               | to upgrade their OpenGL drivers. So no, they don't count.
               | Then there are the consoles, which also have their own
               | APIs. Expecting all vendors to implement a common API is
               | a fool's errand, and we should stop trying. The vendors
               | don't want this, and we can't force them. Any common GPU
               | API must be an abstraction over vendor-specific ones.
        
               | kllrnohj wrote:
               | MoltenVK exists and runs Vulkan on Apple devices
               | perfectly fine. Almost certainly better than WebGPU does
               | even. Vulkan is already a common API both in theory and
               | in practice.
               | 
               | It's not a _nice_ API, but it is common and vendor
                | neutral all the same. A lot more so than WebGPU, even,
               | since Khronos has a much more diverse set of inputs than
               | the web does. See eg all the complaints about Chrome
               | dominating the web discussion, and even beyond that you
               | only really need 3 companies on board. Khronos has
               | significantly more at the table than that.
        
             | illiarian wrote:
             | > I don't see how any of the vendors could break it without
             | being extremely overt about it.
             | 
             | > except WebGL, which is abandoned.
             | 
             | So it can be abandoned
        
               | tormeh wrote:
               | It becoming a stale standard is maybe the biggest threat,
               | yes. It has the issue all standards have since several
               | actors have to play along, except this time it isn't
               | Nvidia, Apple, and AMD, but Wgpu/Mozilla and Google.
               | Their incentives are hopefully better aligned with users
               | than those of the hardware vendors.
               | 
                | I suspect WebGL was different, since it was based on
                | the old OpenGL/DX9 way of doing things, so a clean
                | break was desirable. But I honestly am not that
                | knowledgeable in either graphics programming or WebGL
                | history, so take that with several grains of salt.
        
               | pjmlp wrote:
               | WebGL was based on OpenGL ES 2.0 and a subset of OpenGL
               | ES 3.0.
               | 
               | Google refused to support WebGL compute based on OpenGL
               | ES 3.0 compute shaders, citing WebGPU alternative, and
                | there are still quite a few capabilities from OpenGL
                | ES 3.2 missing from WebGL.
               | 
                | Some of which are still not available in WebGPU 1.0.
        
         | illiarian wrote:
         | > This is HUGE news. Webgpu solves the incentives problem where
         | all actors tries to lock you in to their graphics/compute
         | ecosystem by abstracting over them.
         | 
          | Instead we have a third graphics standard (after canvas and
          | WebGL), fully incompatible with the previous two, that does
          | exactly that: abstracts the OS graphics stack in a new
          | layer.
        
           | dilawar wrote:
           | obligatory xkcd: https://xkcd.com/927/
        
           | flohofwoe wrote:
           | WebGPU brings some important improvements that are simply
           | impossible to implement with the WebGL programming model
           | (such as moving all the expensive dynamic state validation
           | from the draw loop into the initialization phase).
           | 
           | It's not perfect mind you (e.g. baked BindGroups may be a bit
           | too 'rigid'), but it's still a massive improvement over WebGL
           | in terms of reduced CPU overhead.
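            | 
            | A sketch of that pattern, assuming `device`, `context`,
            | `bindGroup` and a pipeline descriptor are already set up
            | (the names are illustrative):
            | 
            |   // created -- and fully validated -- once, at init time
            |   const pipeline =
            |     device.createRenderPipeline(pipelineDesc);
            | 
            |   function frame() {
            |     const encoder = device.createCommandEncoder();
            |     const pass = encoder.beginRenderPass({
            |       colorAttachments: [{
            |         view: context.getCurrentTexture().createView(),
            |         loadOp: 'clear',
            |         storeOp: 'store',
            |       }],
            |     });
            |     // cheap, pre-validated calls; no WebGL-style
            |     // draw-time state checking
            |     pass.setPipeline(pipeline);
            |     pass.setBindGroup(0, bindGroup);
            |     pass.draw(3);
            |     pass.end();
            |     device.queue.submit([encoder.finish()]);
            |     requestAnimationFrame(frame);
            |   }
            |   requestAnimationFrame(frame);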
        
           | adrian17 wrote:
            | Further, in some ways it's the "lowest common
            | denominator" (so that it can be easily implemented on top
            | of each "underlying" graphics API), with the union of all
            | their downsides and only the intersection of their
            | upsides.
        
         | mike_hearn wrote:
         | _" Webgpu solves the incentives problem where all actors tries
         | to lock you in"_
         | 
         | Or flipped around, it creates an incentive problem where none
         | of the vendors see much benefit in doing R&D anymore, because
         | browsers aren't content to merely abstract irrelevant
         | differences, they also refuse to support vendor extensions
         | except for their own. No point in adding a cool new feature to
         | the GPU if Chrome/Safari insists on blocking it until your
         | competitors all have it too.
         | 
         | Luckily the GPU industry is driven by video games and nowadays
         | AI, industries that don't write code using the web stack. So
         | they'll be alright. Still, that same incentives problem exists
         | in other areas of computing that sit underneath the browser
         | (CPUs, hardware extensions, operating systems, filesystems,
         | etc). Abstractions don't _have_ to erase all the differences in
         | what they abstract, but people who create them often do so
         | anyway.
        
           | fulafel wrote:
           | It doesn't pay off for most apps to run on GPUs because of
           | the fragmentation, bad dev experience, sad programming
           | language situation, proprietary buggy sw stacks and so on.
            | Reining in the Cambrian explosion to let non-niche
           | applications actually use GPUs would be a good tradeoff.
           | 
           | (But of course like you say this doesn't detract from the
           | native code apps that are content being nonportable, or want
           | to debug, support, bug-workaround, perf-specialcase their app
           | on N operating systems x M GPU hardware archs x Z GPU APIs, x
           | Y historical generations thereof)
        
           | tormeh wrote:
           | Good and valid point. Effort will naturally focus on the
           | lowest common denominator. Any new features would have to be
           | implemented in software in the WebGPU layer for targets
           | without API support. That's probably going to slow
           | development of new extensions.
           | 
           | However, I think what GPU programming needs at the moment is
           | better ergonomics, and code portability is a step in the
           | right direction, at least. Currently it's quite a bit more
           | annoying than CPU programming.
        
             | mike_hearn wrote:
             | Yeah, but that's because performance always takes priority.
             | The Vulkan/DX12/Metal APIs are lower level and much more
             | tedious than OpenGL and shader based OpenGL was itself a
             | step backwards in usability from the fixed function
             | pipeline (when doing equivalent tasks). So the trend has
             | been towards more generality and performance at the cost of
             | usability for a long time in graphics APIs. Prefixing it
             | with "Web" won't change that. I'm not saying that's a bad
             | trend - the vendors take the perspective that people who
             | need usability are using game engines anyway so "easy but
             | low level" like classic GL is a need whose time has passed.
             | There's probably a lot of truth to that view.
        
               | tormeh wrote:
               | Which is fine, IMO, as long as there are user-friendly
               | abstractions on top of those foundations. I'd "just" like
               | to write high-level code that runs on GPUs without having
               | to worry about vendor/OS differences (yay, WebGPU) or
               | writing boilerplate (nay, WebGPU). Currently it's all a
               | bit too annoying for normal programmers to bother with.
               | Every month there's a new programming language for CPUs
               | that intends to make programming easier. I'd like this
               | state of things, but for GPUs.
        
           | adam_arthur wrote:
            | You could say the same about working off a common HTML,
            | browser, and language spec. There are tradeoffs, but
            | industry
           | consensus hasn't really stifled innovation in a meaningful
           | way.
           | 
           | Why can't a common graphics API evolve through well
           | researched and heavily scrutinized proposals? The amount of
           | societal loss of efficiency in having competing specs that do
           | the same thing in slightly different ways, but require
           | immense effort to bridge between is truly vast.
           | 
           | The incentive to innovate comes from limitations with the
           | current spec. That doesn't change just because there's
           | consensus on a common base spec
        
             | mike_hearn wrote:
              | _There are tradeoffs, but industry consensus hasn't
              | really stifled innovation in a meaningful way._
             | 
             | Stifled innovation in what context? I wouldn't describe the
             | web stack as an industry consensus given the rejection of
             | it (for apps) by mobile devs, mobile users, video game
             | devs, VR devs and so on. If you mean _within_ the web
             | stack, then, well, there have been plenty of proposals that
              | died in the crib because they couldn't get consensus for
             | rather obscure reasons that are hard to understand from the
             | outside. Certainly, there were innovations in the earlier
             | days of the web when it was more open that have never been
             | replicated, for example, Flash's timeline based animation
             | designer died and nothing really replaced it.
             | 
             | Fundamentally we can't really know what cool things might
             | have existed in a different world where our tech stack is
             | more open to extensions and forking.
             | 
              | _Why can't a common graphics API evolve through well
             | researched and heavily scrutinized proposals?_
             | 
              | Why can't everything be done that way? It's been tried,
              | and the results are a tragedy of the commons. These
              | ideas don't
             | originate in the web world. The incentive to develop the
             | tech isn't actually limits in web specs, that's a decade+
             | after the innovation happens. What we see is that the web
             | stuff is derivative and downstream from the big players.
             | WebGPU traces its history through Vulkan to Metal/D3D12 and
             | from there to AMD's Mantle, where it all started (afaik).
             | So this stuff really starts as you'd expect, with some risk
             | taking by a GPU designer looking for competitive advantage,
             | then the proprietary OS design teams pick it up and start
             | to abstract the GPU vendors, and then Khronos/Linux/Android
             | realizes that OpenGL is going to become obsolete so they'd
             | better have an answer to it, and then finally the HTML5
             | people decide the same for WebGL and start work on making a
             | sandboxed version of Vulkan (sorta).
             | 
             | N.B. what made Mantle possible is that Windows is a
             | relatively open and stable environment for driver
             | developers. Ditto for CUDA. They can expose new hardware
             | caps via vendor-specific APIs and Microsoft don't
             | care/encourage it. That means there's value in coming up
             | with a radical new way to accelerate driver performance. In
             | contrast, Apple write their own drivers / APIs, Linux
             | doesn't want proprietary drivers, ChromeOS doesn't even
             | have any notion of an installable vendor driver model at
             | all.
        
         | singularity2001 wrote:
          | > _the best way to code cross-platform graphics outside
          | the browser._
          | 
          | So WebGPU is a bit like WebAssembly, which might really
          | shine outside the Web (on the edge, as a universal
          | bytecode format, for plugins, ...)
        
           | kllrnohj wrote:
            | Except unlike WebAssembly this one is actually useful.
            | WASM enters a domain full of existing universal bytecodes
            | that all failed to become truly universal, because the
            | actual assembly is never the hard part to make portable;
            | it's the API surface. And WASM doesn't even pretend to
            | attempt to handle that, unlike Java or C#, which ship
            | pretty fat bundled libraries to abstract most OS
            | differences.
            | 
            | Meanwhile on the GPU front you don't have much in the way
            | of large, good middleware to abstract away the underlying
            | API. OpenGL used to be OK at this, but now neither the
            | platforms nor Khronos want to touch it. Vulkan is
            | "portable", but it's like writing assembly by hand; it's
            | ludicrously complicated. WebGPU fills the role of
            | essentially just being GLES 4.0. It won't be fast, it
            | won't be powerful, but it doesn't actually need to be for
            | the long tail of things that just want to leverage a GPU
            | for some small workloads.
        
         | mschuetz wrote:
          | Unfortunately WebGPU is also essentially a 5-year-old
          | mobile-phone graphics API, which means that for most
          | desktop features younger than 10 years you'll still need
          | DirectX, Vulkan, or Metal.
        
       | senttoschool wrote:
       | It's good to know that Apple, MS, Firefox, and Google are all
        | on board. It's great that browsers are finally taking
        | advantage of the underlying hardware.
       | 
       | Next up for browsers should be NPUs such as Apple's neural
       | engine.
        
         | pzo wrote:
          | There is already a spec in the making: WebNN [1]. It will be
         | interesting to see if Apple will be on board or not.
         | 
         | [1] https://webmachinelearning.github.io/webnn-intro/
        
       | codedokode wrote:
        | I remember that when WebRTC was introduced, it quickly became
        | very popular and was used by almost every page. But closer
        | inspection showed that it was mostly being used to get the
        | user's IP address for better fingerprinting.
        | 
        | I predict that WebGL/WebGPU will mainly be used for the same
        | purposes. Nobody needs the fancy new features; what people
        | really need is more reliable fingerprinting (as shown by the
        | number of uses of WebRTC for fingerprinting versus the number
        | of uses for communication).
        
         | 7to2 wrote:
         | > But closer inspection showed that its use was to get user's
         | IP address for better fingerprinting.
         | 
          | Maybe that's why it fell by the wayside: scripts are no
          | longer allowed to get the local IP address (taking with it
          | the most useful aspect of WebRTC, true serverless p2p
          | without internet[1]).
         | 
         | [1] I'm not saying that I disagree with the decision, but still
         | sad that we can't have nice things :(
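          | 
          | For the curious, the classic probe was only a few lines; on
          | current browsers the host candidate carries an mDNS name
          | instead of a local IP:
          | 
          |   const pc = new RTCPeerConnection();
          |   pc.createDataChannel('probe'); // kicks off ICE gathering
          |   pc.onicecandidate = (e) => {
          |     // once printed "192.168.x.x"; now an opaque
          |     // "<uuid>.local" mDNS hostname
          |     if (e.candidate) console.log(e.candidate.candidate);
          |   };
          |   pc.createOffer().then((o) => pc.setLocalDescription(o));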
        
           | codedokode wrote:
            | Yes, instead of the IP address the API now provides an
            | mDNS (.local) hostname. But a proper solution would be to
            | put this unnecessary API behind a permission.
        
             | illiarian wrote:
              | There are too many APIs that would need to be put
              | behind permissions for permissions to be useful.
             | 
             | No idea how to solve this though.
        
               | notatoad wrote:
                | I want the Chrome Apps model to come back: put your
                | permission requests in a manifest, and when the user
                | clicks an install button the app gets its
                | permissions. That way "web apps" that the user cares
                | enough about to install get useful features, but
                | pages that you just visit briefly don't.
        
               | illiarian wrote:
                | > I want the Chrome Apps model to come back: put your
                | permission requests in a manifest, and when the user
                | clicks an install button the app gets its
                | permissions.
                | 
                | So you will get the Android case where flashlight
                | apps were asking for everything, including location
                | data and contact access, and people were granting it
                | to them.
        
           | littlestymaar wrote:
           | > taking with it the most useful aspect of WebRTC, true
           | serverless p2p without internet[1].
           | 
            | Would you mind elaborating? In any case WebRTC always
            | needed some kind of third-party server to connect the
            | peers (sure, such a server can be on your local network),
            | and then they replaced the local IP in ICE candidates
            | with mDNS addresses, which serve the same purpose and
            | allow for direct P2P communication between the two peers
            | without going through the internet.
        
           | azangru wrote:
           | Dang, this is so sad.
        
         | grishka wrote:
          | People tend to forget that the good part about Flash was
          | that it offered all this (there was an API for low-level,
          | high-performance 3D graphics, Stage3D), except there was
          | also an explicit boundary between the "document" and the
          | "app", and most browsers offered an option to only load
          | Flash content when you click on its area. I'm thus still
          | convinced that turning browsers themselves into this kind
          | of application platform is a disaster. The industry is
          | headed in a very wrong and dangerous direction right now.
          | 
          | It's never too late to delete all this code and pretend it
          | never happened though. I want to see a parallel reality
          | where Flash became an open standard, with open-source
          | plugins and all that. It _is_ an open standard and there
          | _is_ at least one open-source Flash player project
          | (Ruffle), but it's too late. I still hope Flash makes a
          | comeback though.
        
           | adrian17 wrote:
           | > and most browsers offered an option to only load Flash
           | content when you click on its area
           | 
           | > there is at least one project of an open source Flash
           | player (Ruffle)
           | 
           | Just so you know, Ruffle supports (optionally) autoplay
           | without clicking (this has some restrictions, like no sound
            | until it gets a user interaction) _and_ we do plan to try
            | running on WebGPU very soon - so unfortunately we rely on
            | this one way or another :(
           | 
           | (also there are other projects, Ruffle just happened to get
           | the best word-of-mouth PR)
        
         | pulpfictional wrote:
          | I was reading that WebGPU leaves you open to fingerprinting
          | via your hardware info, particularly your GPU. It seems
          | that it is even possible to distinguish between GPUs of the
          | same model through tolerances in clock frequency.
        
           | codedokode wrote:
            | WebGL allows reading the graphics card name.
        
         | isodev wrote:
         | And if the Chrome team is excited about a feature, then
         | probably it's good for ads or upselling other Google products.
        
           | sfink wrote:
           | That's an odd take.
           | 
           | Last I knew, the Chrome team was made out of people. If the
           | people working on a new technology are excited about that
           | technology, it probably means that they are excited about
           | that technology. It doesn't say much about how excited the ad
           | people may or may not be about it, other than indirectly by
           | knowing that the technology people are allowed to spend time
           | on it.
           | 
           | That is the link I believe you're referring to, but my guess
           | is that it's going to be fairly tenuous. The set of things
           | that the technology people get to work on probably has more
           | to do with the set of things that they try to persuade the
           | powers that be are important, and that set is more likely to
           | be driven by their own interests and visions of what could
           | and should be done on the web, than it is to be driven by
           | what will make their paymasters happiest.
           | 
           | Sorry, I'm sure there is a much more brief way to say that.
           | 
           | (nb: I work for Mozilla)
        
         | dcow wrote:
         | Honestly I wish we just had a good fingerprinting API that
         | people could opt in to so that we didn't poison all these other
         | good use cases with cringy "it's gotta be anonymous so I can
         | troll" requirements.
        
       | waynecochran wrote:
        | Is there camera-capture support on mobile devices (perhaps
        | loading video frames into a texture map)? I assume this is
        | what WebXR is doing somehow?
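        | 
        | WebGPU's external-texture path covers this. A sketch,
        | assuming an existing WebGPU `device` and a camera stream
        | from getUserMedia:
        | 
        |   const stream = await navigator.mediaDevices
        |     .getUserMedia({ video: true });
        |   const video = document.createElement('video');
        |   video.srcObject = stream;
        |   await video.play();
        |   // wraps the current video frame as a sampleable texture
        |   // (zero-copy where possible); external textures expire,
        |   // so re-import on every frame
        |   const camTex =
        |     device.importExternalTexture({ source: video });
        |   // bind camTex in a bind group and sample it in WGSL via
        |   // a `texture_external` binding (pipeline setup omitted)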
        
       | phendrenad2 wrote:
       | Maybe I'm dumb but what does WebGPU get you _in reality_ that
       | WebGL doesn 't currently get you? I don't think we're going to
       | see browser games any time soon (because it's trivial to just
       | install a game and run it, and also the browser audio API sucks).
       | And most visualizations on the web would be perfectly fine with
       | plain old WebGL.
        
         | simlevesque wrote:
         | > And most visualizations on the web would be perfectly fine
         | with plain old WebGL.
         | 
          | It's not meant to enable visualizations that weren't
          | possible before; the article explains that.
        
       | topaz0 wrote:
       | I'd rather not have websites soak up my compute resources,
        | thanks. They already do that far too much for my taste.
        
         | mschuetz wrote:
         | You're free to disable features you don't like. For others,
         | this is good news.
        
           | topaz0 wrote:
           | Incredible insight, but I fear it won't work out well: when
           | the option is there and ubiquitous, websites will rely on it,
           | so disabling features is basically choosing to save some of
           | my resources in exchange for a broken web experience.
        
             | mschuetz wrote:
              | Since I'm doing 3D graphics in web browsers, I consider
              | not having 3D graphics APIs in web browsers a broken
              | web experience. The browser is the easiest and safest
              | way to distribute applications, with virtually no entry
              | barrier for users. I don't trust just any downloaded
              | binary, so I won't execute one unless it's from a
              | trusted source. But with 3D in browsers, I can trust
              | any web site not to damage my system, and can try a
              | large number of experiences that others create and
              | upload.
        
               | topaz0 wrote:
                | There are applications where the point is 3D
                | graphics. I don't object to letting them use my GPU
                | if they ask nicely. There are other applications --
                | really the vast majority of them -- where 3D graphics
                | is completely beside the point and has nothing to do
                | with the reason
               | I'm on your website. That is my concern: that that second
               | class, being the vast majority, will say, "hey, I have
               | access to all this compute power, why don't I just throw
               | some of it at this needless animation that I think looks
               | cool, or at this neural network, or at this cryptominer".
               | Past experience suggests that this is a reasonable
               | concern.
        
         | cyral wrote:
          | Ironically, by hardware-accelerating those workloads, they
          | will be less noticeable.
        
           | topaz0 wrote:
           | What do you think there will be more of: a) existing
           | functionality moves to more efficient implementation on gpu,
           | or b) "hey, we can use gpus now, let's shove some more
           | compute-intensive features into this app that add nothing but
           | make it seem more innovative".
        
             | cyral wrote:
              | If whatever compute-intensive feature they come up with
             | wasn't possible on the CPU, I highly doubt it is a bloated
             | useless feature they are adding just for the heck of it.
        
             | Lichtso wrote:
             | > Jevons paradox occurs when technological progress or
             | government policy increases the efficiency with which a
             | resource is used (reducing the amount necessary for any one
             | use), but the falling cost of use increases its demand,
             | increasing, rather than reducing, resource use.
             | 
             | https://en.wikipedia.org/wiki/Jevons_paradox
        
       ___________________________________________________________________
       (page generated 2023-04-06 23:00 UTC)