[HN Gopher] Safely Reviving Shared Memory
       ___________________________________________________________________
        
       Safely Reviving Shared Memory
        
       Author : MindGods
       Score  : 128 points
       Date   : 2020-07-21 16:20 UTC (6 hours ago)
        
 (HTM) web link (hacks.mozilla.org)
 (TXT) w3m dump (hacks.mozilla.org)
        
       | modeless wrote:
       | Cool! Looks like SharedArrayBuffer will be re-enabled next week
       | with the release of Firefox 79: https://developer.mozilla.org/en-
       | US/docs/Mozilla/Firefox/Rel...
       | 
        | This will be big for WebAssembly. Real threads will make it
       | possible to port more stuff and get better performance.
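        | 
        | As a minimal sketch of what that enables (assuming the page is
        | served cross-origin isolated so SharedArrayBuffer is exposed;
        | the file names are made up): one buffer shared between the main
        | thread and a worker, coordinated with Atomics.
        | 
        |     // main.js -- illustration only
        |     const sab = new SharedArrayBuffer(4);   // 4 bytes of shared memory
        |     const view = new Int32Array(sab);
        |     const worker = new Worker('worker.js');
        |     worker.postMessage(sab);                // shared, not copied
        |     worker.onmessage = () => {
        |       console.log(Atomics.load(view, 0));   // 42, written by the worker
        |     };
        | 
        |     // worker.js
        |     onmessage = (e) => {
        |       const view = new Int32Array(e.data);
        |       Atomics.store(view, 0, 42);           // visible to the main thread
        |       postMessage('done');
        |     };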
        
       | secondcoming wrote:
       | The web is such a mess.
        
       | Santosh83 wrote:
       | So how will these headers work for shared hosting sites where you
       | normally cannot modify the hosting provider's HTTP server?
        
         | [deleted]
        
         | simlevesque wrote:
         | You can set headers with .htaccess.
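          | 
          | A rough sketch (assuming the host runs Apache with mod_headers
          | enabled, which not every shared host allows) that adds the two
          | opt-in headers from the article:
          | 
          |     # .htaccess -- requires mod_headers; check with your host
          |     Header set Cross-Origin-Opener-Policy "same-origin"
          |     Header set Cross-Origin-Embedder-Policy "require-corp"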
        
           | birdyrooster wrote:
            | And boy do they. I used to be a shared hosting administrator,
            | and sometimes people would do stupid things like create 10MB
            | .htaccess files with the subnets of all of the countries they
            | wanted to block, and then call to complain that their site was
            | loading slowly. (Probably not a great idea to parse a config
            | file on every request, but at least the option exists.)
        
           | danudey wrote:
           | Only in specific situations, where the site is using Apache
           | and has .htaccess files enabled. I would argue that using
           | Apache in the first place is non-optimal, but enabling
           | arbitrary .htaccess files for clients is also a potential
           | disaster.
           | 
           | Then again, I suppose there are enough people out there who
            | just want to FTP up their WordPress code and call it a day,
           | so... ugh.
        
             | duskwuff wrote:
             | WordPress expects a working .htaccess for its URL
             | structure, as do most modern PHP applications. So virtually
             | all shared hosts will support .htaccess.
             | 
             | > I would argue that using Apache in the first place is
             | non-optimal
             | 
             | What would you prefer? nginx is not suited to shared
             | environments at all.
        
             | the8472 wrote:
             | If you're only uploading some stuff to wordpress do you
             | need shared array buffers?
        
       | pjmlp wrote:
        | Nice to see it is eventually going to be re-enabled. However, if
        | Firefox doesn't make it as easy as Chrome, that is what most
        | customers will focus on for applications that make use of
        | threading alongside WebAssembly.
        
         | domenicd wrote:
         | In Chrome we already require these headers on mobile, and plan
         | to require them on desktop soon.
        
           | pjmlp wrote:
           | Fair enough, so far I only toyed around with the desktop
           | version.
           | 
           | Just making a note that if you want to foster PWAs over
           | native, or Electron workarounds, better not make us jump
           | through many hoops.
        
       | azmenak wrote:
        | While still a ways away, it's great to see progress on this front.
       | After SABs were disabled, the use cases and power of WebWorkers
       | shrank considerably.
       | 
       | Very excited to dive back into some WASM/WebWorker projects that
       | got abandoned due to performance limitations.
        
       | inetknght wrote:
       | tl;dr:
       | 
       | `about:config` -> `dom.workers.serialized-sab-access` -> true
       | 
       | Otherwise you risk sites opting in to Spectre. Why Mozilla thinks
       | this is a good thing is beyond me.
        
       | waynecochran wrote:
        | Imagine what the world would be like if we didn't need to worry
        | about bad actors. This is a considerable amount of engineering
        | energy spent just to make this safe.
        | 
        | I know safety isn't all about malicious attacks, but I like
       | to imagine living in a world without the need for locks,
       | passwords, keys, safes, signatures, contracts, lawyers, ... we
       | would probably all be fed and populating the solar system by now.
        
         | beervirus wrote:
         | It would look a lot like the internet did 25-30 years ago.
        
           | adamsea wrote:
           | Except with lots more people ; )
        
         | IgorPartola wrote:
         | I have a hard time imagining such a world precisely because
         | being a bad actor is a part of our DNA and the DNA of many many
         | animals, plants, bacteria, and fungi we interact with. The
         | unintended consequence of lack of selfishness could be mind
         | boggling. Think about it this way: Silicone Valley mostly
         | provides the last 5% of the tech needed to make any given
         | product. You know where the rest comes from? Military tech that
         | eventually makes its way to civilian use. Things like
         | semiconductors, the Internet, GPS, rocket propulsion, etc. No
         | conflict == no military == no tech == we wouldn't be having
         | this conversation. I am definitely not saying we couldn't all
         | be a whole lot nicer to each other. But I am saying that we
         | wouldn't be human if that was wired into us.
        
           | clusterfish wrote:
           | The military's role in this seems very much rooted in
           | selfishness and adversity, to the extent that a more
           | cooperative species (let's be honest it's not gonna be us)
           | would find plenty of non-military motivation to do similar
           | amounts of basic and applied research.
        
             | mnsc wrote:
             | Is there a name for the sensation of being born the wrong
             | species?
        
               | IgorPartola wrote:
               | Maybe in German. I personally regularly curse the fact
               | that I was born a human when I am confronted with
               | uniquely human problems: having to cook, sweating my
               | behind off, having to waste time using the bathroom,
               | experiencing strong emotions like anger or jealousy or
                | envy. These are the things that make us human, but
                | oftentimes I find them highly annoying and wish my
                | ancestors had evolved out of some of them.
        
           | strbean wrote:
           | > Silicone Valley mostly provides the last 5%
           | 
           | Typo? Or are we finally acknowledging the contributions of
           | the porn industry? :)
        
             | IgorPartola wrote:
             | I mean... given the contributions of the porn industry to
             | the advancements of the web, an appropriate typo?
        
           | waynecochran wrote:
           | Do you think ancient Rome could have landed someone on the
            | moon had they not fallen?
        
             | IgorPartola wrote:
             | Ancient Rome was very military driven. They also had
             | classes of people, which I'd argue is about as selfish as
             | you can get. They could absolutely land on the moon with
             | enough time.
        
             | tintor wrote:
             | Given enough time, yes.
        
           | adamsea wrote:
           | Yeah but as humans we have an ability to learn and change.
        
             | kevin_thibedeau wrote:
             | You can lead a horse to water...
        
         | RcouF1uZ4gsC wrote:
         | In that world, since you don't have bad actors, you would
         | likely be ruled by a benevolent king or queen.
         | 
          | A large part of why we have democracy and separation of powers
          | is to protect against bad actors.
         | 
         | But yeah, not having to deal with bad actors can enable massive
         | achievements. The pyramids were built when Egypt was more or
         | less safe from any external invasion and ruled by god-kings who
         | could coordinate massive building projects.
        
           | waynecochran wrote:
            | Yeah, it has always been in the thoughts of mankind to have a
            | benevolent, righteous king, even to the point where the
            | natural world and the animal kingdom would be in harmony. If
           | the events of Isaiah 11 ever come to fruition, I imagine
           | we'll cure cancer, have quantum computers, and unlock the
           | mysteries of the universe.
        
         | octetta wrote:
         | I would love to live in that world. Here's hoping for human
         | evolution.
        
         | naringas wrote:
         | ...and no religion too
         | 
          | [cue John Lennon at the piano]
        
           | waynecochran wrote:
           | Yes, religion tends to make lots of bad actors. That is why
           | Jesus excoriated the religious in Matthew 23.
        
             | kevin_thibedeau wrote:
             | We'll just conveniently ignore that while prattling on
             | about Adam and Steve. \s
        
         | x87678r wrote:
          | You need an intranet job at a corporation with on-prem servers
          | and firewalls. It's much simpler.
        
           | sjnu wrote:
           | Yes, ignoring the problem is easier.
        
       | dfabulich wrote:
        | In this article, Mozilla refers to new HTTP headers, COOP and
        | COEP, that can be used to opt in to cross-origin isolation and
        | thereby grant access to features that would otherwise be
        | dangerous in the face of Spectre side-channel attacks.
       | 
       | IMO, this Google doc is a better explainer of COOP and COEP, how
       | they work, and why they help.
       | 
       | https://docs.google.com/document/d/1zDlfvfTJ_9e8Jdc8ehuV4zME...
       | 
       | > _" We now assume any active code can read any data in the same
       | address space. The plan going forward must be to keep sensitive
       | cross-origin data out of address spaces that run untrustworthy
       | code, rather than relying on in-process checks."_
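        | 
        | Once a page ships those headers, it can feature-detect whether
        | the opt-in actually took effect before touching the gated APIs.
        | A minimal sketch using the crossOriginIsolated global (the
        | standard way to check, as far as I know):
        | 
        |     if (self.crossOriginIsolated) {
        |       // Both COOP and COEP were honored: SharedArrayBuffer and
        |       // high-resolution timers are available to this document.
        |       const sab = new SharedArrayBuffer(1024);
        |       // ... hand sab off to workers ...
        |     } else {
        |       // Fall back to copying data around via postMessage.
        |     }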
        
         | ekr wrote:
         | I may sound like a grumpy old man, but I really really dislike
         | this non-stop influx of complexity into every level of the
         | stack. This is a clear case of an abstraction leaking.
         | Microprocessor vulnerabilities should never lead to changes in
         | a high level application protocol like that.
         | 
         | The work needed to implement an HTTP server is growing and
         | growing and growing. There was some speculation a few days ago
         | on why this is happening, and why big companies are benefiting
         | from this. I don't think there's any conscious conspiracy
          | anywhere, just a lot of people trying to make a name for
          | themselves. (There was a discussion on this a few days ago
         | here: https://news.ycombinator.com/item?id=23833362)
         | 
         | But I just hate this growing complexity everywhere. HTTP can
         | and should be much simpler than it's becoming.
        
         | domenicd wrote:
         | The following articles also go into more detail:
         | 
          | - https://web.dev/coop-coep/
          | - https://web.dev/why-coop-coep/
         | 
         | although I think the Mozilla article in the OP is a really
         | nice, succinct high-level overview.
        
       | gridlockd wrote:
       | _" The system maintains backwards compatibility. We cannot ask
       | billions of websites to rewrite their code."_
       | 
       | I don't understand this requirement. Very few sites use
        | SharedArrayBuffer, and those few that do probably had to rewrite
        | code to deal with it being disabled.
       | 
       | I also don't understand how cross-origin has anything to do with
       | it either. Either your sandbox works, in that case cross-origin
       | isolation shouldn't matter, or it doesn't work, in which case
       | cross-origin isolation is not a real protection.
       | 
       | Am I missing something here?
       | 
        | Firefox is only maybe 5% of users and it has other performance
        | problems. If SharedArrayBuffer doesn't "just work" there, then I'm
        | inclined to have those users take that performance hit or use a
        | different browser.
        
         | jefftk wrote:
         | To safely use SharedArrayBuffer you have to give something else
         | up, like the ability to fetch arbitrary resources with <img>.
         | Most sites that want SharedArrayBuffer would be fine with a
         | tradeoff like this, and so this post describes a way they can
         | opt in to the necessary restrictions.
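          | 
          | To make the tradeoff concrete (a sketch; the CDN domain is
          | made up): on a page served with Cross-Origin-Embedder-Policy:
          | require-corp, a plain cross-origin subresource load is blocked
          | unless the other server opts in.
          | 
          |     // Blocked under COEP: require-corp unless cdn.example
          |     // responds with Cross-Origin-Resource-Policy: cross-origin
          |     // (or the load goes through CORS and the server allows it).
          |     const img = new Image();
          |     img.src = 'https://cdn.example/banner.png';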
        
         | dfabulich wrote:
         | Under Spectre, if the attacker can run SharedArrayBuffer code
         | in your process, even "sandboxed," it can read memory from
         | anywhere else in that process.
         | 
         | https://docs.google.com/document/d/1zDlfvfTJ_9e8Jdc8ehuV4zME...
         | 
         | So I guess you're right that if the sandbox "works" you don't
         | care about cross-origin isolation, but it turns out that
         | sandboxes don't work if you run multiple sandboxes in the same
         | process.
         | 
         | The mitigation browsers have chosen is to isolate each origin
         | in its own process, preventing other origins from communicating
         | with it. To regain access to SharedArrayBuffer, you have to opt
         | in to this extreme form of cross-origin isolation.
         | 
         | It would be nice to just make the whole web default to cross-
         | origin isolation, but tons of websites rely on cross-origin
         | communication features, and browsers can't just force them all
         | to be compatible with isolation, so isolation has to be opt-in.
        
           | gridlockd wrote:
           | How exactly does site-isolation prevent cross-origin
           | communication that doesn't rely on SharedArrayBuffer, i.e.
            | the vast majority of use cases? It's just message passing.
           | 
           | I can see that site-isolation is arguably too expensive on
           | mobile and why you might want an opt-in mechanism there,
           | somewhere down the line.
           | 
           | However, I don't think there are good arguments for not just
           | enabling it on Desktop right now, without making developers
           | jump through hoops. Until _Chrome_ enables SharedArrayBuffers
           | on mobile, I have no reason to care anyway.
        
             | dfabulich wrote:
             | Site isolation disables all of it. With "Cross-Origin-
             | Embedder-Policy: require-corp," you can't even embed a
              | cross-site image unless the image's server allows it with a
              | "Cross-Origin-Resource-Policy: cross-origin" header.
             | 
             | Enabling _that_ on desktop today would break every website
             | that embeds cross-origin images, e.g. everybody using a
             | separate CDN for images would be broken.
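              | 
              | In other words, before its customers can opt in, the CDN
              | has to start sending something like this on its image
              | responses (a sketch; host names made up):
              | 
              |     Cross-Origin-Resource-Policy: cross-origin
              | 
              | Without it, a page served with COEP: require-corp can no
              | longer embed the image.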
        
               | gridlockd wrote:
               | You're describing how this proposed cross-origin
                | isolation scheme works. I understand that; I don't
                | understand why it is _necessary_ to make it work that
               | way.
               | 
                | Chrome has been doing site isolation with multiple
                | processes for a while; it "just works" and it doesn't
                | break sites.
        
               | roblabla wrote:
               | Site isolation and origin isolation are separate
               | concerns. In the "origin isolation" model, you need to
               | ensure different origins are in different processes, and
               | that their data don't leak from one to the other. In site
               | isolation, you only care about tabs not being able to
                | communicate with each other.
               | 
               | Also, you seem to be missing something: Chrome is going
               | to implement the same set of headers, with the same set
               | of restrictions when they are applied. This isn't an
                | arbitrary Firefox decision; every web browser is expected
               | to follow suit. See the various mentions of "chrome" in
               | https://web.dev/coop-coep/
        
         | topspin wrote:
         | > I don't understand this requirement.
         | 
         | Why? It's a straightforward matter. You can have the
         | conventional behavior with the necessary limitations to which
         | everyone has adapted, or you can opt in to a modified
         | environment with new rules that would break some sites but
          | provide additional capabilities.
         | 
         | > Am I missing something here?
         | 
          | Yes; the clearly explained rationale is somehow being missed.
          | The sandbox is an OS process, as necessitated by Spectre.
          | Without the new opt-in capability, content from multiple origins
          | -- some of them hostile -- is mixed into a process, and so the
          | shared memory capabilities must be disabled. This new opt-in
          | capability creates the necessary mapping; when enabled, content
          | from arbitrary origins will not be mixed into a process, and so
          | the shared memory and high-resolution timer features can be
          | permitted.
        
           | gridlockd wrote:
           | > Without the new opt-in capability content from multiple
           | origins -- some of them hostile -- are mixed into a process,
           | and so the shared memory capabilities must be disabled.
           | 
           | That's an arbitrary requirement on the part of Firefox
           | developers, and it's a security issue in its own right. Any
           | of the numerous exploits that regularly show up in Firefox
           | could take advantage of this, not just Spectre.
           | 
              | Chrome has site-isolation enabled by default, at least on
              | desktop; I don't see why Firefox shouldn't follow suit.
        
             | bzbarsky wrote:
             | This is a concern somewhat orthogonal to site isolation as
             | implemented in Chrome.
             | 
             | Say you have a web page at https://a.com that does <img
             | src="https://b.com/foo.png">. That's allowed in browsers
             | (including Chrome with site isolation enabled), because
             | it's _very_ common on the web and has been for a long time,
             | and disallowing it would break very many sites. But in that
             | situation the browser attempts to prevent a.com from
             | reading the actual pixel data of the image (which comes
             | from b.com). That protection would be violated if the site
             | could just use a Spectre attack to read the pixel data.
             | 
             | So there are three options if you want to keep the security
             | guarantee that you can't read image pixel data cross-site.
             | 
             | 1) You could have the pixel data for the image living in a
             | separate process but getting properly composited into the
              | a.com webpage. This is not something any browser does right
              | now; it would involve a fair amount of engineering work and
              | comes with some memory tradeoffs that are not great. It
             | would certainly be a bit of a research project to see how
             | and whether this could be done reasonably.
             | 
             | 2) You can attempt to prevent Spectre attacks, e.g. by
             | disallowing things like SharedArrayBuffer. This is the
             | current state in Firefox.
             | 
             | 3) You can attempt to ensure that a site's process has
             | access to _either_ SharedArrayBuffer _or_ cross-site image
             | data but never both. This is the solution described in the
             | article. Since current websites widely rely on cross-site
             | images but not much on SharedArrayBuffer, the default is
             | "cross-site images but no SharedArrayBuffer", but sites can
             | opt into the "SharedArrayBuffer but no cross-site images"
             | behavior. There is also an opt-in for the image itself to
             | say "actually, I'm OK with being loaded cross-site even
             | when SharedArrayBuffer is allowed"; in that case a site
             | that opts into the "no cross-site images" behavior will
             | still be able to load that specific image cross-site.
             | 
             | I guess you have a fourth option: Just give up on the
             | security guarantee of "no cross-site pixel data reading".
             | That's what Chrome has been doing on desktop for a while
             | now, by shipping SharedArrayBuffer enabled unconditionally.
             | They are now trying to move away from that to option 3 at
             | the same time as Firefox is moving from option 2 to option
             | 3.
             | 
             | Similar concerns apply to other resources that can
             | currently be loaded cross-site but don't allow cross-site
             | access to the raw bytes of the resource in that situation:
             | video, audio, scripts, stylesheets.
             | 
             | I hope that explains what you are missing in your original
             | comment in terms of the threat model being addressed here,
             | but please do let me know if something is still not making
             | sense!
        
         | ori_b wrote:
         | > I also don't understand how cross-origin has anything to do
         | with it either. Either your sandbox works, in that case cross-
         | origin isolation shouldn't matter, or it doesn't work, in which
         | case cross-origin isolation is not a real protection.
         | 
         | It doesn't work in general. It kind of works if you're putting
         | each sandbox into its own process. Assuming there aren't any
         | undiscovered microarchitectural attacks at the moment.
        
       | nazgulsenpai wrote:
       | I played with the K on the word HACKS at the top of the screen
       | for far too long.
        
       ___________________________________________________________________
       (page generated 2020-07-21 23:00 UTC)