[HN Gopher] How SGX Fails in Practice
       ___________________________________________________________________
        
       How SGX Fails in Practice
        
       Author : gbrown_
       Score  : 86 points
       Date   : 2020-06-09 17:31 UTC (5 hours ago)
        
 (HTM) web link (sgaxe.com)
 (TXT) w3m dump (sgaxe.com)
        
       | makomk wrote:
        | This is one of two lovely side-channel vulnerabilities in Intel
        | processors disclosed today. Apparently they also leak RDRAND and
        | RDSEED data across cores, and it took Intel 18 months to fix:
        | https://www.vusec.net/projects/crosstalk/
        
         | Dunedan wrote:
         | > The mitigation locks the entire memory bus before updating
         | the staging buffer and only unlocks it after clearing its
         | content.
         | 
         | Entire memory bus as in entire access to any kind of shared
         | memory (shared CPU caches, RAM, ...)? If yes, wow, that's a
         | pretty desperate mitigation. Doesn't that starve large parts of
         | the whole CPU whenever an affected, but mitigated instruction
         | is executed, as all other cores can't interact with memory
         | anymore for the duration of this execution? Doesn't that also
         | open the ability to "DoS" CPUs by constantly executing such
         | instructions?
        
           | the8472 wrote:
           | You can already do that with split locks, which also lock the
           | memory bus.
        
           | 0x0 wrote:
           | Hmm, now I wonder if this could turn into a F00F-like bug?
           | What would LOCK REP RDRAND or just REP CPUID do...
        
           | avianes wrote:
           | > Entire memory bus as in entire access to any kind of shared
           | memory (shared CPU caches, RAM, ...)?
           | 
            | I guess this is the only solution for Intel (apart from a
            | hardware fix), as one core doesn't have control over the loads
           | of another core.
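            | 
            | A toy model of the quoted sequence (purely illustrative, not
            | Intel's actual microcode) hints at why every other core's
            | memory traffic stalls while one core executes a mitigated
            | instruction, and why a tight loop of such instructions could
            | starve the rest of the package:
            | 
            |     import threading
            | 
            |     bus_lock = threading.Lock()  # global bus lock stand-in
            |     staging = [0] * 8            # shared staging buffer
            | 
            |     def rdrand_with_mitigation():
            |         # the quoted sequence: lock, fill the staging
            |         # buffer, copy the result out, clear it, unlock
            |         with bus_lock:
            |             staging[:] = [42] * 8  # fresh random data
            |             result = staging[0]    # this core's copy
            |             staging[:] = [0] * 8   # cleared before release
            |         return result
            | 
            |     def other_core_memory_access():
            |         # every other core's shared-memory access waits here
            |         with bus_lock:
            |             pass  # placeholder for a normal load/store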
        
       | akersten wrote:
       | Remember, SGX is not a security feature. It is a DRM feature
       | meant to take control away from the owner of the machine and give
       | it to content owners. Happy to see every successful attack
       | against this module, and hoping that Intel decides to just ditch
       | it entirely.
        
         | lxgr wrote:
         | Oh boy, here we go again.
         | 
         | No, SGX is a technology allowing for the trusted execution of
         | code in an untrusted environment. DRM is one potential
         | application of that technology.
         | 
         | See also the comment section on literally _every other HN
         | submission with SGX in the title_, for example here:
         | https://news.ycombinator.com/item?id=22495251
        
           | akersten wrote:
           | Yes, indeed here we go again, because I fundamentally
           | disagree that "allowing the trusted execution of code" is
           | _any_ different than DRM. A rose by any other name.
           | 
           | I believe a user should have full control over the hardware
           | they own. SGX and other enclaves are a direct affront to
           | that, and I won't give up the fight. If you need a "trusted
           | execution" of code, run it on a server you control.
        
             | closeparen wrote:
             | There is no world where you just get to tamper with stuff.
             | In the absence of trusted computing, the anti-tamper
             | strategy is server-side policy logic and central databases.
             | 
             | The ability to control and trust devices that have left the
             | issuer's physical possession enables decentralized and
             | privacy-preserving architectures.
             | 
             | If this tech were perfect (it isn't) you could even do a
             | peer to peer cryptocurrency without proof of work. Just
             | make your peers prove they are running software which will
             | refuse to double-spend.
        
               | [deleted]
        
               | CamperBob2 wrote:
                | _The ability to control and trust devices that have left
                | the issuer's physical possession enables decentralized
                | and privacy-preserving architectures._
               | 
               | One question: who, exactly, is this "issuer" you're
               | referring to? Me? The people who sold me the computer?
               | The people who manufactured the computer? The people who
               | wrote the CPU microcode that I don't have access to? The
               | government agencies they may or may not answer to?
               | 
               |  _Just make your peers prove they are running software
               | which will refuse to double-spend._
               | 
               | OK, make that two questions: how is this not equivalent
               | to solving the Halting Problem?
               | 
               |  _There is no world where you just get to tamper with
               | stuff._
               | 
               | There is, if it's my stuff.
        
               | closeparen wrote:
               | If it's your stuff, then you control whether remote
               | attestations are required and what checksums / signatures
               | are considered valid.
        
             | _jal wrote:
             | > If you need a "trusted execution" of code, run it on a
             | server you control.
             | 
             | What CPU do you use for this purpose?
        
               | akersten wrote:
               | I understand the implication here to be "how can you
               | trust your own hardware anyway," and the answer is "of
               | course in theory you can't."
               | 
               | Even if you had SGX and wrote your algorithm or whatever
               | to run in it, if your hardware's compromised, the horse
               | has left the stable. Your proprietary algorithm and input
               | that you developed on the compromised hardware are
               | already available to the attacker. So having SGX locally
               | is not going to help.
               | 
               | At the end of the day, SGX is not going to defend a
               | proprietary computation if the input and algorithm are
               | already compromised.
        
               | _jal wrote:
               | I'm not trying to make that argument.
               | 
               | I'm literally asking you, what do you use for that
               | purpose?
        
               | anonymousDan wrote:
              | As I said in a sibling comment, think of it more as a way
               | of reducing your TCB in cloud environments. Instead of
               | trusting the whole cloud software stack, you now just
               | trust your app and the hw, both of which you need to
               | trust anyway.
        
             | cjbprime wrote:
             | Do "content owners" actually.. use SGX? I've only ever seen
             | it used for secure boot/trusted environment stuff under the
             | control of the machine's owner, not content DRM.
        
               | akersten wrote:
               | Maybe not widely yet. But here's[0] an Intel employee
               | saying it's supported, which confirms that it's one of
               | their visions for the module.
               | 
               | [0]: https://software.intel.com/en-us/forums/intel-
               | software-guard...
        
             | lxgr wrote:
             | Trusted computing isn't so much about control as it is
             | about trust/attestation. You can run privileged malware on
             | your computer without trusted computing. On the other hand,
             | you can also sandbox trusted computing.
             | 
             | If you don't want to run somebody else's code on your
             | computer - don't! (Chances are you're not a cloud provider
             | and nobody is asking you to, anyway.) If you oppose DRM -
             | don't consume content protected by it, whether the DRM is
             | implemented in SGX, TrustZone or just in obfuscated
             | software.
             | 
             | Don't get me wrong, I am not the biggest fan of DRM either
             | (I personally see it as an evil, maybe a necessary one
             | though). But shunning the entire field of trusted computing
             | is a bit like opposing theoretical physics because it
             | ultimately has brought us nuclear weapons.
        
               | akersten wrote:
               | I'm not entirely against the concept of trusted
               | computing, but it's hard to believe that "trusted
               | computing" as we know it today is so innocuous when it's
               | included on consumer CPUs. Why not keep it as a premium
               | feature for server-class CPUs instead?
        
               | uluyol wrote:
               | I don't think SGX is available on any consumer CPUs.
        
               | akersten wrote:
               | It has been part of the architecture since Skylake:
               | 
               | > Most Desktop, Mobile (6th generation Core and up) and
               | low-end Server processors (Xeon E3 v5 and up) released
               | since Fall 2015 support SGX.[0]
               | 
               | [0]: https://fortanix.com/intel-sgx/
        
               | matheusmoreira wrote:
               | My laptop has an Intel i7-8750H CPU.
               | 
               | https://ark.intel.com/content/www/us/en/ark/products/1349
               | 06/...
               | 
               | > Intel(r) Software Guard Extensions (Intel(r) SGX)
               | 
               | > Yes with Intel(r) ME
               | 
               | I have it disabled in the firmware settings.
        
               | anonymousDan wrote:
               | I believe it has actually been deprecated for consumer
               | CPUs.
        
               | lxgr wrote:
               | Remote attestation can be incredibly useful and mutually
               | beneficial on client devices as well if you're willing to
               | consider applications beyond DRM.
               | 
               | For example, every smartcard is essentially a device in
               | your physical possession, running somebody else's code.
               | I'd argue that for example in the case of EMV payment
               | cards, the benefit is mutual (less fraud).
               | 
               | Android supports a variant of this for generic secure
               | transaction confirmation: https://android-
               | developers.googleblog.com/2018/10/android-pr...
        
               | akersten wrote:
               | I'm comfortable with it on SmartCards or EMV chips, since
               | those are tailored devices for a specific purpose and the
               | trust model is understood by the participants. I'm not
               | particularly upset that I can't root my credit card.
               | 
               | Mobile phones, it's disappointing, but total consumer
               | control was kind of a lost cause from day 1.
               | 
               | What truly bothers me is the introduction of trusted
               | computing into general-purpose consumer CPUs, where
               | previously we had complete freedom. There's an old but
               | still very good CCC talk that encapsulates my feelings
               | about this [0].
               | 
               | [0]: https://www.youtube.com/watch?v=HUEvRyemKSg
        
               | lxgr wrote:
               | Just to better understand your line of reasoning: Do you
               | generally equate trusted computing with elevated
               | privileges for the attestable code being executed?
               | 
               | I think this is where a lot of trust (on a meta-level) in
               | the technology has been lost, and as far as I understand
               | it, modern implementations are, on the contrary, tightly
               | sandboxed.
               | 
               | In that sense, modern trusted computing can actually be
               | more freedom preserving: If you are generally willing to
               | tolerate DRM on your system, as long as it's not able to
               | access data on your computer that are none of its
               | business, you're more likely to see that happen on a
               | modern, hardware-assisted DRM platform than on the
               | rootkit based software shenanigans of the early 2000s.
        
               | akersten wrote:
                | Elevated privileges for a hardware blob are an even worse
                | situation, one that unfortunately exists (Intel ME/AMD PSP).
               | 
               | For me the issue is the blob itself. "Trusted" for the
               | manufacturer is "trust us" for the end-user. My
               | expectation for a general-purpose CPU is that I can
               | inspect the code that's running on it. Building a TPM
               | into consumer CPUs defies that freedom.
               | 
               | So, it's about hackability (in the Hacker News sense). If
               | you can't see what's happening inside of your computer,
               | is it really yours?
        
               | littlestymaar wrote:
               | > if you don't want to run somebody else's code on your
               | computer - don't! Chances are you're not a cloud provider
               | and nobody is asking you to, anyway
               | 
               | That's a pretty dishonest argument to say the least.
        
             | anonymousDan wrote:
             | Nah. There is massive interest in SGX as a defense in depth
             | technique for secure cloud computing (see Azure
             | confidential computing for example). In fact I believe
             | Intel have recently announced they are deprecating SGX on
             | consumer devices in favour of server hardware. I'm as anti-
             | DRM as the next man but this is just scaremongering.
        
               | akersten wrote:
               | If they're getting rid of SGX on consumer devices, I
               | consider that an absolute win, and am happy to tone down
               | my rhetoric. However, I couldn't find anything to that
               | effect with a quick search. Can you share where you saw
               | that announcement?
        
               | lima wrote:
               | Consumer devices still have other DRM technologies like
               | PAVP/HDCP. SGX is simply unnecessary for consumer-hostile
               | DRM.
               | 
               | I don't think SGX + PAVP ever made it past the prototype
               | stage, and the SDK is proprietary. My company actually
               | wanted to use it for a trusted computing application
               | (think "secure transaction approval") and Intel told us
               | it was deprecated and unsupported.
        
           | matheusmoreira wrote:
           | > trusted execution of code in an untrusted environment
           | 
           | This implies taking power and control away from the user.
           | Trusted execution means execution the user can't tamper with
           | or analyze as well as memory the user has no access to. The
           | user's own machine has been sealed off.
           | 
           | It's not just DRM. From a computing freedom point of view,
           | all trusted computing is bad. It prevents legitimate
            | activities like reverse engineering proprietary software in
           | order to create a free software replacement.
           | 
           | Trusted by whom? Invariably it's the company making the
           | software. Users are never empowered by this. Corporations are
           | already in a position of power over most people, we don't
           | need even more technology that gives them even more control.
           | Imagine if this becomes common enough to show up in browsers
           | and the web. Suddenly inspect element and view source no
           | longer work, scripts and ads cannot be blocked and extensions
           | become a thing of the past. All in the name of corporate
           | control.
        
             | wmf wrote:
             | You're presenting one situation: someone else's code
             | running in the user's environment. But what about running
             | the user's code in someone else's untrusted environment?
        
               | matheusmoreira wrote:
               | > someone else's untrusted environment
               | 
               | That "someone else" should have the same powers over
               | their machines that users do. They should be allowed to
               | see everything and tamper with anything. After all, the
               | computer is theirs. Other people's computers should
               | always be untrusted.
        
               | lxgr wrote:
               | You're of course free to hold that view.
               | 
               | But if you were a cloud service provider, I'd give my
               | data and money to the competition (assuming I believe
               | their claims regarding trusted computing).
        
               | Spivak wrote:
                | Okay sure, but that still leaves the problem of me sending
               | you (someone untrusted) my code and asking for proof that
               | it's running unmodified. Such a thing would let me trust
               | your computer as my own if you allow it.
        
               | saurik wrote:
               | We have mathematical techniques for this--such as ZKSTARK
                | --that allow for the construction of proofs without
               | also hiding the execution from the user. Showing you "I
               | did this correctly" should not imply being unable to
               | watch it happen and decide to put a stop to it if you
               | don't like what it was doing.
        
               | lxgr wrote:
               | I'm not familiar with ZKSTARK, but does it allow running
               | computations in a trusted way without revealing some of
               | the input parameters?
               | 
               | The latter constraint is something that I think is
               | usually not addressed by zero knowledge proofs but rather
               | only trusted computing (or the software equivalent,
               | whitebox cryptography, assuming it exists).
        
               | Spivak wrote:
               | But it does? That's a fairly narrow definition of
               | "watching it happen" when both the real computations and
               | the data are hidden from you. Have you gained anything
               | except fewer practical uses?
               | 
               | I mean at the end of the day wouldn't it be nice to be
               | able to use a cloud service and be sure that server is
               | running exactly the published source and your secrets are
               | hidden? That the trust-domain of your phone can extend
               | into a datacenter?
        
               | johncolanduoni wrote:
               | Do you feel the same way about cloud providers? Should
               | they always be able to see everything and tamper with
               | anything for the workloads their users run? If that's the
               | case, would you be in favor of making contracts to that
               | effect (as already exist for all cloud providers I'm
               | aware of) unenforceable?
        
               | matheusmoreira wrote:
               | I agree that trusted computing can be a good thing for
               | cloud computing providers. They own the computer hardware
               | but other people are paying them in order to use those
               | resources. That's the _entire point_ of cloud computing.
               | The user and owner of the hardware are different people
               | and so offering trusted computing as a feature makes
               | sense.
               | 
               | Consumers are the opposite: they pay companies in order
               | to use their software on their computers. The owner and
               | user of the machine are the same person. Users should get
               | to maintain complete control over the hardware, including
               | the ability to reverse engineer the software and even
               | make "unauthorized" copies of anything they want.
               | 
               | I don't trust that this technology will be restricted to
               | cloud computing though. The copyright industry is worth
               | billions of dollars. They'll make use of it on consumer
               | machines if it's available. This is something that should
               | be prevented at all costs, since the worst case scenario
               | is a world where all commercial software runs in a secure
               | execution environment where the user has no control.
               | 
               | Threat models where the user is an adversary are
               | obviously user hostile. So of course whenever these
               | technologies are compromised it is a victory for software
               | freedom.
        
               | lxgr wrote:
               | > Users should get to maintain complete control over the
               | hardware, including the ability to reverse engineer the
               | software
               | 
               | Secure enclaves and auditable software are not mutually
               | exclusive. It's totally possible to run open source
               | software in an enclave!
               | 
               | > the worst case scenario is a world where all commercial
               | software runs in a secure execution environment where the
               | user has no control.
               | 
               | This would be a very bad scenario indeed, but I consider
               | it completely unrealistic. The trusted computing base of
               | an entire graphical computing device including its
               | operating system and all installed applications is
               | absolutely impossible to audit. It didn't work for
               | Microsoft (Palladium), and it's also not the direction
               | that Apple is going, arguably one of the most restrictive
               | client platforms today.
               | 
                | Practical trusted computing means tiny (and ideally heavily
                | sandboxed!) trusted enclaves running the least amount of
                | critical, audited code.
        
               | pmontra wrote:
               | I'm more concerned about this scenario: running my
              | program A on my machine side by side with somebody else's
               | program B that I don't fully trust. Your scenario looks
               | like the authors of program B not trusting me. Sorry for
               | them but it's my machine.
        
               | akersten wrote:
               | In my opinion, the idea there is the same as it's always
               | been: don't trust the client.
        
               | lxgr wrote:
               | SGX is mostly about servers, not clients.
        
       | pstrateman wrote:
       | So this breaks SGX completely.
       | 
        | Signal's new PIN thing relies (almost) entirely on SGX being
       | secure to make their encrypted profile and contact backups
       | secure.
       | 
       | This attack reduces the security of the Signal encrypted backups
       | to just the PIN.
       | 
       | Edit: Indeed the authors point this out explicitly in the SGAxe
       | paper.
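        | 
        | A rough back-of-the-envelope sketch of what "reduced to just
        | the PIN" means once the enclave's guess limiting can be
        | bypassed; the KDF cost and parallelism below are made-up
        | assumptions, not Signal's actual parameters:
        | 
        |     pin_space   = 10 ** 4  # 4-digit numeric PIN
        |     kdf_seconds = 0.1      # assumed cost per stretched guess
        |     machines    = 100      # attacker parallelism
        | 
        |     worst_case_s = pin_space * kdf_seconds / machines
        |     print(f"worst case: {worst_case_s:.0f} seconds")  # ~10s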
        
         | anonymousDan wrote:
         | Nah. If it can be fixed with a microcode update it's not the
         | end of the world.
        
           | rcxdude wrote:
            | If you're relying on it for remote attestation (which is what
            | Signal is doing), it's not of any use for that, because you have
           | no real verification that the microcode has been updated.
        
             | ENOTTY wrote:
             | In SGX, you do have verification that the microcode has
              | been updated. This is known as the CPUSVN value; it is part
              | of the SGX report that gets issued and is mixed into the
              | keys used to sign reports.
        
               | pstrateman wrote:
               | Except this attack extracted the attestation keys, so I
               | can attest to any version of the microcode that I want.
               | (Even ones that don't exist).
        
               | ENOTTY wrote:
               | This attack did not extract the root attestation secret
               | or sealing secret, both stored in CPU fuses.
               | 
               | Instead, this attack extracted the sealing and
               | attestation keys stored by the current version of Intel's
               | Quoting Enclave under the current microcode revision (and
               | presumably all previous microcode revisions).
               | 
               | Assuming Intel fixes the vulnerability in the next
                | revision of microcode (let's call it Rev H), the Quoting
               | Enclave will need to generate and/or store new
               | attestation keys. Because the root secrets were not
               | leaked AND the microcode revision has revved forward,
               | these cannot be derived under previous microcode
               | revisions. Thus, they are assumed not to be available to
                | the attacker under the SGAxe/CacheOut vulnerability.
               | 
               | Intel generates or provides access to new public keys
               | that correspond to the attestation keys. You use these to
               | verify the Quoting Enclave's attestations.
               | 
               | Intel additionally asserts that these new public keys
               | will only verify attestations created by the Quoting
               | Enclave running under Rev H of the microcode.
               | 
               | You must determine whether to trust this assertion.
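                | 
                | A minimal sketch of the property being relied on
                | here, with a toy HMAC standing in for SGX's real
                | key-derivation (EGETKEY) inputs and algorithm:
                | 
                |     import hmac, hashlib, os
                | 
                |     # fused root secret: never leaves the CPU
                |     fuse_secret = os.urandom(32)
                | 
                |     def quoting_key(cpusvn: int) -> bytes:
                |         # each SVN yields an independent PRF output
                |         # of the fuse secret; hardware also refuses
                |         # to derive keys for SVNs newer than the
                |         # one it is currently running
                |         msg = b"attestation-key|svn=%d" % cpusvn
                |         return hmac.new(fuse_secret, msg,
                |                         hashlib.sha256).digest()
                | 
                |     leaked_at_old_svn = quoting_key(2)  # SGAxe-style
                |     key_after_fix     = quoting_key(3)  # needs fuses
                |     assert leaked_at_old_svn != key_after_fix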
        
             | anonymousDan wrote:
             | Nope. Microcode updates bump the CPU security version
             | number I believe and incorporate that into the attestation,
             | i.e. you can only accept attestations from machines with a
             | certain SVN.
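              | 
              | A compact sketch of that verifier-side policy; the
              | quote structure, field names and threshold here are
              | hypothetical, and real deployments verify the
              | signature chain via Intel's attestation service:
              | 
              |     MIN_CPUSVN = 7  # first fixed revision (made up)
              | 
              |     def signature_valid(quote: dict) -> bool:
              |         # stub: in practice, check the quote's
              |         # signature chain via IAS/DCAP
              |         return quote.get("sig_ok", False)
              | 
              |     def accept_quote(quote: dict) -> bool:
              |         if not signature_valid(quote):
              |             return False
              |         # reject attestations from pre-fix platforms
              |         return quote["cpusvn"] >= MIN_CPUSVN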
        
           | lxgr wrote:
           | Definitely not the end of the world, but the utility of a
           | trusted computing implementation depends a lot on the track
           | record of actually being one, and SGX's isn't exactly great.
        
             | uluyol wrote:
             | To be honest, you could say the same about SSL/TLS,
             | browsers, and (probably) virtualization.
             | 
             | Secure enclaves are new. It takes time to develop new
             | technologies and work out all the issues. I probably
             | wouldn't trust my data to SGX today, but I'm not opposed to
             | it as an idea. In 5-10 years it may be in a reasonable
             | state.
        
               | yjftsjthsd-h wrote:
               | If we had a heartbleed every month, then I'd be
               | rethinking whether I want to trust anything to TLS, yes.
               | Browsers _are_ awful, and tolerable only because they
               | provide so much value (and even then, I overwhelmingly
               | only run a browser with extra mitigations in place). And
                | no, virtualization probably _shouldn't_ be trusted
               | against hostile code.
        
         | usmannk wrote:
          | > Signal's new PIN thing relies (almost) entirely on SGX being
         | secure to make their encrypted profile and contact backups
         | secure.
         | 
         | This just seems irresponsible. How could they excuse this? It
         | seems like anyone who has even peripherally been working with
         | TEEs recently is _well_ aware that SGX is broken beyond repair.
          | It's not just a matter of patching bugs; this whole model seems
         | bunk.
        
           | lima wrote:
           | Still useful for plausible deniability if the government
           | comes knocking.
        
           | RL_Quine wrote:
           | Seems unlikely they were mentioned in the paper in this depth
           | without being made even casually aware that this paper would
           | be published.
           | 
           | Which means they just went ahead with it anyway.
        
       | usmannk wrote:
       | At this point SGX is just so broken that it seems like its only
       | purpose is to provide PhD students something to write a paper on
       | :)
       | 
       | I'm hesitantly excited for AMD's SEV enclave to roll out. Anyone
       | know if it's shaping up to be any better?
        
         | lima wrote:
         | SEV is exciting because it has a much better cost-to-benefits
         | ratio.
         | 
         | It provides useful defense in depth without requiring any
         | changes to the application stack - you can run regular VMs with
         | syscalls, plenty of memory and high-bandwidth IO.
         | 
         | SGX, on the other hand, is extremely limited and notoriously
         | hard to target. It's even harder these days - you need
         | specialized compilers and coding techniques to mitigate a
         | number of attacks that can't be fixed by a microcode update.
         | 
         | I reckon it's almost impossible to do serious SGX work these
         | days without being under NDA with Intel such that you can work
         | on mitigations during embargoes for the never-ending stream of
         | vulnerabilities.
        
         | ENOTTY wrote:
         | SEV has been subjected to its own share of attacks (and
         | design/implementation fails), but note that it has a different
         | threat model.
         | 
         | * https://arxiv.org/pdf/1712.05090.pdf 2017
         | 
         | * https://arxiv.org/pdf/1612.01119.pdf 2017
         | 
         | * https://arxiv.org/pdf/1805.09604.pdf 2018
         | 
         | *
         | https://ipads.se.sjtu.edu.cn/_media/publications/fidelius_hp...
         | 2018
         | 
         | * https://seclists.org/fulldisclosure/2019/Jun/46 2019
         | 
         | *
         | https://www3.cs.stonybrook.edu/~mikepo/papers/severest.asiac...
         | 2019
         | 
         | * https://www.usenix.org/system/files/sec19-li-mengyuan_0.pdf
         | 2019
         | 
         | * https://arxiv.org/pdf/1908.11680.pdf 2019
         | 
         | * https://arxiv.org/pdf/2004.11071.pdf 2020
         | 
         | Any enclave technology will be reliant on the underlying
         | security of the processor itself. Someone was going to have to
         | go first. Intel happened to take greater risks in the name of
         | performance, and all of their technologies (including their
         | first-to-market enclave technology) are suffering reputational
         | hits as a result.
         | 
         | I'll also just mention that CrossTalk is the more interesting
         | vulnerability affecting SGX that was disclosed today.
        
           | usmannk wrote:
           | Oh huh, I see. Thanks for the papers. "Someone was going to
           | have to go first. Intel happened to take greater risks in the
           | name of performance, and all of their technologies (including
           | their first-to-market enclave technology) are suffering
           | reputational hits as a result." Very true, and a point worth
           | making. Just curious, do you work closely with SGX/SEV? You
           | were quick with the links!
        
         | anonymousDan wrote:
         | SEV is fundamentally less secure than SGX because it only
         | provides memory encryption but no integrity protection.
         | Enclaves are a challenging problem given the much more
         | aggressive threat model, but SGX is the better security model
         | of the two IMO.
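          | 
          | A toy illustration of the difference, with a malleable
          | XOR keystream standing in for memory encryption (not
          | SEV's actual AES-XEX cipher): an attacker who can write
          | ciphertext changes plaintext undetected unless an
          | integrity check is layered on top.
          | 
          |     import hmac, hashlib, os
          | 
          |     key = os.urandom(32)
          | 
          |     def toy_encrypt(data: bytes) -> bytes:
          |         # malleable by design: flipping a ciphertext bit
          |         # flips the same plaintext bit (toy cipher only)
          |         ks = hashlib.sha256(key).digest()
          |         ks = ks * (len(data) // 32 + 1)
          |         return bytes(a ^ b for a, b in zip(data, ks))
          | 
          |     page = b"jmp authorized_path"
          |     ct = bytearray(toy_encrypt(page))
          |     ct[4] ^= 0x01                  # host flips one bit
          |     tampered = toy_encrypt(bytes(ct))
          |     assert tampered != page        # silently corrupted
          | 
          |     # with a per-page MAC (integrity), the same tampering
          |     # is detected instead of being decrypted and used
          |     tag = hmac.new(key, page, hashlib.sha256).digest()
          |     tag2 = hmac.new(key, tampered, hashlib.sha256).digest()
          |     assert not hmac.compare_digest(tag, tag2)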
        
           | lima wrote:
           | Yes - in a recent paper by Wilke et al[0], they nicely
           | demonstrate how the lack of integrity checking can be
           | exploited.
           | 
           | SEV is a very new technology and its current (and previous)
           | iterations have known weaknesses. The next generation of SEV
           | will likely have SEV-SNP[1], which will prevent the host from
           | writing guest memory/messing with the guest's page mappings.
           | 
           | Will probably take a few more iterations to stabilize. At
           | that point, it should provide decent security guarantees.
           | 
           | Current-gen SGX has much stronger guarantees (conceptually,
           | at least) with full memory integrity checking and less attack
           | surface, but it suffers from CPU vulnerabilities, most of
           | which AMD didn't have, and the integrity checks and
           | architecture come at a large performance and development
           | cost.
           | 
           | SEV has different tradeoffs that make it much more useful for
           | real-world use cases, while still providing strong security
           | guarantees.
           | 
           | [0]: https://arxiv.org/pdf/2004.11071.pdf
           | 
           | [1]: https://www.amd.com/system/files/TechDocs/SEV-SNP-
           | strengthen...
        
       | [deleted]
        
       | mmm_grayons wrote:
       | Boy, I'd love to see Moxie's comment on this after years of
       | shilling SGX. I was always disappointed by that and never
       | understood why someone as otherwise-bright as him went for it.
        
         | Spivak wrote:
         | Because there's literally zero alternative to it. If you're
         | running Signal and want to be sure that the server is running
         | the code published on GH then it's all you can do.
        
           | yjftsjthsd-h wrote:
           | > Because there's literally zero alternative to it.
           | 
            |  _Iff_ you take Signal's overall design as a given. If we
           | had a chat system that used GPG, the servers could be
           | compromised without issue. Of course, usability would suffer,
            | so maybe Signal is _worth_ the tradeoff, but it's not as if
           | Signal is the only way to do what Signal does.
        
         | anonymousDan wrote:
          | There have been several attacks like this already (and likely
          | more to come). The nice thing about the SGX design is
         | many of these issues can be fixed immediately with a microcode
         | update. Now if an attack is announced that can't be fixed with
         | a microcode update that is another story :). In general,
         | protecting against these kinds of attacks for any enclave is a
         | hard problem, but it is an active area of research and there
         | are already research proposals for more side channel resilient
         | enclave designs (e.g. see the Keystone project from Berkeley).
         | I expect some of these mitigations will be incorporated into
         | future iterations of SGX, but it will take time.
        
           | pstrateman wrote:
            | For Signal, this issue actually cannot be fixed.
           | 
           | One failure is a permanent leak of the profile and contact
            | information for everybody with a weak PIN (which is probably
           | virtually everybody).
        
           | mmm_grayons wrote:
           | You're right, and the idea of a secure enclave is not an
           | inherently bad one. What is bad is treating it as an
            | unassailable fortress, as in Signal's address book feature.
           | There's a difference between using it to make critical parts
           | of existing computations more secure and using it to do stuff
           | one wouldn't otherwise do.
        
             | dane-pgp wrote:
             | > You're right, and the idea of a secure enclave is not an
             | inherently bad one.
             | 
             | Maybe it is when you're trusting private keys and
             | proprietary technology controlled by a single entity that
             | may be subject to commercial and political pressures that
             | make their motives not aligned to yours.
        
       ___________________________________________________________________
       (page generated 2020-06-09 23:00 UTC)