[HN Gopher] Intel x86 Root of Trust: Loss of Trust
       ___________________________________________________________________
        
       Intel x86 Root of Trust: Loss of Trust
        
       Author : bcantrill
       Score  : 289 points
       Date   : 2020-03-05 16:47 UTC (6 hours ago)
        
 (HTM) web link (blog.ptsecurity.com)
 (TXT) w3m dump (blog.ptsecurity.com)
        
       | sounds wrote:
       | Intel claims they were already aware of this vulnerability in
       | CVE-2019-0090. ptsecurity believes there is more work to do here
       | though.
       | 
       | To me it sounds like Intel is not thrilled with ptsecurity's
       | work, and may not be awarding ptsecurity a bounty or recognition
       | for this. But that's just my two cents.
       | 
       | ------>8------ quoting from the article ------>8------
       | 
       | We should point out that when our specialists contacted Intel
       | PSIRT to report the vulnerability, Intel said the company was
       | already aware of it (CVE-2019-0090). Intel understands they
       | cannot fix the vulnerability in the ROM of existing hardware. So
       | they are trying to block all possible exploitation vectors. The
       | patch for CVE-2019-0090 addresses only one potential attack
       | vector, involving the Integrated Sensors Hub (ISH). We think
       | there might be many ways to exploit this vulnerability in ROM.
       | Some of them might require local access; others need physical
       | access.
       | 
       | As a sneak peek, here are a few words about the vulnerability
       | itself:
       | 
       | 1. The vulnerability is present in both hardware and the firmware
       | of the boot ROM. Most of the IOMMU mechanisms of MISA (Minute IA
       | System Agent) providing access to SRAM (static memory) of Intel
       | CSME for external DMA agents are disabled by default. We
       | discovered this mistake by simply reading the documentation, as
       | unimpressive as that may sound.
       | 
       | 2. Intel CSME firmware in the boot ROM first initializes the page
       | directory and starts page translation. IOMMU activates only
       | later. Therefore, there is a period when SRAM is susceptible to
       | external DMA writes (from DMA to CSME, not to the processor main
       | memory), and initialized page tables for Intel CSME are already
       | in the SRAM.
       | 
       | 3. MISA IOMMU parameters are reset when Intel CSME is reset.
       | After Intel CSME is reset, it again starts execution with the
       | boot ROM.
       | 
       | Therefore, any platform device capable of performing DMA to Intel
       | CSME static memory and resetting Intel CSME (or simply waiting
       | for Intel CSME to come out of sleep mode) can modify system
       | tables for Intel CSME pages, thereby seizing execution flow.
       | 
       | ------>8------ quoting from the article ------>8------
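        | 
        | To make the ordering concrete, here is a toy model of the race
        | (a hypothetical sketch in Python; the names and steps are mine,
        | not Intel's actual ROM code):
        | 
        |     # Page tables land in SRAM before the IOMMU turns on, so a
        |     # DMA write that fires inside that window wins the race.
        |     sram = {}
        |     iommu_enabled = False
        | 
        |     def rom_init_page_tables():
        |         sram["page_tables"] = "rom-built tables"
        | 
        |     def rom_enable_iommu():
        |         global iommu_enabled
        |         iommu_enabled = True
        | 
        |     def external_dma_write():
        |         # any platform device that can DMA into CSME SRAM
        |         if not iommu_enabled:
        |             sram["page_tables"] = "attacker-controlled tables"
        | 
        |     rom_init_page_tables()
        |     external_dma_write()   # inside the window; resetting CSME
        |                            # reopens the window at will
        |     rom_enable_iommu()
        |     external_dma_write()   # now blocked, but too late
        |     print(sram["page_tables"])  # -> attacker-controlled tables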
        
       | dmitrygr wrote:
       | It is telling that not a single comment here sees this as a bad
       | thing. Maybe Intel should take the hint. Users want to own their
       | hardware!
        
         | kick wrote:
         | Users will buy hardware they don't have control over anyway so
         | Intel doesn't have to worry about them!
         | 
         | I want nothing more than for Intel to stop acting as awful as
         | it does, but the market doesn't care about what users _want_
         | for goods that are almost mandatory.
        
         | wmf wrote:
         | People commenting on this thread are a very self-selected
         | group.
        
           | dmitrygr wrote:
            | A potentially fair point. Please provide one or two reasons
            | as to how this development is not absolutely great for
            | users.
        
             | kllrnohj wrote:
              | User-friendly secure authentication mechanisms (like
              | Windows Hello or fingerprint readers) were just broken.
              | The TPM keeps the user's own data secure, too, after all.
             | 
             | How is that not absolutely disastrous for users?
        
               | zymhan wrote:
               | This is a valid concern. If you disagree, at least
               | comment when you downvote.
        
             | lxgr wrote:
             | See my comment further down. Not all trusted computing is
             | user-hostile. Don't confuse the technology with its
             | (primary early) applications.
        
               | sounds wrote:
               | Trusted computing is such an ambiguous term.
               | 
               | Intel alone controls the certificate chain for the CPU I
               | own? I don't trust it, and it's user-hostile. Users won't
               | know that it's because of Intel that, for instance, their
               | legacy apps don't run any more. Or their Mac's NVMe drive
               | cannot be recovered (though, yes, this is Apple's Trusted
               | Computing chip, not Intel's).
               | 
               | I take it as the tech community's responsibility to
               | clearly point out who violated their trust on this one.
               | 
               | Trusted computing could be "not user-hostile," or perhaps
               | that's what "user-friendly" means? But to not be user-
               | hostile the certificate chain must be surrendered at
               | point of sale.
               | 
               | It's ironic that sysadmins for large corporations _are_
               | enabled by Intel's management tools, and _are_ aware of
               | the purpose of these trusted computing tools. But end
               | users _are_ _not_ enabled, _are_ _not_ aware, and are
               | thus treated hostilely by Intel and cannot do the things
               | they absolutely need to do with their own PC.
        
               | dsr_ wrote:
               | The original term, "trusted", is a military intelligence
               | term.
               | 
               | It does not mean the ordinary sense of trust, which
               | indicates complete confidence in the integrity and
               | accuracy of the referent.
               | 
               | It means that you have no choice but to rely on it.
        
               | sounds wrote:
               | In that sense, I don't trust Intel. I don't rely on their
               | hardware.
        
               | pdkl95 wrote:
               | "Trusted" may be military intelligence jargon, but term
               | "trusted computing" originated at Microsoft in the early
               | 2000s. After several particularly nasty internet worms
               | gave the company a (justified) reputation of terrible
               | network security, the they launched the "Trustworthy
               | Computing"[1] initiative to rebuild trust in their
               | platform with several security improvements.
               | 
               | "Trustworty Computing" eventually became the
               | "Palladium"[2] project with more ambitious goals
               | including DRM. Palladium evolved into NGSCB ("Next-
               | Generation Secure Computing Base") when Microsoft joined
               | with other companies to form the TCPA ("Trusted Computing
               | Platform Alliance") that later became the ("Trusted
               | Computing Group").
               | 
               | The term has always been used by Microsoft (and later the
               | TCPA/TCG) mean a trustworthy _platform_ , from the
               | _developer_ perspective[3].
               | 
               | [1] https://en.wikipedia.org/wiki/Trustworthy_computing
               | 
               | [2] https://en.wikipedia.org/wiki/Next-
               | Generation_Secure_Computi...
               | 
               | [3] https://www.cl.cam.ac.uk/~rja14/tcpa-faq.html
        
               | dsr_ wrote:
               | Microsoft redefines terms to suit them, what a shocker.
               | 
               | 5200.28-STD - DoD Trusted Computer System Evaluation
               | Criteria - August 15, 1983 - The Orange Book.
        
               | zozbot234 wrote:
               | By that standard, all hardware is "trusted" regardless of
               | what Intel does. You have to rely on it, and if it
               | misbehaves or stops working you're SOL.
        
       | holtalanm wrote:
        | I'm guessing not, but does this affect AMD CPUs/chipsets?
        
         | morpheuskafka wrote:
         | No. The ultimate potential of this attack is the complete
         | compromise of all Intel signing authorities over affected
         | models. Naturally, that signing key does not have any value on
         | AMD systems, nor can this vulnerability in itself be used on
         | them.
        
           | holtalanm wrote:
           | I figured as much. thanks!
        
         | monocasa wrote:
          | In addition to what's been said below, the early boot process
         | is totally different on AMD. They've got a little ARM core
         | called the PSP babysitting the main core complex(es).
        
           | wmf wrote:
           | That's not that different. The PSP is basically AMD's ME.
        
             | monocasa wrote:
             | Yes, but the way it boots and is hooked into the system is
             | completely and totally different than ME.
             | 
             | It fulfills the same abstract purpose, but that's where the
             | similarities end.
        
               | asveikau wrote:
                | So there is a different piece that can be inspected for
                | _its own_ vulnerabilities, which probably does not get
               | as much scrutiny because the hardware isn't as popular.
               | 
               | That's not a criticism per se, I am sure it's hard to
               | design these things securely and without bugs.
        
               | monocasa wrote:
                | Totally, although it's under a ton of scrutiny from the
                | PS4 folks, where the Platform Security Processor is
                | known as SAMU and holds most of the decryption keys for
                | the rest of the system, including all executables.
                | 
                | Right now the only attacks I know of treat it as a
                | decryption oracle, but it'd be nice to not have to pre-
                | decrypt programs on a real PS4 for cases like archiving
               | and emulation.
        
       | einpoklum wrote:
       | The "trust" here is a complete misnomer. "Trusted computing"
       | should be called "traitorous computing", where your computer has
       | a module which might be controlled by (fundamentally
       | antagonistic) remote third-parties. _They_ can trust your system
       | to _betray_ you in their favor.
       | 
       | Traitorous computing should not exist and a pox be upon the heads
       | of everyone who let such modules make it into our computers.
        
         | derefr wrote:
         | Who is "you" in this scenario?
         | 
         | I want to be able to secure my computer (an ATM, say) against
         | people with physical access to it. A root of trust (that the
         | original purchaser of the device controls) allows for that.
         | 
         | Or, to be slightly more dark, I, as an enterprise IT
         | administrator, don't want the employees fucking around with the
         | hardware I deploy, even when they have all day to poke and prod
            | around. _I'm_ the root user of those workstations, not them. I
         | need to be able to enforce enterprise security policies on
         | them, and I can't do that if they can "jailbreak" the company's
         | computers. (They want to run arbitrary code for personal
         | reasons? They can do it on their own arbitrary personal
         | devices, then, for which I have conveniently provided them a
         | partitioned-VLAN guest network to join.)
        
           | tenebrisalietum wrote:
           | With the way things are set up currently, Intel is your root
           | user, not you.
        
             | derefr wrote:
             | And in a monarchy, businesses exist at the behest of a writ
             | from the monarch saying they can. That doesn't mean that
             | the business doesn't "own" stuff. It just means that the
             | king can capriciously revoke their asset-ownership, in much
             | the same way that a tornado can capriciously revoke their
             | building. It's a rare natural disaster that you insure
             | against.
        
               | K0SM0S wrote:
               | I'm confused, are you claiming it's OK that my x86 Intel
               | platform (and whatever data passes through it) exists in
               | a "monarchy" of sorts (along with other phone-home
               | drones), and that I should find it acceptable?
               | 
               | ( _" I"_ being a business, or individual, any proxy for
               | society at large)
               | 
               | The fact that a tiny few human beings had the power of a
               | "tornado" over others' lives ended fairly abruptly in
               | some circumstances, with apparently good enough reason
               | that it stayed that way.
               | 
                | Note: you're referring to _absolutism_, which is a mode
               | of monarchy (also found in totalitarianism,
               | dictatorship). By contrast, most monarchies still 'alive'
               | today operate more in "symbolism" mode, in the
               | constitution of their country.
        
               | derefr wrote:
               | I mean, monarchy wasn't really a key element of what I
               | was saying. You don't have "root" on your own body in a
               | rule-of-law democracy, either. You can't just decide to
               | not go to jail, if a court says you must.
               | 
               | This is kind of a recapitulation of the argument that
               | forked Ethereum into Ethereum Classic:
               | 
               | * There was a system, partially founded on a guiding
               | principle of participants in the system having final say
               | in what happens in the system, through the contracts they
               | make in the system. Those contracts were supposed to
               | "have root" in the system.
               | 
               | * Something went wrong with a contract, in a way that
               | made things worse for pretty much everybody, since it was
               | a very popular contract. The maintainers of the system
               | decided to violate the guiding principle in the name of
               | making things better for everybody, by just reaching in
               | and overriding the rules of the popular contract, so that
               | it would retroactively have done the "right" thing.
               | 
                | * Some people thought that there _shouldn't_ be any
               | entity (consortium or otherwise) with power to override
               | the rules of their contracts, even if those changes are
               | "to the good", so they left and started their own
               | alternative system, mostly the same other than the
                | guarantee that _they'd_ never violate the guiding
               | principle.
               | 
               | * The market decided that the alternative system has
               | about 1/10th the economic value of the original system.
               | Most developer effort, userbase, etc. sided with the
               | original system, and with the concept of there being a
               | political entity with the power to overrule individual
               | contracts. The contract creators themselves seem to want
               | the "safety net" implied by this entity having the power
               | to overrule them.
               | 
               | Interesting, no?
        
               | zymhan wrote:
               | I really like that analogy, but I do have to point out
               | that many countries have come to realize a monarchy robs
               | people of their freedom, just as the untrustable trust
               | module does.
        
           | to11mtm wrote:
           | > I want to be able to secure my computer (an ATM, say)
           | against people with physical access to it. A root of trust
           | (that the original purchaser of the device controls) allows
           | for that.
           | 
            | Unless you're running something like a Raptor Talos II, you
            | don't really have the root of trust. You have a branch off
            | of the manufacturer's root.
            | 
            | That may be better for some enterprises, but in this modern
            | age is that really enough? Consider how the PLA was involved
            | in the hacking of Equifax.
           | 
           | Until you can review the code yourself and verify the
           | binaries, you don't really have the root of trust. Someone
           | else does. (I'm barring other types of shenanigans here, but
           | it's the next logical step.)
        
             | derefr wrote:
             | I mean, for the kind of highly-trusted "ruggedized"
             | scenario represented by an ATM, one would hopefully get
             | their hardware from a manufacturer that exists under a
             | political regime they have no enmity with, or are perhaps
             | even allegiant to. (That's half the reason many US
             | government officials and contractors used Blackberries: the
             | US government could--given the political realities of the
             | time they live in--trust a device whose chips were
             | verifiably made in Canada.)
             | 
             | For the workstation scenario, though, you don't really care
             | who has the "ultimate" root, just so long as you can get
             | whoever that is to help _you_ to stop a particular class of
             | attacker (e.g. your own employees, contractors, and any
             | "visitors" in the building) from getting root. It's fine if
             | the PLA has root on the boxes, because the boxes aren't
             | actually storing trade secrets or anything; the point of
             | having pseudo-root on the boxes is, in fact, to enforce a
              | security regime that ensures your employees _don't_ store
             | any trade secrets on the boxes!
             | 
             | See also: being an "organization owner" in an enterprise
             | SaaS service. Sure, I can't stop Google from snooping my
             | GSuite data--but I'm also _paying_ them to host that data
             | for me, and e.g. selling it would be a violation of the
              | contract. Even though they _can_, in theory, do it,
             | they're economically incentivized against doing so (and
             | doubly so, because if they did it once and got found out,
             | they'd never make any GSuite money again.)
             | 
             | > Unless you're running something like a Raptor Talos II,
             | You don't really have the root of trust.
             | 
             | Mind you, there are "multiply-descendant root-of-trust"
             | setups that are quite common these days. In modern Apple
             | devices, you've got an Intel processor doing most stuff,
             | but then the Apple-controlled T2-chip domain doing
             | encryption stuff, with its own boot chain completely
             | isolated from the Intel one.
        
               | oneplane wrote:
            | You are still just a trust leaf, not the root. The root is
            | a ROM you cannot read or change on the Intel side, so you
            | have no trust control there (only delegated trust, which
            | for some people is no trust at all). With no ability to
            | verify it, an exploit like this would not be something you
            | can detect, and as such it breaks the trust chain.
        
         | AgentME wrote:
         | Trusted computing enables you to prove to a remote party what
         | your machine is executing. This would be useful to cloud
         | providers so they could prove to their users that their servers
         | are only running their users' code without snooping on it.
         | People would no longer have to choose only from well-known
         | cloud providers to find a trusted host. You could imagine a
         | marketplace where anyone could sell the compute power of their
         | home computers (undercutting cloud providers' prices to make up
         | for their lesser network connectivity) and use remote
         | attestation to prove that they're not spying on or modifying
         | their customers' compute workloads. The people selling their
         | compute power like this can use sandboxing to protect their own
         | system from the customers' compute workloads, and use trusted
         | computing / remote attestation to protect the customers'
         | compute workloads from their own system. I think it's extremely
         | good for users when a technology removes the need for trust in
         | big brands and allows anyone to compete.
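          | 
          | A rough sketch of that attestation flow (assuming the Python
          | `cryptography` package; the key names and messages are made
          | up for illustration):
          | 
          |     import hashlib
          |     from cryptography.hazmat.primitives import serialization
          |     from cryptography.hazmat.primitives.asymmetric.ed25519 \
          |         import Ed25519PrivateKey
          | 
          |     raw = lambda pub: pub.public_bytes(
          |         encoding=serialization.Encoding.Raw,
          |         format=serialization.PublicFormat.Raw)
          | 
          |     vendor_key = Ed25519PrivateKey.generate()  # vendor root
          |     device_key = Ed25519PrivateKey.generate()  # one CPU's key
          |     endorsement = vendor_key.sign(raw(device_key.public_key()))
          | 
          |     # Seller's machine: measure and sign ("quote") the workload
          |     workload = b"customer's container image"
          |     measurement = hashlib.sha256(workload).digest()
          |     quote = device_key.sign(measurement)
          | 
          |     # Customer: trust only the vendor's public key, then check
          |     # (1) the device key is genuine, (2) it attests their code
          |     vendor_pub = vendor_key.public_key()
          |     vendor_pub.verify(endorsement, raw(device_key.public_key()))
          |     device_key.public_key().verify(quote, measurement)
          |     assert measurement == hashlib.sha256(workload).digest()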
        
           | sounds wrote:
           | To me that's twisted logic.
           | 
           | "I don't want to have to trust my cloud provider."
           | 
           | "Ok, we'll absolutely pinky-swear by this API you can access
           | that our machines are running a trusted setup."
           | 
           | "Ok! I'll just trust the API you provide."
           | 
           | Even if the API is an x86 instruction, even if you do timing
           | checks and side-channel checks in your code, you're still
           | just in an arms race with your cloud provider while they hold
           | all the power.
        
             | lxgr wrote:
             | You don't trust the cloud provider, you trust the hardware
             | vendor. They are the root of trust in this scenario.
             | 
             | Of course, if that trust, due to malice or implementation
             | defects, is misplaced, you're not better (but also not
             | worse) off than without something like SGX.
        
               | sounds wrote:
               | I addressed that when I said "x86 instruction."
               | 
               | An x86 core with SGX can be emulated...
               | 
                | (Edit: SGX can't be emulated. I stand corrected. Perhaps
                | a better argument would have been that verifiable builds
                | give the user software freedom by granting them the
                | ability to run the same code everywhere. But trusted
                | computing != verifiable builds.)
        
               | Reelin wrote:
                | Actually it can't be, barring a successful attack against
               | the physical hardware, firmware, or underlying
               | cryptography. SGX employs public-key cryptography to
               | authenticate itself to the end user remotely (the same as
               | SSH). The key it uses is signed by the hardware vendor,
               | so you most certainly won't be able to emulate it.
               | 
               | That being said, I have serious misgivings about any
               | hardware I own and use being explicitly designed _not_ to
               | do my bidding. I can certainly see the utility of such an
               | arrangement for a cloud provider though.
        
         | viraptor wrote:
         | It's not a misnomer. When you define trust you need to define
         | who you trust.
         | 
         | If X's trust in Intel to provide a platform where the code runs
          | verified doesn't agree with your ethical view, that means you
         | likely don't trust X and Intel. That's all - there's no
         | betrayal, or traitors, or other ethical dilemmas here.
         | 
         | Trust is not universal and you cannot trust everyone.
         | 
         | You're probably more interested in trustless computing, where
         | those modules are irrelevant.
        
           | sounds wrote:
           | Why would I want my computer to make decisions against my
           | will because of a very carefully defined version of "trust"?
           | When the alternative is software freedom which gives me the
           | power (and yes, the responsibility too) to direct my computer
           | the way I want?
           | 
           | And yeah, that responsibility includes checking for updates
           | and downloading security fixes.
           | 
           | "Trust" is always used in "Root of trust" and "Trustworthy
           | computing" to mean "deny software freedom to the user."
           | 
           | That's not even close to what the dictionary says trust
           | means.
        
         | lxgr wrote:
         | This assumes the absence of a sandbox. Trusted computing can
          | happen with or without a sandbox, much like "regular"
         | computing.
         | 
         | If your system is running unsandboxed, untrusted third party
         | code, that's pretty bad, regardless of the presence or absence
         | of a trusted platform. As an example: FLOSS systems are
         | definitely capable of running malware.
         | 
         | On the other hand, a reproducible build of open source software
         | might well be what runs in (and relies on the attestation
         | provided by) a trusted computing platform.
         | 
         | I do see one practical concern with integrating trusted
         | computing on a general purpose computer:
         | 
         | If an implementation depends mostly on security through
         | obscurity to achieve the desired attestation capabilities, this
         | makes it much harder to audit it for vulnerabilities or
         | backdoors. But I don't see how that is a fundamental property
         | of a trusted computing system.
        
           | sounds wrote:
            | A reproducible build is very different from
            | cryptographically attesting binary state, especially that
            | of the kernel.
           | 
           | If I can't produce a binary with the same "reproducible
           | state" as the one you had because your _kernel_ was one that
           | I don't run (especially because maybe I don't _want_ to run
           | it), that destroys all the value of a reproducible build.
           | 
           | A reproducible build should not _undermine_ software
           | freedoms, specifically those protected by a Free Software
           | license. But trusted computing always undermines software
           | freedoms: that's by design. It's intentional. It's all about
           | locking 100% of the users into a single monoculture where
           | there is minimal freedom.
           | 
           | And that's fine in a managed IT environment such as a
           | corporation. But it's not ok when I buy hardware and the
           | manufacturer refuses to hand over the certificate chain to
           | me.
        
             | lxgr wrote:
             | A kernel would not be something that you would run in a
             | trusted enclave. It's way too big of a surface area
             | (containing your entire operating system and application
             | layer, after all), so what would be the point of attesting
             | that to anyone?
             | 
             | This is the "old" way of using a TPM, and I agree, it does
             | not make sense at all. After all, it never came to be, and
             | that's not only because of the vocal protests against it.
             | It simply does not make sense!
             | 
             | But I would encourage you to read up on how, for example,
             | the Signal foundation is thinking about using something
             | like SGX.
        
               | sounds wrote:
               | It was the ME firmware that was compromised with this
               | CSME exploit.
               | 
               | Of course, few people notice if the ME firmware is
               | updated, and Intel doesn't often update deployed ME
               | firmware. But it verifies the BIOS, and the BIOS verifies
               | the kernel.
               | 
               | And that's the point where people start caring. Hence I
               | used the kernel as an example.
               | 
               | Signal is thinking about using SGX, but they also have
               | other ways to grant the user reasonable security. After
               | this CSME exploit, Signal may reconsider using SGX.
               | 
               | Either way, my point still stands: verifiable builds do
               | not rely on trusted computing at this point. I hope they
               | never do. It would be twisted logic to tell the user the
               | only way they can have their software freedom is by
               | asking the trusted computing infrastructure to verify it
               | for them. The trusted computing infrastructure being
               | absolutely as opaque and locked-down as possible. Trusted
               | computing is not reproducible! Its designed-in purpose is
               | to be opaque, to hide things from the user.
        
               | lxgr wrote:
               | > verifiable builds do not rely on trusted computing at
               | this point.
               | 
               | Of course they don't. They are orthogonal, i.e. one does
               | not imply the other, but one also does not prevent the
               | other.
               | 
               | The verifiable build serves you, the hardware owner, in
               | knowing that the software does what its vendor claims.
               | 
               | The trusted platform's assertion serves the software
               | vendor, allowing them to trust the environment that their
               | software is running in.
               | 
               | > It would be twisted logic to tell the user the only way
               | they can have their software freedom is by asking the
               | trusted computing infrastructure to verify it for them.
               | 
               | Nobody is saying that. If you want to trust your computer
               | to do what you think it does, you don't want trusted
               | computing; you probably want reproducible builds, trusted
               | boot etc. But trusted computing also does not inherently
               | prevent you from doing that. The two are orthogonal!
        
       | amluto wrote:
       | > Intel CSME firmware also implements the TPM software module,
       | which allows storing encryption keys without needing an
       | additional TPM chip--and many computers do not have such chips.
       | 
       | And that was the real error. The TPM should be a TPM. It could be
       | on die, but it should be an entirely isolated device with its own
       | RAM, no DMA, no modules, and no other funny business.
        
         | lima wrote:
          | An internal TPM is more secure for attestation. You can MitM the
         | LPC bus with an external TPM, faking PCRs.
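          | 
          | For context, a PCR is just a running hash. A minimal sketch
          | of the extend operation (illustrative, not a real TPM
          | implementation):
          | 
          |     import hashlib
          | 
          |     def extend(pcr: bytes, measurement: bytes) -> bytes:
          |         # TPM-style extend: new = SHA256(old || measurement).
          |         # Order-sensitive, so the final value commits to the
          |         # entire boot sequence.
          |         return hashlib.sha256(pcr + measurement).digest()
          | 
          |     pcr0 = bytes(32)  # PCRs reset to zeros at boot
          |     for stage in (b"rom", b"bootloader", b"kernel"):
          |         pcr0 = extend(pcr0, hashlib.sha256(stage).digest())
          | 
          | With a discrete TPM on the LPC bus, an interposer can replay
          | the "clean" extend commands while actually booting modified
          | code, which is the faked-PCR attack above.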
        
           | gruez wrote:
           | >You can MitM the LPC bus with an external TPM, faking PCRs.
           | 
           | not an issue if it's on-die, as the parent suggested.
        
         | lxgr wrote:
         | That sure is an interesting decision, given that other big
         | players (Apple with the Secure Enclave, Google with Titan) have
          | been moving in the opposite direction.
        
         | [deleted]
        
       | londons_explore wrote:
       | TL;DR: There is a tiny window during bootup when any hardware can
       | DMA code or keys in/out of RAM. That allows complete compromise
       | of all protections offered by the chipset, including secure boot,
       | TPM key storage, etc. It is not fixable via a firmware update.
       | 
       | The researchers _have not demonstrated_ a complete end to end
       | attack, but it seems likely one exists.
       | 
        | While this could likely be pulled off easily as a local attack,
        | in some cases it might also be possible as a remote attack,
        | depending on whether other hardware devices can be programmed
        | to exploit the flaw during a reboot.
        
         | [deleted]
        
       | blendergeek wrote:
       | It looks like finally users may have complete control over their
       | Intel computers without Intel having the final say. I, for one,
       | am quite happy about this.
        
         | lxgr wrote:
         | This sentiment seems to be rooted in a misunderstanding of what
         | trusted computing is trying to achieve on a fundamental level.
         | 
         | The idea is not to "take control over people's computers", i.e.
         | your trust in your own computer. It is rather to enable
         | somebody to gain some level of trust in the computations that
         | are happening on somebody else's computer.
         | 
         | Yes, this technology is commonly used for DRM, and that was one
         | of its earliest applications. But it's not limited to that.
         | Trusted computing can switch the roles and give you as a user
         | certainty over the computations a third party provider performs
         | in the cloud on your behalf. The Signal team is doing a lot of
         | very interesting experiments there [1].
         | 
         | If your concern is a hardware backdoor or something similar,
         | this is less of a question of trusted computing, and rather one
         | of trust in hardware vendors. Your hardware vendor can screw
         | you over entirely without TPM, TEE, secure elements and the
         | like.
         | 
         | On the other hand, Intel's trusted computing platform being
         | horribly broken does not magically give you FOSS replacements
         | for all the firmware, ROMs and microcode running on the dozens
         | of peripherals in your computer.
         | 
         | [1] https://signal.org/blog/secure-value-recovery/
        
           | guerrilla wrote:
           | > Your hardware vendor can screw you over entirely without
           | TPM, TEE, secure elements and the like.
           | 
              | Yes they can, but as I understand it they _are_ using TPM
              | to screw us over, hence people celebrating its being
              | popped. No misunderstanding of trusted computing
              | necessary: there aren't, in practice, other vendors to
              | choose from here.
        
             | johncolanduoni wrote:
             | I'm curious, where are TPMs being used to screw people over
             | in your opinion?
        
             | lxgr wrote:
             | Could you elaborate on the ways that people are being
             | screwed over by TPM?
             | 
             | I do see that the existence of both trusted and untrusted
             | systems could exert some pressure on consumers to adopt the
             | latter, due to the unavailability of certain services on
             | the former (e.g. DRM, banking apps on rooted Android phones
             | etc).
             | 
             | The danger here is a loss of "freedom to tinker", which I
             | do appreciate very much, and I share that concern. But has
             | that actually happened with TPM?
        
               | ghostpepper wrote:
               | What about the fact that the newest Intel CPU that can
               | run Coreboot is from 2012 since all the subsequent ones
               | have been locked by Intel? Isn't the TPM directly
               | responsible for loss of that freedom to tinker?
        
               | wmf wrote:
               | Coreboot runs on the latest Intel CPUs (and work is
               | underway for CPUs that haven't even been released yet)
               | but it uses binary blobs. Those blobs have nothing to do
               | with TPMs, the ME, or whatever.
        
               | floatboth wrote:
               | As already mentioned the coreboot bit is false, but also:
               | 
               | the TPM is a PASSIVE component. It only responds to your
               | requests and you can do cool things with it.
               | 
               | https://media.ccc.de/v/36c3-10564-hacking_with_a_tpm
        
           | wmf wrote:
           | That's all correct, yet it doesn't consider the politics. The
           | people on this thread are concerned about a huge power
           | imbalance between customers and companies; specifically,
           | customers have zero bargaining power so they should expect
           | trusted computing to be used "against" them far more often
           | than it's used "for" them.
        
           | conradev wrote:
           | I think we need to come up with solutions to problems like
           | key escrow (i.e. in Signal's case) that don't require trusted
           | computing because a single root of trust for hardware is a
           | single point of failure and depends on trusting the hardware
           | manufacturer.
           | 
            | There are a lot of possibilities with distributed computing.
        
             | AlexCoventry wrote:
             | > like key escrow (i.e. in Signal's case)
             | 
             | Is signal moving towards some sort of key escrow policy?
        
             | josh2600 wrote:
             | Do you have any narrative of how to do key recovery safely
             | without an enclave or a human in the loop?
             | 
             | I've spent a lot of time thinking about this and I don't
             | really know how to do it without one of those two things.
             | 
             | Edit: like I hear you saying there are possibilities in a
             | distributed computing world, but I don't have any idea what
             | distributed computing enables for key recovery (except
             | possibly k of n schemes but that's just replication, not
             | safety).
             | 
             | Edit 2: also, presume that users suck at key management and
             | can't remember long password strings, 24 words, or be
             | trusted to store a key for a meaningful period of time.
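              | 
              | For concreteness, the k-of-n schemes mentioned usually
              | mean Shamir-style secret splitting, where any k shares
              | reconstruct the key but k-1 shares reveal nothing. A toy
              | sketch (insecure toy field, illustration only):
              | 
              |     import random
              | 
              |     P = 2**127 - 1  # toy prime field
              | 
              |     def make_shares(secret, k, n):
              |         # random degree-(k-1) polynomial, f(0) = secret;
              |         # each share is a point (x, f(x))
              |         coeffs = [secret] + [random.randrange(P)
              |                              for _ in range(k - 1)]
              |         return [(x, sum(c * pow(x, i, P)
              |                         for i, c in enumerate(coeffs)) % P)
              |                 for x in range(1, n + 1)]
              | 
              |     def recover(shares):
              |         # Lagrange interpolation at x = 0
              |         total = 0
              |         for xi, yi in shares:
              |             num = den = 1
              |             for xj, _ in shares:
              |                 if xj != xi:
              |                     num = num * -xj % P
              |                     den = den * (xi - xj) % P
              |             total = (total + yi * num * pow(den, -1, P)) % P
              |         return total
              | 
              |     shares = make_shares(123456789, k=3, n=5)
              |     assert recover(shares[:3]) == 123456789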
        
               | lxgr wrote:
               | It would still be an enclave of sorts, but white box
               | cryptography is generally trying to achieve a similar
               | goal as trusted computing, without relying on trusted
               | hardware.
        
               | josh2600 wrote:
               | I don't think the enclave you're describing exists, nor
               | do I believe there is an enclave that is untrusted
               | hardware.
               | 
               | Do you have an example of such an enclave and how it
                | would operate without a remote attestation service in the
               | model where a user can trust that a distributed network
               | they don't control is safeguarding their key?
        
               | sounds wrote:
               | If homomorphic encryption advances to the point where
               | it's usable, that would be an example of security on
               | untrusted hardware.
               | 
               | (But I suppose that just proves your point.)
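                | 
                | As a taste of what "homomorphic" means: even textbook
                | RSA is homomorphic for multiplication (toy, insecure
                | parameters; fully homomorphic schemes extend this to
                | arbitrary computation):
                | 
                |     p, q, e = 61, 53, 17
                |     n = p * q
                |     d = pow(e, -1, (p - 1) * (q - 1))
                |     enc = lambda m: pow(m, e, n)
                |     dec = lambda c: pow(c, d, n)
                | 
                |     ct = enc(7) * enc(6) % n  # multiply ciphertexts only
                |     assert dec(ct) == 42      # host never saw 7, 6, or 42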
        
           | Hello71 wrote:
           | that sounds vastly overgenerous. if that were truly the case,
           | then TEE wouldn't be built into every consumer system. if it
           | were actually to protect against malicious cloud providers,
           | then TEE would be only available for special (read:
           | expensive) processors. see: intel and ECC memory. the goal of
           | TEE is to benefit big corp and fuck the user, and just
           | because it can in theory be used for other purposes is barely
           | a consolation.
        
             | lxgr wrote:
             | TEE is practically used for mainly two things in current
             | smartphones: DRM and hardware key storage.
             | 
             | DRM lets users watch Netflix on their phones while on an
             | airplane.
             | 
             | Hardware key storage significantly decreases the attack
             | surface for malware trying to extract them, compared to
             | storing them on the application processor.
             | 
             | How is the average user being fucked here, exactly?
        
               | rcxdude wrote:
               | > DRM lets users watch Netflix on their phones while on
               | an airplane.
               | 
               | No, DRM exists to restrict users, it does not enable
               | anything for them.
        
               | lxgr wrote:
               | That is definitely true. In a way, the user of DRM is the
               | content provider, not the owner of the playback device.
               | 
               | But this is exactly the idea of trusted computing:
               | 
               | "Prove to me that I can trust your hardware to run my
               | software according to my specifications, and I will use
               | it to compute things (for our mutual benefit) that I
               | would otherwise only compute on my own hardware."
               | 
               | DRM is the canonical example, but wouldn't it be nice to
                | be able to actually know that a cloud service provider has
               | to adhere to their terms of service, rather than having
               | to take their word for it?
               | 
               | (The big "if" here is that the terms of service are
               | expressible and enforceable in the context of some piece
               | of software.)
        
               | Hello71 wrote:
               | so what you're saying is that without TEE, Netflix would
               | shut down? come on. Netflix would clearly keep operating
               | with or without DRM, all TEE does is make it harder for
               | the user to access their legitimate (non-Netflix) content
               | in anything but the most approved way. it entrenches
               | mainstream operating systems and makes it harder to use
               | FOSS. sure, I'll concede that Netflix is not the most
               | damaging to user freedom, but that's not what OP is
               | about. nobody would give a shit about this vulnerability
               | if it was just Netflix, because Netflix is broken against
               | hardcore attackers anyways. TEE proponents want to expand
               | its use to more user-hostile applications. that's my
               | concern.
               | 
               | hardware encryption is arguably a better use of TEE, but
               | as far as I know, no actual implementations use SGX for
               | that purpose. the TPM is used, but it's not fast enough
               | for actual encryption. the OS loads the keys from the TPM
               | and does the encryption in regular software.
        
         | shiblukhan wrote:
         | "the scenario that they feared most", yet the scenario everyone
         | was sure would happen.
        
           | jjoonathan wrote:
           | I wasn't sure it would happen, but I'm sure happy it has!
        
         | ghostpepper wrote:
         | Does this announcement mean it's finally possible to run FOSS
         | firmware like Coreboot on modern Intel hardware? If so, this is
         | a huge finding
        
           | zelphirkalt wrote:
           | That's what I am wondering too! Perhaps Libreboot can make
           | progress and we could have completely free systems with more
           | up to date hardware after all? That would be so great.
        
           | wmf wrote:
           | Coreboot already runs on modern Intel hardware. This
           | vulnerability doesn't eliminate the blobs or the need for the
           | ME to initialize hardware before Coreboot runs if that's what
           | you're thinking of.
        
             | ghostpepper wrote:
             | Could this vulnerability be used to bypass the ME? I'd like
             | to run coreboot on my thinkpad X1 carbon gen7
        
           | gentleman11 wrote:
           | Looks like it could make it easier to get around DRM also?
        
       | vzaliva wrote:
       | In my opinion, critical code like this must be formally verified.
        
         | vardump wrote:
         | Not sure how formal verification would have helped here. DMA
         | access is allowed at boot up, game over.
        
           | AnimalMuppet wrote:
            | In principle, you could consider validating the _system_,
           | not just the software. It might reveal such a gap.
           | 
           | Note well: I am not claiming that the tools exist currently
           | to do this.
        
       | paxswill wrote:
       | I'm curious if Apple's work on hardening their secure boot
        | process on x86 affects this at all? For those unaware, this [0]
       | video covers it over about seven minutes. Basically they claim to
       | be enabling the IOMMU with a basic deny everything policy so that
       | when the changeover to executing from RAM occurs and PCIe devices
       | are brought up the IOMMU is able to deny possible malicious
       | access to the firmware image.
       | 
       | It _sounds_ from the end of the article that there are separate
        | DMA/IOMMU processes for the CSME, but I'm not familiar enough
       | with stuff this far down to know for certain.
       | 
       | https://youtu.be/3byNNUReyvE?t=124
        
         | morpheuskafka wrote:
          | It is their proprietary T2 chip that controls things like
          | FileVault (full-disk encryption) and Touch ID. So a
          | vulnerability on Mac would not be nearly as severe as on
          | Windows, where this can eventually compromise the fTPM used
          | for BitLocker encryption (dTPMs wouldn't be vulnerable, but
          | their integrity protection can be bypassed by messing with
          | their physical connections to the CPU).
         | 
         | The T2 chip has its own Secure Enclave and immutable BootROM,
         | and it supposedly verifies the Intel UEFI ROM before it is
         | allowed to load, and then the CPU reads this from the T2 over
         | SPI. So it would seem that this boot process is not weakened by
         | a compromise of the Intel key, as only Apple can sign UEFI
         | updates to be loaded onto the T2 chip.
         | 
         | Source:
         | https://manuals.info.apple.com/MANUALS/1000/MA1902/en_US/app...
         | (long PDF)
        
       | osy wrote:
        | Related: I wrote a (maybe not 100% accurate) low-level summary
       | of the x86 secure boot model here a while ago
       | https://osy.gitbook.io/hac-mini-guide/details/secure-boot
        
       | shiblukhan wrote:
       | A vulnerability has been found in the ROM of the Intel Converged
       | Security and Management Engine (CSME).
       | 
       | A reference to the specific vulnerability would be nice. CVE?
       | Conference presentation? El Reg? Sketchy blogspam? Maybe I've
       | been living under a rock, but it would still help the reader out.
        
         | notpeter wrote:
         | The article mentions CVE-2019-0090 and Intel acknowledges the
         | author (Mark Ermolov of Positive Technologies) in their
         | advisory. You haven't been living under a rock, this is a
         | primary source and the first public suggestion of the grave
         | severity of the vulnerability.
         | 
         | "CVE-2019-0090 was initially found externally by an Intel
         | partner and subsequently reported by Positive Technologies
         | researchers. Intel would like to thank Mark Ermolov, Dmitry
         | Sklyarov and Maxim Goryachy from Positive Technologies for
         | reporting this issue."
         | 
         | https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-0090
         | 
         | https://www.intel.com/content/www/us/en/security-center/advi...
        
       | qubex wrote:
        | Some time ago I had considered (and rejected, due to a bad
        | review caused by a freak bad sample, apparently) equipping my
        | unit with POWER9 systems from Talos.
       | 
       | I am now reconsidering the idea.
       | 
       | https://www.raptorcs.com/TALOSII/
        
         | guerrilla wrote:
         | For people who know nothing about this and want a tl;dr in
         | video form: https://www.youtube.com/watch?v=5syd5HmDdGU
        
           | qubex wrote:
           | And for anybody interested in the previous Hacker News
           | discussion on the topic:
           | https://news.ycombinator.com/item?id=14956257
        
       | unnouinceput wrote:
        | For my upgrade, I decided this year (after 20 years of only
        | using Intel) to go with AMD, as it has fewer vulnerabilities.
        | I had my doubts, but this article made me decide it's time to
        | go the AMD route.
        
         | lxgr wrote:
          | I have not looked into AMD's efforts in this area recently: Is
         | there an AMD-equivalent to, for example, Intel TXT?
         | 
         | If so, is it actually more secure, or has it simply not been
         | scrutinized as much as Intel's version by security researchers?
        
       | baybal2 wrote:
        | As I pointed out before, lifting any "secret" key off any chip
        | is quite trivial for a semiconductor professional.
        | 
        | It's part of the job of an IC engineer to be able to tap an
        | arbitrary metal layer on the device with microprobes to "debug"
        | it, and this is quite routine in the process of microchip
        | development.
       | 
       | Any such measures can only deter people without access to an IC
       | development lab.
        
         | xyzzyz wrote:
         | I think this is simply not true for modern processes. Can you
          | show me any example of such a key being extracted this way
          | from a modern sub-50 nm CPU? I haven't heard of anyone
          | actually succeeding.
        
           | anthk wrote:
           | You forgot the buses, the IOMMU and so on.
        
         | DoofusOfDeath wrote:
         | Cool! Out of curiosity, do these debugging tools keep pace with
         | the recent process shrinks? I would imagine it's really hard to
         | connect a logic probe to, for example, a processor built on
         | TSMC's 7nm process.
        
           | sounds wrote:
           | The metal layer interconnects usually are _not_ that small. I
           | can't share the exact specs for TSMC's 7nm process but here's
           | an example that should give you some idea:
           | 
            | https://web.stanford.edu/class/ee311/NOTES/Interconnect%20Sc...
        
           | baybal2 wrote:
           | Gate size on 7nm processes is still 30nm, and even the
           | lowermost M0 metal is way, way bigger.
           | 
            | Even if doing so requires destroying and reconstructing some
           | tracks around the probe, 7nm shouldn't be much different from
           | how it was done back a decade ago.
        
         | chinhodado wrote:
         | Not sure if I understand correctly, but are you saying secrets
         | kept in hardware like console encryption keys (PS4 etc.) can be
         | trivially extracted with the right tool?
        
           | baybal2 wrote:
           | Yes, but signing can't be defeated unless you modify the IC
           | itself.
        
             | AnimalMuppet wrote:
             | If you can get the key, can't you sign whatever you want,
             | in a way that the IC will validate it? It will still check
             | that it's correctly signed, but doesn't that defeat the
             | usefulness of it?
        
               | drewbug wrote:
               | the private key isn't on the chip, only the public key is
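                | 
                | A sketch of why holding only the public key never lets
                | you sign (assumes the Python `cryptography` package;
                | illustrative only):
                | 
                |     from cryptography.exceptions import InvalidSignature
                |     from cryptography.hazmat.primitives.asymmetric.ed25519 \
                |         import Ed25519PrivateKey
                | 
                |     vendor_priv = Ed25519PrivateKey.generate()  # in HSM
                |     chip_pub = vendor_priv.public_key()  # all chip holds
                | 
                |     fw = b"official firmware image"
                |     chip_pub.verify(vendor_priv.sign(fw), fw)  # boots fine
                | 
                |     forger = Ed25519PrivateKey.generate()  # attacker key
                |     try:
                |         chip_pub.verify(forger.sign(b"evil"), b"evil")
                |     except InvalidSignature:
                |         print("rejected: public key verifies, never signs")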
        
           | Taniwha wrote:
            | Not really trivially: you need to drill tiny (sub-micron-
            | sized) holes with lasers down to the appropriate wires, then
            | insert probes (either using FIBs or directly) to pick up
            | signals (we do this to debug bugs in chips).
            | 
            | Smart designers will put wires with useful information under
            | many other layers which, if broken, will disable them.
            | 
            | So yes, it's doable, but you'll likely damage the chip in
            | the process; it's certainly neither easy nor trivial.
        
             | Symmetry wrote:
             | Sounds like the sort of resources that most governments
             | could command but few criminals? But of course with
             | criminals there's always just trying to bribe Intel
             | employees.
        
               | gentleman11 wrote:
               | It's bonkers that the DMCA has people labeling hardware
               | crackers as criminals. What about the farmers and their
               | tractors?
        
               | macintux wrote:
               | Not the parent commenter, but I suspect it's less the act
               | than the motivation.
               | 
               | Criminals who anticipate finding a way to profit on the
               | information would be far more likely to go through the
               | trouble of bribing someone or investing in the resources
               | to snag it.
        
               | SolarNet wrote:
                | I think the parent comment was more likely referring to
                | using these devices for personal privacy: for example,
                | can a criminal steal the personal information on my
                | phone vs. can the government spy on me? The government
                | might spend a million dollars on this process to read
                | the phone of a terrorist, but a criminal probably
                | wouldn't do so to steal my personal information off a
                | phone or USB drive.
        
               | Symmetry wrote:
               | Those people are fine. I'm looking at this from the
               | perspective of the malware that can survive across OS re-
               | installs because Intel put this enclave in your CPU that
               | you can't touch. I'd assume the NSA is using that to spy
               | on people right now but the question is how many other
               | groups.
        
               | pdkl95 wrote:
               | Bribing Intel employees is probably expensive and might
               | have legal risks. Instead, just hire one of the many
               | skilled technicians in Shenzhen.
               | 
               | For a good discussion of this topic, I recommend Andrew
               | "bunnie" Huang's talk about supply chain security:
               | 
               | https://www.youtube.com/watch?v=RqQhWitJ1As
        
             | avsteele wrote:
             | Correct.
             | 
             | My company (zeroK NanoTech) has developed and is now
             | selling advanced focused ion beam (FIB) systems with
             | enhanced resolution and machining capability that are well
             | suited to these operations.
             | 
              | We did circuit edits on 10 nm node chips with Intel, and
              | they have given talks about it at several conferences
              | (e.g. ISTFA).
        
         | MichaelSalib wrote:
         | So, my spouse was a CPU designer at AMD for many years and now
         | does secure computing work for, well, the US government. I
         | showed her your comment. She laughed. A lot.
         | 
         | This is all completely wrong.
        
         | yjftsjthsd-h wrote:
         | > Any such measures can only deter people without access to an
         | IC development lab.
         | 
         | That's a pretty tiny group, isn't it?
        
           | snazz wrote:
           | Yeah, but it includes governments and other big adversaries.
        
           | baybal2 wrote:
           | https://www.google.com/search?q=sem+lab+access
        
       | mindslight wrote:
       | This is great news! Undermining remote attestation is a win for
       | the open web and free society. And perhaps it means we can get
       | Libreboot on something newer than Ivy Bridge.
        
         | jnwatson wrote:
         | I, too, prefer not to know when my box has been blue-pilled.
         | 
         | Remote attestation is required for many privacy-preserving
         | activities. It isn't just DRM.
        
           | mindslight wrote:
           | I prefer my box to _not be_ blue-pilled, rather than merely
           | knowing if it has been double blue-pilled.
           | 
           | I didn't say it was just DRM. Remote attestation creates a
           | vulnerability whereby remote entities demand that _you
           | attest_ to running a software environment that _they
           | control_.
           | 
           | It's possible to do boot verification and attestation without
           | baking in privileged manufacturer keys. If the attestation
           | key were generated and installed by the owner, they could
           | prove everything to themselves remotely, without being forced
           | to prove anything to hostile parties.
           | 
           | If this were the case here, I wouldn't be cheering.
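           | 
           | A minimal sketch of that idea (hypothetical: an Ed25519
           | owner key standing in for a vendor-fused key, and a hash
           | standing in for a PCR-style boot measurement):
           | 
           |   import hashlib, os
           |   from cryptography.hazmat.primitives.asymmetric import ed25519
           | 
           |   # Enrollment: the owner, not the vendor, generates and
           |   # installs the attestation key.
           |   owner_key = ed25519.Ed25519PrivateKey.generate()
           |   owner_pub = owner_key.public_key()  # owner keeps this
           | 
           |   # At boot, the device signs its measured firmware hash
           |   # plus a verifier-supplied nonce (for freshness).
           |   fw_hash = hashlib.sha256(b"bootloader||kernel").digest()
           |   quote = fw_hash + os.urandom(16)
           |   sig = owner_key.sign(quote)
           | 
           |   # Only the owner knows which public key to trust, so no
           |   # third party can demand this proof. Raises
           |   # InvalidSignature if the measurement was tampered with.
           |   owner_pub.verify(sig, quote)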
        
         | Avamander wrote:
         | Remote attestation is often the only way to do secure
         | computing on platforms with unknown security. For example,
         | things like i-voting would immensely benefit from a secure,
         | anti-tamper computing environment. DRM is another thing,
         | however, yes.
        
           | mindslight wrote:
           | Individuals being able to choose their operating system when
           | accessing bank websites, and being able to keep secrets to
           | themselves, has much more bearing on everyday life than
           | "internet voting".
           | 
           | Also note that voting etc. could easily be implemented
           | with smart cards (e.g. SIMs, credit card chips). These are
           | still trusted computing, but are at least limited in scope.
           | Top-down control has no place in general CPUs if we wish to
           | remain an open society.
        
           | M2Ys4U wrote:
           | Voting using computers is such a terrible idea even in
           | principle that a trust breach like this is irrelevant.
        
             | justincredible wrote:
             | Can you explain why it's a terrible idea _in principle_?
             | That is, assuming uncrackable software with foolproof
             | authentication, why is it a terrible idea?
        
         | pb82 wrote:
         | Do you mean Coreboot? Because as far as I know, Libreboot
         | still only supports Core2-era hardware. Ivy Bridge would be
         | a huge upgrade in comparison.
        
           | mindslight wrote:
           | Oops, yes. What I actually meant is blob-cleaned coreboot
           | (i.e. on a Thinkpad X230). FWIW, Libreboot proper does run
           | on the Ivy Bridge-era Opteron 6300.
        
         | Karunamon wrote:
         | This was my first thought as well. It seems these management
         | engine tools have only two uses in the real world: enterprise
         | IT, and various forms of DRM.
         | 
         | Both exist to treat the user as a hostile entity.
        
           | holtalanm wrote:
           | What about drive encryption? I'm a little unversed in
           | hardware related to security, but my understanding of the
           | article was that, given the ability to essentially MitM
           | the TPM, anyone could decrypt the contents of an encrypted
           | drive, potentially even remotely.
           | 
           | If so, that is definitely not a good thing.
        
           | baybal2 wrote:
           | Given the presence of 4K web-DLs (original, not re-encoded
           | content), somebody must have the key, or they must have
           | managed to pwn the DRM on an even deeper level, like
           | tapping the memory (which is even worse).
           | 
           | Another possibility is still a source leak, where 4K
           | content gets lifted off Netflix's own internal content
           | storage.
        
             | jandrese wrote:
             | Or a simple HDMI defeat and re-encode. It only takes one
             | guy to put it out on the net. DRM is an inherently flawed
             | concept.
        
               | sounds wrote:
               | The content is watermarked by the time it is available on
               | HDMI. The guy who re-encoded it would get a knock on the
               | door.
        
               | robotnikman wrote:
               | Not if they are located somewhere like Russia or
               | China, where the authorities don't care.
        
               | jandrese wrote:
               | I find it hard to believe that an HDCP stripper would
               | then watermark the content and report who bought the
               | equipment to the media cartels.
        
               | wmf wrote:
               | No, forensic watermarking would be done by the HDMI
               | source (e.g. a PC in this case). I'm not aware of that
               | being done in reality though.
        
               | gruez wrote:
               | >The guy who re-encoded it would get a knock on the door.
               | 
               | Assuming there is a watermark, how would you track them
               | down? It's not like you need to register your HDMI
               | capture device.
        
               | kevin_thibedeau wrote:
               | XOR frames captured via two different accounts.
        
               | herogreen wrote:
               | Unless the watermark is terribly designed, that will
               | not work. There is a _lot_ of information that you can
               | hide from the eye in a video.
        
               | wizzwizz4 wrote:
               | Wouldn't you need three? Or, better still, do a bitwise 3
               | of 5 / integer median on the pixel values.
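               | 
               | A minimal numpy sketch of that median vote (assuming
               | perfectly frame-aligned captures, which is the hard
               | part in practice):
               | 
               |   import numpy as np
               | 
               |   base = np.random.randint(
               |       0, 256, (720, 1280, 3), dtype=np.uint8)
               |   copies = []
               |   for i in range(5):
               |       c = base.copy()
               |       c[10 + i, 10:40, :] ^= 1  # per-copy mark
               |       copies.append(c)
               | 
               |   # Pixel-wise median: a mark present in fewer than
               |   # 3 of the 5 copies is voted out.
               |   scrubbed = np.median(np.stack(copies), axis=0)
               |   print(np.array_equal(
               |       scrubbed.astype(np.uint8), base))  # True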
        
               | mschuster91 wrote:
               | No. HDMI/HDCP does _not_ do watermarking or any other
               | modification to content.
               | 
               | Cinema DCP packages, however, do - it's either
               | watermarked at the DCP distributor or in the decryption
               | module, but that stuff is out of reach for most warez
               | crews.
        
               | zozbot234 wrote:
               | Even if HDMI doesn't do that, the streaming provider just
               | might. It would be feasible to implement, and would
               | inconvenience potential pirates quite a bit.
        
               | mschuster91 wrote:
               | Why? Potential pirates are just gonna use someone's
               | stolen CC details to open up a Netflix account.
        
               | jandrese wrote:
               | That would seem to increase the cost of streaming quite a
               | bit if the provider has to re-encode the content for each
               | streamer to embed a watermark instead of just dumping
               | pre-encoded bits on the wire. And the watermark has to
               | survive a re-encode. All to shut down some guy's account
               | in a foreign country.
        
               | nitrogen wrote:
               | You can encode two streams with some detectable
               | difference in them, then switch between them at GOP
               | boundaries. The stream choice per GOP gives one bit of
               | data. You only need 33 GOPs (33 bits) to uniquely
               | identify everyone on Earth right now.
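               | 
               | A toy sketch of the scheme (illustration only; "A" and
               | "B" stand for the two pre-encoded variants of each
               | GOP):
               | 
               |   ACCOUNT_ID = 4886718345 & ((1 << 33) - 1)
               |   bits = [(ACCOUNT_ID >> i) & 1 for i in range(33)]
               | 
               |   # serve variant A or B of GOP i per bit i
               |   served = ["AB"[b] for b in bits]
               | 
               |   # a leaked copy decodes back to the account ID
               |   leaked = sum("AB".index(g) << i
               |                for i, g in enumerate(served))
               |   assert leaked == ACCOUNT_ID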
        
               | AnthonyMouse wrote:
               | > You only need 33 gops (33 bits) to uniquely identify
               | everyone on earth right now.
               | 
               | But then only ~6 accounts to have >50% probability of
               | seeing every combination of each bit and be able to
               | combine them at random.
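               | 
               | A quick back-of-envelope check (assuming independent,
               | random 33-bit marks): a position can only be forged
               | freely if both a 0 and a 1 appear among the k copies.
               | 
               |   for k in range(2, 9):
               |       # P(all 33 positions show both bit values)
               |       p = (1 - 2 ** (1 - k)) ** 33
               |       print(k, round(p, 3))
               |   # k=6 -> ~0.35, k=7 -> ~0.59, so the break-even
               |   # is around six or seven colluding accounts.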
        
               | sounds wrote:
               | It's not HDMI/HDCP that watermarks it; that's not what
               | I said. But by the time you get to an HDCP stripper,
               | the content has already been watermarked.
        
               | wmf wrote:
               | You could have the player software insert the watermark
               | after decode.
        
               | nyuszika7h wrote:
               | > if the provider has to re-encode the content for each
               | streamer
               | 
               | There's no need for that, the CENC standard has more
               | robust watermarking support, but it's not really used in
               | practice yet because it's not commonly supported by
               | browsers and possibly other clients.
        
               | Filligree wrote:
               | Even if it's some guy in Malaysia?
        
               | [deleted]
        
             | aftbit wrote:
             | Just curious, how can one distinguish original content from
             | a full-res re-encode without access to the actual bits of
             | the original file?
        
               | bavell wrote:
               | This may not be exactly what you're asking for, but
               | every re-encode introduces additional noise
               | (generational error). Over many re-encodings (even at
               | the same high bitrate/quality) the noise accumulates
               | in a predictable manner. See [0] for an interesting
               | deep-dive blog post on the subject.
               | 
               | Now, as for whether you can distinguish the re-encode
               | from its original source... difficult, but plausible
               | in certain scenarios? Perhaps if the content was
               | heavily re-encoded, to the point where you can
               | statistically determine the presence of the
               | generational noise. With only a single re-encode it
               | may be impossible to determine.
               | 
               | [0] https://goughlui.com/2016/11/22/video-
               | compression-x264-crf-g...
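               | 
               | A small Pillow/numpy sketch of that generational decay
               | ("frame.png" is a hypothetical source frame; JPEG
               | stands in for a video codec here):
               | 
               |   import io
               |   import numpy as np
               |   from PIL import Image
               | 
               |   img = Image.open("frame.png").convert("RGB")
               |   ref = np.asarray(img, dtype=np.float64)
               | 
               |   for gen in range(1, 11):
               |       buf = io.BytesIO()
               |       img.save(buf, format="JPEG", quality=85)
               |       buf.seek(0)
               |       img = Image.open(buf).convert("RGB")
               |       cur = np.asarray(img, dtype=np.float64)
               |       mse = ((cur - ref) ** 2).mean()
               |       psnr = 10 * np.log10(255 ** 2 / mse)
               |       print(f"gen {gen}: PSNR {psnr:.2f} dB")
               |   # PSNR falls fastest in the first generations, then
               |   # tends to plateau at a codec fixed point.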
        
           | mindslight wrote:
           | Well, it _would_ be nice for cloud servers, so that you
           | wouldn't have to trust the hosting provider. But given the
           | choice between trusting (cloud: intel, home: intel) and
           | (cloud: intel+provider, home: _nobody_), it would be
           | foolish not to choose the latter!
           | 
           | At the individual level, remote attestation has a
           | particularly terrible end game. Think of all those websites
           | that attempt to enforce their desired business whims client-
           | side, that we rightfully laugh at - browser fingerprinting,
           | image save as, anti-adblock, etc. Now imagine they're
           | successful!
        
           | anonymousDan wrote:
           | Cloud computing (SGX).
        
       ___________________________________________________________________
       (page generated 2020-03-05 23:00 UTC)