[HN Gopher] A Full Break of the Bitstream Encryption of Xilinx 7...
       ___________________________________________________________________
        
       A Full Break of the Bitstream Encryption of Xilinx 7-Series FPGAs
        
       Author : Nokinside
       Score  : 223 points
       Date   : 2020-04-19 13:44 UTC (9 hours ago)
        
 (HTM) web link (www.usenix.org)
 (TXT) w3m dump (www.usenix.org)
        
       | Nokinside wrote:
        | This is not a small issue. Up to 10% of the FPGAs on the market
        | could be affected.
       | 
        | RAID, SATA, and NIC controllers, industrial control systems,
        | mobile base stations, data centers, devices like encrypted USB
        | sticks and HDDs. In some cases it's possible to carry out the
        | attack remotely.
        
         | REPLicated2 wrote:
         | Depending on your viewpoint, this may actually be a boon for
         | reverse engineering efforts to counter planned obsolescence.
        
           | segfaultbuserr wrote:
           | Yes. Unfortunately, it's a double-edged sword. The same
           | technology is needed for the FOSS community to create secure
           | hardware.
        
             | snvzz wrote:
             | Security through obscurity? Nah.
        
               | loeg wrote:
               | This was security through a cryptographic design. It was
                | just a broken design. If you consider confidential
                | symmetric keys or private keys "obscurity," then sure,
                | all crypto is obscurity.
        
               | snvzz wrote:
                | There's no need to encrypt the keys you hardcode into
                | the FPGA if you control the hardware while doing so.
               | 
               | I certainly don't see how anything FOSS would be
               | affected, and would appreciate concrete examples.
        
               | johncolanduoni wrote:
               | This mechanism also included an HMAC, responsible for
               | authenticating the bitstream. That's useful even if the
               | bitstream is public knowledge.
        
         | baybal2 wrote:
          | Only those who need to keep their firmware secret will be
          | affected.
         | 
          | There are a few companies I knew of that transitioned from
          | MCUs to FPGAs solely out of an obsession with keeping their
          | "IP" from leaking, hoping that an FPGA would provide more
          | obscurity than simple encrypted MCU firmware.
        
           | SlowRobotAhead wrote:
           | Bitstream is not firmware. It's gate configuration.
           | 
            | It's not a pedantic difference. It's much harder to reverse
            | a hardware configuration back to human-readable logic like
            | Verilog.
           | 
           | "Bitstream is not firmware" is the first thing you learn when
           | working with FPGAs.
        
           | Nokinside wrote:
            | If the FPGA can be updated, an attacker can take over the
            | hardware and reprogram it.
            | 
            | If an attacker gets access to the bitstream, they have
            | complete control over the FPGA.
        
             | ATsch wrote:
             | This isn't a worry over device security. It's a worry about
              | cloning. Most of the serious FPGA market is low-volume
              | specialty devices that cost upwards of $10-100k per unit,
              | with yearly support contracts of that magnitude on top.
              | Seismology, medical research, telecommunications... many of
              | those products would be a raving success if they sold 1000
              | units. R&D is almost all of the cost, and the hardware
              | design is nothing special. You could clone these things
              | with basically no investment if you have access to the
              | bitstream. A "decompiled" bitstream is also much more
              | readable than assembly. That's why they're worried.
        
             | DarkmSparks wrote:
              | Yeah, with a cheap and easy attack vector like this,
              | especially with a remote option, this is one of the worst
              | hardware compromises I've seen in a while.
        
             | firmnoodle wrote:
              | It's more like access to replace the bitstream. It's a
              | complicated and expensive new attack vector.
        
             | alexdowad wrote:
             | But if an attacker is already "inside" your system and is
             | able to access the interface for configuring the FPGA, I
             | think you have already lost...
        
               | segfaultbuserr wrote:
               | > _But if an attacker is already "inside" your system
               | [...] I think you have already lost..._
               | 
                | That's not necessarily true. Protecting the system from
                | physical attackers is a legitimate requirement in
                | cryptography.
               | 
                | 1. While all hardware secrets can be broken with physical
                | access, there's a difference in cost, cost, and cost. A
                | commercial HSM - used by CAs, businesses, and banks to
                | hold private keys - contains RF shields, tamper-detection
                | switches, sensors for X-rays, light, and temperature,
                | battery-backed SRAM for self-destruction, and so on. It's
                | extremely unlikely that anyone has ever succeeded in
                | breaking into an HSM; possibly only a handful of people
                | in the three-letter agencies were able to do that, and
                | even for them it's a great expense - launching a supply-
                | chain attack, bribing the sysadmin, or stealing the key
                | are more reasonable options. It's certainly possible to
                | break into one, but the cost is prohibitive for most.
               | 
               | 2. You can make the hardware 100% secure against physical
               | attackers if the actual secret is not even on the
               | hardware. If I steal your full-disk-encrypted laptop
               | while it's off, I cannot obtain any meaningful data
               | because the encryption key is in your brain, not in this
                | laptop. This is a practical threat model and often
                | desirable. However, there's nothing to stop me from
                | brute-forcing the key, because the hardware itself has
                | no protections.
               | 
                | 3. If we make some compromises and trust the hardware, we
                | have another security model, the one used by a modern
                | smartphone: an encryption key is buried inside the chip.
                | I can boot the phone, but it's impossible to physically
                | bypass any software-based access control like passwords,
                | since all flash data is encrypted. All hardware can be
                | broken with physical access, and opening the chip to
                | extract the key may be cheaper than breaking into an
                | HSM, but it's still expensive in terms of cost and
                | expertise. Since it's difficult to bypass without an
                | additional software vulnerability, this is a good enough
                | threat model and often desirable.
               | 
                | We can combine (2) and (3): save the actual secret
                | outside the hardware so it cannot be stolen, and at the
                | same time implement some hardware-based protection that
                | forces the attacker to launch an expensive physical
                | attack before being able to brute-force the secret. This
                | is defense-in-depth and the best of both worlds. What we
                | have here is essentially an OpenPGP card (Yubikey,
                | Nitrokey) or a Bitcoin wallet, which uses both mechanisms
                | to protect the user from thieves. For example, Nitrokey's
                | implementation first encrypts the on-chip OpenPGP private
                | key with a user-supplied passphrase, and it also sets the
                | on-chip flash to be externally unreadable (readable only
                | by the firmware itself), so that the private key cannot
                | be extracted. Finally, it has the standard OpenPGP card
                | access control: if multiple wrong passphrases are
                | attempted, it locks itself. Of course, this feature
                | requires an inaccessible on-chip flash - either the flash
                | itself is on-chip, or an encryption key to the flash is
                | on-chip.
               | 
                | If the firmware executed by the chip can be replaced, the
                | attacker can disable all the access restrictions (disable
                | the wrong-passphrase lockout and read back the key),
                | totally eliminating the hardware layer of defense, which
                | is not what we want here. Unfortunately, Nitrokey is
                | based on a standard STM32 microcontroller, and its flash
                | protection has already been broken. Nitrokey Pro remains
                | "secure" - the real crypto is performed on an externally
                | inserted OpenPGP smartcard, which is powered by a
                | "secure" microcontroller, but the card is a partial
                | blackbox and cannot be audited. When Yubikey said it was
                | unable to release the source, many recommended Nitrokey
                | since it's "open source" - and it is, but it depends on a
                | Yubikey-like blackbox. If you want to implement something
                | better and more trustworthy than a Nitrokey or Yubikey,
                | the option here is to write a FOSS implementation of the
                | blackbox, turning it into a whitebox. Not that the
                | underlying FPGA can be audited - it cannot be - but it's
                | still much better than a complete blackbox.
               | 
                | And now back to the original topic: if your FPGA's
                | bitstream encryption has a vulnerability, it's game over.
                | This is a serious problem. A response may be: relying on
                | bitstream encryption is not the correct approach, and one
                | shouldn't use external flash at all. Well, yes, but that
                | is not my argument here. My argument is simply that
                | securing your hardware against physical access by an
                | attacker is a legitimate requirement, and that even if
                | everything can be broken with physical access, doing so
                | still has a point.
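
The passphrase-plus-lockout design described above (as on an OpenPGP card) can be sketched roughly. This is an illustrative toy model, not Nitrokey's actual firmware; the class, the XOR-based wrapping, and the PBKDF2 parameters are all made-up stand-ins:

```python
import hashlib
import hmac
import os


class ToyToken:
    """Toy model of the scheme described above: the secret is wrapped
    with a passphrase-derived key, and the firmware enforces a
    wrong-passphrase lockout. Illustrative only."""
    MAX_TRIES = 3

    def __init__(self, secret, passphrase):
        assert len(secret) <= 32
        self.salt = os.urandom(16)
        kek = hashlib.pbkdf2_hmac("sha256", passphrase, self.salt, 100_000)
        # Toy wrapping: XOR with the derived key (a real card would use AES).
        self.wrapped = bytes(s ^ k for s, k in zip(secret, kek))
        # Verifier value so wrong passphrases can be detected and counted.
        self.check = hmac.new(kek, b"check", "sha256").digest()
        self.tries_left = self.MAX_TRIES  # must live in *inaccessible* flash

    def unlock(self, passphrase):
        if self.tries_left == 0:
            raise RuntimeError("token locked")
        kek = hashlib.pbkdf2_hmac("sha256", passphrase, self.salt, 100_000)
        if not hmac.compare_digest(
                hmac.new(kek, b"check", "sha256").digest(), self.check):
            self.tries_left -= 1
            return None
        self.tries_left = self.MAX_TRIES
        return bytes(s ^ k for s, k in zip(self.wrapped, kek))
```

As the comment stresses, the lockout only holds if the storage backing `tries_left` (and the firmware itself) cannot be rewritten externally; otherwise the hardware layer of defense evaporates.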
        
               | baybal2 wrote:
               | > A commercial HSM - used by CAs, businesses, banks to
                | hold private keys - contains RF shields, tamper-detection
               | switches, sensors for X-ray, light, temperature, battery-
               | backed SRAM for self-destruction, and so on, it's
               | extremely unlikely that anyone has ever succeeded to
               | break into a HSM
               | 
                | A service to lift firmware from the Gemalto chips used
                | in SIM cards and credit cards costs at most $25k here.
        
               | segfaultbuserr wrote:
               | I think there's some confusion. Are you sure that you are
                | talking about the same thing? What I meant here is a real
                | HSM, something similar to an IBM 4758 [0] (which was once
                | vulnerable, but only because it had buggy software), not
                | a SIM card or a credit card chip. Are you implying that
                | many HSMs are based on the same Gemalto chip?
               | 
               | [0] https://www.cl.cam.ac.uk/~rnc1/descrack/ibm4758.html
        
               | DarkmSparks wrote:
               | He bought the FPGA, modified it, and sold you a
               | compromised one.
        
               | Nokinside wrote:
                | An attacker needs physical access to just one device in
                | the whole product line that uses the same encryption
                | key.
                | 
                | After that, all they need is to get the device to update
                | using the bitstream they made.
        
         | stragies wrote:
         | Can your post be read as "In the near future, we might have the
          | possibility of inspecting/re-programming a bunch of DMA-
          | capable devices commonly found in systems"?
         | 
         | That would be great. I shudder to think what horrors may await
         | us in USB-/Wifi-/Network-controller bit-streams/firmwares. But
         | the sooner these things get opened, the better.
        
       | SlowRobotAhead wrote:
        | A lot of people here are glossing over how hard it is to take a
        | bitstream - the gate configuration of the fabric - and read it
        | back into human-readable logic like Verilog. Extremely few
        | people can do it, and it always takes a lot of time.
       | 
       | This is a big issue for cloning though.
       | 
       | Oh AES CBC, when will you stop disappointing!?
        
       | nabla9 wrote:
       | I don't know much about FPGA's but I tried to read the paper.
       | Maybe someone can tell me if this is correct:
       | 
        | Getting your hands on the raw gate configuration helps with
        | cloning. It's a PITA to reverse engineer.
        | 
        | Any device that uses Xilinx 7-Series or Virtex-6 SPI or BPI
        | flash remote update is potentially fucked. The HMAC is inside
        | the bitstream and there is no other authentication.
        
       | jtaft wrote:
       | From the paper:
       | 
       | > On these devices, the bitstream encryption provides
       | authenticity by using an SHA-256 based HMAC and also provides
       | confidentiality by using CBC-AES-256 for encryption
       | 
       | > We identified two roots leading to the attacks. First, the
       | decrypted bitstream data are interpreted by the configuration
       | logic before the HMAC validates them. Second, the HMAC key is
       | stored inside the encrypted bitstream
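
The second root cause is easy to see in isolation: an HMAC only authenticates a message if the verifier obtains the key from somewhere the attacker doesn't control. A toy sketch of the broken arrangement - the function and the data are illustrative stand-ins, not Xilinx's actual bitstream format:

```python
import hashlib
import hmac
import os


def device_verify(bitstream_body, embedded_key, tag):
    """Toy model of the flaw: the device takes the HMAC key from the
    (attacker-controllable) bitstream itself, so verification only
    proves the tag matches whatever key the stream carries."""
    expected = hmac.new(embedded_key, bitstream_body, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)


# An attacker who can produce valid ciphertext (see the decryption-oracle
# attack discussed below) simply picks their own key and computes a tag:
attacker_key = os.urandom(32)
malicious_body = b"attacker-controlled configuration"
forged_tag = hmac.new(attacker_key, malicious_body, hashlib.sha256).digest()
assert device_verify(malicious_body, attacker_key, forged_tag)  # passes
```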
        
       | lallysingh wrote:
       | So can I use this hack to use open source tools on these FPGAs?
        
         | firmnoodle wrote:
          | No. It means people will be able to copy FPGA designs like it
          | was possible in the 2000s. It also means that the design in an
          | FPGA could be altered by an unauthorized third party without
          | having to physically replace the device.
        
           | londons_explore wrote:
            | Some FPGAs _require_ bitstream encryption. On those devices,
            | breaking this encryption is the first of many steps toward
            | making an open-source toolchain.
        
             | floatingatoll wrote:
             | And is also the first of many steps to making a rootkit
             | toolchain that can infect your devices when they're in the
             | possession of an attacker.
             | 
             | Any device that allows an anonymous third party to modify
             | it is a device that cannot be trusted once it's been handed
             | to a third party.
             | 
             | Would you be willing to register for an identity
             | certificate to make use of an open source toolchain with
             | your registered device, so that you could be certain others
             | had not silently rootkit'd it and were spying on your work?
        
             | eqvinox wrote:
             | This is not the case for the FPGAs targeted here.
             | Encryption is optional on Xilinx 7-Series. Also there
             | already is an open source toolchain coming up for them.
             | 
             | http://www.clifford.at/yosys/
        
       | userbinator wrote:
       | I wonder if everyone who works on stuff like this is pro-
       | DRM/anti-freedom, because while I've seen plenty of DRM-breaking
       | papers which paint a very negative view of their findings (this
       | one included), I can't recall seeing a single one which takes the
       | opposite view that this is another step forward for freedom and
        | right-to-repair. Do the researchers really believe that this is
        | a bad thing, or are they afraid of taking that position because
        | others could disapprove and reject their paper?
        
         | bb88 wrote:
         | So this will help:
         | 
         | 1. Security researchers, so they can see what malware may be
         | lurking in FPGA bitstreams.
         | 
         | 2. Open source developers working on FPGA bitstream compilers.
         | 
         | 3. People who want to steal proprietary IP cores.
         | 
         | It hurts:
         | 
          | 1. People who chose the part particularly because of the
          | closed bitstream, in part because they made security decisions
          | assuming the bitstream wasn't open.
          | 
          | 2. Anyone who bought products based upon the marketed security
          | claims of the product (hospitals/DoD/etc.)
        
       | krilovsky wrote:
        | I don't know what the best practices are now, but it used to be
        | best practice to blow the CFG_AES_Only eFUSE when using
        | bitstream protection, which prevents the loading of any
        | bitstream which isn't encrypted, and thus foils this attack. If
        | a manufacturer went to the trouble of encrypting the FPGA
        | bitstream but then allowed loading of plaintext bitstreams,
        | they probably didn't really understand what they were doing.
        
         | Nokinside wrote:
         | This attack breaks the encrypted and authenticated bitstream.
         | 
         | I thought that the title "A Full Break of the Bitstream
         | Encryption of Xilinx 7-Series FPGAs" would give some
         | information even for those who don't want to read the article
         | before commenting. :)
        
           | krilovsky wrote:
           | While I understand that without the proper context (knowing a
           | bit about bitstream protection in the Xilinx 7-Series FPGAs)
           | my comment may seem a bit obscure, I did read the paper.
           | 
           | As the sibling comment mentions, the attack requires
           | programming a plaintext bitstream in order to perform the
           | readout of the WBSTAR register after the automatic reset
           | caused by the HMAC authentication failure. Blowing the
           | CFG_AES_Only eFUSE prevents the loading of that plaintext
           | readout bitstream and the first stage of the attack is thus
           | foiled (preventing the second stage of the attack from taking
           | place as well).
        
             | Nokinside wrote:
             | That was the first attack. How about the second attack
             | where they show how to encrypt a bitstream?
        
               | krilovsky wrote:
               | See my reply in the sibling comment thread. Basically,
               | the second attack is not possible without the first
               | succeeding.
        
           | teraflop wrote:
           | As the paper explains, the attack requires alternately
           | tampering with the encrypted bitstream (to write one word of
           | the decrypted data at a time to a non-volatile register) and
           | then resetting the FPGA and loading a separate, attacker-
            | created, _unencrypted_ bitstream to read that register's
           | contents.
           | 
           | I don't know enough about Xilinx FPGAs to definitively say
           | whether setting the fuse that OP mentions would prevent the
           | attack, but it seems plausible.
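
The per-word tampering rests on standard CBC malleability: plaintext block i is D(C_i) XOR C_{i-1}, so flipping bits in ciphertext block i-1 flips exactly the same bits in plaintext block i (while garbling block i-1). A toy demonstration - the byte-wise "cipher" and the "write WBSTAR" command encoding are made up, but the XOR property holds for any block cipher, AES included:

```python
BLOCK = 16


def toy_dec(b):  # stand-in for AES-256 decryption; malleability doesn't care
    return bytes((x - 0x5A) % 256 for x in b)


def toy_enc(b):
    return bytes((x + 0x5A) % 256 for x in b)


def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))


def cbc_encrypt(iv, pt):
    prev, out = iv, []
    for i in range(0, len(pt), BLOCK):
        prev = toy_enc(xor(pt[i:i + BLOCK], prev))
        out.append(prev)
    return out


def cbc_decrypt(iv, blocks):
    prev, out = iv, []
    for c in blocks:
        out.append(xor(toy_dec(c), prev))
        prev = c
    return out


iv = bytes(BLOCK)
pt = b"JUNK-HEADER-....write NULL  0000"  # two 16-byte plaintext blocks
ct = cbc_encrypt(iv, pt)

# Flip bits in ciphertext block 0 to rewrite plaintext block 1 into a
# "write WBSTAR" command; block 0's plaintext turns to garbage, which is
# tolerable because the config logic acts before the HMAC check fails.
delta = xor(b"write NULL  0000", b"write WBSTAR0000")
ct[0] = xor(ct[0], delta)

out = cbc_decrypt(iv, ct)
assert out[1] == b"write WBSTAR0000"  # attacker-chosen command
```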
        
             | Nokinside wrote:
              | The attack can also be used to encrypt bitstreams.
             | 
             | >3.4 Attack 2: Breaking Authenticity
             | 
             | >Therefore the attacker can encrypt an arbitrary bitstream
             | by means of the FPGA as a decryption oracle. The valid HMAC
             | tag can also be created by the attacker, as the HMAC key is
             | part of the encrypted bitstream. Hence, the attacker can
             | set his own HMAC key inside the encrypted bitstream and
             | calculate the corresponding valid tag. Thus, the attacker
             | is capable of creating a valid encrypted bitstream, meaning
             | the authenticity of the bitstream is broken as well
        
               | krilovsky wrote:
               | > 3.4 Attack 2: Breaking Authenticity
               | 
               | > With the first attack, the FPGA can be used to decrypt
               | arbitrary blocks. Hence, it can also be seen as a
               | decryption oracle. Thus,we can also use this oracle to
               | encrypt a bitstream, as shown by Rizzo and Duong in [41],
               | and generate a valid HMAC tag
               | 
               | This requires the first stage of the attack to succeed.
               | If it fails and the FPGA cannot be used as a decryption
               | oracle, there's no way to generate a valid encrypted
               | bitstream with the technique outlined in the paper.
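
The Rizzo-Duong trick quoted above (sometimes called CBC-R) works backwards from the last block: pick C_n arbitrarily, then set C_{i-1} = D(C_i) XOR P_i down to the IV. A hedged sketch with a toy byte-wise cipher standing in for AES-256 (the construction itself touches only the decryption oracle, never an encryption key):

```python
import os

BLOCK = 16


def oracle_decrypt(block):
    """Stand-in for the FPGA used as a decryption oracle via the WBSTAR
    readout; a toy byte-wise cipher replaces AES-256 here."""
    return bytes((x - 0x5A) % 256 for x in block)


def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))


def cbc_r_encrypt(plaintext, oracle):
    """Forge a valid CBC ciphertext for `plaintext` using only the
    decryption oracle -- the key itself is never needed."""
    blocks = [plaintext[i:i + BLOCK] for i in range(0, len(plaintext), BLOCK)]
    ct = [os.urandom(BLOCK)]  # choose the last ciphertext block freely
    for p in reversed(blocks):
        ct.insert(0, xor(oracle(ct[0]), p))  # C_{i-1} = D(C_i) XOR P_i
    return ct[0], ct[1:]  # (IV, ciphertext blocks)


def cbc_decrypt(iv, blocks):  # what the device's config engine computes
    prev, out = iv, []
    for c in blocks:
        out.append(xor(oracle_decrypt(c), prev))
        prev = c
    return b"".join(out)


msg = b"attacker-chosen bitstream data!!"  # 32 bytes, two blocks
iv, ct = cbc_r_encrypt(msg, oracle_decrypt)
assert cbc_decrypt(iv, ct) == msg
```

With the HMAC key also sitting inside the (now forgeable) encrypted stream, the attacker tags the forgery themselves - which is exactly how the paper breaks authenticity.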
        
       | lnsru wrote:
        | If I really cared about security, I would not pick an SRAM FPGA
        | in the first place. There are nice flash-based FPGAs out there
        | for projects with high security requirements. They don't need
        | configuration devices leaking the bitstream all over the place.
        | 
        | On the other hand, it's somewhat sad that the popular 7-Series
        | is compromised. Though I never saw a company that cared about
        | bitstream security - it was at best a "nice to have" feature,
        | usually completely ignored.
        
         | duskwuff wrote:
         | A lot of flash-based FPGAs are actually an SRAM FPGA with an
         | internal flash die bonded to the configuration pins. The
         | bitstream is harder to get to, but it's still available to a
         | determined attacker.
        
           | lnsru wrote:
            | Actel/Microsemi parts are true flash FPGAs, while the
            | Altera/Intel MAX 10 is an SRAM FPGA with configuration flash
            | inside. Very nice and highly integrated chip - comfortable
            | to develop with.
        
       | eqvinox wrote:
       | This isn't bad for "security" or "secure microcontrollers." It is
       | in fact good for security. Designs running on these FPGAs can now
       | be analyzed and inspected for accidental or intentional security
       | issues. Mind you: the security issues are there whether you know
       | about them or not. The function that the FPGA implements can (and
       | should) still be secure - since the security of its algorithms
       | should never rely on the secrecy thereof. (And to protect secrecy
       | of private key material, it comes down to physical security
       | either way.)
       | 
       | What it's bad for is vendors relying on DRM to protect their
       | assets. Which is normally diametrically opposed to user freedom.
        
         | pmorici wrote:
          | This encryption is the only way you can ensure the integrity
          | of the firmware at the chip level, so anything relying on it
          | as part of a chain of trust is going to have to be redesigned
          | now. Firmware is loaded from an external EEPROM on these
          | devices; DRM wasn't the sole use of this feature.
        
           | eqvinox wrote:
            | The chain of trust could already be attacked by replacing
            | the entire FPGA chip with an unkeyed/open one and then
            | loading your own malicious bitstream.
           | 
           | Also, encryption never ensures integrity; it ensures
           | confidentiality. Integrity would've come from the
           | accompanying signature scheme, which apparently was badly
           | implemented and broken at the same time.
           | 
           | If anything, the encryption makes it impossible to conduct
           | spot checks on a batch of devices you receive, since it
           | prevents the end user from verifying bitstream integrity.
           | (The keys are device specific AFAIK, so the bitstream is
           | device specific too, and signature public keys aren't known.)
           | To establish trust, you ideally need an unencrypted,
           | verifiable, signed bitstream.
           | 
           | (An encrypted, signed bitstream with the keys available does
           | not protect against manufacturer collusion; they can
           | cooperate in sending you a tampered device. An unencrypted
           | bitstream allows comparing a device you received against
           | other devices around the planet.)
        
             | pmorici wrote:
              | On these chips the encryption and integrity-checking
              | features are one and the same; you can't turn on one
              | without the other.
              | 
              | Whether you use the same key for every device in a product
              | line or a per-device key is up to the OEM, so you can
              | still verify firmware in the former case.
              | 
              | Replacing the FPGA chip is a lot harder than re-flashing
              | an EEPROM, and they would also have to put a lot of effort
              | into replicating your firmware just to insert their
              | change.
        
       | voxadam wrote:
        | Is there any way that the breaking of the Xilinx bitstream
       | encryption opens the door to documenting and reverse engineering
       | that bitstream in the same way that was done with Project
       | IceStorm[0] for the Lattice iCE40 FPGAs?
       | 
       | [0] Project IceStorm - http://www.clifford.at/icestorm/
        
         | q3k wrote:
         | No. This is pretty much useless for that.
         | 
         | Project X-Ray [1] has been working on reverse engineering the
         | Series 7 bitstream format for a while now, and Dave Shah has an
         | experimental fork [2] of nextpnr that targets some devices
         | using the X-Ray database.
         | 
         | [1] - https://github.com/SymbiFlow/prjxray
         | 
         | [2] - https://github.com/daveshah1/nextpnr-xilinx
        
         | [deleted]
        
         | amelius wrote:
         | It's such a sad situation. Why can't companies just provide all
         | the necessary hardware info in the datasheet?
        
           | CamperBob2 wrote:
           | Their competitors would treat it as an illustrated guide for
           | patent infringement suits, for one thing. Security through
           | obscurity still works for that purpose to a great extent.
        
           | p0llard wrote:
           | In addition to the other reasons already mentioned, this
           | would likely reveal a lot of small details about the
           | underlying microarchitecture of the FPGA fabric which is a
           | (highly valuable) trade secret.
        
           | ATsch wrote:
           | Vendor lock-in is the primary way in which these companies
           | make money
        
             | amelius wrote:
             | But wouldn't an open specification be a much better value
             | proposition for engineers?
        
           | analognoise wrote:
           | Tens of thousands of engineers successfully use these chips
           | already without this data.
           | 
           | Divulging this data ends a revenue stream (which funds
           | tooling development) and prevents competitors from
           | potentially extracting useful information.
           | 
           | There's zero reason to open the chips up at that level.
        
             | throwaway2048 wrote:
             | zero reason for the company selling them, plenty of reason
             | for the users.
        
         | userbinator wrote:
         | It'll help with encrypted bitstreams, not much else.
         | 
          | Besides, the lack of public RE efforts is AFAIK a political
          | issue more than anything; the FPGA companies have been known
          | to send lawyers after anyone who tries. The bitstream format
          | itself, following the layout of the FPGA, is naturally going
          | to be extremely regular and definitely not hard to figure
          | out. It's really the "worst-kept secret" in the industry ---
          | there are probably a lot of people who have already figured
          | it out but just don't want to attract legal attention.
        
         | detaro wrote:
         | It's an optional feature, so I don't think so.
        
       | segfaultbuserr wrote:
       | It's not just an issue for big corporations and their proprietary
       | software and DRM, but also has serious implications for the free
       | and open source hardware community, especially the infosec
        | hackers. To begin with: while it's not realistic to make secure
        | hardware (say, an OpenPGP/X.509/Bitcoin-wallet security token)
        | that can be 100% independently verified and free from all
        | backdoors, still, relatively speaking, FPGAs are generally a
        | better and more secure option as a hardware platform than
        | microcontrollers (for example, see the talk [0] by Peter Todd on
        | Bitcoin hardware wallets and the pitfalls of general-purpose
        | microcontrollers), because of three advantages:
       | 
       | * It's possible to implement custom security protections at a
       | lower level than accepting whatever is provided by a
       | microcontroller or implementing it in more vulnerable software.
       | 
        | * Many microcontrollers can be copied easily, but FPGAs are
        | often used to run sensitive bitstreams containing proprietary
        | hardware/software, so manufacturers generally provide better
        | security protections, such as verification and encryption,
        | against data-extraction (read: OpenPGP private key) and
        | manipulation attacks.
       | 
        | * Most "secure" microcontrollers are guarded under heavy NDAs;
        | while they are commercially available (and widely used in DRM
        | systems), that's essentially useless for the FOSS community. On
        | the other hand, because of the extensive use of FPGAs in
        | commercial systems, security is NDA-free for many FPGAs. An
        | FPGA is often the best (or the only) option that provides
        | maximum transparency - not everything can be audited, sure, but
        | the other option is a "secure" ASIC, which is a total blackbox.
       | 
        | Unfortunately, nothing is foolproof: manufacturers leave secret
        | debug interfaces, cryptographic implementations have
        | vulnerabilities, etc. Hardware security is a hard problem - 100%
        | security and independent verification are impossible, and making
        | attacks harder is the objective. But it's worse than software:
        | once a bug is discovered and published, the cost of an attack
        | immediately drops to zero, and it cannot be patched. We can only
        | hope that increased independent verification, like that of the
        | researchers behind this paper, can somewhat reduce these
        | problems systematically.
       | 
       | [0] https://www.youtube.com/watch?v=r1qBuj_sco4
        
         | eqvinox wrote:
         | The way to protect secure hardware tokens is not bitstream
         | encryption, it's tamper protection. You store the key material
         | in SRAM that is erased when the device detects any attempt at
         | manipulation.
         | 
         | If your Bitcoin Wallet or whatever token is affected by this,
         | it was IMHO badly designed to begin with, since apparently it
         | was relying on an AES-CBC bitstream encryption scheme. That
         | should've been a red flag even if it wasn't broken.
        
           | segfaultbuserr wrote:
           | > _The way to protect secure hardware tokens is not bitstream
           | encryption, it 's tamper protection. You store the key
           | material in SRAM that is erased when the device detects any
           | attempt at manipulating._
           | 
           | You need both. First, make all external storage (that holds
           | keys, firmware, and configuration) unreadable to everything
           | besides the main processor itself. Then, ideally, implement
           | tamper detection - most HSMs have it. Unfortunately, the
           | world is not ideal: in the FOSS world, I don't see anything
           | that uses tamper detection. Developing open source tamper
           | detection would have great value to the community, yet I
           | don't see it happening anytime soon. Moreover, the majority
           | of security tokens/hardware have no tamper detection - SIM
           | cards, bank cards, OpenPGP cards (Yubikeys, Nitrokeys),
           | smartphones; they depend only on encrypting external
           | storage and/or restricting access to the secret inside a
           | chip. In practice, they still have above-average security,
           | which clearly shows tamper protection is not the only way
           | to protect hardware, although it's less effective and
           | occasionally something is going to be broken, to be sure.
           | 
           | This specific FPGA bitstream encryption vulnerability may
           | be a non-issue; as the critics point out, relying on
           | external storage is not a good idea to begin with - better
           | to burn everything inside the FPGA. My point is that FPGAs
           | are the only platform for implementing FOSS security
           | hardware in the most (relatively) transparent and secure
           | manner, yet the recent discoveries of FPGA vulnerabilities
           | indicate they are much less secure than expected, and this
           | is only the tip of the iceberg. If external bitstream
           | encryption has cryptographic vulnerabilities, what comes
           | next? More broken crypto that allows you to read an
           | internal key?
        
             | eqvinox wrote:
             | Moving your data inside a device that is not easily
             | accessible is tamper protection too.
        
       | [deleted]
        
       | LeifCarrotson wrote:
       | While this does open up the code (sans descriptive text) to
       | external attacks, I rather feel that once you have a logic
       | analyzer or compromised microcontroller on the logic bus of your
       | secure device you've got the attacker on the wrong side of the
       | airtight hatchway.
       | 
       | I'm personally much more interested in what it means for
       | 'attackers' who wish to use it to open up their own hardware.
       | Perhaps that might not align with the goals of Xilinx or the OEM,
       | but it's great for their customers!
        
         | Nokinside wrote:
         | An attacker needs physical access to just one device in the
         | whole product line that uses the same encryption key.
         | 
         | After that, all you need is to get the device to update using
         | a bitstream you made. In some cases this could be a remote
         | attack.
        
           | IshKebab wrote:
           | I'm not sure this is true. This attack allows reading the
           | encrypted bitstream, but it doesn't say anything about
           | allowing you to sign modified bitstreams.
        
             | nebulous1 wrote:
             | > In this paper, we introduce novel low-cost attacks
             | against the Xilinx 7-Series (and Virtex-6) bitstream
             | encryption, resulting in the total loss of authenticity and
             | confidentiality
             | 
             | This seems to be saying that it's possible
        
             | teraflop wrote:
             | As the paper explains, there is no "signing" involved (in
             | the sense of a public-key cryptosystem).
             | 
             | Each encrypted bitstream includes an HMAC, but because the
             | HMAC key is part of the encrypted bitstream itself, it
             | basically only acts like a non-cryptographic checksum. An
             | attacker who knows the encryption key can simply choose an
             | arbitrary HMAC key and generate a valid HMAC for arbitrary
             | data.
             | 
             | EDIT: I should clarify that this attack doesn't appear to
             | actually let someone _extract_ the AES encryption key. But
             | they can use an FPGA that has the key programmed as a
             | decryption oracle. And a weakness of CBC mode is that a
             | decryption oracle can be used for encryption as well.
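The CBC weakness teraflop describes can be sketched in a few lines of Python. This is a toy model, not the Xilinx format: `d_k` is a hypothetical stand-in for the device's AES block-decryption oracle (any deterministic 16-byte function will do for the math), but the backwards construction is the standard trick that turns a CBC decryption oracle into an encryption oracle.

```python
import hashlib

BLOCK = 16

def d_k(block):
    # Hypothetical stand-in for the device's block-decryption oracle
    # D_K. A real attack would feed blocks to the FPGA's AES engine;
    # here any deterministic 16-byte -> 16-byte map shows the chaining.
    return hashlib.sha256(b"device-key" + block).digest()[:BLOCK]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_decrypt(iv, ct):
    # Standard CBC decryption: P_i = D_K(C_i) XOR C_{i-1}, with C_0 = IV.
    pt, prev = b"", iv
    for i in range(0, len(ct), BLOCK):
        block = ct[i:i + BLOCK]
        pt += xor(d_k(block), prev)
        prev = block
    return pt

def encrypt_via_decryption_oracle(pt):
    # Build the ciphertext backwards: pick the final block arbitrarily,
    # then choose each earlier block as C_{i-1} = D_K(C_i) XOR P_i so
    # that CBC decryption yields exactly the plaintext we chose.
    assert len(pt) % BLOCK == 0
    blocks = [pt[i:i + BLOCK] for i in range(0, len(pt), BLOCK)]
    ct_blocks = [b"\x00" * BLOCK]  # arbitrary last ciphertext block
    for p in reversed(blocks):
        ct_blocks.insert(0, xor(d_k(ct_blocks[0]), p))
    return ct_blocks[0], b"".join(ct_blocks[1:])  # (IV, ciphertext)

msg = b"attacker-chosen bitstream cmds!!"  # 32 bytes = 2 blocks
iv, ct = encrypt_via_decryption_oracle(msg)
print(cbc_decrypt(iv, ct) == msg)  # True
```

Note that `d_k` is only ever called in the decryption direction: the attacker "encrypts" an arbitrary message without once invoking the cipher's encryption function.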
        
               | IshKebab wrote:
               | Ah interesting, thanks!
        
       | Nokinside wrote:
       | >3.5 Wrap-Up: What Went Wrong?
       | 
       | >These two attacks show again that nowadays, cryptographic
       | primitives hold their security assumptions, but their embedding
       | in a real-world protocol is often a pitfall. Two issues lead to
       | the success of our attacks: First, the decrypted data are
       | interpreted by the configuration logic before the HMAC validates
       | them. Generally, a malicious bitstream crafted by the attacker is
       | checked at the end of the bitstream, which would prevent an
       | altered bitstream content from running on the fabric.
       | Nevertheless, the attack runs only inside the configuration
       | logic, where the command execution is not secured by the HMAC.
       | Second, the HMAC key K_HMAC is stored inside the encrypted
       | bitstream. Hence, an attacker who can circumvent the encryption
       | mechanism can read K_HMAC and thus calculate the HMAC tag for a
       | modified bitstream. Further, they can change K_HMAC, as the
       | security of the key depends solely on the confidentiality of the
       | bitstream. The HMAC key is not secured by other means. Therefore,
       | an attacker who can circumvent the encryption mechanism can also
       | bypass the HMAC validation
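The structural flaw the paper describes - K_HMAC living inside the protected data - can be illustrated with a toy model in Python. The field layout, key sizes, and SHA-256 are illustrative assumptions, not the real 7-Series format; the point is only that a tag keyed by material carried in the same message proves nothing about its origin.

```python
import hashlib
import hmac

def build_bitstream(k_hmac, payload):
    # Simplified model of the layout: the HMAC key and the tag both
    # travel inside the (normally encrypted) bitstream itself.
    tag = hmac.new(k_hmac, payload, hashlib.sha256).digest()
    return k_hmac + payload + tag

def device_accepts(bitstream):
    # The configuration logic recovers K_HMAC from the decrypted
    # bitstream and checks the tag with it - so the tag proves only
    # internal consistency (like a checksum), not who made the file.
    k_hmac, payload, tag = bitstream[:32], bitstream[32:-32], bitstream[-32:]
    expected = hmac.new(k_hmac, payload, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

# An attacker who can circumvent the encryption simply picks any
# K_HMAC of their own and tags an arbitrary payload with it:
forged = build_bitstream(b"\x00" * 32, b"modified bitstream")
print(device_accepts(forged))  # True
```

A tag worth anything would be keyed by a secret the attacker cannot choose - e.g. a key burned into the device - rather than one shipped alongside the data it protects.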
        
         | ATsch wrote:
         | This is another example of what Moxie Marlinspike calls the
         | "cryptographic doom principle". If you do anything, _anything_
         | with a ciphertext before checking authenticity, doom is
         | inevitable.
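For contrast, here is a minimal sketch of the ordering the doom principle demands: authenticate first, with an independently provisioned key, and only decrypt or interpret a ciphertext after the tag checks out. The hash-based keystream is a toy stand-in for a real cipher, for illustration only.

```python
import hashlib
import hmac

def keystream(key, nonce, length):
    # Toy counter-mode keystream (stand-in for a real cipher; do not
    # use for actual encryption).
    out, ctr = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:length]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def seal(enc_key, mac_key, nonce, plaintext):
    # Encrypt-then-MAC: the tag covers the ciphertext, and mac_key is
    # provisioned separately instead of riding inside the message.
    ct = xor(plaintext, keystream(enc_key, nonce, len(plaintext)))
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def open_sealed(enc_key, mac_key, blob):
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    # Verify FIRST: on a bad tag the ciphertext is never decrypted,
    # never parsed, never acted upon.
    expected = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("bad tag: ciphertext never touched")
    return xor(ct, keystream(enc_key, nonce, len(ct)))

blob = seal(b"enc" * 8, b"mac" * 8, b"n" * 16, b"bitstream payload")
print(open_sealed(b"enc" * 8, b"mac" * 8, blob))
```

The 7-Series design violated both halves of this: the configuration logic interpreted decrypted data before validation, and the verification key was not independent of the message.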
        
           | loeg wrote:
           | For others following along: https://moxie.org/blog/the-
           | cryptographic-doom-principle/ (2011)
        
       | alexdowad wrote:
       | Once I worked at a place which was very interested in protecting
       | the IP inherent in their firmware. They gave me a research
       | assignment to get an idea of how difficult it would be for an
       | attacker to extract it as a binary given unlimited physical
       | access to a sample device.
       | 
       | Since I read and write Chinese, I did some searching on Chinese-
       | language sites and found a company advertising their ability to
       | do just that... for about $10,000 per job. They listed the chips
       | they knew how to crack, but the one my employer was using was not
       | on the list...
       | 
       | I feel that trying to prevent reverse-engineering by adversaries
       | with unlimited physical access is a fool's errand. So this break
       | of Xilinx FPGAs is interesting... But kind of a shoulder shrug.
        
         | WrtCdEvrydy wrote:
         | I wonder if classic chip-off methods would work (they no longer
         | work on mobile devices due to encryption advances)
        
         | userbinator wrote:
         | There are quite a few Chinese (and Russian, not
         | surprisingly...) companies who will do "MCU breaks". $10k (USD)
         | is near the high end of the price range; price depends on
         | complexity and newness --- less than $1k for some of the common
         | and older parts. Mikatech is one of the better-known and older
         | ones.
         | 
         | They are great for maintaining legacy equipment where the
         | original company has either discontinued support or disappeared
         | completely.
        
         | grogenaut wrote:
         | Having evaluated implemting game drm many times this nuance may
         | or may not matter to the business. Most drm is a house of cards
         | that a determined attacker can take out. It's still very widely
         | used for good and bad reasons. And a lot of those reasons are
         | not closely tied to the strength of a given implementation.
        
           | Teknoman117 wrote:
           | The DRM only has to function during the time where most sales
           | occur for it to be worth it.
        
             | pmiller2 wrote:
             | Except that piracy doesn't hurt sales and may actually
             | help: https://www.engadget.com/2017-09-22-eu-suppressed-
             | study-pira...
        
               | baby wrote:
               | I can see it being a spectrum.
               | 
               | For a big movie that people want to watch? Piracy
               | probably hurts.
               | 
               | For a small indie in dire need of more exposure, it's
               | probably on YouTube anyway, but I can see how piracy
               | could help.
               | 
               | If you're in the middle? Not clear.
        
       ___________________________________________________________________
       (page generated 2020-04-19 23:00 UTC)