[HN Gopher] A case against security nihilism
       ___________________________________________________________________
        
       A case against security nihilism
        
       Author : feross
       Score  : 170 points
       Date   : 2021-07-20 19:18 UTC (3 hours ago)
        
 (HTM) web link (blog.cryptographyengineering.com)
 (TXT) w3m dump (blog.cryptographyengineering.com)
        
       | ngneer wrote:
        | NSO == arms dealers, by their own admission. They did not create
        | the market for digital arms, but successfully cater to it. No HN
        | comment will change their business model. They benefit from the
        | easy distribution of software twice. Once as an exploit
        | developer, because all target systems look alike (recall that
        | hardware and software vendors also want to build hardware and
        | software once and then distribute widely), and therefore an
        | exploit need only be developed once to apply broadly. Then, a
        | second time as a software developer, because they can sell the
        | same software to multiple clients. Having worked on Pegasuses, I
        | can say the thing that is dreaded most, and is very costly, is a
        | rewrite. Those get financially prohibitive. If the world were
        | serious about stopping the NSOs of the world, it would work
        | toward efficiently (read: inexpensively) making individual
        | systems wildly different yet still interoperable (the
        | interoperability is where the network effect comes in, providing
        | value in communication systems and leading to their wider
        | adoption). The conflict to solve is how to make systems
        | interoperable and non-interoperable at the same time. While it
        | is easy to imagine randomized instruction sets, Morpheus-like
        | blindly-randomize-everything chips, and bytecode VMs that use
        | binary translation to vary the phenotype of each individual
        | system, it is not so easy to envision how systems could be
        | written once to interoperate yet prevent BORE-type (break once,
        | run everywhere) attacks, whereby the one-time exploit
        | development cost is easily offset by repeat exploitation. The
        | only way forward is to find the lever that gives defenders a
        | cheap button to push that forces an expensive Pegasus rewrite.
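        | 
        | A toy sketch of what per-device "phenotype" diversification
        | could mean (everything here is invented for illustration; a
        | real scheme would randomize memory layouts or instruction
        | encodings, not a demo array):
        | 
        |     // Cargo.toml: rand = "0.8" (assumed)
        |     use rand::{rngs::StdRng, Rng, SeedableRng};
        | 
        |     // Each install derives its own internal field order from a
        |     // device-local seed; the wire format stays canonical, so
        |     // the two devices still interoperate.
        |     fn layout(device_seed: u64) -> [usize; 4] {
        |         let mut rng = StdRng::seed_from_u64(device_seed);
        |         let mut order = [0usize, 1, 2, 3];
        |         // Fisher-Yates shuffle of the in-memory field order.
        |         for i in (1..order.len()).rev() {
        |             let j = rng.gen_range(0..=i);
        |             order.swap(i, j);
        |         }
        |         order
        |     }
        | 
        |     fn main() {
        |         // An exploit hardcoding device A's offsets...
        |         println!("A: {:?}", layout(0xA11CE));
        |         // ...probably lands in the wrong place on device B
        |         // (only 24 layouts in this toy; a real scheme has far
        |         // more). Break once no longer runs everywhere.
        |         println!("B: {:?}", layout(0xB0B));
        |     }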
        
       | gnfargbl wrote:
       | "What can we do to make NSO's life harder?" That seems pretty
       | simple to me: We ask Western democratic governments (which
       | include Israel) to properly regulate the cybersecurity industry.
       | 
       | This is the purpose of governments; it is why we keep them
       | around. There is no really defensible reason why the chemical,
       | biological, radiological and nuclear industries are heavily
       | regulated, but "cyber" isn't.
        
         | fennecfoxen wrote:
         | This still leaves us with threats from state actors and
         | cybersecurity firms answering only to Eastern, undemocratic
         | governments.
        
         | tptacek wrote:
         | Nobody has any credible story for how regulations would prevent
         | stuff like this from happening. The problem is simple
         | economics: with the current state of the art in software
         | engineering, there is no way to push the cost of exploits (let
         | alone supporting implant tech) high enough to exceed the petty
         | cash budget of state-level actors.
         | 
         | I think we all understand that the medium-term answer to this
         | is replacing C with memory-safe languages; it turns out, this
         | was the real Y2K problem. But there's no clear way for
         | regulations to address that effectively; assure yourself, the
         | major vendors are all pushing forward with memory safe
         | software.
        
           | contravariant wrote:
            | Well, first of all, the NSO Group in its current form
            | wouldn't exist if Israel regulated it; at the very least it
            | wouldn't exist as a state-level-equivalent actor.
            | 
            | Second of all, if you can't push the costs high enough, then
            | it becomes time to limit the cash budgets of state-level
            | actors. Which is hardly without precedent.
            | 
            | For some reason you seem to be looking at this only as a
            | technology problem, while at the core it is far more
            | political. Sure, technology might _help_, but that's the
            | raison d'etre of technology.
        
             | tptacek wrote:
             | Sure, you can outlaw NSO itself. I won't complain! But all
             | you're doing is smearing the problem over the globe. You
             | can push this kind of work all the way to "universally
             | acknowledged as organized crime", and it'll still happen,
             | exactly the same way, with basically the same actors. You
             | might even increase the incentives by doing it. Policy is
             | complicated.
        
               | mjreacher wrote:
                | I really don't get this line of argument that regulation
                | is useless. For example, if you made it illegal for ex-
                | US gov workers to work at companies like these, I would
                | expect the vast majority to comply, so at the very
                | minimum you would be limiting the available talent pool.
                | The post several parents up talked about regulation of
                | the biological, nuclear, etc. industries being
                | effective, and although "cyber" would never be treated
                | in the same way, they're right: after all, you don't see
                | organized criminals running around with biological or
                | radiological weapons now, do you?
        
               | tptacek wrote:
               | I don't know if it's useless. I just know it isn't going
               | to stop NSO-type attacks by state-level actors. People on
               | message boards have very strange ideas about what the
               | available talent pool is; for starters, they seem
               | strangely convinced that it's all people who are choosing
               | between writing exploits and working at a Google office.
        
               | mjreacher wrote:
                | Of course you will never stop all attacks; however, you
                | can try to limit their number by making them more
                | expensive to carry out, whether by limiting where
                | attackers can hire from, the political consequences they
                | will incur, etc.
        
               | tptacek wrote:
               | On this thread, we're talking about state-level attackers
               | targeting iMessage.
        
           | jrm4 wrote:
            | Nor does anyone _need_ one, yet. Again, the point of
            | government -- force the dang discussion; that's what
            | investigations, committees, et al. are for.
           | 
           | It's fun to make fun of old people in ties asking (to us)
           | stupid questions about technology in front of cameras, but at
           | the end of the day, it's a crucial step in actually getting
           | something done about all this.
        
           | gnfargbl wrote:
           | You're extremely correct, of course, but what I'm really
           | proposing here is something much more boring than actually
           | solving the technical problem(s). How about a dose of good
            | old-fashioned bureaucracy? If you want to sell exploits in a
            | Western country, then yeah, sure, you can, but first you
            | should have to go through an approval process, fill in a
            | form for every customer, and have them vetted, yada yada.
           | 
           | This wouldn't do anything to stop companies who base
           | themselves in places like Russia. It wouldn't even really do
           | anything to stop those who base themselves in the
           | _Seychelles_. But, you want to base yourself in a real bona-
           | fide country, like the USA or France or Israel or Singapore?
           | Then you should have to play by some rules.
        
             | tptacek wrote:
             | If you make people fill out paperwork to sell exploits in
             | Israel, Germany, and the United States, they will sell
             | exploits in Kuala Lumpur, Manila, and Kigali. I'm not
             | saying you're expressing it at all, but there is a lot of
             | chauvinism built into the most popular ideas for regulating
             | exploits.
        
               | gnfargbl wrote:
               | Yes, they certainly will. I'm not naive, or colonial,
               | about that. But what more can we do than live out the
               | standards that we want to see upheld in the world?
        
           | mrtesthah wrote:
           | > _Nobody has any credible story for how regulations would
           | prevent stuff like this from happening._
           | 
           | We do have some of those already.
           | 
           | https://www.faa.gov/space/streamlined_licensing_process/medi.
           | ..
        
           | maqp wrote:
           | If the governments can't ban exploits, perhaps they can ban
           | writing commercial programs in memory unsafe languages?
           | Countries could agree on setting a goal, e.g. that by 2040
           | all OSs etc. need to use a memory safe language.
        
         | nullc wrote:
         | > to properly regulate the cybersecurity industry
         | 
         | Regulated Cybersecurity: Must include all mandatory government
         | backdoors.
        
         | mrdoops wrote:
         | The whole approach of regulating on the level of "please don't
          | exploit vulnerable systems" seems reactive to me. If the cat's
          | out of the bag on a vulnerability, it's just data to copy and
          | proliferate - there's not much a government can do other than
          | threaten repercussions, which only matters if you get caught.
         | 
         | The only tractable way to deal with cyber security is to
         | implement systems that are secure by default. That means
         | working on hard problems in cryptography, hardware, and
         | operating systems.
        
           | AnimalMuppet wrote:
           | By the exact same logic, implementing physical security on
           | the level of "please don't kill vulnerable people" would also
           | be reactive. If the cat's out of the bag on a way to kill
           | people, well, don't we need to implement humans that are
           | unkillable in that way? That's going to mean working on some
           | hard problems...
           | 
           | No. We don't operate that way, and we don't want to.
           | 
           | But for us to not operate that way in cyberspace, we need
           | crackers (to use the officially approved term) to be at least
           | as likely to be caught (and prosecuted) as murderers are.
            | _That's_ a hard problem that we should be working on.
           | 
           | (And, yes, we need to work on the other problems as well.)
        
             | shkkmo wrote:
              | Despite the enforcement mechanisms against murder (which
              | work less than two-thirds of the time), you see many
              | places implement preventive security measures to make
              | killing people more difficult.
              | 
              | I think it is wholly reasonable to work on both preventive
              | and punitive approaches. For online crimes, jurisdictional
              | issues are major hurdles for the punitive approach.
        
               | AnimalMuppet wrote:
               | > For online crimes, jurisdictional issues are major
               | hurdles for the punitive approach.
               | 
               | Yeah. If you can catch people in your jurisdiction
               | (without the problems of spoofing and false flags), then
               | people are just going to attack you from outside your
               | jurisdiction. You'd have to firewall your jurisdiction
               | against outside attacks. (You might even be able to do
               | that, by controlling every cable into the country. But
               | then there's satellites...)
        
         | cratermoon wrote:
         | > We ask Western democratic governments (which include Israel)
         | to properly regulate the cybersecurity industry.
         | 
         | That's a bit naive. Governments want surveillance technology,
         | and will pay for it. The tools will exist, and like backdoors
         | and keys in escrow, they will leak, or be leaked.
         | 
          | The reason all those other industries are regulated as much
          | as they are is that governments don't need those types of
          | weapons the way they need information. It's messy and somewhat
          | distasteful to overthrow an enemy in war, but undermining a
          | government through surveillance, disinformation, and
          | propaganda, until it collapses and is replaced by a more
          | compliant one, is the bread and butter of world affairs.
        
           | maqp wrote:
            | The thing is, countries with a vast intellectual-property
            | base have more to lose in the game, and thus should favor
            | defense over offense. As Schneier says, we must choose
            | between security for everyone or security for no one.
        
         | contravariant wrote:
          | Yeah, it seems kind of silly to start with the fact that
          | something has caused "the bad thing everyone said would
          | happen" to happen, and somehow not see that thing as a blatant
          | security hole in and of itself.
          | 
          | I mean, sure, technical solutions are available and do help,
          | but to look only at the technical side and ignore the original
          | issue seems like a mistake.
        
           | cratermoon wrote:
           | > a blatant security hole in and of itself
           | 
           | That means our society, our governments, our economic systems
           | are security holes. Everyone saying the Bad Thing would
           | happen did so by looking, not at technology, but at how our
           | world is organized and run. The Bad Thing happened because
           | all those actors behaved exactly as they are designed to
           | behave.
        
       | Animats wrote:
       | _" the fact that iMessage will gleefully parse all sorts of
       | complex data received from random strangers, and will do that
       | parsing using crappy libraries written in memory unsafe
       | languages."_
       | 
       | C. 30 years of buffer overflows.
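        | 
        | The pattern in miniature, under an invented packet format: a
        | fixed-size buffer plus an attacker-controlled length field.
        | C's memcpy takes the attacker's word for it; a bounds-checked
        | slice refuses:
        | 
        |     fn main() {
        |         // First byte claims the payload length; the packet
        |         // actually carries 3 bytes. A C memcpy trusting the
        |         // claim would write far past the 64-byte buffer.
        |         let packet: &[u8] = &[200, 1, 2, 3];
        |         let claimed = packet[0] as usize;
        |         let mut buf = [0u8; 64];
        |         match packet.get(1..1 + claimed) {
        |             Some(src) if src.len() <= buf.len() => {
        |                 buf[..src.len()].copy_from_slice(src)
        |             }
        |             _ => eprintln!("rejected: length field lies"),
        |         }
        |     }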
        
         | pradn wrote:
         | 50 years!
        
           | [deleted]
        
       | cratermoon wrote:
       | > The problem that companies like Apple need to solve is not
       | preventing exploits forever, but a much simpler one: they need to
       | screw up the economics of NSO-style mass exploitation.
       | 
        | On the one hand, sure, make it too expensive to do this. On the
        | other hand, how much more expensive is _too_ expensive? When the
        | first SHA1 collision attack was found, it was considered a
        | problem, and SHA1 was declared unsuitable for security purposes;
        | now a collision is cheap.
        
       | downWidOutaFite wrote:
       | Dumb article. Basically amounts to "Apple should continue to do
       | what they're doing".
        
       | haecceity wrote:
       | Should iMessage do what Facebook messenger does and request
       | receiver permission before letting a new contact message them?
        
       | bjornsing wrote:
       | How does a bug in iMessage lead to my iPhone being completely
       | taken over by Pegasus? I thought apps were sandboxed on iOS.
       | 
       | Or can they only monitor SMS/iMessages with this entry point?
        
         | x4e wrote:
          | I imagine they use one exploit to get code execution in
          | iMessage, then another exploit to escape the sandbox and
          | execute code in the kernel.
        
       | nullc wrote:
       | > Notably, these targets include journalists and members of
       | various nations' political opposition parties
       | 
       | For all we know it also included cryptographers and security
        | researchers. Unfortunately, the list hasn't been published -- so
       | we only know what the journalists who had access to it cared to
       | look up.
        
       | grantwu wrote:
       | Can someone explain why Blastdoor has been unsuccessful? Is it
       | too hard a problem to restrict what iMessage can do?
        
       | dfabulich wrote:
       | The article says that although "you can't have perfect security,"
        | you can make it _uneconomical_ to hack you. It's a good point,
       | but it's not the whole story.
       | 
       | The problem is that state-level actors don't just have a lot of
       | money; they (and their decision makers) also put a much much
       | lower value on their money than you do.
       | 
       | I would never think to spend a million dollars on securing my
       | home network (including other non-dollar costs like
       | inconveniencing myself). Let's suppose that spending $1M would
       | force the US NSA to spend $10M to hack into my home network. The
       | people making that decision aren't spending $10M of their own
       | money; they're spending $10M of the government's money. The NSA
       | doesn't _care_ about $10M in the same way that I care about $1M.
       | 
        | As a result, securing yourself even against a dedicated attacker
        | like Israel's NSO Group could cost way, way more than a simple
        | budget analysis would imply. I'd have to make the costs of
        | hacking me so high that someone at NSO would say "wait a minute,
        | even _we_ can't afford that!"
        | 
        | So, sure, "good enough" security is possible in principle, but I
        | think it's fair to say "You probably can't afford good-enough
        | security against state-level actors."
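        | 
        | A toy model of that asymmetry, using the numbers above (the
        | 1/1000 "pain" discount for a state's petty cash is invented):
        | 
        |     fn main() {
        |         // My $1M of defense forces a $10M attack spend.
        |         let my_defense = 1_000_000.0_f64;
        |         let their_attack = 10.0 * my_defense;
        |         // A dollar of state petty cash "hurts" perhaps
        |         // 1/1000th as much as a dollar of my own money.
        |         let their_pain = their_attack / 1_000.0;
        |         // Prints: my pain 1000000, their pain 10000 --
        |         // I lose the exchange despite the 10x cost ratio.
        |         println!("my pain {my_defense}, their pain {their_pain}");
        |     }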
        
         | dane-pgp wrote:
         | Whether $10M is a lot of money to the NSA or not is also only
         | part of the story. The remaining part is how much they value
         | the outcome they will achieve from the attack.
         | 
         | That reminds me somehow of an old expression: If you like
         | apples, you might pay a dollar for one, and if you really like
         | apples you might pay $10 for one, but there's one price you'll
         | never pay, no matter how much you like them, and that's two
         | apples.
        
           | tptacek wrote:
           | You're right. It's only part of the story. Another part of
           | the story is that the cost of these attacks is so far below
           | the noise floor of any state-level actor that raising their
           | costs will probably have perverse outcomes. For the same
           | reason you don't routinely take half a course of antibiotics,
           | there are reasons not to want to deliberately drive up the
           | cost of exploits as an end in itself. When you do that,
           | you're not hurting NSO; you're helping them, since their
           | business essentially boils down to taking a cut.
           | 
           | We should do things that have the side effect of making
           | exploits more expensive, by making them more intrinsically
            | scarce. The scarcer novel exploits are, the safer we all
            | are. But
           | we should be careful about doing things that simply make them
           | cost more. My working theory is that the more important
           | driver at NSA isn't the mission as stated; like most big
           | organizations, the real driver is probably just "increasing
           | NSA's budget".
        
         | cratermoon wrote:
         | > The problem is that state-level actors don't just have a lot
         | of money; they (and their decision makers) also put a much much
         | lower value on their money than you do.
         | 
         | They also have something else most people don't have: time.
         | Nation-states and actors at that level of sophistication can
         | devote years to their goals. This is reflected in the acronym
          | APT, or Advanced _Persistent_ Threat. It's not just that once
          | they have hacked you they'll stick around until they are
          | detected or have everything they need; it's _also_ that
          | they'll keep trying, playing the long game, waiting for their
          | target to get tired, make a mistake, or fail to keep up with
          | advancing sophistication.
          | 
          | In your example, you spend $1M on your home network, but do
          | you keep spending, month after month, year after year, to
          | prevent bitrot? Equifax failed to update Struts to address a
          | known vulnerability, not just because of cost but also time.
          | The resulting breach has cost around $2 billion so far, and
          | the final cost may never really be known.
        
         | ngneer wrote:
         | "Secure" and "uneconomical" are generally equivalent. A door
         | lock is an _economic_ instrument, that just happens to leverage
         | the laws of physics in its operation. If the NSOs of the world
         | are your enemy, and they are by definition of having you on
         | their list, then you must wisely expend your energy on making
         | their attack more costly or else get eaten.
        
         | bitexploder wrote:
         | Most organizations should not really be factoring state level
         | actors into their risk assessment. It just doesn't make sense.
         | If you are an actual target for state level actors you likely
         | will know about it. You will also likely have the funding to
         | protect yourself against them. And if you can't, that isn't a
         | failing of your risk assessment decision making.
        
           | titzer wrote:
           | Meanwhile, the biggest state-level actors are developing
           | offensive capabilities at the scale of "we can wipe out
           | everything on the enemy's entire domestic network" which
           | includes thousands of businesses of unknown value. The same
           | way strategic nuclear weapons atomize plenty of civilian
           | infrastructure.
           | 
           | Sure, in that kind of event, an org might be more concerned
           | with flat out survival. But you never know if you'll be
           | roadkill. And once that capability is developed, there is no
           | telling how some state-level actors are connected to black
           | markets and hackers who are happy to have more ransomware
           | targets. Some states are hurting for cash.
        
           | pixl97 wrote:
           | Are you a semi-large American company?
           | 
           | Then you are an actual target for state level actors.
        
             | Beached wrote:
              | as a security engineer at a semi-large American company,
              | we factor in state actors. we do tool for, and routinely
              | hunt for, nation-state actors.
              | 
              | most people I know, even those in mid-size businesses,
              | tool for and hunt for nation-state TAs as well. it's just
              | something you have to do. the line between ecrime and
              | nation-state is sooooo thin, you might as well, especially
              | when you're talking about NK, where you have nation-state-
              | level ecrime.
        
           | blowski wrote:
           | What counts as a state-level actor? The NSA, obviously. But a
           | lot of other groups seem to be in more of a grey area.
        
       | [deleted]
        
       | tick_tock_tick wrote:
       | Apple seems incapable of successfully sandboxing iMessage after
        | years of exploits. At this point I think we have to assume they
        | just don't care.
        
         | skybrian wrote:
          | This doesn't seem like a stable fact about Apple. Their
          | priorities can change. I expect that these recent revelations
          | have gotten the attention of top management and are likely to
          | draw a strong organizational response.
         | 
         | (I'm reminded of Google's responses to the Snowden leak.)
        
         | smoldesu wrote:
         | You're not entirely wrong. For a proprietary, closed-source,
         | limited-access system that Apple has complete control of, it's
         | surprisingly vulnerable and slow to be patched.
        
           | recursive wrote:
           | Also not partially wrong either.
        
           | tptacek wrote:
            | It's pretty much completely wrong. Apple invests more in
            | this problem than almost any vendor in the world, and no
            | vendor with a comparable footprint fares meaningfully better
            | than they do --- Google surpasses them at some problems, and
            | vice versa at others. The problems we're dealing with here
            | are basically at the frontier of software engineering and
            | implicate not so much Apple as _the entire enterprise of
            | commercial consumer software development_, no matter where
            | it's practiced.
           | 
           | It's fair to criticize Apple. But you can't reasonably argue
           | that they DGAF.
        
             | [deleted]
        
       | dadrian wrote:
       | There's plenty of evidence that this type of attack surface
       | (parsers operating on untrusted data received over the Internet)
       | is fixable, even at Big Tech scale. The most obvious example is
       | Microsoft Office in the early 2000s and the switch to the XML-
       | based format with newer, easier-to-implement and ideally memory-
        | safer parsers. That's not to say there are no bugs in Office
        | anymore, but it's certainly much, much better than it was.
       | 
       | Microsoft figured it out. Apple can do it, too.
        
       | akhilpotla wrote:
        | The point that the author makes is very valuable: it is
        | important not to throw our hands up in the air. If you are not
        | moving forward, you are falling back.
        | 
        | Though one (perhaps nit-picky) point I'd like to make is that
        | these dictators are not dumb. They are incredibly intelligent.
        | They themselves are probably not hackers, but they understand
        | people and power. They are going to do what they can to get what
        | they want. We can't ignore the role they play in creating these
        | problems, and we need to take it just as seriously as we would a
        | technical security exploit.
        
         | spitfire wrote:
          | Not to split hairs here, but let's separate this into a few
          | different traits: smart, intelligent, and cunning.
          | 
          | Most dictators are not very intelligent. Just like Donald
          | Trump is not very intelligent.
          | 
          | Cunning, and with social smarts, would be apt. These guys
          | _really_ know how to play people off each other, and
          | manipulate, like, really well.
        
         | pessimizer wrote:
         | You don't have to be "incredibly intelligent" to pay some
         | company to hack a list of your enemies. That just takes money
         | and a list of people you hate, not insight.
        
       | smoldesu wrote:
        | This article doesn't seem to have a direction; it's just a lump
        | of refutations about how hard it is to maintain a secure system,
        | and how understanding we need to be throughout this process.
        | What it doesn't actually address is security nihilism, so let's
        | expand on the seed he plants in the final section:
       | 
       | > It's the scale, stupid
       | 
        | This should 100% be the focus, not how truly admirable Apple's
        | efforts to improve security are. Security nihilism is _entirely
        | about_ scale, and understanding your place in the digital
        | pecking order. The only way to be 'secure' in that sense is to
        | directly
       | limit the amount of personal information that the surrounding
       | world has on you: in most first-world countries, it's impossible
       | to escape this. Insurance companies know your medical history
       | before you even apply for their plan, your employer will
       | eventually learn about 80% of your lifestyle, and the internet
       | will slowly sap the rest of the details. In a world where copying
       | is free, it's undeniable that digital security is a losing game.
       | 
       | Here's a thought: instead of addressing security nihilism in the
       | consumer, why don't you highlight this issue in companies?
       | There's currently no incentive to hack your phone unless it has
       | valuable information that can't be found anywhere else: in which
       | case, you have more of a logistics issue than a security one.
       | Meanwhile, ransomware and social-engineering attacks are at an
       | all-time high, yet our security researchers are taking their time
       | to hash out exactly how mad we deserve to be at Apple for their
       | exploit-of-the-week. If this is the kind of attitude the best-of-
       | the-best have, it's no wonder we're the largest target for
       | cyberattacks in the world.
        
         | kaba0 wrote:
         | > The only way to be 'secure' in that sense is to directly
         | limit the amount of personal information that the surrounding
         | world has on you
         | 
          | I may misunderstand you, but this is privacy, not security.
          | The two are not completely separate, but that's another issue.
        
       | fsflover wrote:
       | The only practical security is security through isolation, like
       | what Qubes OS provides. Security through correctness is
       | impossible.
        
         | ttymck wrote:
         | Stupid question: how do you know your isolation is correct?
        
           | fsflover wrote:
            | Not a stupid question at all. Nothing is 100% correct.
            | Instead, you look at the attack surface, which for Qubes is
            | extremely small: no network in the AdminVM, only ~100k lines
            | of code in the Xen hypervisor, hardware virtualization with
            | an extremely low number of discovered escapes, and so on.
        
           | ece wrote:
           | You test for it with rigor and incorporate new learning, just
           | like every other engineering discipline.
        
         | Ar-Curunir wrote:
         | You seem to have missed the point of the article completely.
         | 
         | We can't achieve perfect security (there's no such thing). What
         | we can achieve is raising the bar for attackers. Simple things
         | like using memory-safe languages for handling untrusted inputs,
         | least-privilege design, defense in depth, etc.
        
           | fsflover wrote:
           | Memory-safe languages are good, but decreasing the attack
           | surface through compartmentalization is much more reliable I
           | think.
        
       | Syonyk wrote:
       | ... against nihilism? They're just sort of handwaving and saying,
       | "Well, uh... we should do better, somehow... and expect Apple to
       | do better, and... uh..." How's that any different from saying
       | "The problem is basically impossible"?
       | 
       | The core of the problem is complexity. Our modern computing stack
       | can be broadly described as:
       | 
        | - Complexity to add features.
        | - Complexity to add performance.
        | - Complexity to solve problems with the features.
        | - Complexity to solve problems created from the performance
        | complexity.
        | - Complexity added to solve the issues the previous complexity
        | created.
       | 
       | And this has been iterating over, and over, and over... and over.
       | The code gets more complex, so the processors have to be faster,
       | which adds side channel issues, so the processors get more
       | complex to solve that, as does the software, hurting performance,
       | and around you go again.
       | 
       | At no point does anyone in the tech industry seem to step back
       | and say, "Wait. What if we simplify instead?" Delete code. Delete
       | features. I would rather have an iPhone without iMessage zero
       | click remote exploits than one with animated cartoons based on me
       | sticking my tongue out and waggling my eyebrows, to pick on a
       | particularly complex feature.
       | 
       | I've made a habit of trying to run as much as I can on low power
       | computers, simply to see how it works, and ideally help figure
       | out the choke points. Chat has gotten comically absurd over the
       | years, so I'll pick on it as an example of what seems, to me, to
       | be needless complexity.
       | 
       | Decades ago, I could chat with other people via AIM, Yahoo, MSN,
       | IRC, etc. Those clients were thin, light, and ran on a single
       | core 486 without anything that I recall as being performance
       | issues.
       | 
       | Today, Google Chat (having replaced Hangouts, which was its own
       | bloated pig in some ways) struggles to keep up with typing on a
       | quad core, 1.5GHz ARM system (Pi 4). It pulls down nearly 15MB of
       | resources - or roughly 30% of a Windows 95 install. To chat with
       | someone person to person, in the same way AIM did decades ago.
       | I'm _more_ used to lagged typing in 2021 than I was in 1998.
       | 
       | Yes, it's got some new features, and... I'm sure someone could
       | tell me what they are, but in terms of sending text back and
       | forth to people across the internet, along with images, it's
       | fundamentally doing the exact same thing that I did 20 years ago,
       | just using massively more resources, which means there are
       | massively more places for vulnerabilities, exploits, bugs, etc,
       | to hide. Does it have to be that huge? No idea, I didn't write
       | it. But it's larger and slower than Hangouts, to accomplish, as
       | far as I'm concerned, the same things.
       | 
       | We can't just keep piling complexity on top of complexity forever
       | and expect things to work out.
       | 
       | Now, if I wanted to do something like IRC, which is substantially
       | unchanged from the 90s, I can use a lightweight native client
       | that uses basically no CPU and almost no memory to accomplish
       | this, on an old Pi3 that has an in-order CPU with no speculation,
       | and can run a rather stripped down kernel, no browser, etc.
       | That's going to be a lot harder to find bugs in than the modern
       | bloated code that is most of modern computing.
       | 
       | But nobody gets promoted for stripping out code and making things
       | smaller these days, it seems.
       | 
       | As long as the focus is on adding features, that require more
       | performance, we're simply not going to get ahead of the security
       | bugs. And, if everyone writing the code has decided that memojis
       | are more important than security iMessage against remote zero
       | click exploits, well... OK. But the lives of journalists are the
       | collateral damage of those decisions.
       | 
       | These days, I regularly find myself wondering why I bother with
       | computers at all outside work. I'd free up a ton of "overhead
       | maintenance time" I spend maintaining computers, and that's
       | before I get into the fact that even with aggressive attempts to
       | tamp down privacy invasions, I'm sure lots of my data is happily
       | being aggregated for... whatever it is people do with that, send
       | ads I block, I suppose.
        
         | tptacek wrote:
         | The bugs we're talking about have almost nothing to do with the
         | underlying message transport, but rather the features built on
         | top of it. Replacing iMessage with IRC wouldn't solve anything.
        
           | Syonyk wrote:
           | No, but my point is about complexity.
           | 
           | If _all_ iMessage allowed were ASCII text strings, do you
           | think it would have nearly the same attack surface as it does
           | now, allowing all the various things it supports (including,
           | if I recall properly, some tap based patterns that end up on
           | the watch)?
           | 
           | In a very real sense, complexity (which is what features are)
           | is at odds with security. You increase the attack surface,
           | and you increase the number of pieces you can put together
           | into weird ways that were never intended, but still work and
           | get the attacker something they want.
           | 
           | If there were some toggle to disable parsing everything but
           | ASCII text and images in iMessage, I'd turn it on in a
           | heartbeat.
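            | 
            | A sketch of what that hypothetical toggle could look like
            | (the Attachment type and the gate are invented; note that
            | even "just images" still means running an image decoder,
            | which is its own attack surface):
            | 
            |     struct Attachment {
            |         bytes: Vec<u8>,
            |     }
            | 
            |     // Allowlist by magic number: real PNG/JPEG signatures.
            |     fn is_allowed_image(a: &Attachment) -> bool {
            |         a.bytes.starts_with(&[0x89, b'P', b'N', b'G'])
            |             || a.bytes.starts_with(&[0xFF, 0xD8, 0xFF])
            |     }
            | 
            |     // Drop anything but ASCII text and allowlisted images
            |     // before any richer parser ever sees the message.
            |     fn accept(text: &str, attachments: &[Attachment]) -> bool {
            |         text.is_ascii() && attachments.iter().all(is_allowed_image)
            |     }
            | 
            |     fn main() {
            |         let png = Attachment { bytes: vec![0x89, b'P', b'N', b'G'] };
            |         assert!(accept("hello", &[png]));
            |         assert!(!accept("h\u{e9}llo", &[])); // non-ASCII rejected
            |     }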
        
             | tptacek wrote:
             | Virtually no one wants to use a messaging platform that
             | just sends ASCII strings.
             | 
             | It's true that if you constrain the problems enough,
             | ratcheting them down to approximately what we were doing
             | with the Internet in 1994 when we were getting access to it
             | from X.25 gateways, you can plausibly ship secure software
             | --- with the engineering budgets of 2021 (we sure as shit
             | couldn't do it in 1994). The problem is that there is no
             | market to support those engineering budgets for the feature
             | set we had in 1994.
        
               | Syonyk wrote:
               | > _Virtually no one wants to use a messaging platform
               | that just sends ASCII strings._
               | 
               | That's just about all I use for messages. Some images,
               | but it's not critical. And if I had the option to turn
               | off "all advanced gizamawhatchit parsing" in iMessage to
               | reduce the attack surface, I absolutely would - and you
               | can bet any journalist in a hostile country would like
               | the option as well.
               | 
               | The whole "zero click" thing is the concerning bit - if I
               | can remotely compromise someone's phone with just their
               | phone # or email address, well... that's kind of a big
               | deal, and this is hardly the first time it's been the
               | case for iMessage.
               | 
               | If software complexity is at a point that it's considered
               | unreasonable to have a secure device, then it's long past
               | time to put an icepick through the phones and simply stop
               | using them. Though, as I noted above, I feel this way
               | about most of modern computing these days.
        
               | tptacek wrote:
               | I 100% believe that this is all you do with messages. In
               | the 1990s, my cool friends did lots of their work on Wyse
               | dumb terminals hooked up to FreeBSD boxes. Everything
               | they did worked fine on dumb terminals! They were neat,
               | you could have a bunch of them hooked up to one box! But
               | _nobody else in the whole world worked that way_ ; even
               | the bank data entry people who were the original market
               | for those stupid terminals had moved on from them.
               | 
               | The issue here is that we aren't saying anything about
               | the real problem. You can radically scope software down.
               | That will indeed make it more secure. But you will stop
               | making money. When you stop making money, you will stop
               | being able to afford the developers who can write secure
               | software (the track record on messaging software written
               | by amateurs for love is not great). Now we're back where
                | we started, just with shittier software.
               | 
               | It's a hard problem. You aren't wrong to observe it; it's
               | just that you haven't gotten us an inch closer to a
               | solution.
        
               | pixl97 wrote:
               | So you speak English? And the rest of the world should do
               | what?
        
               | Syonyk wrote:
               | I suppose I should have gone with "Unicode without emoji"
               | instead of ASCII. I don't mind unicode, but I question
               | the emoji parsing engines as they're doing all sorts of
               | crazy stuff with modifiers, and even unicode rendering is
               | oddly complex and likely has bugs in some corner case or
               | another.
               | 
               | From a "I would like it as simple and secure as
               | possible," ASCII does tick quite a few boxes.
        
               | tptacek wrote:
               | I think it's been single-digit months since the last
               | UTF-8 parsing vulnerability.
        
             | philipkglass wrote:
             | The "and images" part has historically been a rich source
             | of software exploits. I would guess that chat with full
             | Unicode support but no images would be easier to implement
             | to a high degree of security than ASCII text plus images.
        
         | ngneer wrote:
         | Well put. The market values features. With present system
         | engineering approaches, the path of least resistance is to add
         | complexity to enable said features and reap the financial
         | rewards. It takes more effort to build smaller attack surfaces,
         | so nature tends to avoid that path. Regulation helps little.
         | Security is not additive, it is subtractive. Less is more.
         | There is very little incentive to simplify, except in niche
         | segments. So, zero surprise commodity systems fail so
         | horrendously.
        
       | Veserv wrote:
        | The article correctly refutes the silly binary argument that
        | many people fall back on: since perfection is impossible, we
        | must accept an imperfect solution; and since the current
        | solutions are clearly imperfect, the status quo must be
        | acceptable, because imperfect solutions are acceptable.
       | 
       | However, the article falls right into the next failed model of
       | considering everything in terms of relative security. We should
       | make things "better", we should make things "harder", but those
       | terms mean very little. 1% better is "better". Making a broken
       | hashing function take 2x as long to break makes things "harder",
       | but it does not make things more secure since it is already
        | hopelessly inadequate. The problem with considering things only
        | in terms relative to existing solutions is that it ignores
        | defining the problem and, more importantly, does not tell you
        | whether you have solved your problem.
       | 
        | The correct model is the one used by engineering disciplines:
        | specify _objective_, quantifiable standards for what is
        | adequate, then verify that the solution passes those standards.
        | If you do not define what is adequate, how do you know whether
        | you have achieved even the bare minimum of what you need, and
        | how far your solution may be from that?
       | 
       | For instance, consider the same NSO case as the article. Did
       | Apple do an adequate job, what is an adequate job, and how far
       | away are they?
       | 
       | Well, let us assume that the average duration of surveillance for
       | the 50,000 phones was 1 year per phone. Now what is a good level
       | of protection against that kind of surveillance? I think a
       | reasonable standard is making it so the phone is not the easiest
       | way to surveil a person for that length of time, it is cheaper to
       | do it the old fashioned way, so the phone does not make you more
       | vulnerable on average. So, how much does it cost to surveil a
       | person and listen in on their conversations for a year the old
        | fashioned way? $1k, $10k, $100k? If we assume $10k, then across
        | all 50,000 phones the level of security needed to protect
        | against NSO-type threats, and to adequately protect against
        | surveillance, is 50,000 x $10k = $500M.
       | 
        | So, how far away is Apple from that? Well, Zerodium pays $1.5M
        | per iMessage zero-click [1]. If we assume they burned 10 of
        | them, infecting a mere 5k phones per exploit with a trivially
        | wormable complete compromise, that would amount to ~$15M at
        | market price. Adding in the rest of the work, it would maybe
        | cost $20M all together, worst case. So, if you agree with this
        | analysis (if you do not, feel free to plug in your own
        | estimates), then Apple has achieved ~4% of the necessary level
        | and would need to improve its processes by a factor of 25 to
        | achieve adequate security against this type of attack. I think
        | that should make it clear why things are so bad. "Best in
        | class" security needs to improve by over 10x to become
        | adequate. It should be no wonder these systems are so
        | defenseless.
       | 
       | [1] http://zerodium.com/program.html
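        | 
        | Running the comment's own estimates (every input below is an
        | assumption from the text above, not a measured value):
        | 
        |     fn main() {
        |         let phones = 50_000.0_f64;
        |         let tail_cost = 10_000.0; // assumed old-fashioned cost/yr
        |         let adequate_bar = phones * tail_cost;
        |         let exploits = 10.0 * 1_500_000.0; // Zerodium rates
        |         let total_attack = exploits + 5_000_000.0; // "~$20M"
        |         let ratio = 100.0 * total_attack / adequate_bar;
        |         // bar: 500000000, attack: 20000000, ratio: 4.0%
        |         println!("bar: {adequate_bar}, attack: {total_attack}, \
        |                   ratio: {ratio:.1}%");
        |     }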
        
       | roody15 wrote:
        | I am not convinced Apple wasn't aware of what NSO was doing.
        | 
        | Governments want access to spy on people. Apple wants to market
        | and sell a "secure" mobile device.
        | 
        | In a way, NSO provides Apple with a perfect out. Apple can
        | legally claim it is a secure platform and does not work with bad
        | actors or foreign governments to "spy".
        | 
        | Hear no evil, see no evil. NSO's ability to penetrate iOS gives
        | powerful governments what they want, and in a way may keep
        | "pressure" off Apple to provide official back-door access.
        
         | tptacek wrote:
         | That would be a surprise to literally every person who works in
         | Ivan's organization at Apple. This is message-board-think, not
         | analysis.
        
       | o8r3oFTZPE wrote:
       | "An entirely separate area is surveillance and detection: Apple
       | already performs some remote telemetry to detect processes doing
       | weird things. This kind of telemetry could be expanded as much as
       | possible while not destroying user privacy. While this wouldn't
       | necessarily stop NSO, it would make the cost of throwing these
       | exploits quite a bit higher - and make them think twice before
       | pushing them out to every random authoritarian government."
       | 
        | Apple could do more spying (excuse me, "telemetry") "as much as
        | possible" in addition to NSO... because it would make the
        | competitor's spying more expensive.
        | 
        | This could be a unilateral decision, made by Apple without input
        | from users, as usual.
        | 
        | Any commercial benefits to Apple due to the increased data
        | collection would be purely incidental, of course.
       | 
       | Apple and NSO may have different ways of making money, but they
       | both use (silent) data collection from computer users to help
       | them.
        
       | staticassertion wrote:
        | Just the other day I suggested using a YubiKey, and someone
        | linked me to the Titan side channel, where researchers
        | demonstrated that, with persistent physical access and a dozen
        | hours of work, they could break the guarantees of a Titan
        | chip[0]. They said "an attacker will just steal it". The
        | researchers, on the other hand, stressed how fundamentally
        | difficult this was to pull off due to the very limited attack
        | surface.
       | 
       | This is the sort of absolutism that is so pointless.
       | 
       | At the same time, what's equally frustrating to me is defense
       | without a threat model. "We'll randomize this value so it's
       | harder to guess" without asking who's guessing, how often they
       | can guess, how you'll randomize it, how you'll keep it a secret,
       | etc. "Defense in depth" has become a nonsense term.
       | 
       | The use of memory unsafe languages for parsing untrusted input is
       | just wild. I'm glad that I'm working in a time where I can build
       | all of my parsers and attack surface in Rust and just think way,
       | way less about this.
       | 
       | I'll also link this talk[1], for the millionth time. It's Rob
       | Joyce, chief of the NSA's TAO, talking about how to make NSA's
       | TAO's job harder.
       | 
       | [0] https://arstechnica.com/information-
       | technology/2021/01/hacke...
       | 
       | [1] https://www.youtube.com/watch?v=bDJb8WOJYdA
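        | 
        | To make the parser point concrete, a minimal example of the
        | style (the TLV wire format here is invented): untrusted bytes
        | in, Result out, no unchecked indexing, so the worst a hostile
        | message can do is produce an Err:
        | 
        |     // Parse one tag-length-value record from untrusted input.
        |     fn parse_tlv(input: &[u8]) -> Result<(u8, &[u8]), &'static str> {
        |         let (&tag, rest) = input.split_first().ok_or("empty input")?;
        |         let (&len, rest) = rest.split_first().ok_or("missing length")?;
        |         let value = rest.get(..len as usize).ok_or("length exceeds buffer")?;
        |         Ok((tag, value))
        |     }
        | 
        |     fn main() {
        |         // A lying length field is an Err, not a heap overread.
        |         assert!(parse_tlv(&[0x01, 200, 0xAA]).is_err());
        |         assert_eq!(parse_tlv(&[0x01, 1, 0xAA]), Ok((0x01, &[0xAA][..])));
        |     }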
        
         | blowski wrote:
          | I was with you until the part about parsing with memory-unsafe
          | languages. Isn't that exactly the kind of "random security not
          | based on a threat model" comment you so rightly criticised in
          | the first half of your comment?
        
           | lisper wrote:
           | I think you must have misunderstood the point the parent
           | comment was trying to make. Memory-safety issues are
           | responsible for a majority of real-world vulnerabilities.
           | They are probably the most prevalent extant threat in the
           | entire software ecosystem.
        
           | staticassertion wrote:
           | The attack surface is the parser. The ability to access it is
           | arbitrary. I can't build a threat model beyond that for any
           | specific case, but in the case of a text messaging app I
           | absolutely expect "attacker can text you" to be in your
           | threat model.
        
           | kmeisthax wrote:
           | There are very few threat models that a memory unsafe parser
           | _does not_ break.
           | 
           | Even the "unskilled attacker trying other people's vulns"
           | threat basically _depends_ on the existence of memory-safety
           | related vulnerabilities.
        
             | blowski wrote:
             | Then we're right back in the checklist mentality of "500
             | things secure apps never do". I could talk to somebody else
             | and they'd tell me the real threat to worry about is
             | phishing or poor CI/CD or insecure passwords or whatever.
        
               | staticassertion wrote:
               | There is no "real threat". Definitely phishing is one of
               | the top threats to an organization, left unmitigated.
               | Thankfully, we now have unphishable 2FA, so you can
               | mitigate it. _When_ you choose to prioritize a threat is
               | going to be a call you have to make as the owner of your
                | company's security posture - maybe phishing is above
               | memory safety for you, I can't say.
               | 
               | What I can say is that parsing untrusted data in C is
               | very risky. I can't say it is more risky than phishing
               | for you, or more risky than anything else. I lack the
               | context to do so.
               | 
               | That said, a really easy solution might be to just not do
               | that. Just like... don't parse untrusted input in C. If
               | that's hard for you, so be it, again I lack context. But
                | that's my _general_ advice - don't do it.
        
               | lanstin wrote:
                | Inarguable these days.
        
           | titzer wrote:
           | Based on the hundreds, perhaps thousands of critical
           | vulnerabilities that are due directly to parsing user input
           | in memory-unsafe languages, usually resulting in remote code
           | execution, how's this for a threat model: attacker can send
           | crafted input that contains machine code that subsequently
           | runs with the privileges of the process parsing the input.
           | That's bad.
        
           | ExtraE wrote:
            | I mean, the threat model is that:
            | 1. Memory leaks/errors are bad.
            | 2. Programmers make those mistakes all the time.
            | 3. Using memory-safe languages is cheap.
            | Therefore,
            | 4. We should use memory-safe languages more often.
        
         | viztor wrote:
          | For anyone who's fresh to cybersecurity, the fundamental axiom
          | is that anything can be cracked; it is only a matter of
          | computation (time x resources). Just as the dose makes the
          | poison (sola dosis facit venenum).
          | 
          | Suppose you have a secret that is RSA-encrypted: we might be
          | looking at three hundred trillion years to break it, according
          | to Wikipedia, with the kind of computers we have now.
          | Obviously the secret would have lost its value by then, and
          | the resources required to crack it would be worth more than
          | the secret itself. Even with quantum computing, we are still
          | looking at 20+ years, which is still enough for most secrets:
          | you have plenty of time to change the secret, or it will have
          | lost its value. So we say that's secure enough.
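          | 
          | A back-of-the-envelope version of that computation (the
          | 128-bit keyspace and the 10^12 guesses/second rig are assumed
          | round numbers, not the RSA figure above):
          | 
          |     fn main() {
          |         let keyspace = 2f64.powi(128); // ~3.4e38 keys
          |         let guesses_per_sec = 1e12;    // a very generous rig
          |         let secs_per_year = 3.15e7;
          |         // Expect success about halfway through the space.
          |         let years = keyspace / 2.0 / guesses_per_sec / secs_per_year;
          |         println!("~{years:.1e} years"); // ~5.4e18 years
          |     }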
        
           | alabamacadabra wrote:
           | If that's a fundamental axiom of cyber security then it's
           | obvious that it's a field of fools. This is a poor, tech-
           | driven understanding of security that will leave massive gaps
           | in its application to technology.
        
         | o8r3oFTZPE wrote:
         | From the Ars reference: "There are some steep hurdles to clear
         | for an attack to be successful. A hacker would first have to
         | steal a target's account password and also gain covert
         | possession of the physical key for as many as 10 hours. The
         | cloning also requires up to $12,000 worth of equipment and
         | custom software, plus an advanced background in electrical
          | engineering and cryptography. That means the key cloning --
          | were it ever to happen in the wild -- would likely be done
          | only by a nation-state pursuing its highest-value targets."
         | 
         | "only by a nation-state"
         | 
         | This ignores the possibility that the company selling the
         | solution could itself easily defeat the solution.
         | 
         | Google, or another similarly-capitalised company that focuses
         | on computers, could easily succeed in attacking these "user
         | protections".
         | 
          | Further, anyone could potentially hire them to assist. What is
          | to stop this, if secrecy is preserved?
         | 
         | We know, for example, that Big Tech companies are motivated by
         | money above all else, and, by-and-large, their revenue does not
         | come from users. It comes from the ability to see into users'
         | lives. Payments made by users for security keys are all but
         | irrelevant when juxtaposed against advertising services revenue
         | derived from personal data mining.
         | 
         | Google has an interest in putting users' minds at ease about
         | the incredible security issues with computers connected to the
         | internet 24/7. The last thing Google wants is for users to be
         | more skeptical of using computers for personal matters that
         | give insight to advertisers.
         | 
         | The comment on that Ars page is more realistic than the
         | article.
         | 
         | Few people have a "nation-state" threat model, but many, many
         | people have the "paying client of Big Tech" threat model.
        
           | staticassertion wrote:
           | Yes, if you don't trust Google, don't use a key from
           | Google. Is that what you're trying to say? If your threat
           | model includes Google, don't buy your key from Google. Do I
           | think that's probably a stupid waste of thought? Yes, I do.
           | But it's totally legitimate if that's your threat model.
        
         | ignoramous wrote:
         | _I'll conclude with a philosophical note about software
         | design: Assessing the security of software via the question
         | "can we find any security flaws in it?" is like assessing the
         | structure of a bridge by asking the question "has it collapsed
         | yet?" -- it is the most important question, to be certain, but
         | it also profoundly misses the point. Engineers design bridges
         | with built-in safety margins in order to guard against
         | unforeseen circumstances (unexpectedly high winds, corrosion
         | causing joints to weaken, a traffic accident severing support
         | cables, et cetera); secure software should likewise be designed
         | to tolerate failures within individual components. Using a MAC
         | to make sure that an attacker cannot exploit a bug (or a side
         | channel) in encryption code is an example of this approach: If
         | everything works as designed, this adds nothing to the security
         | of the system; but in the real world where components fail, it
         | can mean the difference between being compromised or not. The
         | concept of "security in depth" is not new to network
         | administrators; but it's time for software engineers to start
         | applying the same engineering principles within individual
         | applications as well._
         | 
         | -cperciva, http://www.daemonology.net/blog/2009-06-24-encrypt-
         | then-mac....
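         | 
         | A minimal sketch of that encrypt-then-MAC discipline in Rust
         | (assuming the hmac and sha2 crates; decrypt is a placeholder,
         | not real cipher code):
         | 
         |   use hmac::{Hmac, Mac};
         |   use sha2::Sha256;
         |   
         |   type HmacSha256 = Hmac<Sha256>;
         |   
         |   // Stand-in for the actual decryption routine.
         |   fn decrypt(ciphertext: &[u8]) -> Vec<u8> {
         |       ciphertext.to_vec()
         |   }
         |   
         |   fn open(key: &[u8], ciphertext: &[u8], tag: &[u8])
         |       -> Option<Vec<u8>> {
         |       // Verify the MAC over the ciphertext *before*
         |       // decrypting: if the tag is bad, the (possibly buggy)
         |       // decryption code never touches attacker input at all.
         |       let mut mac = HmacSha256::new_from_slice(key).ok()?;
         |       mac.update(ciphertext);
         |       mac.verify_slice(tag).ok()?;
         |       Some(decrypt(ciphertext))
         |   }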
        
         | api wrote:
         | There's still a lot of macho resistance to using safe
         | languages, because "I can write secure code in C!"
         | 
         | "You" probably can. I can too. That's not the point.
         | 
         | What happens when the code has been worked on by other people?
         | What happens after a few dozen pull requests are merged? What
         | happens when it's ported to other platforms with different
         | endian-ness or pointer sizes or hacked in a late night death
         | march session to fix some bug or add some feature that has to
         | ship tomorrow? What happens when someone accidentally deletes
         | some braces with an editor's refactor feature, turning a "for {
         | foo(); bar(); baz(); }" into a "for foo(); bar(); baz();"?
         | 
         | That's how bugs creep in, and the nice thing about safe
         | languages is that the bugs that creep in are either caught by
         | the compiler or result in a clean failure at runtime instead of
         | exploitable undefined behavior.
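         | 
         | A contrived sketch of that "clean failure" property (the
         | index is taken from the environment so it isn't known at
         | compile time):
         | 
         |   fn lookup(buf: &[u8], i: usize) -> u8 {
         |       // In C an out-of-range i silently reads past the
         |       // buffer (undefined behavior); in Rust it panics with
         |       // "index out of bounds" -- a deterministic crash, not
         |       // an exploitable memory read.
         |       buf[i]
         |   }
         |   
         |   fn main() {
         |       let buf = [0u8; 4];
         |       let i = std::env::args().len() + 5; // >= 6 at runtime
         |       println!("{}", lookup(&buf, i));
         |   }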
         | 
         | Speed is no longer a good argument. Rust is within a few
         | percent of C performance if you code with an eye to
         | efficiency, and if you really need something to be as high-
         | performance as possible, code _just that one thing_ in C (or
         | ASM) and write the rest in Rust. You can also use unsafe,
         | sparingly, to squeeze out performance if you must.
         | 
         | Oh and "but it has unsafe!" is also a non-argument. The point
         | of unsafe is that you can trivially search a code base and
         | audit every use of it. Of course it's easy to search for unsafe
         | code in C and C++ too... because all of it is!
         | 
         | If we wrote most things, especially parsers and network
         | protocols, in Rust, Go, Swift, or some other safe language,
         | we'd get rid of a ton of low-hanging fruit in the form of
         | memory and logic error attack vectors.
        
         | cratermoon wrote:
         | > I'm glad that I'm working in a time where I can build all of
         | my parsers and attack surface in Rust and just think way, way
         | less about this.
         | 
         | I'm beginning to worry that, every time Rust is mentioned as
         | a solution for every memory-unsafe operation, we're moving
         | towards an irrational exuberance about how much value that
         | safety really has over time. Maybe let's not jump too
         | enthusiastically onto that bandwagon.
        
           | bitwize wrote:
           | Whole classes of bugs -- _the_ most common class of security-
           | related bugs in C-family languages -- just go away in safe
           | Rust with few to no drawbacks. What's irrational about the
           | exuberance here? Rust is a massive improvement over the
           | status quo, one we can't afford not to take advantage of.
        
           | TaupeRanger wrote:
           | There's literally zero evidence that a program written in
           | Rust is actually practically safer than one written in C at
           | the same scale. And there won't be any evidence of this for
           | some time because no Rust program is as widely deployed as an
           | equivalent highly used C program.
        
             | rrdharan wrote:
             | I'd wager Dropbox's Magic Pocket is up there with
             | equivalent C/C++ based I/O / SAN stacks:
             | 
             | https://dropbox.tech/infrastructure/extending-magic-
             | pocket-i...
        
             | staticassertion wrote:
             | That's not true, actually. There is more than "literally
             | zero" evidence. I don't feel like finding it for you, but
             | at minimum Mozilla has published a case study showing that
             | moving to Rust considerably reduced the memory safety
             | issues they discovered. That's just one example, I believe
             | there are others.
             | 
             | There are likely many other examples of, say, Java not
             | having memory safety issues. Java makes very similar
             | guarantees to Rust, so we can extrapolate, using common
             | sense, that the findings roughly translate.
             | 
             | Common sense is a really powerful tool for these sorts of
             | conversations. "Proof" and "evidence" are complex things,
             | and yet the world goes on with assumptions that turn out to
             | hold quite well.
        
           | Ar-Curunir wrote:
           | ... it is a solution for every memory-unsafe operation,
           | though?
        
             | choeger wrote:
             | No. Rust cannot magically avoid memory-unsafe operations
             | when you have to deal with, well, memory. If I throw a byte
             | stream at you and tell you it is formatted like so and so,
             | you have to work with memory and you will create memory
             | bugs.
             | 
              | It can, however, make such bugs extremely difficult to
              | exploit _and_ it can confine the dangerous operations to
              | rare, well-marked corners (which are easier to implement
              | correctly).
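              | 
              | A minimal sketch of what "easier to implement correctly"
              | can look like (hypothetical length-prefixed wire
              | format):
              | 
              |   /// Parse one length-prefixed record from an untrusted
              |   /// byte stream, with no unsafe code.
              |   fn parse_record(input: &[u8]) -> Option<&[u8]> {
              |       let len = *input.first()? as usize; // 1-byte header
              |       // Bounds-checked slicing: a header that lies
              |       // about the length yields None, never an
              |       // out-of-bounds read.
              |       input.get(1..1 + len)
              |   }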
        
           | ddalcino wrote:
           | What's with the backlash against Rust? It literally is "just
           | another language". It's not the best tool for every job, but
           | it happens to be exceptionally good at this kind of problem.
           | Don't you think it's a good thing to use the right tool for
           | the job?
        
             | amelius wrote:
             | It is good to keep in mind that the Rust language still has
             | lots of trade-offs. Security is only one aspect addressed
             | by Rust (another is speed), and hence it is not the most
             | "secure" language.
             | 
             | For example, in garbage collected languages the programmer
             | does not need to think about memory management all the
             | time, and therefore they can think more about security
             | issues. Rust's typesystem, on the other hand, can really
             | get in the way and make code more opaque and more difficult
             | to understand. And this can be problematic even if Rust
             | solves every security bug in the class of (for instance)
             | buffer overflows.
             | 
                | If you want security above all, use a suitable GC'ed
                | language. If you want fast and reasonably secure, then
                | you could use Rust.
        
               | tptacek wrote:
               | I don't think this is a good take. Go, Java, Rust,
               | Python, Swift; they all basically eliminate the bug class
               | we're talking about. The rest is ergonomics, which are
               | subjective.
               | 
               | "Don't use Rust because it is GC'd" is a take that I
               | think basically nobody working on memory safety (either
               | as a platform concern or as a general software
               | engineering concern) would agree with.
        
               | staticassertion wrote:
               | > Rust's typesystem, on the other hand, can really get in
               | the way and make code more opaque and more difficult to
               | understand.
               | 
               | I don't disagree with the premise of your post, which is
               | that time spent on X takes away from time spent on
               | security. I'll just say that I have not had the
                | experience, as a professional Rust engineer for a few
                | years now, that Rust slows me down _at all_ compared
                | to GC'd languages. Not even a little.
               | 
               | In fact, I regret not choosing Rust for more of our
               | product, because the productivity benefits are massive.
                | Our Rust code is radically more stable, better
               | instrumented, better tested, easier to work with, etc.
        
             | cratermoon wrote:
             | > What's with the backlash against Rust?
             | 
                | What's with the hyping of Rust as the Holy Grail, the
                | solution to everything short of P=NP and the Halting
                | Problem?
        
               | pdimitar wrote:
               | No serious and good programmer is hyping Rust as the
               | "Holy Grail". You are seeing things due to an obvious
               | negative bias. Link me 100x HN comments proving your
               | point if you like but they still mean nothing. I've
               | worked with Rust devs for a few years and all were
               | extremely grounded and practical people who arrived at
               | working with it after a thorough analysis of the merits
               | of a number of technologies. No evangelizing to be found.
               | 
               | Most security bugs/holes have been related to buffer
               | [over|under]flows. Statistically speaking, it makes sense
               | to use a language that eliminates those bugs by the mere
               | virtue of the program compiling. Do you disagree with
               | that?
        
               | tptacek wrote:
               | Nobody seriously thinks it's "Rust" that's the silver
               | bullet either; they just believe _memory-safe languages_
               | are. There are a bunch of them to choose from. We hear
               | about Rust because it works in a bunch of high-profile
                | cases that other languages have problems with, but
                | there's no reason the entire iMessage stack couldn't
                | have been written in Swift.
        
               | pdimitar wrote:
                | Agreed. I was mostly addressing this person's obvious
                | beef with Rust.
        
               | staticassertion wrote:
               | Totally. I said Rust because I write Rust. Like, that's
               | (part of) my job. Rust is no more memory safe (to my
               | knowledge) than Swift, Java, C#, etc.
               | 
               | I also said "way, way less" not "not at all". I still
               | think about memory safety in our Rust programs, I just
               | don't allocate time to address it (today) specifically.
        
               | maqp wrote:
                | I like what tptacek wrote in the sibling comment. IIUC
                | Rust keeps getting mentioned as "the" memory-safe
                | language because it's generally as fast as equivalent
                | C programs. And it's mainly C and C++ that are memory-
                | unsafe. So Rust is a good language to counter the
                | argument from speed (which is often interchangeable
                | with profits in the business world, especially if
                | security issues are covered by flat-rate cyber
                | insurance).
        
           | colonelxc wrote:
            | The article we are commenting on is about targeted, no-
            | interaction exploitation of tens of thousands of high-
            | profile devices. I think this is one of the areas where
            | the safety value is very clear (not just theoretical).
        
           | staticassertion wrote:
            | I'm a security professional, so this is based on
            | experience and expertise, not some sort of hype or
            | misplaced enthusiasm.
        
       ___________________________________________________________________
       (page generated 2021-07-20 23:00 UTC)