[HN Gopher] 'Zero-click' hacks are growing in popularity
       ___________________________________________________________________
        
       'Zero-click' hacks are growing in popularity
        
       Author : taubek
       Score  : 187 points
       Date   : 2022-02-19 09:52 UTC (13 hours ago)
        
 (HTM) web link (www.bloombergquint.com)
 (TXT) w3m dump (www.bloombergquint.com)
        
       | newuser94303 wrote:
       | Are there messaging apps on Android that make you more
       | vulnerable?
        
       | gfd wrote:
       | Why aren't these used to steal cryptocurrencies? According to the
       | article you can buy a similar exploit for just $1-2.5 million.
       | Considering the amount of money floating around that space, that
       | it hasn't happened yet is surprising to me (or maybe I just don't
       | pay attention to people who own crypto and are public about it,
       | maybe they do get hit by zero-days all the time?).
        
         | [deleted]
        
         | bannedbybros wrote:
        
       | nonrandomstring wrote:
       | > 'Zero-click' hacks
       | 
       | A.K.A. 'Hacks' (as opposed to social engineering)
        
       | markus_zhang wrote:
       | Just use dumb phones. Maybe one old smartphone purely for
        | business to minimize personal information leakage.
        
         | dagmx wrote:
         | Dumb phones still had security issues and usually had no way to
         | update the software or firmware
        
         | throwaway48375 wrote:
         | dumb phones have their own problems
        
         | rasz wrote:
         | Dumb phones still had Bluetooth
         | https://en.wikipedia.org/wiki/Bluebugging
        
       | anthk wrote:
       | Rust won't be the miracle stopping this. Porting unveil/pledge to
       | all OSes will.
        
         | foxfluff wrote:
         | There's no silver bullet. Pledge on a complex application that
         | does too many things [requests too many permissions] doesn't
         | help much.
         | 
         | IMO complexity and churn remain the biggest problems but people
         | are not willing to engage it. There's _always_ at least one
         | legitimate use case for some faddy trendy new feature, always a
          | reason for more complexity, fuck anyone who doesn't want it.
         | And so you get a massive body of constantly changing code that
         | auditors can't keep on top of.
         | 
         | What would it be like if your chat app was max 3000 lines of
         | code and received no more than a handful of small patches per
         | year since 2008? You could audit that in an evening or two and
         | be reasonably confident in its security, and you could also be
         | reasonably confident that it hasn't grown a bunch of new vulns
         | in the next three releases, and you could quickly audit it
         | again to be sure.
         | 
         | Alas, practically nobody takes you seriously if you advocate
         | for simplicity. Usually it's the opposite; I tend to get
         | attacked if I suggest that a program/system might be too
         | complex.
        
           | anthk wrote:
           | >Pledge on a complex application
           | 
            | I don't think you've ever seen how pledge works.
           | 
            | SELinux is complex. Pledge can be a piece of cake.
        
             | foxfluff wrote:
             | I think you failed at reading comprehension. I said nothing
             | about the complexity of pledge.
        
         | saagarjha wrote:
         | Are you familiar with how sandboxing works on iOS?
        
       | xavxav wrote:
       | Not to go all 'Rust Evangelism Strike Force' but almost
       | universally, these exploits leverage memory unsafety somewhere in
       | the stack, usually in a parser of some kind (image, text, etc).
       | The fact that this is still tolerated in our core systems is a
       | pox on our industry. You don't have to use Rust, and it won't
       | eliminate every bug (far from it), but memory safety _is not
       | optional_.
       | 
       | We truly need to work more towards eliminating every memory
       | unsafe language in use today, until then we're fighting a forest
       | fire with a bucket of water.
        
         | NohatCoder wrote:
         | Nothing wrong with Rust, but I still think making operating
         | systems with airtight sandboxing and proper permission
         | enforcement is the only thing that can truly solve these
         | issues.
        
           | jl6 wrote:
           | Still not enough, because apps still need to interact with
           | the outside world, so there would have to be intentional
           | holes in the sandbox out through which the compromised app
           | could act maliciously.
        
             | NohatCoder wrote:
             | That is why you need a well designed permission system.
              | Android and iOS had a chance of doing this in a time when
             | the requirements could reasonably be understood, but I
             | don't think either came close.
        
           | anon_123g987 wrote:
           | And what language should we use to create such an OS? Maybe
           | Rust?
        
             | NohatCoder wrote:
             | It is a better choice than C++ for sure.
        
           | azinman2 wrote:
           | Look at how often V8's sandboxes get exploited. It's all
           | developed by humans, which means there will always be errors.
           | 
            | Saying "just make airtight sandboxes" is like saying "just
            | write bug-free code".
        
             | NohatCoder wrote:
             | It is a tradeoff. Making an airtight sandbox is not that
             | hard. Making it run programs near hardware speed is a lot
             | harder. Making it run legacy machine code is a nightmare.
             | 
             | JavaScript is not machine code, but still a good deal
             | harder to make fast than a language designed for fast
             | sandboxing. Of course there have been bugs, but mostly I
             | think the JS VMs have done a pretty good job of protecting
             | browsers.
        
           | dagmx wrote:
           | I feel like those are two separate levels of concerns though.
           | 
            | Airtight sandboxing would be easier in a memory safe
            | language that prevents certain classes of bugs.
        
           | rcxdude wrote:
           | Only if the barriers have a finer resolution than a single
           | application. Most applications need access to more than
           | enough data to cause problems in the case of an exploit. You
           | need sandboxing between different components of the
           | application as well.
        
         | tptacek wrote:
         | It's worth engaging with the fact that essentially nobody
         | disagrees with this (someone will here, but they don't matter),
         | and that it's not happening not because Apple and Google don't
         | want it to happen, but because it's incredibly, galactically
         | hard to pull off. The Rust talent pool required to transition
         | the entire attack surface of an iPhone from C, C++, and ObjC to
         | Rust (substitute any other memory safe language, same deal)
         | doesn't exist. The techniques required to train and scale such
         | a talent pool are nascent and unproven.
         | 
         | There is probably not a check Apple can write to fix this
         | problem with memory safe programming languages. And Apple can
         | write all possible checks. There's something profound about
         | that.
        
           | littlestymaar wrote:
            | I don't think the real question is "how feasible is it to
            | rewrite everything in Rust", because as you say, the answer
            | to this question is clearly "not at all". But "rewriting
            | all parsers and media codec implementations" is a much
            | smaller goal, and so is "stop writing new codec
            | implementations in memory-unsafe languages", yet neither of
            | those two more achievable goals is being pursued, which is
            | sincerely disappointing.
        
           | staticassertion wrote:
           | You don't need to move the entire attack surface of the
           | iphone to Rust. There are plenty of smaller areas that tend
           | to have the most vulnerabilities. They could absolutely write
           | a check to radically reduce these sorts of issues.
           | 
           | It'll take years to have impact, but so what? They can start
           | now, they have the money.
           | 
           | > nobody disagrees with this (someone will here, but they
           | don't matter)
           | 
           | There are so many people out there who don't understand the
           | basics. HN can be sadly representative.
        
           | e40 wrote:
           | I honestly don't understand this. If Google or Apple wanted
           | it to happen, they could force those developers to learn
           | Rust. Are you saying the people that wrote the products in
           | question can't learn Rust well enough to achieve the goal?
        
             | tptacek wrote:
             | Start with the fact that practically all software
             | development at Apple and Google would cease for multiple
             | months while people dealt with the Rust learning curve,
             | which is not gentle, and proceed from there to the fact
             | that Rust demands (or, at least, urgently requests)
             | architectural changes from typical programs designed in
             | other languages.
             | 
             | Now: rewrite 20 years worth of code.
             | 
             | Let's make sure we're clear: I agree --- and, further,
             | assert that _every other serious person agrees_ --- that
             | memory safety is where the industry needs to go, urgently.
        
               | Tanjreeve wrote:
                | When you say architectural changes, how do you mean?
                | Most of the memory stuff isn't particularly exotic;
                | there's a lot of different syntax and functional
                | programming influence, but I'm curious why it would be
                | wildly exotic compared to most C++ code. Or have I
                | misunderstood?
        
               | InvertedRhodium wrote:
                | It doesn't matter how exotic something is when you're
                | talking about rewriting an entire platform - the sheer
               | amount of man hours required to reimplement something for
               | an advantage the vast majority of customers simply don't
               | value enough is the limiting factor.
               | 
               | In that context, even a small architectural difference
               | can be seen as a high barrier.
        
             | BoxOfRain wrote:
             | It'd still be like replacing the engines of an aeroplane
             | mid-flight surely? I know Rust can do C interop and it'd
             | probably be done piecemeal but it'd still be an absolutely
             | gargantuan task. I'd say there's a fair chance the sheer
             | time and effort such an undertaking would involve would
             | cost more than the memory safety bugs using C or C++
             | introduces.
        
             | tuwtuwtuwtuw wrote:
             | Forcing their employees to learn rust doesn't mean Google
             | has the capacity to rewrite all their software in rust.
             | They have tons and tons of code which would need to be
             | rewritten from scratch.
             | 
             | Of course if they dropped all other development and told
             | their employees to rewrite to rust, they may end up with a
             | piece of software written in rust but no customers.
        
               | e40 wrote:
                | I agree, but there are so many people at Google
                | (132,000 if you can believe the search results), it's
                | hard for me to believe they couldn't devote a small
                | percentage of
               | believe they couldn't devote a small percentage of them
               | to moving to a secure stack.
        
               | tptacek wrote:
               | "Moving to a new stack" implies rewriting much of the
               | code (and: a surprising amount of the code) built on the
               | existing stack.
        
               | shukantpal wrote:
                | And consider that their codebase must also be as
                | outsized as their headcount; they must have a lot of
                | code per employee.
        
           | closeparen wrote:
           | Do you think Apple is already at the frontier of what can be
           | done to detect or refactor out these bugs in their existing
           | languages? Static analysis, Valgrind, modern C++, etc?
        
           | Dylan16807 wrote:
           | They wrote it the first time, didn't they? C isn't special,
           | and training isn't special.
        
             | tptacek wrote:
             | It took decades.
        
               | Dylan16807 wrote:
               | So make the initial goal a portion. It's not like Apple
               | is going to go away any time soon. The second best time
               | to start is now.
               | 
               | And a lot of that was design work that still holds, and a
               | lot of that was code that has been obsoleted.
        
               | tptacek wrote:
               | As far as I'm aware, every major company in the industry
               | is working on exactly this. I'm telling you why we don't
               | just have an all-memory-safe iPhone right now, despite
               | Apple's massive checking account. I'm not arguing with
               | you that the industry shouldn't (or isn't) moving towards
               | memory safety.
        
             | jeremygaither wrote:
             | macOS runs the Darwin kernel (developed at NeXT using the
             | Mach kernel, then at Apple). NeXTSTEP was based on a BSD
             | UNIX fork. Development of BSD at Berkeley started in 1977.
             | NeXT worked on their kernel and the BSD UNIX fork in the
             | '80s and '90s before being purchased by Apple. NeXTSTEP
              | formed the base of Mac OS X (which is why many of the
              | Objective-C base classes start with `NS-something`).
              | There are 45 years' worth of development behind UNIX, and
              | Linux is a completely different kernel with a completely
              | different license; the Linux kernel has been in
              | development for about 31 years.
             | 
             | Languages and understanding them is not special, but
             | decades of development of two different kernels is a huge
             | time investment. Even though Linus Torvalds wrote the basic
             | Linux kernel in 5 months, it was very simple at first.
             | 
             | I doubt writing an entire POSIX-compatible replacement for
             | a kernel would be a small or quick endeavor, and Apple has
             | shown resistance to adopting anything with a GPL 3 license
             | iirc. That is why they switched to ZSH from Bash.
        
               | Dylan16807 wrote:
               | The earlier post seemed more focused on the userland, so
               | we should very much consider excluding the kernel before
               | we decide the idea is too hard.
        
             | pvg wrote:
             | Time is pretty special. iOS alone is over a decade old and
             | a constantly evolving target and it's itself a direct
             | descendant of a 30+ year old system.
        
               | Dylan16807 wrote:
               | > a constantly evolving target
               | 
               | That decreases the amount of code to replace, doesn't it?
        
               | pvg wrote:
               | How so? New not-in-safe-languages code is being added all
               | the time.
        
               | Dylan16807 wrote:
               | For code added in the future, you need devs no matter
               | what language they use, so switching their language is
               | the easy part of this large hard project.
               | 
               | For code added in the past, more evolution means that for
               | every X lines of code written, a smaller and smaller
               | fraction of X still exists. Which means less work to
               | replace the end product.
        
           | honkdaddy wrote:
            | Is that more so due to a lack of Rust engineers or a lack of
           | firmware engineers capable of rebuilding the iOS stack?
        
           | dagmx wrote:
           | In Apple's case they wouldn't need to move everything to
           | Rust. Swift is a little bit higher level and a lot of stuff
           | could be moved into it, with Rust as the lower level layer to
           | replace ObjC / C / C++.
           | 
           | Still a gargantuan effort, but for them it doesn't require
            | that everyone learn Rust, just Swift, which I'm sure is
            | already table stakes for a lot of user-facing dev there.
        
           | rrradical wrote:
           | Well to some extent these companies are self sabotaging by
           | centering interviews around algorithm problems, not only by
            | selecting a certain kind of talent for further investment of
           | resources, but also by signaling to the market the kinds of
           | training needed to land a good job.
           | 
           | If instead, the talent pool were incentivized to increase
           | their ability to understand abstractions, and we selected for
           | that kind of talent, it might not be so hard to use new
           | languages.
        
             | phendrenad2 wrote:
             | Abstractions are fun, security isn't. I doubt there are
             | even that many programmers who enjoy writing (correct,
             | safe) Rust.
        
               | rrradical wrote:
               | Wait, this whole thread is about moving to languages that
               | eliminate classes of security holes by virtue of the
               | language itself. The premise is that being a security
               | conscious programmer is not by itself enough to achieve
               | good security.
        
         | benreesman wrote:
         | Honestly at this point I've given in and am now advocating that
         | we rewrite every damned widget from scratch in Rust, because by
         | the time we're mostly done, my career will be winding down, and
         | seeing that shit still gets pwned like, exactly as much, will
         | be "good TV".
         | 
         | Rust is cool because it's got a solid-if-slow build story that
         | doesn't really buy into the otherwise ubiquitous .so brain
         | damage. Rust is cool because Haskell Lego Edition is better
         | than no Haskell at all, and Rust is cool because now that it's
         | proven affine/linear typing can work, someone will probably get
         | it right soon.
         | 
         | But if I can buy shares in: "shit still gets rocked
         | constantly", I'd like to know where.
        
           | omegalulw wrote:
            | Can you imagine how long it would take to compile the Linux
           | kernel if it were rust only? Not to mention the kernel has to
           | allow for third party closed source stuff like drivers,
           | wouldn't that force you to allow unsafe Rust and put you back
           | to square one?
        
             | dtech wrote:
             | That seems an insignificant price to pay if it would truly
             | provide the promised benefits (big if). Even most Linux
             | users don't compile the kernel themselves, and the 5% that
             | do care can afford the time and/or computing resources.
        
           | axiosgunnar wrote:
           | > Haskell Lego Edition
           | 
           | Gatekeeping much?
        
             | benreesman wrote:
             | I took the time to learn Rust well in spite of how annoying
              | the Jehovah's Witness routine has been for like, what, 5-10
             | years now? I worked with Carl and Yehuda both on the
             | project right before Cargo (which is pretty solid, those
             | guys don't fuck around).
             | 
             | I think I've paid my cover-fee on an opinion.
        
               | [deleted]
        
               | benreesman wrote:
               | Do you have a different opinion on whether or not syntax
               | for a clumsy Maybe/Either Monad is a bit awkward? Do you
               | think that trait-bound semantics are as clean as proper
               | type classes as concerns trying to get some gas mileage
               | out of ad-hoc polymorphism? Do you think that the Rust
               | folks might have scored a three-pointer on beating the
               | Haskell folks to PITA-but-useable affine types?
               | 
               | Or were you just dissing knowing things?
        
           | qualudeheart wrote:
           | Mass rewrites will be quite the jobs program. I'm on board.
           | Converted to Marxism not long ago.
        
           | xavxav wrote:
           | > Honestly at this point I've given in and am now advocating
           | that we rewrite every damned widget from scratch in Rust,
           | because by the time we're mostly done, my career will be
           | winding down, and seeing that shit still gets pwned like,
           | exactly as much, will be "good TV".
           | 
           | Rust won't solve logic bugs but it can help bring up the
           | _foundations_. So long as memory safety bugs are so pervasive
            | we can't even properly reason on a theoretical level about
           | logic bugs. The core theorem of any type system is "type
           | safety" which states that a well-typed program never goes
           | wrong (gets stuck, aka UB). Only then can you properly tackle
           | correctness issues.
           | 
           | > Rust is cool because Haskell Lego Edition is better than no
           | Haskell at all, and Rust is cool because now that it's proven
           | affine/linear typing can work, someone will probably get it
           | right soon.
           | 
           | I don't understand the condescending remarks about "Haskell
           | Lego Edition". I do agree that Rust has shown that
           | substructural type systems work and are useful, and that they
           | will be a 'theme' in the next batch of languages (or I can
           | hope).
        
             | benreesman wrote:
             | How much do I win if I can panic Rust without any "unsafe"
             | whatsoever? Maybe I'll index into some Unicode or
             | something, haven't decided.
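For reference, the panic alluded to here is real but memory-safe: slicing a `&str` at a byte index inside a multi-byte code point panics with zero `unsafe`. A minimal sketch (the string and indices are just an example):

```rust
fn main() {
    let s = "héllo"; // 'é' occupies bytes 1..3 in UTF-8
    // Slicing at byte 2 falls inside the 'é' code point and panics,
    // but it is a controlled abort, not memory corruption.
    let caught = std::panic::catch_unwind(|| s[0..2].to_string());
    assert!(caught.is_err()); // panicked: byte 2 is not a char boundary
    println!("slicing on a char boundary is fine: {}", &s[0..1]); // "h"
}
```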
        
               | benreesman wrote:
               | And frankly I don't see how it's even remotely fair to
               | call a no-nonsense statement that some things are
               | simplified versions of other things with a cheeky
               | metaphor "condescending".
               | 
               | I could just as easily throw around words like "anti-
               | intellectual" if my goal was to distract from the point
                | rather than substantially replying.
        
               | kibwen wrote:
               | But Rust isn't remotely a simplified version of Haskell,
               | and I'm not sure where you got that impression. It's
               | inspired by several languages, but is predominantly a
               | descendant of ML and C++. The only similarity they have
               | is that Rust traits resemble Haskell typeclasses, but
               | even there they are quite different in semantics and
               | implementation.
        
               | benreesman wrote:
               | Question mark operator is bind for Result? Derives show?
               | Run that argument past someone who can't quote chapter
               | and verse.
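The parallels being pointed at, sketched in Rust (`Port` and `parse_port` are made-up names for illustration):

```rust
// `#[derive(Debug)]` plays the role Haskell's `deriving Show` does.
#[derive(Debug, PartialEq)]
struct Port(u16);

// `?` acts like monadic bind over Result (Haskell's Either): keep
// going with the Ok value, or early-return the Err unchanged.
fn parse_port(s: &str) -> Result<Port, std::num::ParseIntError> {
    let n: u16 = s.parse()?;
    Ok(Port(n))
}

fn main() {
    assert_eq!(parse_port("8080"), Ok(Port(8080)));
    assert!(parse_port("oops").is_err());
    println!("{:?}", parse_port("8080")); // Debug output, like `show`
}
```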
        
               | benreesman wrote:
               | I like Rust in a lot of ways, I write a fuckload of it
               | and I get value from doing so. Not "praise the lord"
               | value, but real value.
               | 
               | But the attitude is an invitation to getting made fun of.
               | It's absurdly intellectually dishonest when Rust-as-
               | Religion people actively hassle anyone writing C and then
               | get a little precious when anyone mentions Haskell and
               | then extremely precious when they step on the landmine of
               | the guy who likes Rust enough to know the standard, the
               | compiler, the build tool, the people who wrote the build
               | tool, and generally enough to put it in its place from a
               | position of knowledge.
               | 
               | SSH servers? Yeah, I'd go with Rust. Web browsers? In a
               | perfect world, lot of work. Even for Mozilla who _timed
               | the fuck out on it_.
               | 
               | Everything ever so no security problem ever exists ever
               | again? Someone called it the "green energy of software
               | security" on HN like this year.
               | 
               | It's not the coolest look that one of my "blow off some
               | steam" hobbies is making those people look silly, but
               | there are worse ways to blow off some steam.
        
               | user3939382 wrote:
               | It sounds like you're saying, you spent a lot of time
               | focused on learning rust, so now you like to discuss its
               | shortcomings as abrasively as you can for sport.
        
               | benreesman wrote:
               | Upthread I've already surrendered. There are certain
               | gangs you just don't pick a fight with. I'm a slow
               | learner in some ways but I get the message. Got it,
               | learning Rust nuts and bolts only makes it worse to say
               | anything skeptical about it.
        
               | ascar wrote:
               | Nearly every answer you gave in this thread doesn't
                | address the parent comment's point at all.
               | 
               | It seems you are just raging and reading subtext and
               | drama where there is none.
               | 
               | Further up someone mentioned Rust and Haskell aren't
               | similar and you go on about Rust-religion and where to
               | use Rust. Why don't you just address the point? "Lego" is
               | also not a synonym or metaphor for simplified.
        
               | emerongi wrote:
               | Your argument seems to mostly boil down to "Rust isn't
               | magic", which nobody is really arguing. It does help
               | eliminate one class of really nasty bugs, which tend to
               | repeatedly show up in a lot of massive security hacks,
               | and which generally everyone would like to see
               | eliminated. Therefore: use Rust.
               | 
               | Comparisons to other languages like Haskell don't really
               | work, since they don't fit in the same space nor have the
               | same goals as Rust or C.
        
               | benreesman wrote:
               | Do I really need to do the search for comparisons to
               | solar panels or cancer drugs, or does that sort of scan?
        
               | kibwen wrote:
               | Panicking in Rust isn't a memory-unsafe operation.
        
               | staticassertion wrote:
               | lol if "shit still gets rocked" means "programs exit
               | safely but unexpectedly sometimes" we're on very
               | different pages
               | 
               | I'm searching your posts in this topic trying to find
               | something of value and coming up short. You assert that
               | you know rust, and therefor your opinions have merit,
               | but... lots of people know rust and disagree. But somehow
               | your opinions are More Right and the others are just
               | religious Rust shills.
               | 
               | I don't think you know what you're talking about
               | honestly. If you want to pick fights on HN that's cool,
               | we all get that urge, but you're really bad at it.
        
               | xavxav wrote:
                | A crash is significantly better than corruption. If you
               | can force an `unwrap` you can cause a denial of service
               | but with corruption, all bets are off.
        
           | lionkor wrote:
            | The flaw in the idea of "rewrite it in Rust" is that,
            | alongside the memory issues, the biggest issues are logic
            | bugs.
           | 
            | Rewriting something from scratch isn't going to magically not
           | have bugs, and the legacy system likely has many edge cases
           | covered that a modern new implementation will have to learn
           | about first.
        
             | xavxav wrote:
              | Right, but a memory unsafety bug is what takes a harmless
             | logic bug in an image parser with no filesystem access to
             | an RCE and sandbox escape.
             | 
             | Memory unsafety allows you to change the 'category' of the
             | bug, you become free to do _whatever_ whereas a logic bug
              | forces you to work within the (flawed) logic of the original
             | program.
        
               | saagarjha wrote:
                | Not necessarily; see
                | https://github.com/LinusHenze/Fugu14/blob/master/Writeup.pdf
                | for example. It's a full chain
               | that repeatedly escalates privileges without exploiting
               | any memory safety bugs by tricking privileged subsystems
               | into giving it more access than it should have, all the
               | way up through and beyond kernel code execution.
        
               | jquery wrote:
               | That's not a zero-click vulnerability though. I didn't
               | read the entire pdf but 2 of the first 4 steps involve
               | active user participation and assistance (install exploit
               | app 1 and exploit app 2).
               | 
               | I think regardless, you're right, we will still have
               | logic bugs... but that example is also an "exception
               | proves the rule" kind of thing.
        
               | saagarjha wrote:
               | It's not a zero click, that is correct. I presented it as
                | an example of how every layer of Apple's stack, far
               | beyond what is typically targeted by a zero-click exploit
               | chain, can still have logic bugs that allow for privilege
               | escalation. It's not _just_ a memory corruption thing,
               | although I will readily agree that trying to reduce the
               | amount of unsafe code is a good place to start fixing
               | these problems.
        
               | dcsommer wrote:
               | 70% of high severity security bugs (including RCE) are
               | due to memory unsafety. Not all, but most. It's been this
                | way for decades.
               | 
               | https://news.ycombinator.com/item?id=19138602
               | 
               | https://www.zdnet.com/article/chrome-70-of-all-security-
               | bugs...
        
               | Dagonfly wrote:
                | These Rust vs. C comparisons often fixate on the somewhat
                | unique memory safety advantage of Rust. But the proper
                | comparison should be ANY modern language vs. C, because
                | those remove a heap of other C footguns as well. Most
                | modern languages have:
               | 
               | - sane integers: no unsafe implicit cast, more ergonomic
               | overflow/saturate/checked casts
               | 
               | - sane strings: slices with length, standardized and safe
               | UTF-8 operations
               | 
               | - expressive typing preventing API misuse: monads like
               | Optional/Result, mandatory exception handling, better
               | typedefs, ADTs vs tagged unions
               | 
               | And even without the full Rust ownership model, I'd
               | expect the following to solve a majority of the memory
               | safety problems:
               | 
               | - array bounds checks (also string bounds checks)
               | 
               | - typed alloc (alloc a specific type rather than N bytes)
               | 
               | - non-null types by default
               | 
               | - double-free, use-after-free analysis
               | 
                | - thread-safe std APIs
               | 
               | In the write-up you linked, Section 2 is a missing error
               | check => Result<T> would surface that. The macOS case
               | contains a relative path vs string comparison =>
               | expressive typing of Path would disallow that. DriverKit
               | exploit is a Non-NULL vs NULL API mistake. Kernel PAC is
               | a legit ASM logic bug, but requires a confusion of kernel
               | stack vs. user stack => might have been typed explicitly
               | in another language.
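Dagonfly's list lends itself to a concrete sketch. A minimal, illustrative Rust fragment (the header format and the helper `parse_len` are invented for the example) showing bounds-checked slicing, checked overflow, and a `Result` the caller cannot silently ignore:

```rust
use std::convert::TryInto;

// Illustrative only: a tiny length-prefix parser exercising the
// footgun-removals listed above.
fn parse_len(header: &[u8]) -> Result<usize, String> {
    // Bounds-checked slicing: a short buffer yields None, never a
    // read of adjacent memory.
    let bytes: [u8; 4] = header
        .get(..4)
        .ok_or("header too short")?
        .try_into()
        .map_err(|_| "bad slice".to_string())?;

    // Sane integers: overflow is an explicit, checked operation,
    // not silent wraparound.
    let len = u32::from_le_bytes(bytes) as usize;
    len.checked_mul(2)
        .ok_or_else(|| "length overflow".to_string())
}

fn main() {
    assert_eq!(parse_len(&[1, 0]), Err("header too short".to_string()));
    assert_eq!(parse_len(&[3, 0, 0, 0, 9]), Ok(6));
}
```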
        
             | GeekyBear wrote:
             | >next to the memory issues, the biggest issues are logic
             | bugs
             | 
             | When you look at the percentage of security issues that
             | derive from memory safety, it certainly makes memory safety
             | a good place to start.
             | 
             | >The Chromium project finds that around 70% of our serious
             | security bugs are memory safety problems.
             | 
             | https://www.chromium.org/Home/chromium-security/memory-
             | safet...
             | 
             | >Around 70 percent of all the vulnerabilities in Microsoft
             | products addressed through a security update each year are
             | memory safety issues.
             | 
             | https://www.zdnet.com/article/microsoft-70-percent-of-all-
             | se...
        
             | fulafel wrote:
             | It's important to have good foundations (memory safety)
             | because then it becomes much more attractive to spend
             | effort on the rest of the correctness and security. If you
             | want to build a sturdy house, and see how to make the roof
             | well, don't give up on it just because you'll need to do
             | something else for good doors and windows.
        
             | Taywee wrote:
             | An improvement is an improvement. A flaw of seatbelts is
             | that some people still die when they wear them. That's not
             | a valid argument to not wear seatbelts.
        
         | zuzun wrote:
         | Memory safety is optional in Rust. It might not be obvious at
         | the moment, because Rust is written by enthusiasts who enjoy
          | fighting with the compiler until their code compiles, but once
          | developers are forced to use it at their jobs with tight
          | deadlines, unsafe will become the pass-the-borrow-checker cheat
          | code.
        
           | kibwen wrote:
           | Unsafe code is rarely necessary, especially unsafe code that
           | isn't just calling out to some component in C. You can easily
           | forbid developers from pushing any code containing `unsafe`
           | and use CI to automatically enforce it.
        
           | Gigachad wrote:
           | I was under the impression that even in rust unsafe blocks,
           | you still had massive safety advantages over C and it isn't
           | just instant Wild West.
        
             | benreesman wrote:
             | I'd love to red team the program that thinks Rust unsafe is
             | easier to get right than tight ANSI C.
        
               | Closi wrote:
               | Why compare 'non-tight' Rust against 'tight' C?
               | 
               | Surely we should compare tight Rust (with some 'tight'
               | unsafe sections) against tight C?
        
               | benreesman wrote:
               | I must be more tired than I thought if I said "non-tight
               | Rust" and forgot ten minutes later.
               | 
                | I just think if mistakes need to be _literally as low as
                | possible_, you've got a better bet than Rust unsafe.
               | 
               | The language spec is smaller, the static analyzers have
               | been getting tuned for decades, and the project leaders
                | aren't kinda hostile to people using it in the first
               | place.
        
               | nyanpasu64 wrote:
               | I think it's easier to write correct safe Rust than C, I
               | wouldn't say it's easier to write correct Rust with
               | unsafe blocks than C (many operations strip provenance,
               | you can't free a &UnsafeCell<T> created from a Box<T>,
                | you can't mix &mut and *const but you might be able to
                | mix Vec<T> and *const (https://github.com/rust-
               | lang/unsafe-code-guidelines/issues/2...), self-
               | referential &mut or Pin<&mut> is likely unsound but
               | undetermined), and it's absolutely more difficult to
               | write sound unsafe Rust than C (sound unsafe Rust must
               | make it impossible for callers to induce UB through any
               | possible set of safe operations including interior
               | mutability, logically inconsistent inputs, and panics).
        
               | qualudeheart wrote:
               | We could set up a prediction market for this. A study
                | would be performed of attempts to pentest randomly
                | selected unsafe Rust and tight ANSI C programs. A
                | prediction market would be used to estimate the
                | probability of either language winning before publication
                | of results.
               | Someone needs to make this a thing.
        
               | LasEspuelas wrote:
               | What is "tight ANSI C"?
        
           | vgel wrote:
           | I write Rust at $WORK. Using `unsafe` to meet a deadline
           | makes 0 sense. It doesn't disable the borrow checker unless
           | you're literally casting references through raw pointers to
           | strip lifetimes, which is... insane and would never pass a
           | code review.
           | 
           | 99% of the time if you're fighting the borrow checker and
           | just want a quick solution, that solution is `clone` or
           | `Arc<Mutex<T>>`, not `unsafe`. Those solutions will sacrifice
           | performance, but not safety.
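vgel's `Arc<Mutex<T>>` lever, sketched (illustrative, not from any particular codebase): shared mutable state across threads with zero `unsafe`, trading a little performance for safety:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Shared, mutable counter: instead of fighting the borrow checker
    // (or reaching for `unsafe`), clone a cheap Arc handle per thread.
    let counter = Arc::new(Mutex::new(0));
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                *counter.lock().unwrap() += 1;
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    // All four increments landed; no data race was possible.
    assert_eq!(*counter.lock().unwrap(), 4);
}
```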
        
             | nyanpasu64 wrote:
             | > Using `unsafe` to meet a deadline makes 0 sense... would
             | never pass a code review.
             | 
             | I've seen unsound unchecked casts from &UnsafeCell or *mut
             | to &mut in multiple codebases, including Firefox itself:
             | https://github.com/emu-rs/snes-
             | apu/blob/13c1752c0a9d43a32d05...,
             | https://searchfox.org/mozilla-
             | central/rev/7142c947c285e4fe4f....
        
             | qualudeheart wrote:
             | My girlfriend uses Rust for embedded systems at a large and
             | important company. Everyone uses memory safety.
        
           | cute_boi wrote:
            | This is kinda disingenuous. Whenever people use unsafe it's
            | like an alarm, because you can set up a CI system that warns
            | the DevOps team about the usage of unsafe code.
           | 
           | And, most of the time unsafe code is not required. I think
           | many people will just use clone too much, or Arc rather than
           | unsafe. Additionally, I have never seen unsafe code at least
           | where I work.
        
           | Tomuus wrote:
           | Unsafe does not turn off the borrow checker
           | 
           | https://steveklabnik.com/writing/you-can-t-turn-off-the-
           | borr...
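A small illustration of the linked point (a sketch, not from the post): the borrow checker still runs inside an `unsafe` block; the block only unlocks a handful of extra operations, such as dereferencing a raw pointer:

```rust
fn main() {
    let mut x = 1;
    unsafe {
        let r1 = &mut x;
        // let r2 = &mut x; // still rejected (E0499), even inside `unsafe`
        *r1 += 1;

        // What `unsafe` actually permits is a short list, e.g. raw
        // pointer dereference:
        let p: *mut i32 = &mut x;
        *p += 1;
    }
    assert_eq!(x, 3);
}
```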
        
           | xavxav wrote:
           | Rust is used in production though at large companies: Amazon,
            | Microsoft, Mozilla, etc... I would be _highly_ surprised if
            | the borrow checker were the reason code couldn't ship in the
            | first place; once you get over the initial mental hurdles
            | it's usually a non-issue.
           | 
            | Besides, equivocating between a pervasively unsafe-by-default
            | language and one with an _explicit_, bounded opt-in is a
            | little disingenuous. Time after time, it has been shown that
            | even expert C developers cannot write memory safe C
            | consistently; _each_ line of code is a chance to blow up your
            | entire app's security.
        
         | _wldu wrote:
         | I agree and anything above the OS layer can be written in Go or
         | Java or C#. Lots of those devs out there to hire.
        
         | benreesman wrote:
        
           | lysozyme wrote:
           | This tracks with your comment history
           | 
           | https://news.ycombinator.com/item?id=567736
        
             | benreesman wrote:
             | How well do the best and worst thing you've done in the
             | last 7 days "track" with where you were at 13 years ago?
              | Were there any ups or downs during that time?
        
             | benreesman wrote:
             | I think I've probably said something dumb to be cherry-
             | picked more recently than 13 years ago.
        
           | scythe wrote:
           | It doesn't have to be Rust. But for all the people who
           | insisted for more than a decade that GCs and VMs don't
           | necessarily compromise performance of practical applications,
           | there has never been an acceptable browser engine in C#,
           | Java, D, Common Lisp, Go, OCaml, Haskell or any of the
           | others. Meanwhile Apple uses Objective-C with its ARC (and
           | later Swift, which is even more like Rust) for everything and
           | it was great.
           | 
           | So the community tried to make OCaml with the memory model of
           | ObjC and as much backwards compatibility with C as they could
           | muster. In context, this doesn't seem like a weird strategy.
        
         | parentheses wrote:
         | the core problem is trusting a byte you read and use to make
         | decisions. this is the superset of memory safety and not
         | protected against by any language as of now.
         | 
         | let's kill bytes. :P
        
         | chromanoid wrote:
         | Following this idea, using a memory managed language like Go,
         | Java or C# should also prevent most security issues (at least
         | in non-core systems). Somehow I don't think this would work.
        
           | tester756 wrote:
           | What makes you think so?
        
             | chromanoid wrote:
             | While I think garbage collected languages produce programs
             | that are more safe, I also think they are often enablers
             | for new classes of security issues. For example ysoserial,
             | log4j etc.
        
               | tester756 wrote:
               | (This is genuine question)
               | 
               | Is log4j's bug actually unique to Java/C#/gc-based
               | languages?
        
               | 41b696ef1113 wrote:
               | >Is log4j's bug actually unique to Java/C#/gc-based
               | languages?
               | 
               | Not to my understanding. It would be possible in any
               | language with or without a GC.
        
               | catdog wrote:
               | No, not validating user input and passing it to some
               | crazy feature rich library like JNDI is possible in any
               | language. Not denying that Java did contribute by
               | shipping with such an overengineered mess like JNDI in
               | the first place.
               | 
               | Log4Shell wasn't a bug, log4j worked as expected and
               | documented. It's just a stupid idea for a logging library
               | to work in such a way.
        
               | jerf wrote:
               | The underlying bug in log4j is having a deserialization
               | mechanism that can automatically deserialize to any class
               | in the system, combined with method code that runs upon
               | deserialization that does dangerous things. It has
               | nothing to do with GC at all.
               | 
               | It's a recurring problem in dynamic scripting languages
               | where the language by its very nature tends to support
               | this sort of functionality. It's actually a bit weird
               | that Java has it because statically-typed languages like
               | that don't generally have the ability to do that, but
               | Java put a _lot_ of work into building this into its
                | language. Ruby had a very large issue with this a few
                | years back where YAML submitted to a Ruby on Rails site
               | would be automatically deserialized and execute a payload
               | before it got to the logic that would reject it if
                | nothing was looking for it. Python's pickle class has
               | been documented as being capable of this for a long time,
               | so the community is constantly on the lookout for things
               | that use pickle that shouldn't, and so far they've mostly
               | succeeded, but in principle the same thing could happen
               | with that, too.
               | 
               | It would be nearly impossible for Go (a GC'd language) to
               | have that class of catastrophic security error, because
               | there is nowhere the runtime can go to get a list of "all
               | classes" for any reason, including deserialization
               | purposes. You have to have some sort of registry of
                | classes. It's possible to register something that can do
               | something stupid upon unmarshaling, but you have to work
               | a lot harder at it.
               | 
               | Go is not unique. You don't see the serialization bugs of
                | _this_ type in C or C++ either (non-GC'd languages),
               | because there's no top-level registry of "all
               | classes/function/whatever" in the system to access at
               | all. You might get lots of memory safety issues, but not
               | issues from deserializing classes that shouldn't be
               | deserialized simply because an attacker named them. Many
               | other languages make this effectively impossible because
               | most languages don't have that top-level registry built
               | in. That's the key thing that makes this bug likely.
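The explicit-registry pattern jerf describes can be sketched in Rust (the `Factory` type and registered names here are invented for illustration): only types someone deliberately registered can be constructed from an attacker-supplied name:

```rust
use std::collections::HashMap;
use std::fmt::Debug;

// A constructor the program opted in to exposing by name.
type Factory = fn() -> Box<dyn Debug>;

fn registry() -> HashMap<&'static str, Factory> {
    let mut m: HashMap<&'static str, Factory> = HashMap::new();
    // The only way a name becomes constructible is a line like this.
    m.insert("Point", || Box::new((0i32, 0i32)) as Box<dyn Debug>);
    m
}

fn main() {
    let reg = registry();
    // An attacker naming "JndiLookup" gets nothing: no registration,
    // no construction, no gadget.
    assert!(reg.get("JndiLookup").is_none());
    let make = *reg.get("Point").unwrap();
    let point = make();
    assert_eq!(format!("{:?}", point), "(0, 0)");
}
```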
        
               | formerly_proven wrote:
               | > The underlying bug in log4j is having a deserialization
               | mechanism that can automatically deserialize to any class
               | in the system
               | 
               | Getting objects out of a directory services is what JNDI
               | is all about, I'm hesitant to call it a bug.
               | 
               | The bug is that Java is way too keen on dynamically
               | loading code at runtime. Probably because it was created
               | in the 90s, where doing that was kinda all the rage. I
               | think retrospectively the conclusion is that it may be
               | the easiest way to make things extensible short-term, but
               | also the worst way for long-term maintenance. Just ask
               | Microsoft about that.
        
               | chromanoid wrote:
                | No, it's not unique. But a GC, like dynamic typing,
                | strengthens your ability to develop more dynamically and,
                | together with reflection or duck typing, to write code
                | that can deal with partially unknown input more easily.
                | You can pass arbitrary objects (also graphs of
               | objects) around very generously. Ysoserial is based more
               | or less on this idea. Passing arbitrary objects around
                | was deemed so useful that it is also supported by Java's
               | serialization mechanism and thus could be exploited.
               | Log4shell exploits similar mechanisms that would be a
                | hell to implement in non-GC languages.
        
         | phendrenad2 wrote:
         | You don't know what you're asking for. In reality, you'll end
          | up replacing C code with memory unsafety with Rust code written
         | by people who understand Rust less than they understand C. The
         | problem? The Rust Evangelism Strike Force always assumes that
         | if you replace a C program with a Rust program, it'll be done
         | by a top-tier expert Rust programmer. If that isn't the case
         | (which it won't be), then the whole thing falls apart. There
         | are vulnerabilities in JS and Ruby code, languages that are
          | even easier than (and just as type-safe as) Rust.
        
           | dataangel wrote:
           | This is just incorrect. The beauty of Rust is even bad
           | programmers end up writing memory safe code because the
           | compiler enforces it. The ONLY rule an organization needs to
           | enforce on their crappy programmers is not allowing use of
           | unsafe. And there are already available tools for enforcing
           | this in CI, including scanning dependencies.
        
             | [deleted]
        
             | normac2 wrote:
             | I think what they're saying is that by making devs use a
             | less familiar language, you're going to end up with at
             | least as many security bugs, just ones _not_ related to
             | memory safety. (Not weighing in either way, just
             | clarifying.)
        
           | assttoasstmgr wrote:
           | I'm surprised no one has mentioned OpenBSD yet. Theo touches
           | on this topic in this presentation:
           | https://www.youtube.com/watch?v=fYgG0ds2_UQ&t=2200s
           | 
           | Some follow-up commentary: https://marc.info/?l=openbsd-
           | misc&m=151233345723889
        
             | staticassertion wrote:
             | Because it's not relevant.
             | 
             | 1. In the video he's saying that you can't replace memory
             | safety mitigation techniques like ASLR with memory safe
             | languages. He notes that there will always be some unsafe
             | code and that mitigation techniques are free, so you'll
             | always want them.
             | 
             | No one should disagree with that. ASLR is effectively
             | "free", and unsurprisingly all Rust code has ASLR support
             | and rapidly adopts new mitigation techniques as well as
             | other methods of finding memory unsafety.
             | 
             | 2. The link about replacing gnu utils has nothing to do
             | with memory safety. At all.
             | 
             | Even if it were related, it would simply be an argument
             | from authority.
        
           | monocasa wrote:
            | There's something to be said for taking an entire class of
            | vulnerability off the table.
           | 
           | For instance, in the past I worked at a sort of active
           | directory but in the cloud company. We identified parsers of
           | user submitted profile pictures in login windows as a
           | privilege escalation issue. We couldn't find memory safe
           | parsers for some of these formats that we could run in all
           | these contexts, and ended up writing a backend service that
           | had memory safe parsers and would recompress the resulting
           | pixel array.
           | 
           | Rust parsers at the time would have greatly simplified the
            | workflow, and I'm not sure how we would have addressed the
            | problem at the time except as whack-a-mole if we hadn't had
            | our central service in the middle (something MMS can't do).
        
           | staticassertion wrote:
           | > There are vulnerabilities in JS and Ruby code, languages
           | that are even easier (and just as type-safe) as Rust.
           | 
           | This is completely misleading. The vulnerabilities that exist
           | in those languages are completely different. They often are
           | also far less impactful.
           | 
           | Memory safety vulnerabilities typically lead to _full code
           | execution_. It is so so so much easier to avoid RCE in memory
           | safe languages - you can grep for  "eval" and "popen" and
           | you're fucking 99% done, you did it, no more RCE.
        
           | bobbylarrybobby wrote:
           | Rust programmers can write buggy code, but you'd really have
           | to go out of your way to write memory-unsafe code.
        
           | emerongi wrote:
           | I think the only question that matters is how much longer it
           | takes to write a moderately-sized program in Rust vs C. If it
           | takes around the same time, then an average C programmer will
           | probably write code with more bugs than an average Rust
           | programmer. If it takes longer in Rust, the Rust programmer
           | could start taking some seriously unholy shortcuts to meet a
           | deadline, therefore the result could be worse.
           | 
           | All code can have bugs, it's mostly just a question of how
           | many. Rust code doesn't have to have zero bugs to be better
           | than C. It's not like all C programmers are top-tier
           | programmers and all Rust programmers are the bottom of the
           | barrel.
        
             | xxpor wrote:
             | This is part of the issue though:
             | 
             | Writing things in C _correctly_ takes more time than in
             | rust (once you get past the initial learning curve)
             | 
             | Writing things in C that _appear to work_ may take less
             | time.
             | 
             | I think we can be reasonably sure that Apple didn't
             | introduce those image parsing bugs intentionally. But that
             | means they _thought_ it was correct.
        
             | dagmx wrote:
             | I've written a few things at work in C/C++ and Rust. I can
             | move much faster in Rust, personally, as long as the pieces
             | of the ecosystem I need are there. Obviously I only speak
             | for myself.
             | 
             | Part of that is because I'm working in code where security
             | is constantly paramount, and trying to reason about a C or
             | C++ codebase is incredibly difficult. Maybe I get lucky and
             | things are using some kind of smart ptr, RAII and/or proper
             | move semantics, but if they're not then I have to think
             | about the entire call chain. In rust I can focus very
             | locally on the logic and not have to try and keep the full
             | codebase in my head
        
             | staticassertion wrote:
             | That assumes that writing unsafe code would make you go
             | faster. It wouldn't. In general if you want to write code
             | in Rust more quickly you don't use unsafe, which really
             | wouldn't help much, but you copy your data. ".clone()" is
             | basically the "I'll trade performance for productivity"
             | lever, not unsafe.
        
         | CyberRage wrote:
         | As if rewriting entire OS components is easy or viable for
         | vendors, even big ones like Apple or Microsoft.
         | 
         | Also backwards compatibility is a feature many wouldn't give
         | away for extra security, at least not now.
        
           | alophawen wrote:
           | Nobody said it would be easy, but it is already happening.
           | 
           | https://medium.com/@tinocaer/how-microsoft-is-adopting-
           | rust-...
           | 
           | https://preettheman.medium.com/this-is-what-apple-uses-
           | rust-...
        
             | saagarjha wrote:
             | I don't think Apple is shipping anything customer facing
             | that's built on Rust?
        
               | dagmx wrote:
               | Not in rust, but they did:
               | 
               | - Add reference counting to ObjC to get rid of a lot of
               | use after free bugs (still of course possible because
               | it's just a language suggestion and not strictly enforced
               | like Rust or Swift)
               | 
               | - push for adding ObjC notations to let the tooling help
               | catch some set of bugs. Still not perfect by any means
               | but helps a little.
               | 
               | - created an entirely new memory safe language as Swift.
        
               | steveklabnik wrote:
               | Apple+ has some stuff in Rust, in my understanding.
        
             | CyberRage wrote:
             | Did you even read what you linked? I wouldn't say "already
              | happening", more like first early steps. Operating systems
              | have a massive attack surface; it would take years to
              | convert code from C/C++ to Rust, and the result would
              | likely be more vulnerable initially (the old code base went
              | through decades of scrutiny, hundreds of scanners/fuzzers,
              | etc.).
        
           | qualudeheart wrote:
           | My view is that it should be state mandated for products with
           | over 1 million users. In the long run it would pay for itself
           | with the money that no longer has to be spent on mitigating
           | cyber security problems.
           | 
           | Cyber security is national security is the people's security.
           | Ever since my aunt was doxxed and had her online banking
           | money stolen I've become a cyber security hardliner.
        
         | [deleted]
        
         | radford-neal wrote:
         | Wouldn't it be a lot easier to just use a C compiler that
         | produces memory-safe code?
         | 
         | I'm sure someone else has already thought of this, but in case
         | not... All you need to do is represent a pointer by three
         | addresses - the actual pointer, a low bound, and a high bound.
         | Then *p = 0 compiles to code that checks that the pointer is in
         | bounds before storing zero there.
         | 
         | I believe such a compiler would conform to the C standard. Of
         | course, programs that assume that a pointer is 64-bits in size
         | and such won't work. But well-written "application level"
         | programs (eg, a text editor) that have no need for such
         | assumptions should work fine. There would be a performance
         | degradation, of course, but it should be tolerable.
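For what it's worth, Rust slices are essentially the fat pointer described above: a pointer plus a length, with every access checked. A trivial sketch:

```rust
fn main() {
    let buf = [10u8, 20, 30];
    // A slice is (pointer, length) under the hood -- the "fat pointer"
    // scheme, with the bounds carried alongside the address.
    let p: &[u8] = &buf;

    assert_eq!(p.get(2), Some(&30)); // in bounds
    assert_eq!(p.get(3), None);      // out of bounds: no silent read
    // Direct indexing past the end (p[3]) would panic rather than
    // read adjacent memory.
}
```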
        
           | retrac wrote:
           | Yes, such approaches can be compliant. There's even a few C
           | interpreters. Very popular back in the day for debugging C
           | programs when you didn't have full OS debugging support for
           | breakpoints and etc. Such an approach would be quite suitable
           | for encapsulating untrusted code. There is definitely some
           | major overhead, but I don't see why you couldn't use JIT.
        
             | [deleted]
        
           | ghusbands wrote:
           | That doesn't at all address use of pointers that have since
           | become invalid (via free or function return, say).
        
             | radford-neal wrote:
             | Good point. There's also the problem of pointers to no-
             | longer-existing local variables. (Though I think it's rare
             | for people to take addresses of local variables in a
             | context where the compiler can't determine that they won't
             | be referenced after they no longer exist.)
        
           | xavxav wrote:
           | There are similar approaches, ie: Checked-C which work
           | surprisingly well. However, I'm not sure that this approach
           | would be expressive enough to handle the edge cases of C
           | craziness and pointer arithmetic. There's more to memory
           | unsafety than writing to unallocated memory, even forcing a
           | write to slightly wrong memory (ie setting `is_admin = true`)
           | can be catastrophic.
        
             | radford-neal wrote:
             | I think it handles all standards-conforming uses of pointer
             | arithmetic. Even systems-level stuff like coercing an
             | address used for memory-mapped IO may work. For example,
             | struct dev { int a, b; } *p; p = (struct dev *) 0x12345678;
             | 
             | should be able to set up p with bounds that allow access
             | only to the a and b fields - eg, producing an error with
             | int *q = (int *) p; q[2] = 0;
             | 
             | Of course, it doesn't fix logic errors, such as setting a
             | flag to true that shouldn't be set to true.
        
           | AlotOfReading wrote:
           | That's essentially what ASAN is, with some black magic for
           | performance and scope reasons. The problem is that ensuring
           | that your code will detect or catch memory unsafety isn't
           | enough, because the language itself isn't designed to
           | incorporate the implications of that. If you're writing a
           | system messenger for example, you can't just crash unless you
           | want to turn all memory unsafety into a zero-click denial of
           | service.
        
             | toast0 wrote:
             | Based on recent experience, you'd really want your media
             | decoders compiled with a safe compiler, and if it crashes,
             | don't show the media and move on. Performance is an issue,
             | but given the choice between RCE and DoS, DoS is
             | preferable.
             | 
             | It would be nice if everything was memory safe, but making
             | media decoding memory safe would help a lot.
        
               | AlotOfReading wrote:
               | I absolutely agree that it's a step in the right
               | direction. My point is that we can't get all the way to
               | where we want to be simply by incremental improvements in
               | compilers. At some point we have to change the code
               | itself because it's impossible to fully retrofit safety
               | onto C.
        
             | radford-neal wrote:
             | Programs that would crash when using the memory-safe
             | compiler aren't standards conforming. If you're worried
             | that programs crashing due to bugs can be used for a
             | denial-of-service attack... Well, yes, that is a thing.
             | 
             | Low-level OS and device-handling code may need to do
             | something that won't be seen as memory safe, but I expect
             | that for such cases you'd need to do something similarly
             | unsafe (eg, call an assembly-language routine) in any
             | "memory safe" language.
             | 
             | I'm not familiar with how ASAN is implemented, but since it
             | doesn't change the number of bytes in a pointer variable, I
             | expect that it either doesn't catch all out-of-bounds
             | accesses or has a much higher (worst case) performance
             | impact than what I outlined.
        
               | AlotOfReading wrote:
               | I brought up ASAN because it's a real thing that already
               | exists and gets run regularly. The broad details of how
               | ASAN is implemented are best summarized in the original
               | paper [1]. The practical short of it is that there are
               | essentially no false negatives in anything remotely
               | approaching real-world use. A malicious attacker could
               | get around it, but any "better algorithm" would still run
               | into the underlying issue that C doesn't have a way to
               | actually handle detected unsafety and no amount of
               | compiler magic will resolve that.
               | 
                | You have to change the code. Whether that's by using
                | another language or through annotations like Checked C
                | is an interesting (but separate) discussion in
               | its own right.
               | 
               | As for the point that programs with memory unsafety
               | aren't standards conforming; correct but irrelevant.
               | Every nontrivial C program ever written is nonconformant.
               | It's not a matter of "just write better code" at this
               | point.
               | 
               | [1] https://www.usenix.org/system/files/conference/atc12/
               | atc12-f...
        
               | radford-neal wrote:
               | From the linked ASAN paper: "...at the relatively low
               | cost of 73% slowdown and 3.4x increased memory usage..."
               | 
               | That's too big a performance hit for production use -
               | much bigger than you would get with the approach I
               | outlined.
               | 
                | I don't agree that every nontrivial C program is
                | nonconformant, at least if you're talking about
               | nonconformance due to invalid memory references.
               | Referencing invalid memory locations is not the sort of
               | thing that good programmers tolerate. (Of course, such
               | references may occur when there are bugs - that's the
               | reason for the run-time check - but not when a well-
               | written program is operating as intended.)
        
               | nickelpro wrote:
                | That's a low cost for detecting memory-unsafe behavior.
                | It is not intended to run in production; it's intended
                | to run with your test suite.
        
               | radford-neal wrote:
               | Yes, I know. But this thread is about detecting invalid
               | memory references in production, to prevent security
               | exploits. ASAN seems too slow to solve that problem.
        
         | codechad wrote:
        
         | daniel-cussen wrote:
         | Yeah, I started noticing huge flaws in Apple's Music app, which
         | I told them about and work around mostly, but...are they
         | because Apple software is written in C? C++, Objective-C, same
         | thing. Like can C code ever really be airtight?
        
           | can16358p wrote:
           | I'd say lack of QA. Apple Music (especially on macOS) is
           | EXTREMELY buggy, unresponsive, slow, and feels like a mess to
           | use. Same for iMessage.
           | 
           | Other apps are also written using the same stack with almost
           | no bugs. I wouldn't blame the language here, but the teams
           | working on them (or more likely their managers trying to hit
           | unrealistic deadlines).
        
             | daniel-cussen wrote:
             | No, I would not blame the teams or their managers. You
             | can't just blame a manager you've never met just because
             | he's a manager, we're talking about the manager of Apple
             | Music, they could very well be capable and well-minded,
             | likely personally capable of coding. So let me give you
             | another example in the same vein as C, where everybody uses
             | a technology that is terrible, questioning it only at the
             | outset, and then just accepting it: keyboard layouts.
             | QWERTY is obsolete. It wasn't designed at random (it would
             | have aged better if it had been); it was designed to slow
             | down typing so typewriters wouldn't jam. And secondly, so
             | that salesmen could type "TYPEWRITER" with just the top
             | row, so the poor woman being sold to didn't realize
             | typewriters were masochistic. So that's how you end up
             | with millions of
             | people hunting and pecking, or getting stuck for months
             | trying to learn touch typing for real with exercises like
             | "sad fad dad." It takes weeks before you can type "the".
             | It's just that the network effects of keyboard layouts are
             | next-level. Peter Thiel talks about this in "Zero to One"
             | as an example of a technology that is objectively inferior
             | but is still widely used because it's so hard to switch,
             | illustrating the power of network effects. I for one did
             | switch, and it was hard because I could type neither
             | QWERTY nor Dvorak for a month. But after that Dvorak came
             | easily, you don't need an app to learn to type, you just
             | learn to type by typing, slowly at first, then very soon,
             | very fast.
             | 
             | So with regard to C, I would say it is not objectively
             | inferior like QWERTY became, it's actually pretty well
             | designed. It does produce fast code. I use it myself
             | sometimes, not a bad language for simple algorithm
             | prototypes of under 60 lines. But it's based to a huge
             | degree on characters, the difference between code that
             | works and code that fails can come down to characters, C is
             | about characters, pretty much any character, there's no
             | margin of error. Whereas with Lisp, you have parentheses
             | for everything, you have an interpreter but you can also
             | compile it, I am actually able to trust Lisp in a way that
             | is out of the question with C. There's just so incredibly
             | many gotchas and pitfalls, buffer overflows, it's endless,
             | you have to really know what you're doing if you want to do
             | stunts with pointers, memory, void types.
             | 
             | I guess the bottom line is if you want your code to be
             | perfect, and you write it in C, you can't delegate to the
             | language, you yourself have to code that code perfectly in
             | the human capacity of perfection.
        
               | daniel-cussen wrote:
               | > you can't delegate to the language, you yourself have
               | to code that code perfectly in the human capacity of
               | perfection.
               | 
               | Clarifying that what I mean by this is that it's not
               | realistic to expect large C codebases to be perfect. Bug-
               | free, with no exploits. Perfect. Same thing.
        
               | [deleted]
        
               | chaosite wrote:
               | You're being downvoted to oblivion, even though your
               | general point (rephrased, C is an unforgiving language
               | and safer languages are a Good Thing) is pretty
               | mainstream. Here are my guesses why:
               | 
               | 1. You start off by saying you can't just blame the team
               | or their manager if you're dissatisfied with a product,
               | then instead of explaining why the people who made a
               | piece of software aren't responsible for its faults you
               | go off on a long non-sequitur about QWERTY.
               | 
               | 2. Your rant on QWERTY just isn't true. You namedrop
               | Peter Thiel and his book, so if he's your source then
               | he's wrong too. QWERTY is not terrible, not obsolete, it
               | was not designed to slow down typists, and there's no
               | record of salesmen typing "TYPEWRITER" with just
               | the top row. It's true that it was designed to switch
               | common letters between the left and right hands, but that
               | actually speeds up typing. It also does not take weeks
               | for someone to type "the"; and if you mean learning
               | touch-typing, I don't know of any study that claims that
               | alternative keyboard layouts are faster to learn.
               | 
               | The various alt. keyboard layouts (Dvorak, Colemak,
               | workman) definitely have their advantages and can be
               | considered better than QWERTY, sure; people have
               | estimated that they can be up to ~30% faster, but
               | realistically, people report increasing their typing
               | speeds by 5-10%; or at least the ones who have previously
               | tried to maximize their typing speeds... If learning a
               | new layout is the first time they'd put effort into that
               | skill, they'd obviously improve more. It's probably also
               | true that these layouts are more efficient in the sense
               | that they require moving the fingers less, reducing the
               | risk of RSI (though you'd really want to use an ergonomic
               | keyboard if that's a concern.)
               | 
               | QWERTY is still used because it's not terrible, it's good
               | enough. You can type faster than you can think with it,
               | and for most people that's all they want. There's nothing
               | wrong with any of the alternative layouts, I agree that
               | they're better in some respects, but they're not orders
               | of magnitude better as claimed.
               | 
               | 3. Your opinions about C are asinine.
               | 
               | "not objectively inferior like QWERTY" - So, is C good or
               | not? We're talking about memory safety, C provides
               | literally none. Is this not objectively inferior? Now, I
               | would argue that it's not, it's an engineering trade-off
               | that one can make, trading safety for an abstract machine
               | that's similar to the underlying metal, manual control
               | over memory, etc. But you're not making that point,
               | you're just saying that it's actually good before going
               | on to explain that it's hard to use safely, leaving your
               | readers confused as to what you're trying to argue.
               | 
               | "not a bad language for simple algorithm prototypes of
               | under 60 lines" - It's difficult to use C in this way
               | because the standard library is rather bare. If my
               | algorithm needs any sort of non-trivial data-structure
               | I'll have to write it myself, which would make it over 60
               | lines, or find and use an external library. If I don't
               | have all that work already completed from previous
               | projects, or know that I'll eventually need it in C for
               | some reason, I generally won't reach for C... I'll use a
               | scripting language, or perhaps even C++. Additionally,
               | the places C is commonly used for its strengths (and
               | where it has begun being challenged by a maturing Rust)
               | are the systems programming and embedded spaces, so
               | claiming C is only good for 60-line prototypes is just
               | weird.
               | 
               | "C is about characters" - Um, most computer languages are
               | "about characters". There are some visual languages, but
               | I don't think you're comparing C to Scratch here... You
               | can misplace a parenthesis in Lisp or make any number
               | of errors that are syntactically correct yet semantically
               | wrong and you'll have errors too, just like in C. Now,
               | most lisps give you a garbage collector and are more
               | strongly typed than C, for instance, features which
               | prevent entire categories of bugs, making those lisps
               | safer.
               | 
               | 4. You kinda lost the point there. You started by saying
               | that the people who wrote Apple Music "could very well be
               | capable and well-minded, likely personally capable of
               | coding", i.e., they're good at what they do. Fine, let's
               | assume that. Then, your bottom line is that in C "you
               | have to really know what you're doing" and "you yourself
               | have to code that code perfectly in the human capacity of
               | perfection". What's missing here is a line explaining
               | that humans aren't perfect, and even very capable
               | programmers make mistakes all the time, and having the
               | compiler catch errors would actually be very nice. Then
               | it would flow from your initial points that these are
               | actually fine engineers, but they were hamstrung by C.
               | 
               | And the tangent on QWERTY just did not help at all.
        
           | saagarjha wrote:
           | Bugs in Apple's Music apps have essentially nothing to do
           | with it being written in C++ and Objective-C (and these days
           | a significant portion of it is JavaScript and Swift).
        
           | dagmx wrote:
           | Apple Music isn't a great example because depending on which
           | OS and version you're running, it's essentially a hosted web
           | application.
           | 
           | Or, given how new it is, it's likely mostly written in
           | Swift when presented as a native app.
        
         | ghusbands wrote:
         | Around 70% of security flaws are from memory unsafety
         | (according to Google and Microsoft), which isn't "almost
         | universally" but is still a significant percentage and worth
         | attacking. But we'll still have a forest fire to fight
         | afterwards from the other 30%.
        
       | xyzzy123 wrote:
       | "no way to stop them" = the economic impact to Apple isn't big
       | enough to justify the engineering / rewrites required to
       | completely prevent them.
        
         | BiteCode_dev wrote:
         | More likely: our gov gag ordered us to keep up.
        
         | [deleted]
        
         | bilbo0s wrote:
         | Apple, or Microsoft, or Samsung, or Ubuntu, or Google, or
         | whoever can do all the system level bulletproofing they want.
         | People will still write apps. And those apps, probably upwards
         | of 99.999999% of them will be unsafe.
         | 
         | It would take a sea change in the mindsets of software
         | engineers globally to centralize the software development
         | process around a security mindset. That's not going to happen
         | unfortunately. The vast majority of us have neither the
         | expertise, nor the time, to develop 100% secure code. The best
         | most of us conscientious types can do is to provide
         | comprehensive monitoring, so that a user can use those tools to
         | know if and when something is amiss.
        
           | formerly_proven wrote:
           | > And those apps, probably upwards of 99.999999% of them will
           | be unsafe.
           | 
           | Apps are sandboxed, so the damage should be limited to only
           | the exploited app.
           | 
           | Pegasus exploits exploited iMessage et al, which are Apple's
           | own apps with special permissions.
        
             | bilbo0s wrote:
             | Again, from the perspective of a cyber security expert, all
             | that is great!
             | 
             | Or rather would be great if Pegasus was the only 0-day out
             | there. It'd be even better if Pegasus were the only 0-click
             | out there.
             | 
             | Here's the thing though, it's not.
             | 
             | That's the world we live in. So the question is, given that
             | fact, how do we get to a world where we can have some level
             | of security? My belief is that everyone from the users to
              | the app devs has to adopt a security mindset.
             | 
             | Users should not download that free app that lets you see
             | what you would look like as your favorite French pastry.
             | They should not click on the link in that sms they got from
             | that strange phone number. They should be careful about
              | giving out their phone number. Give everyone a Google
              | Voice number instead and let them send texts to that.
              | Then check those texts through Google Voice if you're a
              | high-profile target. (Or even just a guy/gal who
             | has a few people out there who really don't like them.)
             | Keep a buffer between the world and your phone. Etc etc
             | etc.
             | 
             | Devs want access to the file system. Awesome, but they'd
             | better make sure in using that filesystem they are not
             | inadvertently allowing users to take any actions
             | deleterious to the system. Devs want access to the GPU.
             | Again, no problem. But you'd better know how to write
             | secure GPU code. There is no way a browser, or .NET, or
             | Python or an OS can provide you access to a GPU "safely".
             | If they give you the gun, they expect you will use it
             | responsibly.
             | 
             | Browsers and other platform providers should also act
             | responsibly. I understand developers want features. At the
             | same time, is it responsible to hand out access to these
             | features without some kind of plan to keep irresponsible
             | devs from compromising security at scale? Sometimes there
             | just is no way to do that, and I understand. (Access to the
             | GPU is an example. Devs just have to know what they're
             | doing.) But sometimes it is possible to do things in a more
             | secure fashion, or to just wait on delivering that feature
             | altogether.
             | 
             | Point is, for a secure environment, everyone has to play
             | their part. There are so many of these 0-clicks and 0-days
             | out there in the wild. Everyone wants to make a better
             | environment. Well, I'm not seeing how that happens without
             | getting everyone's cooperation. Or, at a minimum, getting
             | everyone to be a bit more careful with their behaviors.
        
               | saagarjha wrote:
               | Do you have concrete examples of how a developer could,
               | say, write secure code to run on the GPU?
        
             | antihero wrote:
             | Is there a way to rescind said permissions?
        
               | saagarjha wrote:
               | Short of not using those apps, no.
        
             | ec109685 wrote:
             | What special permissions were used to enable the attack?
             | 
             | AFAIK, the hacker broke out of the sandbox in addition to
             | rooting iMessage.
             | 
             | Also, the surface area available to a sandbox is too large.
             | Firecracker-like VM isolation is required for safety, which
             | Apple seems to be moving towards when it comes to parsing
             | from their apps at least.
        
           | xyzzy123 wrote:
           | That's a defeatist position.
           | 
           | 99% of the problem is just wanting to not have to rewrite a
           | hundred parsers in memory-safe languages.
           | 
           | It's just economics and engineering.
           | 
           | They don't have to change everyone's minds or fix the world.
           | They'd need to invest _a lot_ but so far nobody really
           | thinks it's worth it.
        
             | CyberRage wrote:
             | People try to address that with simpler solutions that
             | wouldn't break backwards compatibility or require a full
             | rewrite.
             | 
             | Isolation, mitigation, and prevention of exploitation are
             | common.
        
           | simion314 wrote:
           | >People will still write apps. And those apps, probably
           | upwards of 99.999999% of them will be unsafe.
           | 
           | This can be avoided if you have a cross-platform, high-level
           | language, say C# with a big standard library like .NET. The
           | field then needs to make sure the language and core library
           | are safe; most programs use existing libraries and put some
           | business logic on top. I remember that memory safety was a
           | thing before Rust was born; the issue was that the languages
           | were either too slow, not cross-platform, had weird
           | licenses, or were "garbage".
           | 
           | If giants like Apple, Google, and Facebook would contribute
           | to rewriting the core libraries they use, or prove them
           | correct, then things would improve - but how would they then
           | continue to increase their obscene profits?
        
             | delusional wrote:
             | "If we could just have one more layer of abstraction, THEN
             | we would be secure".
             | 
             | You'll end up making a standard library so big that it will
             | never be secure. And even more importantly, you'll strangle
             | innovation by disallowing improvements to the standard
             | library.
        
               | simion314 wrote:
               | I would not make it illegal for cool devs to invent their
               | own language or libraries. I want a good default for
               | string manipulation, HTTP, file manipulation, and
               | JSON/XML/zip and other format parsing; you could rewrite
               | your own in CoolLang using
               | QuantumReactiveAntiFunctionalPatterns. It is your choice
               | whether you use a proven-correct zip library or a
               | different one written by some stranger in a weekend in
               | CoolLang.
               | 
               | Something like the JVM or .NET would be part of the
               | solution, because you could pacify developers: they can
               | use their darling language but target the same platform
               | as the others. We still need true engineers to create an
               | OS and standard library from the ground up, designed for
               | security rather than chaotically evolved.
        
               | giantrobot wrote:
               | > I want a good default for string manipulation,http,
               | file manipulation, json/xml/zip and other format parsing
               | 
               | Wanting those things is fine but _delivering_ those
               | things is extremely difficult. JSON/XML/Zip have so many
               | weird edge cases it's maybe impossible to write parsers
               | that are complete to the spec yet also truly secure. XML
               | and Zip bombs aren't explicit features of either format
               | but they're side effects of not being explicitly
               | forbidden.
               | 
               | You also want "good" parsers without specifying in
               | which dimension you want them to be "good". You can have
               | a complete parser that's reasonably secure but then pay
               | for that with CPU cycles and memory. You can have a small
               | and fast parser that's likely incomplete or has
               | exploitable holes.
        
             | bilbo0s wrote:
             | Respectfully, an enormous amount of work has gone into
             | making sure things like Python, .NET, and Rust are secure.
             | And the security researchers still regularly find bugs and
             | sell 0-days. That's not even counting the work that's gone
             | into the gold standard that is the JVM. Any serious minded
             | security expert could tell you that guaranteeing security
             | on any of these platforms is a Sisyphean effort. Your
             | platform is state of the art with respect to security,
             | until it is not.
             | 
             | The essential problem is features. Devs want features. So
             | Python, .NET, etc, and even the browsers try to provide
             | access to those features. But some of those features are
             | simply inherently unsafe. Someone will find a way to
             | compromise this feature or that. How does one provide 100%
             | safe access to the GPU? The file system? And so on. It's
             | not really possible. At some point, the app level dev will
             | have to keep a security mindset when writing his code.
             | Don't do things on the GPU that compromise the system. But
             | that has to be on the app developer if that developer is
             | demanding that the browsers give him/her access to the GPU.
             | 
             | I don't know if I'm being clear? But I hope you can see
             | what I'm trying to say.
        
               | simion314 wrote:
                | You are using big words like "impossible"; such a big
                | word needs a proof. There are languages that can produce
               | programs proven to be correct as per the specifications.
               | We don't have such programs because we prefer moving
               | fast, breaking things, having fun while coding, making
               | money etc.
               | 
               | Do you have a proof that it is impossible to have a
               | secure calculator application?
               | 
                | About .NET and Python: they use a lot of wrappers
                | around old unsafe code, so we would need to put in more
                | work and eliminate that. MS failed because of their
                | shitty Windows-first ideals and their FUD.
        
               | saagarjha wrote:
               | > Don't do things on the GPU that compromise the system.
               | 
               | Easier said than done...
        
               | simion314 wrote:
                | If, say, a game or 3D program crashes the system or
                | causes a security issue, the problem is the driver or
                | the hardware. A correct driver and hardware should not
                | allow any user-level application to cause issues.
                | 
                | I know it's hard; these GPU companies need to keep
                | backward compatibility, support different operating
                | systems (and versions), and support old stuff that
                | worked by mistake. And probably some benchmark cheating
                | is hidden in the proprietary drivers too.
        
               | saagarjha wrote:
               | Drivers and hardware are not designed by application
               | developers.
        
               | simion314 wrote:
                | I understand. But my point is we could have safe
                | applications if GPU makers cared about safety; at the
                | moment they care about impressing people with
                | benchmarks so they make money. People who run GPU
                | servers will probably virtualize them and put zero
                | pressure on the driver maker to provide safety that
                | might cost a bit of speed.
        
               | staticassertion wrote:
               | > How does one provide 100% safe access to the GPU?
               | 
               | Presumably through pointer capabilities.
               | 
               | > The file system?
               | 
               | File system namespacing and virtualization.
               | 
               | I disagree with most of your assertions.
        
         | CyberRage wrote:
         | Yes but practically this isn't viable.
         | 
         | Nothing is impossible unless it disobeys the laws of
         | physics (which are also limited to what we currently know).
         | 
         | It would be equivalent to saying, well, it isn't economical
         | enough for energy companies to simply create nuclear fusion
         | reactors...
         | 
         | Some things are just extremely hard and there's no obvious
         | answer even if you had "unlimited" funds.
        
         | falsenapkin wrote:
         | It's baffling that they won't at least disable previews for
         | senders not in your contacts. Ideally they would provide a way
         | to block certain types of senders outright. I will NEVER want
         | to receive an iMessage from an unknown email address, but
         | that's where all of the spam crap comes from.
         | 
         | Recently I was on my phone when I received an iMessage from
         | an email address, and the toast showed an absolutely insane
         | link. When I opened iMessage (not even that conversation) to
         | go delete the
         | thread my phone screen went blank quickly 2 or 3 times in a
         | row, something I've never seen before. I deleted the thread and
         | turned the thing off.
        
           | leoqa wrote:
           | Sounds like it was triggering a crash or running 70,000 logic
           | gates?
        
           | abecedarius wrote:
           | Does it solve this to set a different app as your default app
           | for texts? (Hopefully a more secure app.)
        
             | falsenapkin wrote:
             | I want to trust Apple more than a random 3rd party app in
             | general but regardless I don't think you have an option for
             | alternate SMS on iOS. Anyway the problem here, I think, is
             | that you can somehow send an iMessage (not SMS?) via an
             | account that is backed by an email address instead of a
             | phone number. So even if texts/SMS could have an alternate
             | app, iMessage would still be accepting messages from bad
             | accounts.
        
               | abecedarius wrote:
               | "Apple does not allow other apps to replace the default
               | SMS/messaging app." https://support.signal.org/hc/en-
               | us/articles/360007321171-Ca...
               | 
               | Oops, I must've been remembering Android. Well, it's one
               | way Apple could fix this unconscionably lasting security
               | hole.
        
         | jka wrote:
         | Or perhaps one step further (albeit verging into conspiracy
         | theory territory): they intentionally push ahead with known-
         | flawed approaches, projects and engineering practices because
         | it's profitable and there's generally a net benefit to them in
         | being more-aware and more-in-control of the vulnerabilities
         | within that ecosystem than anyone else could be.
         | 
         | (instead of taking the time to wait for research results, best
         | practices, security reviews and privacy concerns up-front at
         | design-time, and even -- shock -- perhaps deciding not to build
         | some societally risky products in the first place)
        
           | tptacek wrote:
           | Apple spends more on security than all but 2 other industry
           | firms (they may spend more than those 2 as well), and has a
           | comparable computing footprint to those firms. This is a
           | facile complaint.
        
             | jka wrote:
             | My comment may have been facile and poorly-argued, sure,
             | but if consumer devices are being sold that can be remotely
             | exploited without user interaction during something as
             | commonplace as rendering images.. surely it's worth
             | considering the potential for structural improvements in
             | industry?
             | 
             | Perhaps the associated billions of dollars of spending is
             | indeed the answer, and will translate into measurable
             | improvements. If so, very well.
             | 
             | Perhaps there are Conway-style architectural issues at hand
             | here as well, though. Can disparate teams working on (a
             | large number of) proprietary interconnected products and
             | features reliably produce secure results?
             | 
             | It seems wasteful that similarly-functioning tools -- like
             | messaging apps -- are continuously built and rebuilt and
             | yet the same old issues (generally exacerbated by
             | increasing web scale) mysteriously re-appear time and
             | again.
        
               | tptacek wrote:
               | This isn't a facile argument. I might disagree with it
               | --- I think things are more complicated than they seem
               | --- but I can't call it facile.
        
       | sorry_outta_gas wrote:
       | They have always been popular... lol
        
       | 0x008 wrote:
        | Can we change this clickbait title?
        | 
        | Both statements are untrue: a) zero-click hacks have always
        | been popular, and b) of course there are ways to stop them.
        
       | rosndo wrote:
       | Years ago we used to regularly have worms that'd infect millions
       | of computers without any clicks at all.
       | 
       | The truth is that "Zero-Click" hacks are becoming increasingly
       | rare.
       | 
       | But of course everything is new for journos unfamiliar with the
       | field.
        
         | raesene9 wrote:
         | To me there's a difference between RCE and Zero click.
         | 
          | RCE occurs on a system with a listening daemon/service (e.g.
          | web, SQL, DNS, SSH).
         | 
         | Zero-click describes an issue on a client system where usually
         | a user would have to click something to trigger it, but doesn't
         | as parsing/processing happens before the user actually sees
         | anything (e.g. via an SMS on a phone).
        
           | rosndo wrote:
           | There is no meaningful distinction between the two.
           | 
           | > Zero-click describes an issue on a client system where
           | usually a user would have to click something to trigger it,
           | but doesn't as parsing/processing happens before the user
           | actually sees anything (e.g. via an SMS on a phone).
           | 
           | Historically these have been referred to as RCE.
           | 
           | FWIW You are essentially describing a service listening on
           | the network. It's silly to try to make an artificial
           | distinction based on some irrelevant L4 differences.
        
             | [deleted]
        
             | raesene9 wrote:
             | That's a view of the world for sure :) Personally I don't
             | think it's irrelevant. From a threat modelling perspective,
             | exposed services are expected to be attacked.
             | 
              | Clients, absent any interaction, have traditionally been
              | regarded as safer; usually for client-side attacks we'd
              | expect a trigger from user action (e.g. a link being
              | clicked, a PDF file being opened).
             | 
             | Just because you don't find something to be useful as a
             | distinction in your line of work doesn't necessarily mean
             | that it's not useful to anyone ...
        
               | rosndo wrote:
               | Client services like these are also expected to be
               | attacked.
               | 
                | iMessage isn't meaningfully different from Apache;
                | instead of listening on a TCP port, it listens on your
                | Apple user ID.
        
           | tptacek wrote:
           | RCE is routinely used to describe clientside bugs; you're
           | mixing orthogonal concepts here.
        
         | staticassertion wrote:
         | Yes, Chrome pretty much single-handedly changed that, timed
         | well with Vista. For about a decade we got a reprieve because:
         | 
         | 1. Memory safety mitigations became much more common (Vista)
         | 
         | 2. Browsers adopted sandboxing (thanks IE/Chrome)
         | 
         | 3. Unsandboxed browser-reachable software like Flash and Java
         | was moved into a sandbox and behind "Click to Play" before
         | eventually being removed entirely.
         | 
         | 4. Auto-updates became the norm for browsers.
         | 
         | And that genuinely bought us about a decade. The reason things
         | are changing is because attackers have caught back up. Browser
         | exploitation is back. Sandboxing is amazing and drove up the
         | cost, but it is not enough - given enough vulnerabilities any
         | sandbox falls.
         | 
         | So it's not that security is getting worse, it's that security
         | got better really really quickly, we basically faffed around
         | for a decade more or less making small, incremental wins, and
         | attackers figured out techniques for getting around our
         | barriers.
         | 
         | If we want another big win, it's obvious. Sandboxing and memory
         | safety have to be paired together. Anything else will be an
         | expensive waste of time.
        
           | rosndo wrote:
           | > And that genuinely bought us about a decade. The reason
           | things are changing is because attackers have caught back up.
           | Browser exploitation is back. Sandboxing is amazing and drove
           | up the cost, but it is not enough - given enough
           | vulnerabilities any sandbox falls.
           | 
           | It's still a completely different world. We've come a long
           | way from back when Paunch was printing money with Blackhole.
        
         | f311a wrote:
         | Exactly, 10 years ago tens of millions users were using
         | outdated Flash and Internet Explorer. Literally everyone could
         | infect them using pretty old exploits. There were no
         | autoupdates.
        
           | rosndo wrote:
            | Yep, good luck finding a useful exploit pack on crime
            | forums now. The days of blackhole & co are long past, those
           | hacked far more people than those discussed in this article
           | ever will.
        
         | ziml77 wrote:
         | I was about to say the same thing in response to people
         | claiming security is getting worse. Zero-Click is just another
         | name for a worm. I guess mayyybe you could consider Zero-Click
         | as more like a class of worm whose entry into the system is
         | visible (you can see that you got the strange message or
         | image).
         | 
          | And you're definitely right that they are far more rare.
          | Worms used to be nasty in how fast and easily they spread.
          | Security has come a long way since then.
          | 
          | That said, we could go further on security. But the hard part
          | is selling people on using more secure software and hardware.
          | Even
         | something as simple as bounds checking has a cost. Look at the
         | reception of the Windows 11 change to have Virtualization Based
         | Security turned on by default. People are upset about it
         | because it takes away performance for security that they claim
         | they don't need on their home computer.
         | 
         | And then there's resistance from developers. For some reason
         | people get really upset about mechanisms designed to improve
         | security without increasing runtime overhead when they make
         | compile time take longer. If your application is used by any
         | significant number of people, surely the amount of runtime
         | you're saving dwarfs the amount of extra time to compile.
        
           | tptacek wrote:
           | Wormable bugs are a subset of zero-click bugs. Worms are very
           | rare, and always have been, even during "the Summer of
           | Worms".
        
             | [deleted]
        
             | rosndo wrote:
             | Even a browser driveby attack is wormable if you use
             | various social media for spreading.
             | 
             | Few vaguely reliable RCE bugs aren't wormable. Even ones
             | requiring significant user interaction are wormable, office
             | macros are wormable.
             | 
              | Wormable bugs are far more common than actual worms.
        
         | skybrian wrote:
         | Both attacks and defenses have gotten a lot better. Meanwhile,
         | the consequences of hacks keep going up every year. You didn't
         | have viruses disrupting shipping or gas pipelines before,
         | because they didn't depend as much on computers.
        
           | ReptileMan wrote:
            | You have to be a special kind of naive to not have such
            | infrastructure behind an airgap.
        
             | toss1 wrote:
             | Yup.
             | 
             | And yet in most organizations, airgapping is an alien
             | concept. Even for machine tools that could kill someone.
             | 
             | Security vs convenience...
        
               | freshpots wrote:
               | Airgapping didn't stop Stuxnet.
        
             | [deleted]
        
             | unionpivo wrote:
             | > You have to be a special kind of naive to not have such
             | infrastructure behind airgap.
             | 
              | Most of it is not behind an air gap. And unless things
              | get a lot worse, it won't be.
              | 
              | Proper air gapping is hard, expensive, and a pain in the
              | ass on an ongoing basis.
              | 
              | That's why almost no one does it. Not even most military
              | systems are air gapped.
        
         | [deleted]
        
         | foxtrottbravo wrote:
          | I was about to ask whether I'm missing something here. "Zero
          | Click" just means no user interaction is required, right? So
          | from my perspective this is just another way of saying Remote
          | Code Execution?
         | 
         | There really isn't something new here other than a fancy name -
         | or I am not seeing the point.
        
           | l33t2328 wrote:
           | Some RCE can require user interaction.
        
           | rosndo wrote:
           | That's correct.
        
           | greiskul wrote:
            | A non-zero-click remote code execution would be, for
            | example, the attacker sending the victim a message with a
            | link or attachment; if the victim interacts with it, the
            | attacker gets to run code they wrote on the victim's
            | device.
            | 
            | A zero-click remote code execution would be, for example,
            | where the attacker sends a message, and the victim's phone
            | just processing the message on its own is enough for the
            | attacker to execute code on the victim's device.
            | 
            | A non-zero-click vulnerability can be mitigated by being
            | cautious. A zero-click vulnerability cannot.
        
             | rosndo wrote:
              | > A non-zero-click vulnerability can be mitigated by
              | being cautious. A zero-click vulnerability cannot.
             | 
             | No amount of caution will save you when the exploit is
             | injected into a major website.
             | 
             | Why bother with such meaningless distinction? Does your
             | browser never hit any http:// resources?
        
             | staticassertion wrote:
             | The terms are all stupid and were made up 40 years ago.
             | Trying to tease out nuance is pointless.
             | 
             | You get owned without clicking hence zero click. Is it
             | different from RCE? A subset? Doesn't matter. Title could
             | have said RCE.
        
           | i67vw3 wrote:
           | 'Zero-Click' is just a new buzzword, it probably got popular
           | due to Pegasus and NSO.
        
         | caoilte wrote:
          | Errr, what? I'm struggling to figure out what you might mean
          | here. Are you talking about floppy-disk-shared worms?
        
           | benreesman wrote:
           | A sibling has already linked it, but extra context: Robert
            | Tappan Morris (aka 'rtm') is a _legend_ , his dad is a _Bell
           | Labs legend_ , and along with Trevor he's kind of the "silent
           | partner" in the Viaweb -> YC -> $$$$$ miracle.
           | 
           | Guy's a _boss_.
        
           | maxmouchet wrote:
           | https://en.wikipedia.org/wiki/Blaster_(computer_worm)
        
           | toredash wrote:
           | https://en.m.wikipedia.org/wiki/Melissa_(computer_virus)
           | 
           | For instance
        
           | kro wrote:
           | I think they talk about worms that spread by infecting other
           | devices in the local network using RCEs in net-services like
           | rdp/smb/..
           | 
           | That or maybe drive-by downloads / java/activeX code
           | execution, which have become more rare
        
           | efdee wrote:
           | Maybe Code Red
           | <https://en.wikipedia.org/wiki/Code_Red_(computer_worm)>,
           | Conficker <https://en.wikipedia.org/wiki/Conficker> or
           | Blaster
           | <https://en.wikipedia.org/wiki/Blaster_(computer_worm)>.
        
             | w1nk wrote:
             | Also, don't forget https://en.wikipedia.org/wiki/Nimda ...
             | all of these were a horror show to deal with on networks of
             | the era..
        
               | brazzy wrote:
               | My favourite: https://en.wikipedia.org/wiki/SQL_Slammer -
               | 376 bytes of malware, spread via spraying UDP packets at
               | random IP addresses, infected basically every vulnerable
               | system on the entire internet within 10 minutes.
        
           | ugl wrote:
           | Probably referencing this
           | https://en.wikipedia.org/wiki/Morris_worm
        
       | Arrezz wrote:
        | I have always wondered whether the increasing technical
        | complexity of the world, and the bugs it will bring, is
        | outpacing the efforts of bug hunters and the security industry.
        | Oftentimes it feels like a losing battle, but I would love to
        | see some research on the subject; it might be hard to get any
        | solid data, however.
        
       | bbarnett wrote:
       | Zero click hacks have been around for all of computing. Nothing
       | connected to the internet, connected to a network, has ever, ever
       | been safe.
       | 
       | All you can do is reduce attack surface, and most of all,
       | monitor.
       | 
       | Another comment blames Apple, and financial incentives. Sure,
       | there may be some of that.
       | 
       | But the reality is that safe code is impossible. Now, you may say
       | "But...", yet think about this.
       | 
       | For all of computing history, all of it, no matter what language,
       | no matter how careful, there is always a vulnerability to be had.
       | 
       | Thinking about software, and security any other way, is an
       | immediate fail.
       | 
       | Arguing the contrary, is arguing that the endless litany of
       | endless security updates, for _the stuff discovered_ , doesn't
       | exist.
       | 
        | And those updates only cover stuff discovered. There are endless
       | zero days right now, being exploited in the wild, without
       | patches, of which we are unaware.
       | 
        | We've seen vulnerabilities in every kernel, in mainline software,
       | on every platform, sitting for years too. And you know those are
       | discovered by black hats, and used for a long time before being
       | found out by the rest of the community.
       | 
       | Humans cannot write safe software. Ever. No matter what.
       | 
       | Get over it.
       | 
       | Only detailed, targeted monitoring can help you detect intrusion
       | attempts, expose as little as possible, keep updated, and do your
       | best.
        
         | xorcist wrote:
         | Well designed software with attack surface within the bounds of
         | human understanding does not have these problems.
         | 
         | OpenSSH has been exposed to the public Internet for over two
         | decades, with nothing resembling this type of security problem.
         | OpenSSH runs the protocol parser without permissions on the
         | local filesystem, yet Apple thinks an ancient tiff library with
          | scripting abilities can be run with full permissions. Of
         | course there is a discussion of financial incentives and
         | customer expectations to be had here.
         | 
          | URL previews are an anti-feature for many users. We could
          | not care less. But they get shoved upon users by product
          | feature teams for whom a continuous stream of new features is
          | their reason for being. That's how we develop commercial
          | software, but that's not the only way.
        
           | Retr0id wrote:
            | > Apple thinks an ancient tiff library with scripting
            | abilities can be run with full permissions.
           | 
           | It doesn't. That is just one step in a chain of exploits.
        
         | Grimburger wrote:
         | > safe code is impossible
         | 
         | > Humans cannot write safe software. Ever. No matter what.
         | 
         | Formally proven code does what it says on the box? Do we have
         | different definitions of safe perhaps?
        
           | bregma wrote:
           | You could formally prove Unicode renderers are 100% correct.
           | It's because they are 100% correct that they can be relied on
           | to be exploited.
           | 
           | The weak spot when it comes to security is not the hardware
           | or the software, it's the human mind.
        
           | speedgoose wrote:
           | The proof may be valid but the implementation may have a
           | mistake, or the compiler, or the operating system, or the
           | hardware.
        
             | AussieWog93 wrote:
             | Or if it dynamically loads anything that isn't formally
             | proved.
        
           | mh7 wrote:
            | One can be fundamentally mistaken about what "it says on
            | the box". See WPA2/KRACK for example.
           | 
           | It becomes an infinite recursion of "how do we know the proof
           | of the proof of the..." is what we actually want?
        
           | CorrectHorseBat wrote:
            | But does the hardware? Formally proven code does not
            | protect you from hardware bugs like rowhammer.
        
           | bbarnett wrote:
           | You mean, when you look at your code, or someone else does,
           | they think it's ok?
           | 
           | I guess that's why security issues, even in massively peer
           | reviewed code, are a thing of the past, right?
           | 
           | Do your best, code as safely and securely as you know how,
           | peer review and test and fuzz...
           | 
           | Then when you deploy your code, treat it as vulnerable,
            | because history says it likely is.
           | 
           | Treat your phone as compromised. Anything network connected
           | as compromised.
           | 
           | Because history says it can be, and easily.
           | 
           | Monitoring is one of the most important security measures for
           | a reason.
        
             | CorrectHorseBat wrote:
             | There does actually exist such a thing as formally proven
              | code, which is mathematically proven to match its spec.
             | https://www.sel4.systems/Info/FAQ/proof.pml
        
         | littlestymaar wrote:
         | You're missing the point.
         | 
          | Humans cannot write bug-free software[1]. So if your
          | software has security-related things to do (like credential
          | management etc.) you cannot be sure there won't be a way to
          | bypass it.
         | 
         | But here we're talking almost exclusively about _remote
         | execution_ bugs coming from _memory-safety issues_ , which are
         | indeed preventable. Any managed language does the trick, and if
         | they are not fast enough for your use-case there is Rust. (And,
         | before anyone mentions it, since this isn't some low-
         | level/hardware related thing, you don't need to use unsafe
         | Rust).
         | 
          | Rewrites are costly and take time, but we're talking about
          | the wealthiest company on Earth, and these issues have been
          | around for years, so they don't really have an excuse...
         | 
         | [1]: at least without using formal verification tools, which
         | are admittedly not practical enough...
        
           | bbarnett wrote:
           | There is no safe way to code anything, ever.
           | 
           | (Yes people, and Apple should try, but...)
        
             | roca wrote:
             | This absolutist statement is basically meaningless.
             | 
             | Taking Rust as an example (use Swift or even Java if that
             | works better for your use-case), we know how to write Rust
             | code that is guaranteed to be free from common classes of
             | bugs that these zero-click attacks exploit.
             | 
             | Yes, we aren't going to get rid of all bugs, yes, zero-
             | click attacks might still be possible once in a while, but
             | we can make it much, much harder and more expensive, and
             | therefore greatly reduce the set of people who have access
             | to such attacks, and reduce their frequency.
        
               | bbarnett wrote:
               | _we know how to write Rust code that is guaranteed to be
               | free from common classes of bugs that these_
               | 
               | No we don't.
               | 
               | You are trying to shift the sands, by saying "But.. this
               | one thing we can do...", except even that isn't true.
               | 
               | If we did, it wouldn't keep happening, year after year,
               | decade after decade.
               | 
               | But even with peer reviews, with people supposedly
               | knowing how, well.. it just keeps happening.
               | 
               | Do you think every occurrence is random chance? Or is it,
               | maybe, just maybe, that humans can't write bug free code?
        
               | littlestymaar wrote:
                | I guess this kind of nihilistic conservatism ("why
                | bother changing anything since there's nothing we can
                | do") may explain why we are in such a bad situation
                | today...
        
               | roca wrote:
               | In practice safe Rust code never causes use-after-free
               | bugs, for example, and UAF bugs are large fraction of
               | exploitable RCE bugs.
               | 
               | Safe Rust code could trigger a compiler bug that leads to
                | use-after-free, or trigger a bug in unsafe Rust code (i.e.,
               | code explicitly marked "unsafe") that leads to use-after-
               | free; the latter are rare, and the former are even rarer.
               | In practice I've been writing Rust code full time for six
               | years and encountered the latter exactly once, and the
               | former never. In either case the bug would not be in the
               | safe code I wrote.
               | 
               | I'm certainly not claiming that humans can write bug-free
               | code. The claim is that with the right languages you can,
               | in practice, eliminate certain important classes of bugs.
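
A minimal sketch (illustrative, not from the thread) of the class of bug being discussed: in safe Rust the borrow checker rejects a use-after-free at compile time, so the bug never reaches a shipped binary.

```rust
fn main() {
    let s = String::from("payload");
    let r = &s; // `r` borrows `s`

    // Uncommenting the next line is a compile-time error
    // ("cannot move out of `s` because it is borrowed"),
    // not a runtime use-after-free as it could be in C:
    // drop(s);

    assert_eq!(r.as_str(), "payload");
    println!("ok");
}
```

The check happens before the program runs, which is why no amount of testing or fuzzing is needed to catch this particular class of mistake in safe code.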
        
               | vbphprubyjsgo wrote:
        
           | malka wrote:
              | How can you be sure there are no undiscovered bugs in
              | Rust itself?
        
             | roca wrote:
             | We don't need to be sure of that. We already have ample
             | evidence that code written in Rust has far fewer
             | vulnerabilities than, say, code written in C.
        
               | xet7 wrote:
               | Quoting from https://forum.nim-lang.org/t/8879#58025
               | 
               | > Someone in the thread said he has 30 years experience
               | in programming, and the only new lang which is really
               | close to C in speed is Rust.
               | 
               | He has a point. Both C and release-mode Rust have minimal
               | runtimes. C gets there with undefined behavior. Rust gets
               | there with a very, very robust language definition that
               | allows the compiler to reject a lot of unsafe practices
               | and make a lot of desirable outcomes safe and relatively
               | convenient.
               | 
               | However, Rust also allows certain behavior that a lot of
               | us consider undesirable; they just define the language so
               | that it's allowed. Take integer overflow, for instance
               | https://github.com/rust-
               | lang/rfcs/blob/26197104b7bb9a5a35db2... . In debug mode,
               | Rust panics on integer overflow. Good. In release mode,
               | it wraps as two's complement. Bad. I mean, it's in the
               | language definition, so fine, but as far as I'm concerned
               | that's about as bad as the C
               | https://stackoverflow.com/a/12335930 and C++
               | https://stackoverflow.com/a/29235539 language definitions
               | referring to signed integer overflow as undefined
               | behavior.
               | 
               | I assume the Rust developers figure you'll do enough
                | debugging to root out integer overflows, and maybe
               | that's true for the average system, but not all! I once
               | had to write a C++ program to compute Hilbert data for
               | polynomial ideals. The data remained relatively small for
               | every ideal that could reasonably be tested in debug
               | mode, since debug mode is much slower, after all. But
                | once I got into release mode and worked with larger ideals,
               | I started to encounter strange errors. It took a while to
               | dig into the code, add certain manual inspections and
               | checks; finally I realized that the C++ compiler was
               | wrapping the overflow on 64 bit integers! which is when I
               | realized why several computer algebra systems have gmp
               | https://gmplib.org/ as a dependency.
               | 
               | OK, that's the problem domain; sucks to be me, right? But
               | I wasted a lot of time realizing what the problem was
               | simply because the language designers decided that speed
               | mattered more than correctness. As far as I'm concerned,
               | Rust is repeating the mistake made by C++; they're just
               | dressing it up in a pretty gown and calling it a
               | princess.
               | 
               | This is only one example. So, sure, Rust is about as fast
               | as C, and a lot safer, but a lot of people will pay for
               | that execution boost with errors, and will not realize
               | the cause until they've lost a lot of time digging into
               | it... all to boast, what? a 1% improvement in execution
               | time?
               | 
               | IMHO the better design choice is to make it extremely
               | hard to override those overflow checks. There's a reason
               | Ada has historically been a dominant language in
               | aerospace and transportation controls; they have lots of
               | safety checks, and it's nigh impossible to remove them
               | from production code. (I've tried.) Nim seems more like
               | Ada than Rust in this respect: to eliminate the overflow
               | check, you have to explicitly select the very-well-named
               | --danger option. If only for that reason, Nim will seem
               | slower than Rust to a lot of people who never move
               | outside the safe zone of benchmarks that are designed to
               | test speed rather than safety.
               | 
                | To be fair, once you remove all of these ~1% slowdown
                | checks, the combined performance boost is much higher. And Rust
               | really is a huge improvement on C/C++ IMHO, with very
               | serious static analysis and a careful language design
               | that isn't encumbered by an attempt to be backwards
               | compatible with C. So if you're willing to make that
               | tradeoff, it's probably a perfectly reasonable choice.
               | Just be aware of the choice you're making.
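The debug-vs-release overflow behavior being debated above can be reproduced in a few lines of Rust. A minimal sketch (the explicit `*_add` methods shown behave identically in every build mode, which is the usual way to opt out of the ambiguity):

```rust
fn main() {
    let x: u64 = u64::MAX;

    // `x + 1` panics in debug builds ("attempt to add with overflow")
    // but silently wraps to 0 in a default release build -- the same
    // trap the parent comment hit in C++.

    // Mode-independent alternatives that make the intent explicit:
    assert_eq!(x.wrapping_add(1), 0);          // always wraps
    assert_eq!(x.checked_add(1), None);        // None signals overflow
    assert_eq!(x.saturating_add(1), u64::MAX); // clamps at the maximum
    println!("overflow handled explicitly");
}
```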
        
               | kibwen wrote:
               | Wrapping on integer overflow isn't a memory safety bug in
               | Rust. It's often a memory safety bug in C because of how
               | common pointer arithmetic is in C, and the likelihood
               | that the overflowed integer will be used as part of that
               | pointer arithmetic. But pointer arithmetic is so
               | exceedingly uncommon in Rust that I've never seen it done
               | once in my ten years of using it. This is a place where
               | familiarity with C will mislead you regarding accurate
               | risk assessment of Rust code; wrapping overflow isn't in
               | the top 20 things to worry about when auditing Rust code
               | for safety.
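kibwen's point can be illustrated with a short sketch: even when an index has silently wrapped, safe Rust bounds-checks every slice access, so the worst case is a panic rather than the out-of-bounds read that C pointer arithmetic would permit.

```rust
fn main() {
    let data = [10u8, 20, 30];

    // An index corrupted by wrapping: usize::MAX + 2 wraps around to 1.
    let wrapped_idx = usize::MAX.wrapping_add(2);
    assert_eq!(data[wrapped_idx], 20); // happens to still be in range

    // Safe Rust never turns a bad index into arbitrary memory access:
    // `data[5]` would panic, and `get` returns None instead of UB.
    assert!(data.get(5).is_none());
}
```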
        
               | littlestymaar wrote:
                | This is again confusing perfect bug-freedom and memory
                | safety. Rust doesn't guarantee your code won't have bugs
                | (like integer overflow), but those bugs will never lead to
                | memory vulnerabilities (in safe Rust), which means you'll
                | never encounter a remote code execution caused by an
                | integer overflow in Rust.
               | 
               | The key takeaway is the following: Rust programs will
               | contain bugs, but none of those bugs will lead to the
               | kind of crazy vulnerabilities that allow those "zero-
               | click attacks". Is that perfect? No, but it's an enormous
               | improvement over the status quo.
        
               | roca wrote:
               | You can enable integer overflow checking in Rust release
               | builds. Android does. I think that trend will continue
               | and at some point even become the default.
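For reference, the release-profile switch roca describes is a one-line Cargo setting (no code changes needed):

```toml
# Cargo.toml: keep overflow panics in optimized builds
[profile.release]
overflow-checks = true
```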
        
             | vbphprubyjsgo wrote:
        
         | egberts1 wrote:
          | Apparently, formal proof of algorithms being safe and sound
          | has been repeatedly demonstrated, just not for Apple's
          | closed (proprietary) software, specifically their large
          | 14-format image decoders running outside a sandbox.
        
           | saagarjha wrote:
           | Apple sandboxes their image decoder.
        
             | egberts1 wrote:
              | NOW they do. It is STILL a large attack surface.
        
       | pyman wrote:
       | There are no laws in Israel preventing companies like NSO from
       | building and selling zero-day and zero-click exploits? Without
       | proper regulations the Israeli government is creating a
       | sophisticated and dangerous platform for these kind of illegal
       | attacks.
        
         | sva_ wrote:
         | I don't think NSO is building & selling exploits. They're
         | buying and renting them out. Exploit-as-a-service.
        
         | kworks wrote:
         | A super reductive way of explaining it is it's because a lot of
         | state actors (including NSA) have a lot of skin in the game
         | through active, deep investment in the cyber weapon market.
         | State actors strongly incentivize the 'attack' side of the
         | market while companies historically disincentivize the
         | 'defense' side. A solid elucidation of the system (for
         | laypeople like me) can be found in Nicole Perlroth's book "This
         | is How They Tell Me the World Ends":
         | https://browse.nypl.org/iii/encore/record/C__Rb22352302__STh...
         | 
         | Anyone interested in learning more about how NSO group operates
         | can check out digitalviolence:
         | https://www.digitalviolence.org/#/
        
         | tptacek wrote:
         | What would outlawing NSO Group accomplish? The trade would
         | simply move to jurisdictions with even less oversight.
        
         | markdown wrote:
          | Israel is arguably the world's biggest beneficiary of the
          | arms trade. Why would they have anything against selling weapons?
        
           | recuter wrote:
           | Only in your active imagination. In reality it is roughly in
            | 8th place with 3% market share.
           | 
           | https://www.weforum.org/agenda/2019/03/5-charts-that-
           | reveal-...
        
             | lupire wrote:
             | Now try that analysis per capita.
        
             | kevin_thibedeau wrote:
             | That doesn't include the free weapons the US gives them
             | with their annual stipend.
        
         | jraby3 wrote:
          | The article states that selling to over 90 countries is
          | illegal. But maybe resellers are getting it to those
          | countries?
        
         | rmbyrro wrote:
         | The Israeli government is exploiting NSO as a global diplomacy
         | leverage. In exchange for approving the export of NSO software,
         | they request foreign State support for their interests abroad,
         | ranging from UN voting to commercial deals and anything in
         | between.
        
           | mastax wrote:
           | Essentially works like arms exports, which makes a lot of
           | sense.
        
       | manholio wrote:
       | I have a Galaxy Tab 3 which was on sale in my area until 2018.
       | It's a perfectly usable device. Samsung refuses to upgrade past
       | Android 7. The last security patch is more than a year old.
       | 
       | Mobile security is a huge mess because of planned obsolescence.
        | There should be no security reason that forces me to junk a
        | device in under 10 years if the manufacturer is still in
        | business.
        | 
        | Regulatory action is required, and it should look past the
        | traditional "warranty" periods: removing security support is a
        | remote confiscation of private property.
        
         | smichel17 wrote:
         | Similar story here. The worst part is that the locked
         | bootloader means that I can't upgrade it myself, either.
        
       | obblekk wrote:
       | Why don't Apple & Google spend a few billion dollars over a few
       | years to rewrite their (non-crypto) unix stack from scratch? It
       | seems like that would be an enduring competitive advantage, good
       | for their users, and reduce future liabilities.
       | 
       | Every programming language can result in bugs, but some are
       | worse/more frequent/harder to solve afterwards than others.
       | 
       | Better yet, why wasn't "rebuild commonly used standard libraries"
       | in the US Infrastructure bill last year? The government could pay
       | programmers a lot, and pay whitehat pen-testers a lot (+ per bug
       | discovered) and in a few years of iteration, we'd have incredibly
       | hardened, durable software infrastructure that would benefit us
       | for decades to come, in the public domain.
        
         | mrtksn wrote:
          | Remember when Apple replaced mDNSResponder with discoveryd? It
          | was a total disaster: 95% CPU usage and all kinds of
          | connectivity issues. They had to bring back mDNSResponder not
          | long after.
         | 
          | So there's no guarantee that the replacements would be bug-
          | free. If anything, the current stuff is battle-tested through
          | the years and gets better with each scar. I would guess that
          | they are also employing all kinds of hacks, i.e. things that
          | are not supposed to work that way but do, and making the new
          | code work the way it is "supposed to" would break a lot of
          | things.
         | 
         | There's even XKCD for that: https://xkcd.com/1172/
        
           | contravariant wrote:
           | See also Hyrum's law:
           | 
           | > "With a sufficient number of users of an API, it does not
           | matter what you promise in the contract: all observable
           | behaviors of your system will be depended on by somebody."
        
         | [deleted]
        
         | servercobra wrote:
         | Isn't Google (allegedly) already doing this with Fuchsia?
        
           | Dinux wrote:
           | Fuchsia isn't Unix afaik
        
             | lupire wrote:
             | "Fuchsia implements some parts of Posix, but omits large
             | parts of the Posix model."
             | 
             | -- fuchsia.dev
        
           | Natsu wrote:
           | They've done their own version of a lot of things, but then
           | have open sourced what they can to grow the base of people
           | familiar with that tech who can make things with it.
           | 
           | I mean, on some level you can try to make your own custom
           | TempleOS for everything, but that only gets you (possibly)
           | reduced scrutiny and hiring issues simply because nobody
           | knows how to use it. But if you're already a target, the
           | reduced scrutiny is probably a bad thing since the good guys
           | won't point out the bugs to get them fixed.
        
         | bo1024 wrote:
         | I think to a large extent this is a mythical man-month thing.
         | Beyond a small scale, you probably can't improve or speed up
         | operating system design by throwing money and person count at
         | it.
        
         | Natsu wrote:
         | Mostly because it's not clear that from-scratch rewrites
         | produce better results. They can if the entire architecture
         | needs to be different, but for many libraries it just devolves
         | into an exercise in bikeshedding.
         | 
         | This is particularly true of the US government which, if you've
         | seen their IT systems, is not going to be anyone sane's first
         | choice for doing from-scratch rewrites.
        
           | thelittleone wrote:
           | I disagree. I think governments prioritise spending, jobs and
           | votes over "better results".
        
         | qualudeheart wrote:
         | It's a good long term investment. Just not viable in the short
         | term. Too expensive.
        
         | kdioo88 wrote:
        
       | Hnrobert42 wrote:
       | It seems one way to stop many of them would be to only access
       | email using a web client.
        
         | upofadown wrote:
         | Text email. HTML email provides access to a vast attack surface
         | on the user's device. Normally webmail clients will cheerfully
          | send along all the HTML. Filtering is a futile game of
          | whack-a-mole.
         | 
         | If you are only doing text email (or some very restricted HTML
         | interpretation) then there is no extra risk in using a local
          | email client. The lack of an HTML interpreter probably means you
         | would be safer than with a webmail client.
         | 
         | If you are doing some form of email as a precaution, you still
         | need a secure place to do it. That might not be a typical smart
         | phone.
        
       | oefrha wrote:
       | What's the evidence that zero-click hacks are growing in
       | popularity? TFA doesn't seem to provide any, and given that in
       | the not so distant past, every other Windows PC was infested with
       | viruses and/or trojans, it's hard to believe device security is
       | on a downward trajectory.
        
         | orbital-decay wrote:
         | Yeah, there was a time when installing Windows XP with an
         | Ethernet cable plugged in was impossible, because the PC would
         | get infected before even finishing the setup, and reboot.
        
         | staticassertion wrote:
         | You can just count ITW exploits against Chrome, for example, to
         | see that they're increasing over the last 3 years. I assume the
         | same is true for some other software.
        
       | dang wrote:
       | Recent and related:
       | 
        |  _A Saudi woman's iPhone revealed hacking around the world_ -
       | https://news.ycombinator.com/item?id=30393530 - Feb 2022 (158
       | comments)
       | 
       | Before that:
       | 
       |  _A deep dive into an NSO zero-click iMessage exploit: Remote
       | Code Execution_ - https://news.ycombinator.com/item?id=29568625 -
       | Dec 2021 (341 comments)
        
       | stunt wrote:
       | As a software engineer I still don't understand how this is even
       | possible.
       | 
       | What kind of logic behind a URL preview can bypass everything? I
        | think companies like NSO Group are just finding backdoors,
        | not software bugs.
        
         | phendrenad2 wrote:
         | I like how all of the replies to this are basically "No they
         | exploited things that were already there". Yeah, and the things
         | that were already there were written by..? Robots? Monkeys? Oh,
          | employees. Got it.  _rolls eyes_ I think it's completely
         | reasonable to assume that any OS vendor has enemy spies working
         | for them. How could they not?
        
         | bawolff wrote:
          | URL preview is a pretty big attack surface: you have to fetch
          | over the network using complex protocols, parse the result in
          | a variety of formats, and then render it.
        
           | 55555 wrote:
            | Right. Showing an "image preview" for myriad file types
            | essentially means executing parsers for them, and that code
            | may well be buggy.
        
         | dpacmittal wrote:
         | AFAIK, they are exploiting vulnerabilities in image and video
         | decoders
        
           | tebbers wrote:
           | Yep it's usually that or body parsers, that sort of thing.
        
         | strstr wrote:
         | From what I recall the stagefright vulnerability might be a
         | good example.
        
         | ajconway wrote:
         | There are software engineers who sometimes write code that's
         | not perfect.
        
           | xvector wrote:
           | So frustrated with the slow adoption/transition to memory-
           | safe languages.
        
             | petalmind wrote:
              | True. Perl has existed for more than 30 years already.
        
               | giantrobot wrote:
               | Ada throws out its back with a chuckle.
        
         | sva_ wrote:
         | This one is a good example:
         | https://googleprojectzero.blogspot.com/2021/12/a-deep-dive-i...
         | 
         | Really worth the read, it was quite eye-opening.
         | 
          |  _> JBIG2 doesn't have scripting capabilities, but when
         | combined with a vulnerability, it does have the ability to
         | emulate circuits of arbitrary logic gates operating on
         | arbitrary memory. So why not just use that to build your own
         | computer architecture and script that!? That's exactly what
         | this exploit does. Using over 70,000 segment commands defining
         | logical bit operations, they define a small computer
         | architecture with features such as registers and a full 64-bit
         | adder and comparator which they use to search memory and
         | perform arithmetic operations. It's not as fast as Javascript,
         | but it's fundamentally computationally equivalent._
         | 
         |  _> The bootstrapping operations for the sandbox escape exploit
         | are written to run on this logic circuit and the whole thing
         | runs in this weird, emulated environment created out of a
          | single decompression pass through a JBIG2 stream. It's pretty
         | incredible, and at the same time, pretty terrifying._
        
           | ugl wrote:
           | Terrifyingly smart folks there.
        
           | Digit-Al wrote:
           | You have to admire the ingenuity. Just wish it was being put
           | to better use. I can't even fathom the amount of effort
           | required to, basically, create an entire scripting language
           | running in an environment like that.
        
             | saagarjha wrote:
              | Probably an order of magnitude less than was put into
              | creating that environment ;)
        
             | bagacrap wrote:
             | I feel like this is the same amount of ingenuity put into a
             | typical techie diy project "I turned a coffee pot into a
             | robot that presses my shirts"
        
           | gfd wrote:
           | Holy shit
        
           | 55555 wrote:
           | This is an impressive example, but is it really a common
           | example? I think typical examples are much more mundane and
           | possible only due to poorly written code and memory overflow
           | exploits, etc, no?
        
             | sva_ wrote:
             | Difficult to say. I'd keep in mind that NSO Group is a
             | private company, with limited funding and limited
             | privileges. There are also government actors out there with
             | secret services. Who knows what they have been up to
             | recently.
        
             | CyberRage wrote:
             | definitely rare and highly targeted exploits.
             | 
             | Exploits for mobile phones in the "open market" are in the
             | millions of dollars, for a single working exploit.
             | 
             | But the incentive is growing as these devices are becoming
             | the center of our lives.
        
           | d0mine wrote:
            | It is so improbable and complicated that it is easier to
            | believe it is just parallel construction to hide the fact
            | that backdoors are used.
        
             | sva_ wrote:
             | I'm not sure if you're being sarcastic, but for parallel
             | construction they'd still need to find this exploit. Are
             | you saying Google Project Zero is out there to hide the
             | traces of backdoors?
        
             | Gigachad wrote:
             | It doesn't seem all that unrealistic. These companies buy
             | and research every single bug they can get for iOS and
             | eventually you have enough that you can glue them together
              | into full exploits. When you have enough funding, this
             | stuff becomes realistic.
        
               | scoopertrooper wrote:
               | Never underestimate the extremes computer science types
               | will go to in order to prove a point.
        
             | axiosgunnar wrote:
             | I believe this line of thought has not been given enough
             | attention recently.
        
         | jeroen wrote:
         | From the article:
         | 
         | > In December, security researchers at Google analyzed a zero-
         | click exploit they said was developed by NSO Group, which could
         | be used to break into an iPhone by sending someone a fake GIF
         | image through iMessage.
         | 
         | And the thread from back then:
         | 
         | https://news.ycombinator.com/item?id=29568625
         | 
         | That Project Zero blog post lays out the details under the "One
         | weird trick" header.
        
       | A4ET8a8uTh0 wrote:
        | Interesting. Would a whitelist approach have prevented this
        | (no random person sending you a GIF)?
        
       | aborsy wrote:
       | We need a security focused phone. General purpose consumer phones
       | are focused on features; security is not a top priority for the
       | average person.
       | 
       | What are the options now?
        
         | staticassertion wrote:
         | I would imagine https://grapheneos.org/ is the state of the
         | art.
        
         | phh wrote:
          | Not sure who "we" is here, but yes, I agree: a general-purpose
          | consumer phone can't be considered secure against state-level
          | hackers; there MUST be tradeoffs.
          | 
          | As an example, I consider that a secure phone MUST have a boot-
          | time full-disk-encryption passphrase, which needs to be
          | different from the lockscreen's. For obvious reasons (namely
          | that users tend to forget their passwords), you can't have this
          | even as an option on general-purpose phones.
         | 
          | That being said, GrapheneOS is IMO a pretty good option wrt
          | security (e.g. they chose to disable JIT, which impacts
          | performance but supposedly improves security), even though
          | lately their focus is no longer security, for business reasons.
          | 
          | Architecture-wise, the best smartphones are Pinephones/Librems,
          | because of the separation of the modem (which, in the case of
          | state actors, is an actual danger), and you can force
          | encryption of all communications (it's even possible to do
          | VoLTE encryption CPU-side rather than modem-side), but I think
          | at the moment their OS really lags behind Android when it
          | comes to security.
        
           | xvector wrote:
           | > even though lately their focus is no longer security for
           | business reasons.
           | 
           | Context?
        
       | dariosalvi78 wrote:
       | Why would anybody work for those companies?
        
         | lpcvoid wrote:
         | It's probably a very interesting domain and pays exceptionally
         | well.
        
         | Digit-Al wrote:
         | Look how many people work for arms manufacturers. How is this
         | any different?
        
       | allisdust wrote:
        | It's high time governments and mega corps funded projects that
        | rewrite all media decoding libraries in pure Rust (or some
        | managed language, if performance is not a concern).
        | 
        | People keep saying RIIR is somehow pointless, but the reality
        | is that it's impossible to keep ahead of the vulnerabilities
        | that haven't been found yet.
        
         | [deleted]
        
       | mathverse wrote:
       | Grsecurity for iOS/Android would stop them.
        
       | fsflover wrote:
        | Qubes OS defends even against such attacks: it doesn't show non-
       | ASCII symbols in window titles in dom0: https://www.qubes-
       | os.org/doc/config-files.
       | 
       | I think this OS deserves more attention. By the way, new version
       | 4.1 is out: https://www.qubes-
       | os.org/news/2022/02/04/qubes-4-1-0/.
        
         | bawolff wrote:
          | > it doesn't show non-ASCII symbols in window titles in dom0:
         | 
         | Seems like one of the least interesting aspects of qubes. Was
         | there a zero day in the font renderer? I would assume such a
         | thing would be more about homograph attacks.
        
           | [deleted]
        
           | formerly_proven wrote:
           | There have been many exploits related to Unicode text
           | rendering.
        
           | foxfluff wrote:
           | > Was there a zero day in the font renderer?
           | 
           | As far as I'm concerned, freetype is another spelling for
           | CVE. There have been multiple high impact vulns. Though it
           | usually seems to require crafted fonts, so I wouldn't be too
           | concerned about window titles using system fonts. Web fonts
           | on the other hand.. disable 'em.
        
         | bastawhiz wrote:
          | If you have so little faith in your system that you block
          | Unicode characters in window titles lest they lead to an
          | exploit, your problem isn't Unicode; the problem is your
          | code that processes and renders them. You still have a
          | problem, you're just making it the user's burden to bear.
        
           | kevin_thibedeau wrote:
           | They should just force the use of bitmap fonts. Unicode isn't
           | the enemy.
        
           | fsflover wrote:
           | You should not have faith in software. You should verify and
           | isolate.
        
         | qualudeheart wrote:
          | I used Qubes as a daily driver for much of 2021. It hogged too
          | much RAM so I stopped. Never disliked the lack of non-ascii
         | support. Security is always more important.
        
           | anon_123g987 wrote:
           | > _Never disliked the lack of non-ascii support._
           | 
           | Ah, the elusive quadruple-negative.
        
             | qualudeheart wrote:
             | Each negative means +1 standard deviation verbal iq.
        
               | [deleted]
        
               | anon_123g987 wrote:
               | +1 for the reader. -1 for the writer.
        
               | qualudeheart wrote:
               | I'll have you know I'm in the top 50 on the wordle
               | leaderboard.
        
               | anon_123g987 wrote:
               | What language? Prolog?
        
               | xvector wrote:
               | SHA-256 Passwordle, of course.
        
         | bouke wrote:
          | Not supporting Unicode as a feature leaves out most of the
          | world's population. As a non-native English speaker, I'm not
          | interested in such "features".
        
           | secondaryacct wrote:
            | In window titles, that's fine. I like my French accents too,
            | but I can give them up for the hypervisor communication...
        
             | mbarbar wrote:
              | More than accents, there are entire non-Latin scripts.
        
             | testermelon wrote:
              | Japanese, Korean, Chinese, Arabic, Hebrew, Russian, Tamil,
              | Thai, etc. beg to be remembered.
        
           | Proven wrote:
        
           | [deleted]
        
           | fsflover wrote:
           | If you open my link, you will find how to switch it on.
        
         | 55555 wrote:
            | Also segregates every app/workspace into a different
            | virtualized system IIRC
        
           | orblivion wrote:
            | Yes, this is in fact the main feature :-) Though it's not
            | exactly as you say. VMs are a first-class entity. You can
           | easily make as many as you want to represent different
           | security domains. But it's not _every_ app (unless you want
           | it to be).
           | 
           | I didn't even notice the unicode thing. But it doesn't
           | surprise me. They have various similar conservative features.
           | For instance, by default an app in a VM cannot get full
           | screen access. To full-screen a video in youtube you have to
           | full-screen in app and then hit Alt-F11. The concern is that
           | the app somehow tricks the user into thinking that they're
           | interacting with the host OS desktop. Also the host OS
           | doesn't have Internet access; update package files are
           | downloaded by another VM and copied over.
           | 
           | It's fairly paranoid by design, and their tagline is "a
           | reasonably secure operating system".
        
       ___________________________________________________________________
       (page generated 2022-02-19 23:00 UTC)