[HN Gopher] Zero-day in Sign in with Apple
       ___________________________________________________________________
        
       Zero-day in Sign in with Apple
        
       Author : masnick
       Score  : 720 points
       Date   : 2020-05-30 16:13 UTC (6 hours ago)
        
 (HTM) web link (bhavukjain.com)
 (TXT) w3m dump (bhavukjain.com)
        
       | sparker72678 wrote:
       | Any word on what the fix was?
        
       | [deleted]
        
       | will_raw wrote:
        | Some people are commenting that this bounty was overpriced, but
        | I don't think so, even considering the INR value. The bug is
        | quite critical given how large the Mac and iOS ecosystems are.
        
       | danans wrote:
       | This is why it's good to run fuzzers against any public API
       | (especially an auth API), to verify its behavior on novel inputs.
       | 
       | https://en.m.wikipedia.org/wiki/Fuzzing
        
         | capableweb wrote:
         | In general I agree with you that it's good to run fuzzers
         | against any endpoints, public or internal (as you never know if
         | someone can wrangle data to go from public -> internal
          | somehow), but in this particular case, you'd only find an issue
         | if the fuzzer somehow randomly used the ID of another user that
         | was already created, and verified that it couldn't access it.
         | 
         | In that case, you'd catch it way before even implementing the
         | fuzzer.
         | 
         | So in this case, I don't think a fuzzer would have helped. Some
         | E2E tests written by humans should have caught this though.
        
           | danans wrote:
           | There's no reason that a fuzzer couldn't draw sample email
           | addresses from a large pool of test valid email addresses to
           | add as input. That would just require a fuzzer that allowed
           | you to provide the sample population for a particular data
           | type.
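A minimal sketch of that idea, with all names hypothetical: the live token endpoint is stubbed as an in-memory function, and the fuzzer draws both the session user and the requested email from a seeded pool of valid test addresses, checking that cross-account requests always fail.

```python
import random

# Hypothetical in-memory stand-in for the auth service under test;
# a real fuzzer would drive the live token endpoint instead.
ACCOUNTS = {"alice@example.com": "alice", "bob@example.com": "bob"}

def issue_token(session_user, requested_email):
    """Correct behavior: only issue a token when the requested email
    belongs to the authenticated session's user."""
    if ACCOUNTS.get(requested_email) != session_user:
        raise PermissionError("email not owned by session user")
    return {"sub": session_user, "email": requested_email}

def fuzz(trials=1000, seed=0):
    """Draw (session user, email) pairs from a seeded pool of valid
    test addresses and verify cross-account requests always fail."""
    rng = random.Random(seed)
    emails = list(ACCOUNTS)
    for _ in range(trials):
        user = ACCOUNTS[rng.choice(emails)]
        email = rng.choice(emails)
        try:
            token = issue_token(user, email)
            assert token["sub"] == ACCOUNTS[email]  # own email: ok
        except PermissionError:
            assert ACCOUNTS[email] != user  # someone else's email: rejected
    return True
```

With the vulnerable behavior (no ownership check), the `PermissionError` branch is never taken and the first cross-account draw trips the assertion.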
        
       | xyst wrote:
       | Glad we have people willing to disclose these vulnerabilities
       | rather than just selling it on the black market.
        
       | nick-garfield wrote:
       | Am I understanding the article right: the endpoint would accept
       | any email address and generate a valid JWT without verifying the
       | caller owned the email address?
       | 
       | If so, what extra validation did Apple add to patch the bug?
        
       | tly_alex wrote:
        | The write-up is not very clear in my opinion. The diagram seems
        | to show that there are 3 API calls (maybe there are more API
        | calls in reality?).
       | 
       | And if I understand this correctly, the issue is in the first API
       | call, where the server does not validate whether the requester
       | owns the Email address in the request.
       | 
        | What confuses me is where the "decoded JWT payload" comes from.
        | Is it coming from a different API call, or is it somewhere in
        | the response?
        
         | tly_alex wrote:
         | And the choice of black arrow on top of an almost black
         | background... I am not a designer but that's just killing my
         | eyes here.
        
       | awinter-py wrote:
       | my brain mis-parsed as:
       | 
       | (sign in) with (apple zero day)
       | 
       | which is kind of appealing
        
         | xkcd-sucks wrote:
          | If Apple launched a product called Apple Zero Day - like
          | haveibeenpwned, maybe - then the top search results for Apple
          | exploits would be an advertisement :)
        
         | xwes wrote:
         | I'm not sure that's a mis-parse. Anyone could sign in with the
         | Apple zero day.
        
         | Retr0spectrum wrote:
         | For anyone else wondering, the correct parse seems to be (Sign
         | in with Apple) (Zero Day)
        
         | supernova87a wrote:
         | I think they were missing a colon, like one of those old-time
         | jokes:
         | 
         | Sign in with Apple: zero day flaw
        
         | saagarjha wrote:
         | I did that too and wondered if they were finally offering a
         | real bug bounty platform...
        
           | jtbayly wrote:
           | I'm not a security researcher, but this guy got paid $100k.
           | Seems to be working?
        
             | saagarjha wrote:
             | One example of a bug being fixed and a researcher being
             | paid does not mean it works, generally.
        
           | big_youth wrote:
           | They have a bug bounty program:
           | https://developer.apple.com/security-bounty/
           | 
          | I actually think they have a good approach: rewarding major
          | finds with good payouts and avoiding the flood of
          | informational and low-level web app 'bugs'.
        
             | saagarjha wrote:
             | I am well aware of the bug bounty program. I think it needs
             | work.
        
         | 1f60c wrote:
         | Me too! For me it was because I've usually seen "zero day"
         | written as "0day".
        
       | tusharsoni wrote:
       | Excellent writeup! About 4 months ago, I wrote a comment[0] on HN
       | telling folks how Apple simply omitted the server-side
       | validations from their WWDC videos. And given the lack of good
       | documentation at the time, WWDC videos were what most developers
       | were following.
       | 
       | Even then, the only "security" that developers had was that the
       | attacker wouldn't know the victim's Apple userId easily. With
       | this zero-day attack, it would have been trivial for many apps to
       | get taken over.
       | 
       | [0] https://news.ycombinator.com/item?id=22172952
        
         | zemnmez wrote:
          | Your original post has several replies explaining why this is
          | not a security issue. The token you ultimately get is a signed
          | concatenation of 3 base64-encoded fields, and unless you
          | decided to manually separate and decode these without
          | verification (instead of doing the easy thing and just using a
          | standard OIDC library), you would not have any user data that
          | could ultimately result in a security issue.
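A minimal sketch of the distinction being made here, using HS256 with a demo secret as a stand-in for Apple's actual RS256 public-key verification (all identifiers and the key are hypothetical): manually splitting the three fields and trusting the payload accepts a token with a forged signature, while a library-style verified decode rejects it.

```python
import base64, hashlib, hmac, json

SECRET = b"demo-key"  # stand-in; real Sign in with Apple tokens use RS256

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_jwt(payload: dict) -> str:
    """Build header.payload.signature, the 3 base64url fields."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = b64url(hmac.new(SECRET, f"{header}.{body}".encode(),
                          hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def decode_unverified(token: str) -> dict:
    """The dangerous shortcut: split the fields and trust the payload."""
    body = token.split(".")[1]
    body += "=" * (-len(body) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(body))

def decode_verified(token: str) -> dict:
    """What an OIDC/JWT library does: check the signature first."""
    header, body, sig = token.split(".")
    expected = b64url(hmac.new(SECRET, f"{header}.{body}".encode(),
                               hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    return decode_unverified(token)

tampered = make_jwt({"email": "x@example.com"}).rsplit(".", 1)[0] + ".forged"
print(decode_unverified(tampered)["email"])  # accepted: x@example.com
```

Note this illustrates the general decode-without-verify pitfall; in the actual bug the forged tokens carried valid Apple signatures, so signature verification alone would not have caught them.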
        
       | oauea wrote:
       | Wow, I'm so glad that apple forced me to implement this broken
       | garbage into my apps!
       | 
       | For those not aware, some time ago apple decided it would be a
        | good idea to develop their own sign-in system, and then force all
       | apps on their store (that already support e.g. Google Account
       | login) to implement it.
       | 
        | So they introduced a huge amount of additional complexity into a
        | large number of apps, and then they fucked up security. Thank
        | you apple!
        
         | WhyNotHugo wrote:
         | Actually, developers are only forced to implement it _if_ they
         | support logging in with other social auths.
         | 
         | A big problem of many apps is that they only had a "log in with
         | google"/"log in with facebook" button, which is very
         | problematic for people who have neither.
         | 
         | On Android this is more acceptable since you need a Google
         | account for the OS itself anyway.
        
           | aidenn0 wrote:
            | But if I support "create an account" and "sign in with
            | google" then you don't need a google account, yet I still
            | need to implement sign in with apple.
        
           | jimsug wrote:
           | > On Android this is more acceptable since you need a Google
           | account for the OS itself anyway.
           | 
           | I don't think you do, I'm pretty sure I've skipped that step
           | during device setup on occasion.
        
         | toomuchtodo wrote:
         | I still trust Apple over a rando site or SaaS app. No system is
         | flawless.
        
           | paxys wrote:
           | Yeah but I don't trust them over Google or Facebook when it
           | comes to server side security, and this proves it.
        
             | toomuchtodo wrote:
             | I will still take an Apple security incident over the
             | corporate surveillance apparatus that is Facebook and
             | Google.
        
             | cdubzzz wrote:
             | Eh. I'll take a found and fixed security issue in a feature
             | aimed at keeping my email address private over Google and
             | Facebook's invasive behaviors.
        
           | buboard wrote:
           | OTOH, rando SaaS flaws don't compromise the security of
           | billions
        
             | toomuchtodo wrote:
             | haveibeenpwned would say otherwise.
        
           | oauea wrote:
           | Fortunately some rando site or SaaS app doesn't have the
           | leverage to force me to implement additional garbage! Apple
           | does, and did. I'm still surprised that this didn't trigger
           | an antitrust investigation like when Microsoft abused their
           | monopoly to push internet explorer. This is exactly the same
           | thing, if not worse.
        
             | alwillis wrote:
              | _I'm still surprised that this didn't trigger an antitrust
             | investigation like when Microsoft abused their monopoly to
             | push internet explorer. This is exactly the same thing, if
             | not worse._
             | 
             | Um... no.
             | 
             | Microsoft gave Internet Explorer away for free when
             | Netscape was selling their browser to businesses, an
             | obvious attempt to undermine Netscape.
             | 
             | They also threatened to cancel the Windows 95 licenses for
             | companies like HP that shipped Netscape with their
             | computers instead of Internet Explorer. That would have
             | essentially put them out of business.
             | 
             | Because Microsoft had 95% of the operating system market
              | share, it had signed a consent decree with the federal
             | government that they wouldn't use their monopoly in
             | operating systems to their advantage in web browsers, which
             | were a new software category then.
             | 
              | So of course, they bundled Internet Explorer with Windows
              | 95 and claimed the two couldn't be separated, insisting
              | Internet Explorer was a critical part of the operating
              | system, an obvious lie.
             | 
              | All of this was orchestrated by future humanitarian Bill
              | Gates, who was quoted at the time as saying Microsoft
              | needed to "cut off Netscape's air supply."
             | 
             | Even in the United States, Apple isn't a monopoly with
             | about 40% market share. Everything Apple mandates is with
             | companies who've contractually agreed to be part of Apple's
             | developer program and abide by its rules.
             | 
             | Nobody agreed to not ship a competing web browser back in
             | the day.
        
               | harpratap wrote:
               | > Even in the United States, Apple isn't a monopoly with
               | about 40% market share.
               | 
                | This is a non-argument. Being part of a duopoly doesn't
                | stop a company from behaving like a monopoly. If you
                | don't play by Apple's or Google's rules you essentially
                | lose 50% of the market.
        
               | alwillis wrote:
               | Monopolies aren't illegal; it's the use of monopoly power
               | in corrupt ways that's illegal.
               | 
               | I suspect if you don't want to deal with Apple or Google
               | directly, you can create web apps.
        
             | [deleted]
        
             | cdubzzz wrote:
             | I agree that the requirement from Apple here is kind of
             | dumb, but I don't see how it would not be in the best
             | interest of a user of an app on an iOS device to have the
             | option to sign in with an Apple ID. It also seems silly to
              | consider it "garbage" when you're already using a Google
             | Account solution that is essentially the same thing.
        
               | oauea wrote:
               | It's garbage because it was forced into already
               | functioning apps with the threat of removal, and because
               | it evidently has gaping security holes.
        
           | j4yav wrote:
           | I think I trust Apple over a random website too, but was
           | adding an additional kind of sign in and forcing everyone to
           | use it even needed in the first place?
        
             | kennywinker wrote:
             | Almost like apple and google have dangerously monopolistic
             | positions in the mobile sphere and we need meaningful anti-
             | trust action to claw back user freedom and choice?
        
             | neetdeth wrote:
             | I'm very willing to believe that this move was driven by
             | actual user research. As a user, the last thing I want to
             | do is create a user name and password for your app, click a
             | link to validate my email, then enter my password again
             | into some sort of cross platform widget that doesn't
             | support keychain autofill. Unless it's an essential service
             | like a bank or an airline, I'll probably opt out of using
             | it.
             | 
             | I'm also very lazy when it comes to payment methods. Trying
             | to order food and the app doesn't support Apple Pay? Delete
             | it and do something else.
             | 
             | Clearly there are issues with the entrenchment of Apple at
             | the center of all this, and these problems would be better
             | solved with open standards, but the consistency and
              | convenience make an actual, measurable difference in the
              | end user's daily life.
        
       | gruez wrote:
       | Is it me or is this writeup low on details? There are a couple of
       | commenters saying that this is a great writeup, but all it
       | amounts to is:
       | 
       | 1. what sign in with apple is
       | 
       | 2. sign in with apple is like oauth2
       | 
       | 3. there's _some_ bug (not explained) that allows JWTs to be
       | generated for arbitrary emails
       | 
       | 4. this bug is bad because you can impersonate anyone with it
       | 
       | 5. I got paid $100k for it
        
         | cheez wrote:
         | it's literally that simple.
        
         | ahupp wrote:
         | It seems low on details because the exploit was incredibly
         | simple. AFAICT you didn't have to do anything special to get
         | the signed token, they just gave it out.
         | 
         | > Here on passing any email, Apple generated a valid JWT
         | (id_token) for that particular Email ID.
        
         | antoncohen wrote:
         | I think the write up is so short because the bug is so simple.
         | Send a POST to appleid.apple.com with an email address of your
         | choice, and get back an auth token for that user. Use the auth
         | token to log-in as that user. It's that simple.
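A toy reconstruction of the behavior described in the comment above. The real endpoint, payload format, and Apple's actual fix are unpublished, so every name here is an assumption: a vulnerable handler that trusts the posted email blindly, next to a plausible fixed handler that checks ownership.

```python
# Hypothetical session store: session id -> the account's verified email.
SESSIONS = {"session-1": "attacker@example.com"}

def issue_token_vulnerable(session_id: str, requested_email: str) -> dict:
    """Pre-fix behavior as described: the caller is authenticated, but
    the email in the POST body is never checked, so any address yields
    a token the server would sign as valid."""
    assert session_id in SESSIONS      # authentication happens...
    return {"email": requested_email}  # ...but ownership is never checked

def issue_token_fixed(session_id: str, requested_email: str) -> dict:
    """Plausible fix: the email must belong to the session's account."""
    if SESSIONS.get(session_id) != requested_email:
        raise PermissionError("email does not belong to this account")
    return {"email": requested_email}
```

With the vulnerable handler, `issue_token_vulnerable("session-1", "victim@example.com")` happily returns a token claiming the victim's email, which is the entire exploit.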
        
           | snazz wrote:
           | Did it show what URL you had to send the request to? It
           | looked to me like that was redacted. I'm guessing that that
           | URL would have been in the developer documentation.
        
             | adrianmonk wrote:
             | The URL has "X"s in it. I don't know if that means it is
             | redacted or is variable.
             | 
             | Note that when they give the POST request, they say "Sample
             | Request (2nd step)".
             | 
             | But what is step 2? The diagram above shows step 2 as a
              | _response_, not a _request_. At least that's how I
             | interpret an arrow pointing back toward the user. So the
             | write-up conflicts with the diagram.
             | 
             | How do you resolve that conflict? One guess is that "Sample
             | Request (2nd step)" should say "1st step" instead.
             | 
             | Another guess is that the arrow directions don't
             | necessarily always indicate whether a step is a request or
             | a response, so that step 1 could be a request and response,
             | and step 2 could be another request and response that POSTs
             | to a secret URL that was learned about in step 1. (This
             | guess could make sense because the request is a JSON
             | message with just the email field. There must be
             | credentials somewhere, so either it's redacted or some kind
             | of credentials were given another way, like in step 1.)
             | 
             | If this second guess is right, then a follow-on guess is
             | that the crux of the bug is that in step 1, you sign in
             | with a particular email, then Apple says "OK, now here's a
             | secret URL to call to get a JWT token", and then in step 2,
             | you change email, and it doesn't notice/care that you
             | changed emails between step 1 and 2.
        
       | mormegil wrote:
       | With all those high-profile third parties using Apple ID, what
       | would happen if somebody stole/deleted/damaged my data/assets on
       | Dropbox/Spotify/Airbnb/...? Would I sue the provider who would
       | sue Apple? But does Apple provide any guarantees to the relying
       | parties? And if not and the only way is to depend on the
       | reputation when choosing the ID providers you want to support,
       | how would anyone want to support Apple ID after this? And could
       | they not use it if Apple forces them to...?
        
         | foobarbazetc wrote:
         | The ToS of every service has a liability waiver.
        
       | dandigangi wrote:
       | I always have a minute of nervousness while I read these security
       | posts hoping that the bottom will say it's already been fixed
       | with XYZ security team. Glad it's fixed w/ Apple already. The
       | "they still haven't fixed it" or "still haven't responded" ones
       | are scary.
        
       | phamilton wrote:
       | > The Sign in with Apple works similarly to OAuth 2.0.
       | 
       | > similarly
       | 
       | I understand why they wanted to modify OAuth 2.0, but departing
       | from a spec is a very risky move.
       | 
       | > $100,000
       | 
       | That was a good bounty. Appropriate given scope and impact. But
       | it would have been a lot cheaper to offer a pre-release bounty
       | program. We (Remind) occasionally add unreleased features to our
       | bounty program with some extra incentive to explore (e.g. "Any
       | submissions related to new feature X will automatically be
       | considered High severity for the next two weeks"). Getting some
       | eyeballs on it while we're wrapping up QA means we're better
       | prepared for public launch.
       | 
       | This particular bug is fairly run-of-the-mill for an experienced
       | researcher to find. The vast majority of bug bounty submissions I
       | see are simple "replay requests but change IDs/emails/etc". This
       | absolutely would have been caught in a pre-release bounty
       | program.
        
         | zemnmez wrote:
         | > I understand why they wanted to modify OAuth 2.0, but
         | departing from a spec is a very risky move.
         | 
          | The token described in this disclosure is an OpenID Connect 1.0
          | token. OIDC is a state-of-the-art AuthN protocol that extends
          | OAuth with additional security controls. It's used by Google,
          | Facebook and Twitch, amongst others.
         | 
         | I'd do more analysis, but the author leaves off the most
         | important part here (not sure why)
         | 
         | https://openid.net/specs/openid-connect-core-1_0.html#IDToke...
        
           | phamilton wrote:
           | My understanding is that the token itself is fine and within
           | spec. But they altered the flow to accept an email address in
           | one of the request payloads which opened the door for
           | spoofing the email address. I've never seen an OAuth or
           | OpenID flow that relied on the payload for identity.
        
             | donmcronald wrote:
             | This is likely it IMO. They probably pass the preferred
             | email around as a parameter and the user can jump into the
             | flow and modify it.
        
             | SV_BubbleTime wrote:
             | Wait... I don't get it.
             | 
             | Why was Apple signing a response JWT when the user only
             | supplied an email?
             | 
             | I'm not a web guy so I just don't see what they were going
             | for here.
        
               | phamilton wrote:
                | The gap is that Apple fully verified identity via the
                | normal OAuth flow, and then once identity was verified
                | they gave the user control over which email to include in
               | the token. The idea is that the user can include their
               | email or an apple relay email (that forwards to their
               | email). The bug seems to be in that step, and an attacker
               | can provide an email that is neither their own nor an
               | apple relay email.
               | 
               | Your apple account is safe, but if a 3rd party trusts the
               | signed apple payload without further verification of
               | email, an attacker could sign in as you on the 3rd party
               | app.
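The rule described above can be sketched as an ownership check (function and parameter names are hypothetical; privaterelay.appleid.com is the relay domain Apple publicly uses): only the account's own address or one of that account's own relay addresses may appear in the signed token.

```python
RELAY_DOMAIN = "privaterelay.appleid.com"  # Apple's public relay domain

def allowed_token_email(account_email: str, requested: str,
                        account_relays: set) -> bool:
    """Only the account's own address or one of its own relay
    addresses may appear in the signed token; checking the domain
    alone would not be enough, since a relay address could belong
    to a different account."""
    if requested == account_email:
        return True
    return (requested.endswith("@" + RELAY_DOMAIN)
            and requested in account_relays)
```

Under this sketch, the reported bug corresponds to skipping this check entirely: an attacker could request a token for an email that is neither their own nor one of their relay addresses.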
        
           | thephyber wrote:
           | The important part is in the author's article. The POST to
           | the opened endpoint generates a valid JWT token for the email
           | address in the payload, not for the one in the logged-in
           | session. Everything else is extraneous.
        
             | SV_BubbleTime wrote:
             | Oh. Ok, so you did have to have an existing logged in
             | session for any account, then could leverage that to get
             | the token for another account by changing out the email?
        
               | mazeltovvv wrote:
               | Ah. So this creates a valid JWT for any email you want,
               | but it is now associated with your own apple account?
        
             | dtech wrote:
          | That also explains how Apple could rule out abuse from their
          | logs, a claim some commenters have questioned.
             | 
             | If they have all the JWTs, seeing if one had a different
             | e-mail than the logged-in user should be fairly doable.
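A minimal sketch of that audit under an assumed log schema (all field names are invented): flag any issuance where the token's email matches neither the session's verified address nor one of its relay addresses.

```python
def find_suspect_issuances(entries):
    """Flag token issuances whose email matches neither the session's
    verified address nor one of that account's relay addresses; with
    this bug, those are exactly the attack candidates."""
    return [e for e in entries
            if e["token_email"] != e["session_email"]
            and e["token_email"] not in e.get("relay_emails", ())]
```

If the logs are complete, an empty result from this scan (apart from the researcher's own attempts) would support Apple's "no misuse" claim; if the endpoint was effectively unauthenticated, as some commenters suspect, there may be no session email to compare against and the audit falls apart.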
        
           | noctune wrote:
           | I think it's actually the OIDC access token and not the ID
           | token. The OIDC spec does not mandate any structure for the
           | access token, but letting it be a JWT isn't out-of-spec.
        
         | saagarjha wrote:
         | Apple supposedly marks certain beta builds with a bounty
         | multiplier. I say supposedly because like their "research
         | iPhones" they mentioned it in a presentation once and I never
         | heard about it again.
        
           | snazz wrote:
           | I'm guessing that the research iPhones were given to a very
           | select group of security researchers with track records of
           | reporting important vulnerabilities under some kind of NDA.
        
             | saagarjha wrote:
             | 1. Still never heard of anyone getting them and 2. that's
             | worse than useless.
        
               | snazz wrote:
               | Oh, for sure. I should clarify that I meant that they
               | received the iPhones under an NDA, not that they reported
               | bugs under an NDA (aside from the 90-day disclosure to
               | get any bounties).
        
               | phamilton wrote:
               | I'm fine with pre-release bugs being reported under an
               | NDA. If pre-release bugs are publicly disclosed that is
               | arguably a punishment for companies who seek that
               | validation early in the cycle rather than later.
        
               | saagarjha wrote:
               | Word on the street suggests they don't exist:
               | https://twitter.com/thegrugq/status/1236264193906495488
        
               | snazz wrote:
               | I guess it's unrealistic to assume that their supply
               | chain would be secure enough for these that no one would
               | have heard anything.
        
               | saagarjha wrote:
               | Right. You can find pictures of actual internal devices
               | all over the internet (supposedly some people will even
               | sell them to you), so it's quite strange to not hear
               | anything about these. With Apple going after Corellium, I
               | think many researchers are thankful for the various
               | exploits we've had recently that have kept iPhone open.
        
           | g_p wrote:
           | This might be what you're referring to:
           | 
           | From https://developer.apple.com/security-bounty/payouts/
           | 
           | "Issues that are unique to designated developer or public
           | betas, including regressions, can result in a 50% additional
           | bonus if the issues were previously unknown to Apple."
        
             | saagarjha wrote:
             | Yes, that's it.
        
       | Stierlitz wrote:
       | "I found I could request JWTs for any Email ID from Apple and
       | when the signature of these tokens was verified using Apple's
       | public key, they showed as valid."
       | 
        | What are they teaching them in computer school these days? How
        | can you write a security function and not test it for these
        | kinds of bugs? Unless all these accidental backdoors have a more
        | nefarious purpose <shoosh>
        
       | ksec wrote:
       | I am hoping WWDC 2020 will have some great news and events that
        | let us forget all the mistakes they made in Catalina and incidents
       | like this.
       | 
       | I am not sure if I am understanding the blog post correctly,
       | because its simplicity is beyond ridiculous.
        
       | ani-ani wrote:
       | " _Apple also did an investigation of their logs and determined
       | there was no misuse or account compromise due to this
       | vulnerability._ "
       | 
       | Given the simplicity of the exploit, I really doubt that claim.
       | Seems more likely they just don't have a way of detecting whether
       | it happened.
        
         | deathgrips wrote:
         | Yeah, doesn't this just mean they didn't _detect_ misuse?
        
           | thephyber wrote:
           | It's not clear because it's not a direct quote and Apple
           | probably wasn't explicit about the difference. I wouldn't
           | infer one way or the other from this sentence.
        
         | drivebycomment wrote:
         | The one case (and about the only case) I can think of where
          | they can make the claim above is:
         | 
         | If they have a log of all JWTs issued that records which user
         | requested and which email in JWT, then they can retroactively
         | check if they issued any (user, email) pair that they shouldn't
         | have. Then they can assert that there was no misuse, if they
         | only found this researcher's attempt.
        
           | mlthoughts2018 wrote:
           | How could you prove the user was the correct user in any
           | given case?
        
             | rubyn00bie wrote:
              | They very likely have a complete log of the actions
              | performed; I'd guess they'd perform some kind of
              | replay/playback after the bug was fixed, and see what
              | failed to pass.
             | Assuming their changes immediately flag things like the
             | researcher's initial attempts and discovery, it'd probably
             | be pretty safe to say that no one was affected if no other
             | instances are flagged.
        
               | mlthoughts2018 wrote:
               | How does that answer the question? So what if you can
               | replay logs of all attempts? How can you prove for any
               | specific log that it was the "real" user making the
               | request, and not someone using their email maliciously to
               | make an identical request?
        
               | caseysoftware wrote:
               | It doesn't. That's also the downside of most
               | login/identity providers that implement some form of
               | "Impersonation."
               | 
               | Without really smart and well-considered limitations and
               | logging, it's impossible to tell the User from the User*
               | without digging through audit trails, etc.. and if the
               | developers/architects involved didn't consider the
               | limitations and logging in the first place, odds are they
               | didn't consider the audit trails either.
               | 
               | And yes, I do this for a living.. and have seen bad
               | things from major organizations. :(
        
             | tedunangst wrote:
              | Assume they have two log entries.
              | 
              |     request 678: request from user bananas
              |     request 678: issued token for bananas
              | 
              | That looks good.
              | 
              |     request 987: request from user <blank>
              |     request 987: issues token for carrots
              | 
              | That doesn't look good.
        
             | drivebycomment wrote:
             | I can think of two possible "root causes" with this
             | vulnerability.
             | 
             | One is where the API ("2nd step" mentioned in the doc, POST
             | with a desired email address to get a JWT) is an
             | authenticated API, meaning it requires a valid credential,
             | but Apple's implementation of this API made a mistake of
             | not checking if the user-requested email belongs to the
             | user or not. In this case, the log can give enough
             | information for the forensic analysis to determine misuse.
             | I presumed this was the case.
             | 
             | The other possibility is if they implemented that API as
             | unauthenticated. I _presumed_ this was not the case - as
             | this is a more difficult mistake to make, and given that
             | they claimed some knowledge of no misuse - but I have no
              | way to know for sure this isn't the case here. The end
             | result would be the same. If the root cause was this case,
             | indeed it's difficult to know if no misuse has happened.
        
         | thephyber wrote:
         | It depends what the fix was. If the fix was just to add a
         | validation check to the POST endpoint to validate that the
         | logged in user session matched the payload (and session data
         | was comprehensively logged/stored), this may be verifiable.
         | 
         | There are obviously lots of hypotheticals for which this might
         | be verifiable.
        
         | phamilton wrote:
         | I'd also like the exact wording of their claim. "There is no
         | evidence of misuse or account compromise" is what I would
         | expect them to say, as "There was no misuse or account
         | compromise" likely opens them up to legal repercussions if that
         | isn't 100% accurate.
        
         | joering2 wrote:
         | I stand corrected and removing my message now since my scenario
         | wasn't related to this zeroday bug.
         | 
         | Thank you to everyone who educated me.
        
           | jtbayly wrote:
           | My guess would be that it was just a lucky guess/bot sending
           | to a lot of addresses. I've had email addresses get spam
           | before without using them anywhere.
        
           | zenexer wrote:
           | This isn't an information disclosure vulnerability that would
           | allow someone to gain knowledge of new Apple IDs. It also
           | doesn't affect first-party applications.
           | 
           | I can't provide an explanation of the behavior you observed
           | without more information, but I can reasonably conclude that
           | the vulnerability here wasn't the cause.
        
             | snazz wrote:
             | I find it hard to believe that signing up for an Apple ID
             | caused the start of the phishing emails unless the email
             | account or computer has been compromised. This is not
             | normal when signing up for an Apple ID.
        
         | rantwasp wrote:
         | they used this tool grep. look it up. /s
        
         | lordofmoria wrote:
         | I agree, especially given how many developer "eyes" were on
         | this from having to integrate the log in with Apple flow into
         | their apps.
         | 
         | Just as a first-hand anecdote to back this up, a dev at my
         | former company which did a mix of software dev and security
         | consulting found a much more complex security issue with Apple
         | Pay within the first hour of starting to implement the feature
         | for a client and engaging with the relevant docs.
         | 
         | How did no one else notice this? The only thing I can think of
         | is the "hidden in plain sight" thing? Or maybe the redacted URL
         | endpoint here was not obvious?
        
       | planetjones wrote:
       | Absolutely astonishing. The internal controls at Apple seem to be
       | borderline non-existent.
        
         | tpmx wrote:
         | The average IQ and experience of their software developers has
         | dropped remarkably over the past decade, as they have expanded.
         | 
         | I've had multiple occasions of "Seriously, Apple hired person
         | X? lol" over the past five years or so.
        
           | alfalfasprout wrote:
           | frankly that's true of any silicon valley giant at this
           | point.
        
       | moralestapia wrote:
       | $100,000 (!)
       | 
       | Props to Apple for raising the bar on bounties!
        
         | gwintrob wrote:
         | Feels low given the impact?
        
         | saagarjha wrote:
         | They've seemingly been fairly responsive for web-based issues.
        
       | calimac wrote:
       | Is the dev team that wrote that line of code fired?
        
       | fortran77 wrote:
       | What's amazing is that Apple gets away with claiming their
       | computers are "secure by design."
       | https://www.apple.com/business/docs/site/AAW_Platform_Securi...
       | 
       | There's nothing inherent in their design that guarantees
       | security.
        
         | lotsofpulp wrote:
         | But for some reason I have never had to remove malware from my
         | parents' iOS or macOS devices.
        
           | Lammy wrote:
           | Don't assume that malware will make itself visible. This
           | isn't 2003. iOS vulnerabilities have been used to facilitate
           | genocide: https://blog.trendmicro.com/trendlabs-security-
           | intelligence/...
           | 
           | e: Hello CCP downvote brigade :)
        
         | duskwuff wrote:
         | This software security issue in Sign In With Apple was
         | unrelated to the security of Apple's hardware platform.
        
         | fermienrico wrote:
         | Literally every system in the world has flaws, no matter how
         | secure. We just don't know about these bugs yet.
        
           | resfirestar wrote:
           | "Every system in the world has flaws" and "it's a serious
           | problem that one of the world's most important software
           | vendors, that markets itself as the most secure, keeps
           | releasing products with flaws that would have been discovered
              | in a very basic audit" are not incompatible statements.
        
             | fermienrico wrote:
             | Even then there will be more flaws. I don't think it is
             | possible to build a 100% secure system in the modern age of
              | 22 abstraction layers between the atom and the data center.
             | 
             | Perhaps you do not understand the staggering complexity
             | that lies behind watching a cat video on your iPhone. There
             | are almost uncountable ways to break into the system. This
             | is why there is a bug bounty program from every company -
             | from Stripe (you could argue as a major software company in
             | payment systems) to Microsoft, from Apple to Gitlab, every
             | company has a bug bounty program. Why do you think they
              | give out $1m for a serious bug? If they were not serious
              | about it, that would be a big waste of money.
             | 
             | This kind of entitlement attitude is usually from people
             | who've never developed a complex piece of software such as
             | an operating system.
        
       | broooder wrote:
       | You didn't make enough money.
        
       | yalogin wrote:
       | Wow, that's a really simple bug. Kudos to the OP for even trying
       | it. Most people would just look elsewhere, thinking Apple of all
       | companies would get such a basic thing right.
        
         | Yajirobe wrote:
         | What do you mean simple? The result/exploit is simple, but what
         | is the reason the bug is there? Surely the Apple code base is
         | not that simple.
        
       | alexashka wrote:
       | What level of incompetence will it take for the government to
       | step in and create some laws surrounding companies exposing
       | user's private data because 'oops, we don't want to pay security
       | experts what they're actually worth, even though we have billions
       | sitting in bank accounts doing nothing'.
        
       | spartak wrote:
       | where is the ptacek rant about JWT?!
        
       | beamatronic wrote:
       | I'm thankful for all the smart, diligent people working hard to
       | keep us all safe.
        
       | cmauniada wrote:
       | Easiest $100k ever made?
        
         | YetAnotherNick wrote:
         | And the 'Won $100,000 from Apple's bug bounty program' line in
         | a CV is enough to raise one's salary by $100,000.
        
           | cmauniada wrote:
           | I would wear that distinction with pride! Kudos to him.
        
       | jagged-chisel wrote:
       | > This bug could have resulted in a full account takeover of user
       | accounts on that third party application irrespective of a victim
       | having a valid Apple ID or not.
       | 
       | The headline makes me think the entire problem lies with Apple,
       | when that's not the case.
        
         | saagarjha wrote:
         | This seems very much like Apple's bug, to the extent that they
         | paid out a $100k bug bounty?
        
           | jagged-chisel wrote:
           | Really?
           | 
           | > ...affected third-party applications which were using it
           | and didn't implement their own additional security measures.
        
             | pquerna wrote:
             | This allowed you to forge an attestation of user identity
              | from Apple for any app that was set up to consume it. Apple
             | is acting as an IdP for its consumer ecosystem. It's
             | definitely their problem.
             | 
             | Third-party applications really have no recourse but to
             | trust the signed JWT. That is just how OAuth2/OIDC works.
             | 
             | User impersonation against an IdP is a serious security
             | issue. 100k is cheap.
             | 
             | The bug was basically on the IdP's "consent screen".
             | Instead of using the email from the active logged in
              | account, it allowed the attacker to POST in any email they
             | wanted.
             | 
             | Obviously not having the bug would be great. Apple could do
             | "more", and layer on more things on top of OAuth, like a
             | proof of key extension (DPoP) on the flow:
             | https://tools.ietf.org/html/draft-fett-oauth-dpop-04
             | 
             | But if you have a bug like this, where you can edit your
             | claims arbitrarily inside the IdP, extra security layers
             | kinda don't matter.
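The point that relying parties can only check the signature, so a token the IdP itself signed over forged claims is indistinguishable from a legitimate one, can be illustrated with a minimal toy JWT. This sketch uses HS256 with Python's stdlib for simplicity; Apple's real tokens are RS256, but the trust argument is identical:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # JWT-style base64url without padding.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def sign_jwt(claims: dict, key: bytes) -> str:
    # Simplified HS256 signer standing in for the IdP.
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = b64url(hmac.new(key, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify_jwt(token: str, key: bytes) -> bool:
    # All a relying party can do: recompute and compare the signature.
    header, payload, sig = token.split(".")
    signing_input = f"{header}.{payload}".encode()
    expected = b64url(hmac.new(key, signing_input, hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)

KEY = b"idp-secret"  # hypothetical IdP signing key

# If the IdP signs whatever email the attacker POSTs, the resulting
# token passes verification like any legitimate one:
forged = sign_jwt({"email": "victim@example.com"}, KEY)
print(verify_jwt(forged, KEY))  # → True
```

Which is why the fix had to happen inside Apple's issuance step; no amount of client-side verification could have caught these tokens.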
        
             | aPoCoMiLogin wrote:
              | Apple is an email authority in this case, and as a third
              | party you have to rely on their security. Same as with an
              | SSL certificate authority.
        
             | detaro wrote:
             | That there were ways of mitigating it (I'd assume verifying
             | email addresses out of band?) doesn't mean it's not Apple's
             | problem when their authentication system can be tricked to
             | confirm false identities, when its entire purpose is
             | confirming identities.
        
             | Kikawala wrote:
             | Is everyone in this thread only going to read the first two
             | paragraphs of the article and skip the rest of it?
             | 
             | > I found I could request JWTs for any Email ID from Apple
             | and when the signature of these tokens was verified using
             | Apple's public key, they showed as valid. This means an
             | attacker could forge a JWT by linking any Email ID to it
             | and gaining access to the victim's account.
        
         | lostmyoldone wrote:
         | This one rests squarely on Apple, as it was their auth service
         | that contained the bug.
         | 
         | While an application could potentially (not that I know exactly
         | how in this case) further verify the received token, that
         | verification is exactly what an authentication service is
         | supposed to provide, hence the responsibility absolutely rests
         | on Apple who provides the service.
        
       | mazeltovvv wrote:
       | This is an amazing bug, I am indeed surprised this happened in
       | such a critical protocol. My guess is that nobody clearly
       | specified the protocol; anyone would have caught this in an
       | abstract English spec.
       | 
       | If this is not the issue, then the implementation might be too
       | complex for people to compare it with the spec (gap between the
       | theory and the practice). I would be extremely interested in a
       | post mortem from Apple.
       | 
       | I have a few follow up questions.
       | 
       | 1. seeing how simple the first JWT request is, how can Apple
       | actually authenticate the user at this point?
       | 
       | 2. If Apple does not authenticate the user for the first request,
       | how can they check that this bug wasn't exploited?
       | 
       | 3. Anybody can explain what this payload is?
       | 
       | {
       |   "iss": "https://appleid.apple.com",
       |   "aud": "com.XXXX.weblogin",
       |   "exp": 158XXXXXXX,
       |   "iat": 158XXXXXXX,
       |   "sub": "XXXX.XXXXX.XXXX",
       |   "c_hash": "FJXwx9EHQqXXXXXXXX",
       |   "email": "contact@bhavukjain.com", // or "XXXXX@privaterelay.appleid.com"
       |   "email_verified": "true",
       |   "auth_time": 158XXXXXXX,
       |   "nonce_supported": true
       | }
       | 
       | My guess is that c_hash is the hash of the whole payload and it
       | is kept server side.
        
         | PunksATawnyFill wrote:
         | Let's start with the fact that Apple is forcing people to use
         | an E-mail address as a user ID. That's just straight-up stupid.
         | 
         | How many members of the public think that they have to use
         | their E-mail account password as their password for Apple ID
         | and every other amateur-hour site that enforces this dumb rule?
         | 
         | MILLIONS. I would bet a decent amount of money on it. So if any
         | one of these sites is hacked and the user database is
         | compromised, all of the user's Web log-ins that have this
         | policy are wide open.
         | 
         | Then there's the simple fact that everyone's E-mail address is
         | on thousands of spammers' lists. A simple brute-force attack
         | using the top 100 passwords is also going to yield quite a
         | trove, I'd imagine.
         | 
         | Apple IDs didn't originally have to be E-mail addresses.
         | They're going backward.
        
           | ath0 wrote:
            | This bug was possible because, while your Apple ID has to
            | be an email address, Apple has a mechanism to avoid
            | exposing it to third parties - unlike Google's or
            | Facebook's single sign-on implementations. The bug seems to
            | be in the step between verifying your identity and telling
            | Apple whether or not you would like your email address to
            | be exposed.
           | 
           | If anything, the issue is that _third parties_ treat the
           | email address as a unique, unchangeable identity, and then
            | agree to rely on Apple's assertion of what your email
           | address is. But given how hard identity is - and the
           | challenges in dealing with passwords, account recovery, and
           | name changes at scale - it's a pretty reasonable tradeoff to
           | make.
        
             | 0x0 wrote:
             | Sign in with facebook also lets the user choose whether or
             | not to share their email address.
        
         | sdhankar wrote:
         | The bug is not in the protocol. The bug is in the extra
         | value-add Apple was providing by letting the user choose
         | another email address.
         | 
         | 1. The account takeover happens on the third-party sites that
         | use the Apple login.
         | 
         | 2. This seems like a product request to add value for the
         | user by providing a relay email address of the user's choice.
         | From the report: `I found I could request JWTs for any Email
         | ID from Apple and when the signature of these tokens was
         | verified using Apple's public key, they showed as valid.`
         | 
         | It's not a bug in the protocol or the security algorithm. A
         | lock by itself does not provide any security if it's not put
         | in the right place.
        
           | albertTJames wrote:
           | Exactly, a case of broken security by overdoing privacy.
        
         | arcdigital wrote:
         | For #3 it's part of the JWT ID Token. Take a look at
         | https://openid.net/specs/openid-connect-core-1_0.html#Hybrid...
        
         | guessmyname wrote:
         | All your questions can be answered by reading "Sign in with
         | Apple REST API" [1][2]:
         | 
         | 1. User clicks or touches the "Sign in with Apple" button
         | 
         | 2. App or website redirects the user to Apple's authentication
         | service with some information in the URL including the
         | application ID (aka. OAuth Client ID), Redirect URL, scopes
         | (aka. permissions) and an optional state parameter
         | 
         | 3. User types their username and password and if correct Apple
         | redirects them back to the "Redirect URL" with an identity
         | token, authorization code, and user identifier to your app
         | 
         | 4. The identity token is a JSON Web Token (JWT) and contains
         | the following claims:
         | 
         | * iss: The issuer-registered claim key, which has the value
         | https://appleid.apple.com.
         | 
         | * sub: The unique identifier for the user.
         | 
         | * aud: Your client_id in your Apple Developer account.
         | 
         | * exp: The expiry time for the token. This value is typically
         | set to five minutes.
         | 
         | * iat: The time the token was issued.
         | 
         | * nonce: A String value used to associate a client session and
         | an ID token. This value is used to mitigate replay attacks and
         | is present only if passed during the authorization request.
         | 
         | * nonce_supported: A Boolean value that indicates whether the
         | transaction is on a nonce-supported platform. If you sent a
         | nonce in the authorization request but do not see the nonce
         | claim in the ID token, check this claim to determine how to
         | proceed. If this claim returns true you should treat nonce as
         | mandatory and fail the transaction; otherwise, you can proceed
         | treating the nonce as optional.
         | 
         | * email: The user's email address.
         | 
         | * email_verified: A Boolean value that indicates whether the
         | service has verified the email. The value of this claim is
         | always true because the servers only return verified email
         | addresses.
         | 
         | * c_hash: Required when using the Hybrid Flow. Code hash value
         | is the base64url encoding of the left-most half of the hash of
         | the octets of the ASCII representation of the code value, where
         | the hash algorithm used is the hash algorithm used in the alg
         | Header Parameter of the ID Token's JOSE Header. For instance,
         | if the alg is HS512, hash the code value with SHA-512, then
         | take the left-most 256 bits and base64url encode them. The
         | c_hash value is a case sensitive string
         | 
         | [1]
         | https://developer.apple.com/documentation/sign_in_with_apple...
         | 
         | [2]
         | https://developer.apple.com/documentation/sign_in_with_apple...
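As a small illustration of the c_hash derivation quoted above, here is a sketch for a SHA-256-based alg such as RS256 (the code value below is made up for the example):

```python
import base64
import hashlib

def c_hash(code: str) -> str:
    # Per OpenID Connect: hash the ASCII authorization code, take the
    # left-most half of the digest, and base64url-encode it without
    # padding. SHA-256 is assumed here (the hash matching alg RS256).
    digest = hashlib.sha256(code.encode("ascii")).digest()
    left_half = digest[: len(digest) // 2]
    return base64.urlsafe_b64encode(left_half).rstrip(b"=").decode("ascii")

# Hypothetical authorization code, just to show the shape of the output:
print(c_hash("example-authorization-code"))
```

So c_hash binds the ID token to the authorization code issued in the same flow; it is not a hash of the whole payload.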
        
       | playpause wrote:
       | If the bug is as simple as everyone is saying, why hasn't it been
       | discovered until now?
        
       | zucker42 wrote:
       | Isn't this not a "zero-day"? Zero-day refers to when the company
       | has no notice of an exploit.
        
       | hank_z wrote:
       | Wow, this bug is incredibly simple but severe. I'm wondering
       | how Bhavuk Jain found it.
        
         | alfalfasprout wrote:
         | honestly I'm surprised people didn't run into it during
         | testing... you make a test email account and get a signin token
         | for it. And then realize wait... how does apple know I own that
         | email??
        
       | cfors wrote:
       | Wow. That's almost inexcusable, especially given that iOS apps
       | are required to implement this. If they hadn't extended the
       | deadline (originally April 2020, now July 2020), many more apps
       | would have been exploitable via this.
       | 
       | After this, they should remove the requirement of Apple Sign in.
       | How do you require an app to implement this with such a
       | ridiculous zero day?
        
         | driverdan wrote:
         | > That's almost inexcusable
         | 
         | No, it's completely inexcusable. There should never be such a
         | simple, major security vulnerability like this. Overlooking
         | something this basic is incompetence.
        
         | yreg wrote:
         | I believe the deadline is June 30. [0]
         | 
         | [0] - https://developer.apple.com/news/?id=03262020b
        
         | thephyber wrote:
         | I'm of the mind that just about any security bug is "excusable"
         | if it passed a good faith effort by a qualified security audit
         | team and the development process is in place to minimize such
         | incidents.
         | 
         | The problem I have is that I can't tell what their processes
         | are beyond the generic wording on this page[1]
         | 
         | [1] support.apple.com/guide/security/introduction-
         | seccd5016d31/web
        
           | resfirestar wrote:
           | Even if there was clear evidence that this system underwent a
           | proper security audit, with a failure this basic you would
           | have to ask why it didn't work. What is going on inside Apple
           | that brought them to the point of releasing a lock that
           | simply opens with any key, despite the efforts of their state
           | of the art lock design process and qualified lock auditors?
        
             | Areading314 wrote:
             | Writing some test cases for "can anyone generate a valid
             | token" or "does an invalid token allow access" should be
             | the first thing to do when writing an auth system.
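A minimal sketch of the kind of test described above, using a hypothetical issue_token function standing in for the server-side endpoint (not Apple's actual API):

```python
# Toy token-issuance check (hypothetical names): the server must refuse
# to mint a token for an email the authenticated session doesn't own.
# The missing ownership check is, per the writeup, the heart of this bug.
def issue_token(session_email: str, requested_email: str) -> dict:
    if requested_email != session_email:
        raise PermissionError("email does not belong to this session")
    return {"email": requested_email, "email_verified": "true"}

# Happy path: requesting your own email works.
assert issue_token("me@example.com", "me@example.com")["email"] == "me@example.com"

# Negative path: requesting someone else's email must fail. A vulnerable
# implementation would return a valid token here instead of raising.
try:
    issue_token("me@example.com", "victim@example.com")
    print("VULNERABLE")
except PermissionError:
    print("rejected as expected")  # → rejected as expected
```

The negative test is the important one: a test suite that only exercises the happy path would never have caught this.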
        
               | thephyber wrote:
               | Your test cases make sense, but they ignore an obvious
               | hypothetical possibility: The OIDC implementation was a
               | well-tested core feature (with the tests that you
               | mention), but the email proxy feature was a bolt on that
               | was somehow not considered risky (so it could easily have
               | bypassed a full, renewed security audit).
               | 
               | Also, it's not sufficient to "have a test case". The
               | intent and the implementation must be coherent.
        
       | XCSme wrote:
       | (Unrelated to the Apple bug)
       | 
       | Is there any bug bounty program for small businesses/apps? I only
       | found hackerone but it seems to be only for enterprise. Is there
       | any recommended platform for small businesses to create their own
       | public bounty program?
        
       | afrcnc wrote:
       | Replace "zero-day" with "privately reported security bug for
       | which I got $100k"
       | 
       | That's not how zero-day works
        
         | saagarjha wrote:
         | It was a zero day up until the first report was made.
        
       | earth2mars wrote:
       | "A lot of developers have integrated Sign in with Apple since it
       | is mandatory for applications that support other social logins"
       | -- How pathetic Apple is to force their own service on
       | developers!!
        
         | wmichelin wrote:
         | Why are you surprised? They force you to use the App Store.
         | They force you to process payments through their systems. They
         | force you to comply with many things. How is this any
         | different?
        
       | jasoneckert wrote:
       | Since this was an extremely simple exploit, I can't help but
       | wonder if it was a purposeful one on Apple's part.
       | 
       | Apple has been spending a lot of money on a security-focused
       | marketing campaign these past few years, and encouraging a high-
       | price payout of $100k is sage marketing.
        
       | NicoJuicy wrote:
       | Perhaps slightly related: Apple zero-days used to fetch lower
       | bounty awards than Android zero-days.
       | 
       | I think we can wrap up the security and anonymity claims that
       | Apple has been making for their overpriced devices.
        
         | matchbok wrote:
         | Overpriced? The average Android phone costs the same as an
         | iPhone. That argument is tired and not relevant anymore.
         | 
         | Plus, iPhones actually work longer than a year.
        
           | NicoJuicy wrote:
           | And what is the root cause of a minority of Android phones
           | raising their prices every year :)
           | 
            | Plus, I've had my Android phone for 3 years now.
           | 
            | The one before that lasted 4 years and cost 345 EUR.
           | 
           | At least use a counter argument that is correct.
        
           | tpush wrote:
           | > The average android phones cost the same as iPhone.
           | 
           | They don't.
        
         | saagarjha wrote:
         | Wrong exploit target.
        
       | tyrion wrote:
       | How is this something that can happen? I mean, the only
       | responsibility of an "authentication" endpoint is to release a
       | JWT authenticating the current user.
       | 
       | At least from the writeup, the bug seems so simple that it is
       | unbelievable that it could have passed code review and testing.
       | 
       | I suspect things were maybe not as simple as explained here,
       | otherwise this is at the same incompetence level as storing
       | passwords in plaintext :O.
        
         | enitihas wrote:
         | Apple has had more simple "unbelievable" bugs, e.g
         | 
         | https://news.ycombinator.com/item?id=15800676 (Anyone can login
         | as root without any technical effort required)
         | 
         | And to top it off
         | (https://news.ycombinator.com/item?id=15828767)
         | 
         | Apple keeps having all sorts of very simple "unbelievable"
         | bugs.
        
           | meowface wrote:
           | You can't forget the infamous "goto fail":
           | https://www.imperialviolet.org/2014/02/22/applebug.html
           | 
           | There seems to be kind of a common theme to these:
           | 
           | - SSL certificates not validated at all
           | 
           | - root authentication not validated at all
           | 
           | - JWT token creation for arbitrary Apple ID users not
           | validated at all
           | 
           | I think these are all very likely due to error and not
           | malice, but it's pretty crazy how these gaping holes keep
           | being found.
        
           | saagarjha wrote:
           | More recent example of Apple "undoing" patches:
           | https://www.synacktiv.com/posts/exploit/return-of-the-ios-
           | sa...
        
           | fishywang wrote:
           | Last year (or maybe 2018?) my employer hired an external
           | consultant to give engineers security trainings (all are
           | optional, they provide a few sessions on different topics,
           | and engineers can sign up for interested ones). In one of the
            | sessions I signed up for, during the pre-session chat
            | (while waiting for everyone who signed up to show up in the
            | conference room), the external trainer "casually" remarked
            | that "if you have an Android phone, you should throw it out
            | of the window right now and buy an iPhone instead". That's
            | the point where I lost all respect for them.
           | 
            | (The session itself was OK-ish: some training about XSRF,
            | nothing special either.)
           | 
            | (That incident also prompted me to buy a sheet of
            | [citation needed] stickers from xkcd to put on my laptop,
            | so the next time this kind of thing happens I can just
            | point to the sticker. But I haven't had a chance to do
            | that yet since receiving the stickers.)
        
             | kohtatsu wrote:
             | This was pretty true not long ago. It's still a notoriously
             | short window for OEM software patches on Android, whereas
             | Apple's first 64-bit phone, the 5s from Fall 2013 is still
             | getting patches (May 20th was the last one, iOS 12.4.7)
             | 
             | Apple pioneered usable security with TouchID and the secure
             | enclave; a lot of Android fingerprint readers were gimmicks
             | for years, same with the face unlocks. https://manuals.info
             | .apple.com/MANUALS/1000/MA1902/en_US/app...
             | 
             | They also invest piles of money into privacy
             | https://apple.com/privacy (1 minute overview),
             | https://apple.com/privacy/features (in-depth with links to
             | whitepapers).
             | 
             | I imagine that's where your teacher was coming from.
        
               | enitihas wrote:
                | Fortunately, it seems Google has separated security
                | updates from OEM updates on some newer phones. The
                | phone I bought in November 2018 is receiving monthly
                | security updates via Play Services and has been on the
                | May patch level for some time.
        
               | saagarjha wrote:
                | > The phone I bought in November 2018
               | 
               | I think it's too early to claim anything for that one.
        
               | fishywang wrote:
                | Yes, I'm not going to defend Google's privacy issues,
                | but privacy is totally different from security. People
                | tend to confuse them. I understand it if the average
                | Joe gets confused. But if you are a "security
                | consultant" and you still have no idea what the
                | difference between them is, that's a big problem.
               | 
               | Regarding security, see examples like
               | https://qz.com/1844937/hong-kongs-mass-arrests-give-
               | police-a...
        
               | fishywang wrote:
                | For privacy, I guess you can be fully confident that
                | Apple has the _intention_ to keep your data private.
                | But can you be fully confident that Apple has the
                | _competence_ to fulfill that intention?
               | 
               | Google doesn't have the intention to keep _all_ your data
               | private, sure. They are an ads company after all. But for
               | things they want to protect, in most cases they are
               | competent enough to protect them.
               | 
               | (Disclaimer: I also worked for Google, but the "employer"
               | I mentioned in my original comment was not Google)
        
             | SV_BubbleTime wrote:
             | It's none of my concern what camp people fall into...
             | but...
             | 
             | I hired a very high level pen test company, they mandated
             | iPhones for their company work. They were the best infosec
             | company we've ever hired. Sample of one.
             | 
              | I wouldn't suggest iPhones are safer than Android, but I
              | also wouldn't suggest in any way that they are less safe
              | overall.
        
         | randomfool wrote:
         | The only thing I can think of is some 'test mode' override
         | which inadvertently got enabled in production.
         | 
         | 1. Don't add these.
         | 
         | 2. If you _must_ add something, structure it so it can only
         | exist in test-only binaries.
         | 
         | 3. If you really really need to add a 'must not enable in prod'
         | flag then you must also continuously monitor prod to ensure
         | that it is not enabled.
         | 
         | Really hoping they follow up with a root-cause explanation.
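The three rules above can be sketched as a fail-closed guard. This is a hypothetical Python sketch; the AUTH_TEST_MODE and DEPLOY_ENV variable names are invented for illustration and are not from the article:

```python
import os

# Hypothetical guard for rule 3: a 'must not enable in prod' override
# that refuses to turn on unless the process explicitly identifies
# itself as a test environment.
def test_mode_enabled() -> bool:
    if os.environ.get("AUTH_TEST_MODE") != "1":
        return False
    # Fail closed: if the flag is set but the environment claims (or
    # defaults to) prod, crash loudly rather than silently weaken auth.
    if os.environ.get("DEPLOY_ENV", "prod") == "prod":
        raise RuntimeError("AUTH_TEST_MODE set in prod - refusing to start")
    return True

# Simulate a test binary: the override is allowed only here.
os.environ["AUTH_TEST_MODE"] = "1"
os.environ["DEPLOY_ENV"] = "test"
print(test_mode_enabled())  # True
```

Continuous monitoring (rule 3's second half) would then be a separate alert that fires whenever the flag is observed set in a prod environment.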
        
           | saagarjha wrote:
           | Apple? No way.
        
         | Jaxkr wrote:
          | Apple has really lost their touch; software quality has
          | declined dramatically.
        
           | VMisTheWay wrote:
           | I'm not sure if Apple ever had quality.
           | 
            | They are the Nintendo of Computing. They have some
            | novelties, but in general they are average at best. Notice
            | that both Nintendo and Apple are big advertisers.
        
           | iphone_elegance wrote:
           | does it really matter though?
        
             | hootbootscoot wrote:
             | Someone with an AV production studio totalling over $100k
             | in Apple products and a few million in outboard gear whose
             | drivers worked fine before may just care a bit...
             | 
             | The whole pro-multimedia production crowd probably cares...
             | 
             | (vs the current Apple paramour: the multimedia consumer who
             | wants to order pizza and get back to netflix on their
             | phablet or whatever..)
        
           | SaltyBackendGuy wrote:
           | Anecdotally, I upgraded my wife's iMac to Catalina and she's
           | experiencing issues (rendering latency) she's never had
           | before (hadn't upgraded the OS since buying it 4 years ago).
            | I figured it was good to get on the latest and greatest for
            | security reasons; now she won't let me touch her computer
            | anymore.
        
             | ksec wrote:
              | I used to be in the latest-and-greatest-for-security camp
              | as well. But after all these years I am starting to
              | understand why people don't update.
             | 
             | It is extremely frustrating. Especially when Catalina
             | removes features that were working perfectly.
        
               | hootbootscoot wrote:
               | I'm about this close to making an e-petition demanding
               | Snow Leopard FOSS for posterity, now that they have
               | successfully milked us all multiple times. My 2011
               | hardware works just fine, and 'obsolete' is meaningless
               | in a world where IRC was replaced by Slack and where
                | Visual Basic tutorial fodder from 1998 became MVP web
               | products...
        
               | TheSpiceIsLife wrote:
               | I'm still on High Sierra, most recent 10.13.6 security
               | update was ~3 days ago.
               | 
               | I'll upgrade when some piece of software I need to use
               | requires it.
        
               | lowdose wrote:
               | I went to Mojave and that went without trouble except I
               | lost the ability to use my external GPU, but I knew that.
        
               | TheSpiceIsLife wrote:
                | Mojave removed the Facebook, Twitter, Vimeo, and Flickr
                | integrations, none of which I use, so that would be good.
               | 
               | But I'm not aware of any new feature in Mojave I want or
               | need, so the 2013 MacBook Pro Retina I'm using will stay
               | on High Sierra for today :)
        
               | alphaomegacode wrote:
               | As an Apple user for decades, have to say that High
               | Sierra seemed to be one of their better recent releases.
               | 
               | I have an iMac that uses it and a Mac Mini that is on
               | Mojave and for some reason, High Sierra just feels more
               | stable with some software.
               | 
               | Firefox runs fine on High Sierra and has crashed multiple
               | times in the past few weeks after using it on Mojave.
               | 
               | Maybe I'm just biased having used High Sierra for so long
               | and dreading Catalina lol.
        
             | StreamBright wrote:
             | Welcome to the late adopter group. Never upgrade, unless it
             | is absolutely necessary.
        
         | donmcronald wrote:
         | My guess is that it has to do with that private relay because
         | OAuth isn't too complex by itself. During the OAuth flow they
         | probably collect the user preference, (if needed) go out to the
         | relay service and get a generated email, and POST back to their
         | own service with the preferred email to use in the token.
         | 
         | If that's it, it's about as bad as doing password
         | authentication in JavaScript and passing authenticated=true as
         | a request parameter.
         | 
         | Edit: Looking at the OAuth picture in the article, my guess
         | would be like adding a step in between 1 and 2 where the server
         | says "what email address do you want here" and the (client on
         | the) user side is responsible for interacting with the email
         | relay service and posting back with a preferred email address.
          | Or the server does it but POSTs back to the same endpoint,
          | which means the user could just include whatever they want
          | right from the start.
         | 
         | The only thing that makes me think I might not be right is that
         | doing it like that is just way too dumb.
         | 
         | AND I'm guessing a bunch of Apple services probably use OAuth
         | amongst themselves, so this might be the worst authentication
         | bug of the decade. The $100k is a nice payday for the
         | researcher, but I bet the scope of the damage that could have
         | been done was MASSIVE.
         | 
         | Edit 2: I still don't understand why the token wouldn't mainly
         | be linked to a subject that's a user id. Isn't 'sub' the main
         | identifier in a JWT? Maybe it's just been too long and I don't
         | remember right.
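The failure mode described above can be sketched as follows. This is a hypothetical reconstruction, not Apple's actual code: HS256 with a made-up key stands in for Apple's RS256 keypair, and the claim values are invented. The point is that a valid signature only proves the server signed the claims, not that the server ever checked the email belonged to the logged-in account:

```python
import base64
import hashlib
import hmac
import json

# Hypothetical signing key standing in for Apple's private key.
KEY = b"server-side-secret"

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_id_token(claims: dict) -> str:
    # The flaw described above: the server signs whatever claims the
    # client POSTed, including an attacker-chosen email.
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    sig = hmac.new(KEY, f"{header}.{payload}".encode(), hashlib.sha256).digest()
    return f"{header}.{payload}.{b64url(sig)}"

def verify(token: str) -> dict:
    header, payload, sig = token.split(".")
    expected = hmac.new(KEY, f"{header}.{payload}".encode(),
                        hashlib.sha256).digest()
    if not hmac.compare_digest(b64url(expected), sig):
        raise ValueError("bad signature")
    pad = "=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(payload + pad))

# The attacker requests a token carrying the victim's email: it signs
# and verifies cleanly, because nothing checked that the email belongs
# to the authenticated Apple ID.
forged = sign_id_token({"sub": "attacker-apple-id",
                        "email": "victim@example.com"})
claims = verify(forged)
print(claims["email"])  # victim@example.com, under a valid signature
```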
        
       | Ronnie76er wrote:
        | Just want to mention something about the id_token provided. I'm
        | on my phone, so I don't have Apple's implementation handy, but
        | in OIDC, the relying party (Spotify, for example) is supposed
        | to use the id_token to verify the user that is authenticated,
        | specifically via the sub claim in the JWT id_token.
       | 
       | https://openid.net/specs/openid-connect-core-1_0-final.html#...
       | 
        | It's likely (although, as others have noted, the writeup is
        | scant on details) that this value was correct and represented
        | the authenticated user.
       | 
       | A relying party should not use the email value to authenticate
       | the user.
       | 
       | Not contesting that this is a bug that should be fixed and a
       | potential security issue, but perhaps not as bad.
       | 
       | Anyone else? Am I reading this right?
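A minimal sketch of the point above: the relying party should key local accounts off the `sub` claim, never `email`. The token and identifier below are invented for illustration, and real code must verify the signature against Apple's public key before trusting any claim:

```python
import base64
import json

def b64url_json(obj: dict) -> str:
    return base64.urlsafe_b64encode(json.dumps(obj).encode()).rstrip(b"=").decode()

def decode_payload(id_token: str) -> dict:
    # NOTE: in a real relying party, signature verification must happen
    # before this step; this helper only splits out the payload.
    payload = id_token.split(".")[1]
    payload += "=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(payload))

def account_key(claims: dict) -> str:
    # Key the account on the stable subject identifier, never on the
    # (mutable, and in this bug forgeable) email claim.
    return claims["sub"]

# Hypothetical id_token; "001234.abcd" stands in for an opaque Apple
# user identifier, and the signature segment is a placeholder.
token = ".".join([
    b64url_json({"alg": "RS256"}),
    b64url_json({"sub": "001234.abcd",
                 "email": "x@privaterelay.appleid.com"}),
    "sig",
])
print(account_key(decode_payload(token)))  # 001234.abcd
```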
        
       | PunksATawnyFill wrote:
        | WTF is a "zero-day"?
        
       | rvz wrote:
       | > I found I could request JWTs for any Email ID from Apple and
       | when the signature of these tokens was verified using Apple's
       | public key, they showed as valid. This means an attacker could
       | forge a JWT by linking any Email ID to it and gaining access to
       | the victim's account.
       | 
        | Great writeup there. Looks like an Apple JWT bug: the
        | verification went through despite the token being 'signed' and
        | 'tamperproof'. Clearly its footguns allowed this to happen;
        | JWTs are the gift that keeps on giving to researchers.
       | 
        | What did I just outline days before? [0] Just don't use JWTs;
        | there are already secure alternatives available.
       | 
       | [0] https://news.ycombinator.com/item?id=23315026
        
         | arkadiyt wrote:
          | No one should be using JWT, but it's unfair to blame JWT
          | here. Apple wasn't verifying that the supplied email address
          | belonged to the signed-in user - that's completely outside of
          | the token format they chose.
        
           | nick-garfield wrote:
           | > No one should be using JWT
           | 
           | What??
        
           | switz wrote:
           | > No one should be using JWT
           | 
           | I've heard criticisms of JWT -- mostly around the lack of
           | ability to revoke a JWT.
           | 
            | One could then introduce a refresh token with a longer TTL,
            | which can be revoked on the server. But of course then you
            | lose some of the statelessness that JWT benefits from.
           | 
           | But still, it seems like a reasonable approach to
           | authentication to me. I can authenticate with several
           | services if need be, and I can check locally if my token is
           | 'likely' valid.
           | 
           | Care to expand why you think one shouldn't use JWT tokens?
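One common pattern for the revocation gap discussed above is a short-lived JWT plus a server-side denylist keyed on the token's `jti` claim, at the cost of some statelessness. A minimal sketch with invented claim values:

```python
import time

# Hypothetical server-side revocation set keyed by the jti claim;
# consulting it on each request trades away some of JWT's
# statelessness, as noted in the comment above.
revoked_jtis = set()

def is_token_acceptable(claims: dict) -> bool:
    # Purely local check: the token has not expired...
    if claims.get("exp", 0) <= time.time():
        return False
    # ...plus the stateful check: it has not been explicitly revoked.
    return claims.get("jti") not in revoked_jtis

claims = {"sub": "user-1", "jti": "abc123", "exp": time.time() + 3600}
print(is_token_acceptable(claims))  # True
revoked_jtis.add("abc123")          # e.g. user logged out everywhere
print(is_token_acceptable(claims))  # False
```

Keeping the access-token TTL short bounds how stale the denylist check can be if a node misses an update.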
        
         | blntechie wrote:
          | In the apps I write for my org integrating with the org SSO
          | provider, I treat JWT tokens mostly like non-JWT tokens:
          | verify the token with the IDP, map the token to a specific
          | user, and never rely on the JWT payload's user info for
          | resource auth. It takes an additional 0.25s during the login
          | process but has never let me down. As the SSO provider was
          | issuing non-JWT tokens a few years back, this was the way we
          | went about making sure the user is who they say they are, and
          | we just stuck with the same approach when they moved to JWT
          | tokens.
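The approach described above, sketched with a dict standing in for the IDP round trip. In production the lookup would be an HTTPS call to the provider's token introspection endpoint (RFC 7662); the token strings and user IDs below are invented:

```python
# Hypothetical stand-in for the IDP's view of active sessions: the app
# treats the token as an opaque handle and asks the issuer who it
# belongs to, rather than trusting the user info in the JWT payload.
idp_sessions = {"opaque-or-jwt-token-abc": "user-42"}

def resolve_user(token: str) -> str:
    # In production: POST the token to the IDP's introspection endpoint
    # and read the subject from the response. Here a dict simulates
    # the issuer's answer.
    user = idp_sessions.get(token)
    if user is None:
        raise PermissionError("token not recognized by IDP")
    return user

print(resolve_user("opaque-or-jwt-token-abc"))  # user-42
```

The extra round trip per login is the 0.25s cost mentioned above, paid in exchange for never trusting client-presented claims.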
        
         | quesera wrote:
         | Your earlier comment, and your comment today, are both
         | baseless.
         | 
         | They demonstrate no issues with JWTs. There _are_ issues with
         | JWTs, but you have not hit on any of them.
        
       | outime wrote:
       | Wow, I'm in shock. How could Apple let this one slip in? When I
       | was a junior fullstack I had to design a very similar system and
       | this was one of the very basic checks that I had in mind back
        | then. I don't know how anyone could excuse this very basic bug
        | in such a critical service.
        
         | enitihas wrote:
         | Apple has let all sorts of things slip in which seem
         | unbelievable.
         | 
          | e.g. https://news.ycombinator.com/item?id=15800676 and
         | 
         | https://news.ycombinator.com/item?id=15828767
         | 
         | So I don't get shocked anymore seeing Apple security issues.
        
       ___________________________________________________________________
       (page generated 2020-05-30 23:00 UTC)