[HN Gopher] When MFA isn't MFA, or how we got phished
       ___________________________________________________________________
        
       When MFA isn't MFA, or how we got phished
        
       Author : dvdhsu
       Score  : 144 points
       Date   : 2023-09-13 19:48 UTC (3 hours ago)
        
 (HTM) web link (retool.com)
 (TXT) w3m dump (retool.com)
        
       | hn_throwaway_99 wrote:
       | Question for security folks out there:
       | 
       | So often I see these kinds of phishing attacks that have hugely
       | negative consequences (see the MGM Resorts post earlier today),
       | and the main problem is that just one relatively junior employee
       | who falls for a targeted phishing attack can bring down the whole
       | system.
       | 
        | Is anyone aware of systems that essentially require _multiple_
        | logins from different users when accessing sensitive systems like
        | internal admin tools? I'm thinking of the "turn the two keys
        | simultaneously to launch the missile" systems. It would work like
        | the following:
       | 
       | 1. If a system detects a user is logging into a particularly
       | sensitive area (e.g. a secrets store), and the user is from a new
       | device, the user first needs to log in using their creds
       | (including any appropriate MFA).
       | 
       | 2. In addition, _another_ user like an admin would need to log in
       | simultaneously and approve this access from a new device.
       | Otherwise, the access would be denied.
       | 
       | I've never seen a system like this in production, and I'm curious
       | why it isn't more prevalent when I think it should be the default
       | for accessing highly sensitive apps in a corporate environment.
        
         | [deleted]
        
         | landemva wrote:
         | Transactions (messages) can be required to have multi-sig, if
         | that is desired.
         | 
         | There are smartphone apps and various tools to send a multi-sig
         | message:
         | 
         | https://pypi.org/project/pybtctools
        
         | fireflash38 wrote:
         | You're looking for quorums, or key splits. They aren't super
         | common. You see them with some HSMs (need M of N persons to
         | perform X action).
        
           | joshxyz wrote:
            | not good with acronyms, what is HSM here?
        
             | justaddwater wrote:
             | Hardware Security Module
        
             | coderintherye wrote:
             | Hardware security module
             | https://en.wikipedia.org/wiki/Hardware_security_module
        
         | joshxyz wrote:
          | I wonder about this too: do people really use Shamir secret
          | sharing as part of some security compliance regime?
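
          A minimal sketch of the M-of-N idea discussed in this subthread,
          using Shamir secret sharing over a prime field. Toy Python for
          illustration only; a real deployment would use an HSM or a
          vetted library rather than hand-rolled crypto like this.

          import random

          PRIME = 2**127 - 1  # large prime; all share math is mod PRIME

          def split(secret: int, m: int, n: int) -> list:
              """Split secret into n shares; any m reconstruct it."""
              # Random degree-(m-1) polynomial, secret as constant term.
              coeffs = [secret] + [random.randrange(PRIME)
                                   for _ in range(m - 1)]
              def f(x: int) -> int:
                  return sum(c * pow(x, i, PRIME)
                             for i, c in enumerate(coeffs)) % PRIME
              return [(x, f(x)) for x in range(1, n + 1)]

          def combine(shares: list) -> int:
              """Lagrange interpolation at x=0 recovers the secret."""
              total = 0
              for i, (xi, yi) in enumerate(shares):
                  num = den = 1
                  for j, (xj, _) in enumerate(shares):
                      if i != j:
                          num = num * -xj % PRIME
                          den = den * (xi - xj) % PRIME
                  total = (total + yi * num * pow(den, -1, PRIME)) % PRIME
              return total

          shares = split(123456789, m=3, n=5)
          assert combine(shares[:3]) == 123456789  # any 3 of 5 suffice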
        
         | aberoham wrote:
          | Teleport has a two-person rule + hardware token enforcement,
         | https://goteleport.com/resources/videos/hardened-teleport-ac...
        
           | hn_throwaway_99 wrote:
           | Really, really appreciate you sending this! I will dig in but
           | this seems to be exactly what I was asking about/looking for.
            | I'm always really curious why the big cloud platforms don't
            | support this kind of authentication natively.
        
         | johngalt wrote:
          | Mechanisms like this exist, but they probably aren't integrated
          | into whatever system you are using, and anything that involves
          | an approval workflow adds a lot of delay and overhead.
         | 
         | In most cases the engineering time is better spent pursuing
         | phishing resistant MFA like FIDO2. Admin/Operations time is
         | better spent ensuring that RBAC is as tight as possible along
         | with separate admin vs user accounts.
        
       | miki123211 wrote:
        | does iOS have an "is there a call in progress" API?
       | 
       | If so, it would be a good idea for OTP apps to use it and display
       | a prominent warning banner when opened during a call.
        
       | yieldcrv wrote:
        | I just call them one-time passcodes (OTPs).
        | 
        | Most of the time I am not using multi-factor or two-factor auth
        | the way it was designed.
        | 
        | But it is, accurately, a one-time passcode.
        
       | AYBABTME wrote:
       | To deepfake the voice of an actual employee, they would need
       | enough recorded content of that employee's voice... and I would
       | think someone doing admin things on their platform isn't also in
       | DevRel with a lot of their voice uploaded online for anyone to
       | use. So it smells like someone with close physical proximity to
       | the company would be involved.
        
         | V__ wrote:
         | One possibility would be to just call the employee and record
         | their voice. One could pretend to be a headhunter.
        
         | gabereiser wrote:
          | There are a lot of ways to get clips of someone's voice. You
          | can get them if the person ever spoke at a conference or in a
          | video. Numerous other ways I won't list here.
        
         | themagician wrote:
          | Probably wasn't a "deepfake", just someone decent at
          | impressions with a $99 mixer. After phone compression that is
          | more than good enough to fool just about anyone. No deepfake is
          | needed. Just call the person once and record a 30-second phone
          | call. Tell them you are delivering some package and need them
          | to confirm their address.
        
       | tamimio wrote:
        | You know how I never get phished? I never answer any call or SMS
        | asking for anything, and a link in a text message is ALWAYS a
        | major red flag. I know everyone is talking about the MFA, but
        | the entry point was the employees' phone numbers; how did the
        | attackers get those in the first place? Especially since, per
        | the article, the attacker knew the internals of this company..
        | 
        | As for the MFA, Google should offer on-demand peer-to-peer sync
        | rather than cloud save. For example: a new device is added, your
        | Google account is used to link the new device to an existing
        | one, you click sync, and your old device asks "a new device is
        | requesting bla bla, would you allow it?" Nothing is saved in the
        | cloud; it's just a peer-to-peer sync, with Google as the
        | connection broker.
        
       | brunojppb wrote:
       | Fantastic write-up. Major props for disclosing the details of the
       | attack in a very accessible way.
       | 
        | It is great that this kind of security incident post-mortem is
        | being shared. This will help the community level up in many
        | ways, especially given that its content is super accessible and
        | not heavily leaning on tech jargon.
        
         | hn_throwaway_99 wrote:
         | I disagree. I appreciate the level of detail, but I don't
         | appreciate Retool trying to shift the blame to Google, and only
         | putting a blurb in the end about using FIDO2. They should have
         | been using hardware keys years ago.
        
           | dvdhsu wrote:
           | Hi, I'm sorry you felt that way. "Shifting blame to Google"
           | is absolutely not our intention, and if you have any
           | recommendations on how to make the blog post more clear,
           | please do let me know. (We're happy to change it so it reads
           | less like that.)
           | 
           | I do agree that we should start using hardware keys (which we
           | started last week).
           | 
           | The goal of this blog post was to make clear to others that
           | Google Authenticator (through the default onboarding flow)
           | syncs MFA codes to the cloud. This is unexpected (hence the
           | title, "When MFA isn't MFA"), and something we think more
           | people should be aware of.
        
             | hn_throwaway_99 wrote:
              | I felt like you were trying to shift blame to Google due to
              | the title "When MFA isn't MFA" and your emphasis on "dark
              | patterns", which, to be honest, I don't think are that
              | "dark". To me this felt like a mix of a post-mortem/apology
              | with a "but if it weren't for Google's dang dark
              | patterns..." excuse thrown in.
             | 
             | FWIW, nearly every TOTP authenticator app I'm aware of
             | supports some type of seed backup (e.g. Authy has a
             | separate "backup password"). I actually like Google's
             | solution here _as long as_ the Workspace accounts are
             | protected with a hardware key.
             | 
             | The only real lesson here is that you should have been
             | using hardware keys.
        
           | duderific wrote:
           | It was also a bit weird how they kept emphasizing how their
           | on-prem installations were not affected, as if that lessens
           | the severity somehow. It's like duh, that's the whole point
           | of on-prem deployments.
        
       | kerblang wrote:
        | I don't understand: Why on earth does Google want to sync MFA
       | tokens? They're one-time use, aren't they? Or... feh, I can't
       | even fathom
        
         | jeremyjh wrote:
         | They mean they are syncing the private key used to generate the
         | tokens on demand.
        
           | kerblang wrote:
           | Do all these 2FA apps - like say Microsoft Authenticator -
           | have these hidden/not-so-hidden private keys? From other
           | posts it sounds like you can view the token and write it
           | down... MA doesn't have that, I don't think.
        
             | kerblang wrote:
             | Answering myself again, yeah, they all seem to have this
             | private key hidden away somewhere. Didn't know that.
             | 
             | https://frontegg.com/blog/authentication-apps#How-Do-
             | Authent...?
        
           | magospietato wrote:
           | Well that's even worse isn't it?
        
         | kerblang wrote:
         | Answering myself, this helps a bit:
         | https://www.zdnet.com/article/google-authenticator-will-now-...
         | 
         | I guess we need a better way to handle "Old phone went
         | swimming, had to buy another, now what?"
        
           | mbesto wrote:
            | Which is funny because the 2nd factor is "something I have",
            | which means if you don't "have it" then you can never
            | complete the 2nd factor. This means the 2nd factor, when your
            | phone goes swimming, is ultimately your printed codes.
        
         | unethical_ban wrote:
          | Syncing of "MFA codes" is really syncing of the secret
          | component of TOTP (time-based one-time password).
          | 
          | And it's a good thing, and damn any 2FA solution that blocks
          | it. I don't want to go through onerous, incompetent, poorly
          | designed account recovery procedures if a toddler smashes my
          | phone. So I use Authy personally, while a friend backs his up
          | locally.
        
           | charcircuit wrote:
            | If you can back up a key, it is not MFA. It's just a second
            | password, not another factor. The solution to having your
            | phone smashed is to have multiple "something you have"
            | factors, so you have a backup.
        
           | skybrian wrote:
           | A better way to fix this is to have multiple ways to log in.
           | Printed backup codes in your safe with your personal papers
           | and/or a Yubikey on your keychain. This works for Google and
           | Github, at least.
           | 
           | Passkey syncing is more convenient, though, and probably an
           | improvement on what most people do.
        
           | itake wrote:
           | > I don't want to go through onerous, incompetent, poorly
           | designed account recovery procedures if a toddler smashes my
           | phone
           | 
           | Why don't you use the printed recovery tokens?
        
             | monocasa wrote:
             | Who has a printer these days?
        
               | CameronNemo wrote:
               | Local libraries, print shops... but yeah that may be an
               | attack vector.
        
             | unethical_ban wrote:
             | Not all websites offer them.
             | 
             | Hell, no bank I use (several large and several regional)
             | support generic totp. Some have sms, one has Symantec VIP,
             | proprietary and not redundant.
             | 
             | Edit: since I'm posting too fast according to HN, even
             | though I haven't posted in an hour, I'll say it here.
             | Symantec is totp but You cannot back up your secrets and
             | you cannot have backup codes.
        
               | CameronNemo wrote:
               | Symantec VIP is TOTP under the hood.
               | 
               | https://github.com/dlenski/python-vipaccess
        
         | AshamedCaptain wrote:
         | For me the question is "who the fsck uses Google Authenticator
         | to store all their tokens, both company and personal?"
        
           | burkaman wrote:
           | Google Authenticator was I believe the first available TOTP
           | app, and is by far the most popular. It used to be open
           | source and have no connection to your Google account. Many
           | people installed it years ago when they first set up MFA, and
           | have just been adding stuff to it ever since because it's
           | easy and it works. Even for technical users who understand
            | how TOTP works, there is no obvious reason to think it unsafe
           | to put all your tokens in the app (until you read this
           | article).
           | 
           | Look at the MFA help page for any website you use. One of the
           | first sentences is probably something like "First you'll need
           | to install a TOTP app on your phone, such as Google
           | Authenticator or Authy..."
           | 
            | It genuinely used to be the best option. For example, see
            | this comment from 10 years ago when Authy first launched:
           | 
           | > The Google Authenticator app is great. I recently got
           | (TOTP) 2-factor auth for an IRC bot going with Google
           | Authenticator; took about 5 minutes to code it up and set it
           | up. It doesn't use any sort of 3rd party service, just the
           | application running locally on my phone. TOTP/HOTP is dead
           | simple and, with the open source Google Authenticator app,
           | great for the end user.
           | 
           | - https://news.ycombinator.com/item?id=6137051
        
             | fireflash38 wrote:
             | I think technically Blizzard Authenticator (even the app)
             | was available before Google Authenticator, but obviously
             | for extremely limited use.
        
       | andrewstuart wrote:
       | Some startup, please make a product that uses AI to identify
       | these obviously fake emails.
       | 
       | Hello A, This is B. I was trying to reach out in regards to your
       | [payroll system] being out of sync, which we need synced for Open
       | Enrollment, but i wasn't able to get ahold of you. Please let me
       | know if you have a minute. Thanks
       | 
       | You can also just visit
       | https://retool.okta.com.[oauthv2.app]/authorize-client/xxx and I
       | can double check on my end if it went through. Thanks in advance
       | and have a good night A.
        
         | [deleted]
        
       | batmansmk wrote:
        | Are the claims of a deepfake and intimate knowledge of procedures
        | based on the sole testimony of the employee who oopsed terribly?
        | This is a novelisation of events.
        | 
        | Retool needs to revise its basic security posture. There is no
        | point in complicated technology if the warden just gives the key
        | away.
        
         | tongueinkek wrote:
         | [dead]
        
         | dvdhsu wrote:
         | It is not based on the sole testimony of the employee. (Sorry I
         | can't go into more details.)
        
         | hn_throwaway_99 wrote:
          | > Retool needs to revise its basic security posture.
         | 
         | Couldn't agree more. TBH I thought this post was an exercise in
         | blame shifting, trying to blame Google.
         | 
         | > We use OTPs extensively at Retool: it's how we authenticate
         | into Google and Okta, how we authenticate into our internal
         | VPN, and how we authenticate into our own internal instances of
         | Retool. The fact that access to a Google account immediately
         | gave access to all MFA tokens held within that account is the
         | major reason why the attacker was able to get into our internal
         | systems.
         | 
         | Google Workspace makes it very easy to set up "Advanced
         | Protection" on accounts, in which case it requires using a
         | hardware key as a second factor, instead of a phishable
         | security code. Given Retool's business of hosting admin apps
         | for lots of other companies, they should have known they'd be a
         | prime target for something like this, and not requiring
         | hardware keys is pretty inexcusable here.
        
           | dotty- wrote:
           | > Google Workspace makes it very easy to set up "Advanced
           | Protection" on accounts, in which case it requires using a
           | hardware key as a second factor, instead of a phishable
           | security code.
           | 
           | This isn't immediately actionable for every company. I agree
           | Retool should have hardware keys given their business, but at
           | my company with 170 users we just haven't gotten around to
           | figuring out the distribution and adoption of hardware keys
           | internationally. We're also a Google Workspace customer. I
           | think it's stupid for a company like Google, the company
           | designing these widely used security apps for millions of
           | users, to allow for cloud syncing without allowing
           | administrators the ability to simply turn off the feature on
           | a managed account. Google Workspace actually lacks a lot of
           | granular security features, something I wish they did better.
           | 
           | What is a company like mine meant to do here to counter this
           | problem?
           | 
            | edit: changed "viable" to "immediately actionable". It's
            | easy for Google to change their apps; it's not as easy for
            | every company to change its practices.
        
             | hn_throwaway_99 wrote:
             | > What is a company like mine meant to do here to counter
             | this problem?
             | 
             | What is hard about mailing everyone a hardware key? I
             | honestly don't see the problem. It's not like you need to
             | track it or anything, people can even use their own
             | hardware keys.
             | 
             | 1. Mail everyone a hardware key, or tell them if they
             | already have one of their own they can just use that.
             | 
             | 2. Tell them to enroll at
             | https://landing.google.com/advancedprotection/
             | 
             | > Google Workspace actually lacks a lot of granular
             | security features, something I wish they did better.
             | 
             | Totally agree with that one. Last time I checked you
             | couldn't _enforce_ that all employees use Advanced
             | Protection in a Google Workspace account. However, you can
             | still get this info (enabled or disabled) as a column in
              | the Workspace Admin console so you can report on people who
              | don't have it enabled. I'm guessing there is also probably
              | a way to alert if it is disabled.
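
              A sketch of pulling that report programmatically. The Admin
              SDK Directory API exposes isEnrolledIn2Sv / isEnforcedIn2Sv
              flags on each user object; Advanced Protection enrollment
              specifically may only be visible in the console, so treat
              this as a 2SV audit. The creds object (an already-authorized
              admin credential) is assumed, and pagination is omitted.

              from googleapiclient.discovery import build

              # Assumes creds was obtained elsewhere, e.g. a service
              # account with domain-wide delegation.
              directory = build("admin", "directory_v1", credentials=creds)
              resp = directory.users().list(customer="my_customer",
                                            maxResults=500).execute()
              for user in resp.get("users", []):
                  if not user.get("isEnrolledIn2Sv"):
                      print("no 2SV:", user["primaryEmail"])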
        
       | wepple wrote:
        | Why did they need to call? They could've phished the password and
        | MFA code by simply MITMing.
        | 
        | Perhaps we need a distinction between phishable MFA and the
        | unphishable U2F/WebAuthn style.
        
         | rsstack wrote:
         | > The caller claimed to be one of the members of the IT team,
         | and deepfaked our employee's actual voice. The voice was
         | familiar with the floor plan of the office, coworkers, and
         | internal processes of the company. Throughout the conversation,
         | the employee grew more and more suspicious, but unfortunately
         | did provide the attacker one additional multi-factor
         | authentication (MFA) code.
         | 
         | > The additional OTP token shared over the call was critical,
         | because it allowed the attacker to add their own personal
         | device to the employee's Okta account, which allowed them to
         | produce their own Okta MFA from that point forward.
         | 
         | They needed to have a couple of minutes to set things up from
         | their end, and then ask for the second OTP code. A phone call
         | works well for that.
        
           | wepple wrote:
           | Ahh, thanks and apologies for not re-reading before asking.
           | 
           | That is indeed interesting; keep the con going a bit longer
           | to get a proper foothold.
        
       | RcouF1uZ4gsC wrote:
        | One thing that is left out is to use unphishable MFA, like
        | hardware security keys (YubiKey, etc.).
        
       | macNchz wrote:
        | Beyond having hardware keys, this scenario is why I really try to
        | drive home, in all of my security trainings, the idea that you
        | should instantly short-circuit any situation where you _receive_
        | a phone call (or other message) and someone starts asking for
        | information. It's always okay to say, "actually, let me get back
        | to you in a minute," hang up, and call back on a known phone
        | number from the employee directory, or communicate on a different
        | channel altogether.
       | 
       | Organizationally, everyone should be prepared for and encourage
       | that kind of response as well, such that employees are never
       | scared to say it because they're worried about a
       | snarky/angry/aggressive response.
       | 
       | This also applies to non-work related calls: someone from your
       | credit card company is calling and asking for something? Call
       | back on the number on the back of your card.
        
         | tyingq wrote:
         | >This also applies to non-work related calls: someone from your
         | credit card company is calling and asking for something? Call
         | back on the number on the back of your card.
         | 
          | There are a number of situations, not just credit card ones,
          | where it's impossible or remarkably difficult to get back to
          | the person who had the context for why they were calling.
         | 
         | Your advice holds, of course, because it's better to not be
         | phished. But sometimes it means losing that conversation.
        
         | hinkley wrote:
         | Advice I haven't even followed myself:
         | 
         | It's probably a good idea to program your bank's fraud number
         | into your phone. The odds that someone hacks your bank's
         | Contact Us page are small but not zero.
         | 
         | The bedrock of both PGP and .ssh/known_hosts could be restated
         | as, "get information before anyone knows you need it".
         | 
         | Fraud departments contacting _me_ about potentially fraudulent
         | charges is always going to make me upset. Jury is still out on
         | whether it will always trigger a rant, but the prognosis is not
         | good.
        
           | GauntletWizard wrote:
            | At least once I have gotten a terribly phrased and link-
            | strewn "Fraud Alert" from a bank, reported it to said bank's
            | anti-phishing e-mail address, gotten a personalized mail
            | responding that it was in fact fraud and that they had
            | policies against using third-party subdomains like... And
            | then found out a day later that yes, that was their real
            | new anti-fraud tool and template.
           | 
            | There will need to be jail time for the idiots writing the
            | government standards on these fraud departments, and then
            | for the idiots running these fraud departments, before it
            | gets better.
        
         | dataflow wrote:
         | > this scenario is why I really try to drive home, in all of my
         | security trainings, the idea that you should instantly short
         | circuit any situation where you receive a phone call (or other
         | message) and someone starts asking for information.
         | 
         | The trouble is, calling the number on the back of your card
         | requires actually taking out your card, dialing it, wading
         | through a million menus, and waiting who-knows-how-long for
         | someone to pick up, and hoping you're not reaching a number
         | that'll make you go through fifteen transfers to get to the
         | right agent. People have stuff to do, they don't want to wait
         | around with one hand occupied waiting for a phone call to get
         | picked up for fifteen minutes. When the alternative is just
         | telling your information on the phone... it's only natural that
         | people do it.
         | 
         | Of course it's horrible for security, I'm not saying anyone
         | should just give information on the phone. But the reality is
         | that people will do it anyway, because the cost of the
         | alternative isn't necessarily negligible.
        
           | macNchz wrote:
           | I don't think most people who get scammed this way pause to
           | say "oh, this might be someone stealing my credit card
           | number", then disregard that thought because it's too much of
           | a pain to call back on an official line. Instead I think they
           | don't question the situation at all, or the scammer has
           | enough information to sound sufficiently authoritative. Most
           | non-technical people I've talked to about this are pretty
           | scared of getting scammed, but tell me the thought never
           | crossed their mind they could call back on a trusted number.
           | 
           | I like the "hang up, call back" approach because it takes
           | individual judgment out of the equation: you're not trying to
           | evaluate in real time whether the call is legit, or whether
           | whatever you're being asked to share is actually sensitive.
           | That's the vulnerable area in our brains that scammers
           | exploit.
        
           | josho wrote:
           | Great point. But it could be easily solved with something
           | like: "Call the number on the back of your credit card. Push
           | *5 and when prompted enter your credit card number and you
           | will be immediately connected back to my line"
        
             | dataflow wrote:
             | Or just connect you directly if you call back within a few
             | minutes from the same number they called, no need to press
             | anything. But I guess that's too advanced for 2023
             | technology
        
       | rahidz wrote:
       | >The caller claimed to be one of the members of the IT team, and
       | deepfaked our employee's actual voice. The voice was familiar
       | with the floor plan of the office, coworkers, and internal
       | processes of the company.
       | 
       | Wow that is quite sophisticated.
        
         | skeaker wrote:
          | Highly reminiscent of the sort of social engineering hacks
          | Mitnick would run. In his autobiography, he describes pulling
          | this sort of thing off by starting small: asking lower-ranking
          | employees over the phone for low-risk info like their names,
          | so that when it came time to call higher-ranking ones he had
          | trustworthy-sounding details to refer back to. The attack is
          | clever for sure, but not necessarily any more sophisticated
          | than multiple well-placed calls.
        
         | oldtownroad wrote:
         | And obviously untrue. If you're an employee who just caused a
         | security incident of course you're going to make it seem as
         | sophisticated as possible but considering Retool has hundreds
         | of employees from all over the world, the range of accents is
         | going to be such that any voice will sound like that of at
         | least one employee.
         | 
         | Are you close enough to members of your IT team to recognise
         | their voices but not be close enough to them to make any sort
         | of small talk that the attacker wouldn't be able to respond to
         | convincingly?
         | 
          | If you're an attacker who can do a convincing French accent,
          | pick an IT employee from LinkedIn with a French name. No need
         | to do the hard work of tracking down source audio for a
         | deepfake when voices are the least distinguishable part of our
         | identity.
         | 
         | Every story about someone being conned over the phone now
         | includes a line about deepfakes but these exact attacks have
         | been happening for decades.
        
           | luma wrote:
           | Fully agreed, saying a deepfaked voice was involved without
           | hard proof is deflecting blame by way of claiming magic was
           | involved.
        
             | yieldcrv wrote:
              | I think it's right to be skeptical, but it's also easy to
              | do this if you've identified the employee whose voice to
              | train on. You could even call them and get them to talk for
              | a few minutes if you couldn't find their Instagram.
        
         | bombcar wrote:
         | Sophisticated enough that I'd just suspect the employee unless
         | there was additional proof.
        
         | tough wrote:
         | inside job?
        
           | dmazzoni wrote:
           | Anything's possible, but the simplest explanation (per
           | Occam's razor) is just that the employee was fooled.
           | 
           | Is it plausible that if a good social engineer cold-called a
           | bunch of employees, they'd eventually get one to reveal some
           | info? Yes, it happens quite frequently.
           | 
           | So any suggestion that it was an inside job, or used deep
           | fakes, or something like that would require additional
           | evidence.
           | 
           | Kevin Mitnick's "The Art of Deception" covers this
           | extensively. The first few calls to employees wouldn't be
           | attempts to actually get the secret info, it'd be to get
           | inside lingo so that future calls would sound like they were
           | from the inside.
           | 
           | For example, the article says the caller was familiar with
           | the floor plan of the office.
           | 
           | The first call might be something like "Hey, I'm a new
           | employee. Where are the IT staff, are they on our floor?" -
           | they might learn "What do you mean, everyone's on the 2nd
           | floor, we don't have any other floors. IT are on the other
           | side of the elevators from us."
           | 
           | They hang up, and now with their next call they can pretend
           | to be someone from IT and say something about the floor plan
           | to sound more convincing.
        
           | mistrial9 wrote:
            | how's that Zero Trust architecture working out for everyone?
        
             | Wojtkie wrote:
             | They mention in the article that their zero-trust
             | architecture is what prevented the attacker from gaining
             | access to on-prem data. So it seemed like it worked pretty
             | well in mitigating the damage.
        
             | wepple wrote:
             | What's this got to do with zero trust?
        
               | mistrial9 wrote:
                | it is a cynical comment that is meant to highlight the
                | relationship between _humans_, where oppressive and
                | untrusting employment leads to an increase in antipathy,
                | ill-will, and feelings of being abused, all of that
                | leading to insider theft and serious premeditated
                | betrayal?
        
               | wyldberry wrote:
                | Zero Trust is such bad branding for how the
                | architecture works. It's just "always prove"
                | architecture.
        
               | tough wrote:
                | It does seem to sound pretty good to the executives
                | signing the deals who hear the marketing talk.
        
       | j-bos wrote:
        | Where can one find a breakdown of how to build a TOTP
        | generator? For curiosity's sake.
        
         | deathanatos wrote:
         | The basic premise is in
         | https://datatracker.ietf.org/doc/html/rfc6238, although today
         | I'd use SHA-256, not SHA-1, if possible.
         | 
          | But I'd pass on TOTP in favor of hardware tokens that can sign
          | explicit requests.
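
          A compact sketch of that RFC 6238 flow, built on RFC 4226's
          HOTP dynamic truncation, standard library only. Shown with
          SHA-1 because that is what most services still expect; pass
          hashlib.sha256 where the other side supports it.

          import base64, hashlib, hmac, struct, time

          def totp(secret_b32: str, digits: int = 6, period: int = 30,
                   digestmod=hashlib.sha1) -> str:
              key = base64.b32decode(secret_b32.upper())
              counter = int(time.time()) // period      # moving factor
              msg = struct.pack(">Q", counter)          # 8-byte big-endian
              digest = hmac.new(key, msg, digestmod).digest()
              offset = digest[-1] & 0x0F                # dynamic truncation
              code = struct.unpack(">I", digest[offset:offset + 4])[0]
              code &= 0x7FFFFFFF                        # clear the sign bit
              return str(code % 10**digits).zfill(digits)

          # The base32 secret is the same value a provisioning QR carries.
          print(totp("JBSWY3DPEHPK3PXP"))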
        
       | alsodumb wrote:
        | Maybe it's just me, but I am really skeptical about the deepfake
        | part - it's a theoretically possible attack vector, but the only
        | evidence they possibly could have to support this statement would
        | be the employee's testimony. Targeting a particular employee with
        | the voice of a specific person this employee knows requires a lot
        | of information and insider info.
        | 
        | Also, I think the article spends a lot of effort trying to blame
        | Google Authenticator and make it seem like they had the best
        | possible defense and yet attackers managed to get through because
        | of Google's error. Nope, not even close. They would have had
        | hardware 2FA if they were really concerned about security. Come
        | on guys, it's 2023 and hardware tokens are cheap. It's not even a
        | consumer product where one can say that hardware tokens hinder
        | usability. It's a finite set of employees, who need to do MFA at
        | certain times for certain services, mostly using one device.
        | Just start using hardware keys.
        
         | dvdhsu wrote:
         | Hi, David, founder @ Retool here. We are currently working with
         | law enforcement, and we believe they have corroborating
         | evidence through audio that suggests a deepfake is likely. (Put
         | another way, law enforcement has more evidence than just the
         | employee's testimony.)
         | 
         | (I wish we could blog about this one day... maybe in a few
         | decades, hah. Learning more about the government's surveillance
         | capabilities has been interesting.)
         | 
         | I agree with you on hardware 2FA tokens. We've since ordered
         | them and will start mandating them. The purpose of this blog
         | post is to communicate that what is traditionally considered
         | 2FA isn't actually 2FA if you follow the default Google flow.
         | We're certainly not making any claims that "we are the world's
         | most secure company"; we are just making the claim that "what
         | appears to be MFA isn't always MFA".
         | 
         | (I may have to delete this comment in a bit...)
        
       | bawolff wrote:
       | While the google cloud thing is a weird design, that seems like
       | the wrong place to blame.
       | 
        | TOTP- and SMS-based 2FA are NOT designed to prevent phishing. If
        | you care about phishing, use YubiKeys.
        
       | rolobio wrote:
       | Very sophisticated attack, I would bet most people would fall for
       | this.
       | 
       | I'm surprised Google encourages syncing the codes to the cloud...
       | kind of defeats the purpose. I sync my TOTP between devices using
        | an encrypted backup; even if someone got that file, they could
        | not use the codes.
        | 
        | FIDO2 would go a long way to help with this issue. There is no
        | code to share over the phone. FIDO2 can also detect the domain
        | making the request, and will not provide the correct code even if
        | the page looks correct to a human.
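
        A conceptual sketch of why that works: in WebAuthn the browser,
        not the page, writes the real origin into clientDataJSON, and
        the authenticator signs over its hash plus RP-scoped data, so a
        look-alike domain cannot produce a verifiable response. Names
        and expected values below are illustrative, not a full
        implementation.

        import hashlib, json

        EXPECTED_ORIGIN = "https://retool.okta.com"
        EXPECTED_RP_ID_HASH = hashlib.sha256(b"okta.com").digest()

        def check_phishing_binding(client_data_json: bytes,
                                   auth_data: bytes) -> None:
            client_data = json.loads(client_data_json)
            if client_data["origin"] != EXPECTED_ORIGIN:
                # A phishing page yields a different, browser-set origin.
                raise ValueError("origin mismatch: " + client_data["origin"])
            if auth_data[:32] != EXPECTED_RP_ID_HASH:
                # Credentials are scoped to an RP ID; wrong site, wrong hash.
                raise ValueError("wrong relying party")
            # ...then verify the signature over
            # auth_data + sha256(client_data_json) with the stored public key.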
        
         | tongueinkek wrote:
         | [dead]
        
         | softfalcon wrote:
         | >FIDO2 can also detect the domain making the request, and will
         | not provide the correct code even it the page looks correct to
         | a human.
         | 
         | I could not agree more with this sentiment! We need more of
         | this kind of automated checking going on for users. I'm tired
          | of seeing "just check for typos in the URL" or "make sure it's
         | the real site!" advice given to the average user.
         | 
         | People are not able to do this even when they know how to
         | protect themselves. Humans tire easily and are often fallible.
         | We need more tooling like FIDO2 to automate away this problem
         | for us. I hope the adoption of it will go smoothly in years to
         | come.
        
           | miki123211 wrote:
           | The problem with Fido (and other such solutions, including
           | smartphone-based passkeys) is that they make things extremely
           | hard if you're poor / homeless / in an unsafe / violent
           | family situation and therefore change devices often. It's
           | mostly a non-issue for Silicon Valley tech employees working
           | solely on their corporate laptops, and U2F is perfect for
           | that use-case, but these concerns make MFA a non-starter for
           | the wider population. We could neatly sidestep all of these
           | issues with cloud-based fingerprint readers, but the privacy
           | advocates won't ever let that happen.
        
             | softfalcon wrote:
              | You're right, software security is only really available to
              | rich and tech-minded folks.
             | 
             | That's kind of what I was trying to get at with my previous
             | statement about humans being tired and fallible. The way we
             | access and protect our digital assets feels incredibly un-
             | human to me. It's wrapped up in complexity and difficulty
              | that is forced upon the user (or kept away from them, if
              | you want to look at it that way).
             | 
             | As it is now, all of the solutions are only really
             | available to someone who can afford it (by life
             | circumstance, device availability, internet, etc) and those
             | who can understand all the rules they have to play by to be
             | safe. It's a very un-ideal world to live in.
             | 
             | When I brought up FIDO2, I was less saying "FIDO2 is the
             | answer" and more saying, "we need someone to revolutionize
             | the software authentication and security landscape because
             | it is very very flawed".
        
             | luma wrote:
             | Biometrics aren't a great key because they cannot generally
             | be revoked. This isn't a privacy concern, it's a security
             | problem. You leave your fingerprints nearly everywhere you
             | go, and they only need to be compromised once and then can
             | never be used again. At best, you can repeat this process a
             | sum total of 10 times without taking your shoes off to
              | log in.
        
             | supertrope wrote:
             | Stronger security can also help the marginalized. If your
             | abusive SO has the phone plan in their name they can order
             | up a new SIM card and reset passwords on websites that way
             | too often fallback from "two factor" to SMS as a root
             | password.
        
         | bawolff wrote:
         | > I'm surprised Google encourages syncing the codes to the
         | cloud... kind of defeats the purpose.
         | 
          | Depends on what you think the purpose is. People talk about
          | TOTP solving all sorts of problems, but in practice the only
          | one it really solves for most setups is people choosing bad
          | passwords or reusing passwords on other insecure sites. Pretty
          | much every other threat model for it is wishful thinking.
          | 
          | While I also think the design decision is questionable, the
          | gain in security from people not constantly losing their phone
          | probably outweighs, for the average person, the loss of
          | security from it all being in a cloud account (as a Google
          | account is, for most people, probably one of their best-
          | secured accounts).
        
           | luma wrote:
           | TOTP is helpful when you don't fully trust the input process.
           | If rogue javascript is grabbing creds from your page, or the
           | client has a keylogger they don't know about, TOTP can help.
        
             | hinkley wrote:
             | Blizzard was one of the first large customers of TOTP, and
             | what we learned from that saga is that 1) keyloggers are a
             | problem and 2) impersonating people for TOTP interactions
             | is profitable even if you're only a gold farmer.
             | 
             | The vector was this: Blizzard let you disable the
             | authenticator on your account by asking for 3 consecutive
             | TOTP outputs from your device. That would let you delete
             | the authenticator from your account.
             | 
             | The implementation was to spread a keylogger as a virus,
              | and when it detected a Blizzard login, it would grab the
              | code as you typed it, and make sure Blizzard got the wrong
             | value when you hit submit. Blizzard would say try again,
             | and the logger would collect the next two values, log into
             | your account, remove the authenticator and change your
             | password.
             | 
             | By the time you typed in the 4th attempt to log in, you'd
             | already be locked out of your account, and by the time you
             | called support, they would already have laundered your
             | stuff.
             | 
             | This was targeting 10 million people for their imaginary
             | money and a fraction of their imaginary goods. On the one
             | hand that's a lot of effort for a small payoff. On the
             | other, maybe the fact that it was so ridiculous insulated
             | them from FBI intervention. If they were doing this to
             | banks they'd have Feds on them like white on rice. But it
             | definitely is a proof of concept for something much more
             | nefarious.
        
             | bawolff wrote:
             | No it can't.
             | 
             | The rouge javascript or keylogger would just steal the totp
             | code, prevent the form submission, and submit its own form
             | on the malicious person's server.
             | 
              | Not to mention, if your threat model includes an attacker
              | who has hacked the server and added javascript, why doesn't
              | the attacker just take over the server directly?
             | 
              | If the attacker installed a keylogger, why don't they just
              | install software to steal your session cookies?
             | 
             | This threat model doesn't make sense. It assumes a powerful
             | attacker doing the hard attack and totally ignoring the
             | trivially easy one.
        
               | thayne wrote:
               | > attacker has hacked the server and added javascript
               | 
               | adding javascript doesn't necessarily mean the server is
               | hacked. XSS attacks usually don't require actually
               | compromising the server. Or a malicious browser plugin
               | could inject javascript onto a site.
        
               | hinkley wrote:
               | rogue javascript. It's naughty, not red.
        
               | timando wrote:
               | > Not to mention if your threat model includes attacker
               | has hacked the server and added javascript, why doesn't
               | the attacker just take over the server directly?
               | 
                | If the attacker can only hack the server that hosts your
                | SPA, but not your API server, they can inject javascript
                | into it, but can't do a lot beyond that.
        
               | bawolff wrote:
                | So assuming server-side compromise, not XSS: in theory
                | the servers can be isolated; in practice it's rare for
                | people to do a good job with this except at really big
                | companies.
                | 
                | Regardless, if they got your SPA, they can replace the
                | HTML, steal credentials, act as users, etc. Sure, the
                | attacker might want something more, but this is often
                | more than enough to do anything the attacker might want
                | if they are patient enough. Certainly it's more than
                | enough to do anything TOTP would protect against.
        
           | wayfinder wrote:
            | Well, all Google needed to do to make this at least a little
            | harder is encrypt the backup with a password.
            | 
            | The user can still put in an insecure password, but uploading
            | all your 2FA tokens to your primary email unencrypted is
            | basically willingly putting all your eggs in one basket.
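
            One way to do what this comment asks for: derive a key from a
            passphrase and encrypt the seeds before they leave the device.
            A minimal sketch using the third-party cryptography package;
            the KDF parameters and storage format are illustrative
            assumptions, not Google's actual scheme.

            import base64, json, os
            from cryptography.fernet import Fernet
            from cryptography.hazmat.primitives import hashes
            from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

            def encrypt_seeds(seeds: dict, passphrase: str) -> bytes:
                salt = os.urandom(16)
                kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                                 salt=salt, iterations=600_000)
                key = base64.urlsafe_b64encode(kdf.derive(passphrase.encode()))
                token = Fernet(key).encrypt(json.dumps(seeds).encode())
                return salt + token  # store the salt with the ciphertext

            blob = encrypt_seeds({"github": "JBSWY3DPEHPK3PXP"},
                                 "correct horse battery staple")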
        
         | Guvante wrote:
          | On the other hand, having your device die means that without
          | cloud backup you either lose access, or whoever was relying on
          | that 2FA needs to fall back on something else to authenticate
          | you.
          | 
          | After all, if I can bypass 2FA with my email, whether 2FA is
          | backed up to the cloud doesn't matter from a security
          | standpoint.
          | 
          | Certainly I would agree with the assertion that an opt-out for
          | providers of codes would be nice, even if it is an auto-
          | populated checkbox based on the QR code.
        
           | pushcx wrote:
            | The workaround I've seen is to issue a user two 2FA keys,
            | one for regular use and one to store securely as a backup. If
            | they lose their primary key, they have the backup until a new
            | backup can be sent to them. Using a backup may prompt partial
            | or total restriction until a security check can be done. If
            | they lose both, yes, there needs to be some kind of reauth.
            | In a workplace context like this it's straightforward to
            | design a high-quality reauth procedure.
        
         | [deleted]
        
         | hn_throwaway_99 wrote:
         | > Very sophisticated attack, I would bet most people would fall
         | for this.
         | 
         | No. If you think people at your company would fall for this,
         | then IMO you have bad security training. The simple mantra of
         | "Hang up, lookup, call back"
         | (https://krebsonsecurity.com/2020/04/when-in-doubt-hang-up-
         | lo...) would have prevented this.
         | 
         | Literally like 99% of social engineering attacks would be
         | prevented this way. Seriously, make a little "hang up, look up,
         | call back" jingle for your company. Test it _frequently_ with
         | phishing tests. It _is_ possible in my opinion to make this an
         | ingrained part of your corporate culture.
         | 
         | Agree that things like security keys should be in use (and
         | given Retool's business I'm pretty shocked that they weren't),
         | but there are other places that the "hang up, look up, call
         | back" mantra is important, e.g. in other cases where finance
         | people have been tricked into sending wires to fraudsters.
        
           | roywiggins wrote:
           | They just have to catch someone half-awake, or already very
           | stressed out, or otherwise impaired once.
        
           | pvg wrote:
           | The ineffectiveness of "security training" is precisely why
           | TOTP is on its way out - you couldn't even train Google
           | employees to avoid getting compromised.
        
             | hn_throwaway_99 wrote:
             | IMO most of this is because most security training I've
             | seen is abysmal. It's usually a "check the box" exercise
             | for some sort of compliance acronym. And, because whatever
             | compliance frameworks usually mandate hitting lots of
             | different areas, it basically becomes too much information
             | that people don't really process.
             | 
             | That's why I really like the "Hang up, look up, call back"
             | mantra: it's so simple. It shouldn't be a part of "security
             | training". If corporations care about security, it should
             | be a mantra that corporate leaders begin all company-wide
             | meetings with. It's basically teaching people to be
             | suspicious of any inbound requests, because in this day and
             | age those are difficult to authenticate.
             | 
             | In other words, skip _all_ the rest of  "security
             | training". Only focus on "hang up, look up, call back".
             | Essentially all the rest of security training (things like
             | keeping machines up to date, etc.) should be handled by
             | automated policies anyway. And while I agree TOTP is and
             | should be on its way out, the "hang up, look up, call back"
             | mantra is important for requests beyond just things like
             | securing credentials.
        
           | [deleted]
        
           | yesimahuman wrote:
           | This fails to satisfy one of the core lessons here: trust
           | nothing, not even your own training and culture.
        
             | _jal wrote:
             | So I take it you are employed by someone that allows you to
             | connect to nothing and change nothing? Because if you can
             | do any of those things, your employer is clearly Doing It
             | Wrong, based on your interpretation.
             | 
             | (If you happen to be local-king, flip the trust direction,
             | it ends up in the same place.)
        
           | devjab wrote:
            | I've done 6 different versions of "security training" as well
            | as "GDPR training" over the past few years. I think they are
            | mostly tools for draining company money and wasting time.
            | About the only thing I remember from any of it is getting a
            | GDPR answer wrong because I didn't realise your shoe size was
            | personal information, and it made me laugh that I had failed
            | that quiz right after I had been GDPR certified by some other
            | training tool.
           | 
            | If we look at the actual data, we have seen a reduction in
            | employees who fall for phishing emails. Unfortunately we
            | can't really tell if it's the training or if it's the company
            | story about all those millions that got transferred out of
            | the company when someone fell for a CEO phishing scam. I'm
            | inclined to think it's the latter, considering how many
            | people you can witness letting the training videos run
            | without sound (or without paying attention) when you walk
            | around on the days a new video comes out.
           | 
            | The only way to really combat this isn't with training and
            | awareness; it's with better security tools. People are going
            | to do stupid things when they are stressed out and it's
            | Thursday afternoon, so it's better to make sure they at least
            | need an MFA factor that can't be beaten as easily as SMS, MFA
            | spamming and so on.
        
             | hn_throwaway_99 wrote:
             | To emphasize, I 100% agree with you. I'm not arguing for
             | _more_ security training, I 'm arguing for _less_.
             | 
             | "Hang up, look up, call back". That's it. Get rid of pretty
             | much all other "security training", which is just a box
             | ticking exercise for most people anyway.
             | 
             | I also agree with the comment about better security tools,
             | but that's why I think "hang up, look up, call back" is
             | still important, because it teaches people to be
             | fundamentally suspicious of inbound requests even in ways
             | where security tools wouldn't apply.
        
         | rakkhi wrote:
         | Sophisticated... ok
         | 
          | I mean, it's a great reason to use a U2F/WebAuthn second factor
          | that cannot be entered into a dodgy site.
         | 
         | https://rakkhi.substack.com/p/how-to-make-phishing-impossibl...
        
         | duderific wrote:
         | In my company, such a communication would never come via a
         | text, so that would be a red flag immediately. All such
         | communications come via email, and we have pretty sophisticated
         | vetting in place to ensure that no such "sketchy" emails even
         | arrive in our inboxes in the first place.
         | 
         | Additionally, we have a program in place which periodically
         | "baits" us with fake phishing emails, so we're constantly on
         | the lookout for anything out of the ordinary.
         | 
         | I'm not sure what the punishment is for clicking on one of
         | these links in a fake phishing email, but it's likely that you
         | have to take the security training again, so there's a strong
         | disincentive in place.
        
           | rainsford wrote:
           | After initially thinking it was a good idea, I've come to
           | disagree pretty strongly with the idea of phish baiting
           | employees. Telling employees not to click suspicious links is
            | fine, but taking it a step further by constantly "testing"
            | them feels like it places an unfair burden on the employee.
            | As this attack makes clear, well-done targeted phishing can
            | be pretty effective and hard for every employee to detect
            | (and you need _every_ employee to detect it).
           | 
           | Company security should be based on the assumption that
           | someone will click a phishing link and make that not a
           | catastrophic event rather than trying to make employees
            | worried to ever click on anything. And as has been pointed
            | out, that seems a likely result of that sort of testing. If
            | I get put in a penalty box for clicking on fake links from HR
            | or IT, I'm probably going to stop clicking on real ones as
            | well, which doesn't seem like a desirable outcome.
        
             | wayfinder wrote:
             | Every company I've worked with has phish baited employees
             | and I've never had any problem. It keeps you on your toes
             | and that's good.
             | 
             | What happened in the article -- getting access to one
             | person's MFA one time -- is not exactly a catastrophic
              | event. It's just that, as with most security breaches, a
              | bunch of things happened to line up at the same time to
              | make intrusion possible. (And I skimmed the article, but it
             | sounded like the attacker didn't get that much anyway, so
             | it was not catastrophic.)
             | 
              | And things lining up like that rarely happens, but it
              | happens often enough that every once in a while an
              | article lands on Hacker News, with someone saying it's
              | possible to make things perfectly secure.
        
             | [deleted]
        
         | adamckay wrote:
         | > I'm surprised Google encourages syncing the codes to the
         | cloud... kind of defeats the purpose
         | 
          | Probably so that when you upgrade or lose your phone you
          | don't lose your MFA tokens along with it. Yes, you're meant
          | to note down some recovery codes when you first set it up,
          | but how many "normal people" do that?
        
           | Master_Odin wrote:
            | A number of sites I've signed up for recently have required
            | TOTP to be set up, but did not provide backup codes at the
            | same time. There are a lot of iffy implementations out there.
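            | 
            | Backup codes are cheap to get right. A minimal sketch
            | (TypeScript/Node; the function name is illustrative):
            | generate random codes at enrolment and persist only their
            | hashes, so a database leak doesn't leak usable codes:
            | 
            |     import { createHash, randomBytes } from "node:crypto";
            | 
            |     // Issue one-time recovery codes alongside TOTP setup.
            |     // Show the plaintext once; store only the hashes.
            |     function generateRecoveryCodes(count = 10) {
            |       const plain = Array.from({ length: count }, () =>
            |         randomBytes(5).toString("hex"),  // 10 hex chars
            |       );
            |       const hashed = plain.map((c) =>
            |         createHash("sha256").update(c).digest("hex"),
            |       );
            |       return { plain, hashed };
            |     }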
        
         | halfcat wrote:
         | > I sync my TOTP between devices using an encrypted backup,
         | even if someone got that file they could not use the codes.
         | 
         | What do you use to accomplish this?
        
           | fn-mote wrote:
           | After the sync, you have exactly two devices that you can use
           | to answer the MFA challenge, instead of one. It's a backup.
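            | 
            | The parent doesn't name a tool, but the shape of such a
            | backup is easy to sketch (TypeScript/Node, AES-256-GCM
            | under a scrypt-derived key; the names are illustrative):
            | 
            |     import {
            |       createCipheriv, randomBytes, scryptSync,
            |     } from "node:crypto";
            | 
            |     // Seal the TOTP secrets under a passphrase. The blob
            |     // alone is useless without that passphrase.
            |     function encryptBackup(secrets: string, pass: string) {
            |       const salt = randomBytes(16);
            |       const key = scryptSync(pass, salt, 32);
            |       const iv = randomBytes(12);
            |       const c = createCipheriv("aes-256-gcm", key, iv);
            |       const body = Buffer.concat([
            |         c.update(secrets, "utf8"),
            |         c.final(),
            |       ]);
            |       // Store salt + iv + auth tag + ciphertext together.
            |       return Buffer.concat([salt, iv, c.getAuthTag(), body]);
            |     }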
        
       | xorcist wrote:
       | Stopped reading at "deepfake".
       | 
        | It's the new "advanced persistent threat": a perfect phrase
        | for diverting any responsibility.
       | 
       | (Yes, there are deepfakes. Yes, there are APTs. This is likely
       | neither.)
        
       | pyrolistical wrote:
       | Naming/training issue imo.
       | 
       | We need a better name than MFA.
       | 
        | Something like "personal password-like token that should only
        | be entered into a secure computer, on a specific
        | website/app/field, and never needs to be shared"
        
         | mr_mitm wrote:
          | It's well known that OTP is not immune to phishing. Force
          | your users onto WebAuthn or some other public-key-based
          | second factor if you're aiming to decrease the incident rate.
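          | 
          | The server-side half of that phishing resistance, sketched
          | in TypeScript (the expected origin is an assumption): the
          | signed clientDataJSON names the origin the user actually
          | visited, so an assertion minted on a look-alike domain is
          | rejected:
          | 
          |     // Reject assertions that came from the wrong origin.
          |     function checkClientData(
          |       clientDataJSON: Buffer,
          |       expectedChallenge: string,  // base64url, per request
          |     ) {
          |       const data = JSON.parse(clientDataJSON.toString("utf8"));
          |       if (data.origin !== "https://example.com")  // assumed RP
          |         throw new Error("wrong origin");
          |       if (data.challenge !== expectedChallenge)
          |         throw new Error("stale or replayed challenge");
          |       // ...then verify the signature with the stored key.
          |     }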
        
           | jpc0 wrote:
           | I blame SAML and any other federated login being an
           | "enterprise only" feature on most platforms.
           | 
            | So users get used to sharing passwords between multiple
            | accounts, with no centralised authority for login. That
            | breeds the "hey, what's your password? I need to quickly
            | fix this thing" culture in smaller companies, which should
            | never be a thing in the first place.
           | 
            | If users knew the IT department would never need their
            | passwords and 2FA codes, they would never give them out;
            | the reason they do is that at some point in the past it
            | was a learned behaviour.
        
             | fireflash38 wrote:
              | Ugh, or not being able to generate an API/service token.
              | It just ingrains the bad passwords and password sharing
              | if you have to use passwords everywhere.
        
           | unethical_ban wrote:
           | Well, push based 2fa with "select this number on your 2fa
           | device" helps prevent some vectors. Simple totp doesn't do
           | that.
           | 
           | "Never give your totp or one time code over the phone" is
           | good advice.
           | 
           | "Never give info to someone who called you, call them back on
           | the official number" is another.
           | 
           | This is user error at this point.
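            | 
            | Number matching is simple to sketch (TypeScript/Node; the
            | names and the three-choice count are assumptions). The
            | login page shows one number, the phone shows several, and
            | approval only counts if the user picks the right one:
            | 
            |     import { randomInt } from "node:crypto";
            | 
            |     // An attacker spamming pushes can only make the victim
            |     // guess, and wrong picks surface as explicit denials.
            |     function startPushChallenge() {
            |       const correct = randomInt(10, 100);
            |       const choices = [
            |         correct, randomInt(10, 100), randomInt(10, 100),
            |       ].sort(() => Math.random() - 0.5);
            |       // (A real version would force distinct numbers.)
            |       return { correct, choices };
            |     }
            | 
            |     const approve = (correct: number, picked: number) =>
            |       picked === correct;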
        
             | AshamedCaptain wrote:
              | I disagree. Especially now that companies are centralizing
              | on a couple of 2FA providers (like Okta from TFA), this is
              | just ripe for phishing. Okta itself is terrible at this;
             | they don't consistently use the okta.com domain, so users
             | are at a loss and have basically no protection against
             | impersonators.
        
               | unethical_ban wrote:
               | For okta, if it is set up properly, the user should get
               | push notifications. And in that push notification is a
               | number they need to select to validate the push.
               | 
               | This eliminates credential phishing and "notification
               | exhaustion" where a user just clicks "ok" on an auth
               | request by a bad actor.
               | 
                | As much as I advocate for non-cloud services, what Okta
                | provides is very secure.
        
         | jabroni_salad wrote:
         | man you should see what people are getting up to with evilginx2
          | these days. They are registering homoglyph URLs just for
          | you and running MITMs that pass the real site through 1:1,
          | forwarding you to the real thing once they've skimmed your
          | login token, so you never even notice. The really crappy
          | phishes and jankfest fake sites are pretty much obsolete.
         | 
         | Then they hang out in your inbox for months, learn your real
         | business processes, and send a real invoice to your real
         | customer using your real forms except an account number is
         | wrong.
         | 
         | Then the forensics guy will have to determine every site that
         | can be accessed from your email and if any PII can be seen.
         | What used to be a simple 'hehe i sent spam' is now a 6 month
         | consulting engagement and telling the state's attorney general
         | how many customers were breached.
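          | 
          | One cheap, partial check against the homoglyph half of that
          | (TypeScript; a heuristic only, and an evilginx2-style proxy
          | on a plausible ASCII domain sails right past it). WHATWG
          | URL parsing normalises internationalised hostnames to
          | punycode, so any "xn--" label means the name contained
          | non-ASCII look-alike characters:
          | 
          |     // Flag links whose hostname hides non-ASCII characters.
          |     function looksHomoglyphy(rawUrl: string): boolean {
          |       const host = new URL(rawUrl).hostname;  // punycoded
          |       return host
          |         .split(".")
          |         .some((label) => label.startsWith("xn--"));
          |     }
          | 
          |     // e.g. the well-known Cyrillic look-alike of apple.com:
          |     // looksHomoglyphy("https://xn--80ak6aa92e.com/") === true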
        
         | captn3m0 wrote:
          | I've been thinking along these lines for a while. The whole
          | "factors of authentication" framing, where higher = better,
          | is no longer a good summary of the underlying complexity in
          | modern authn systems.
         | 
         | We need better terminology.
        
         | [deleted]
        
       | account-5 wrote:
        | I wonder how long it'll be before a similar attack happens
        | because someone's / a company's passkeys are synced to the
        | cloud.
        
       ___________________________________________________________________
       (page generated 2023-09-13 23:00 UTC)