[HN Gopher] To uncover a deepfake video call, ask the caller to ...
       ___________________________________________________________________
        
       To uncover a deepfake video call, ask the caller to turn sideways
        
       Author : Hard_Space
       Score  : 660 points
       Date   : 2022-08-08 12:26 UTC (10 hours ago)
        
 (HTM) web link (metaphysic.ai)
 (TXT) w3m dump (metaphysic.ai)
        
       | jacobsenscott wrote:
       | Adversary keeps camera off (bad hair day, broken web cam, low
       | bandwidth, etc). Now how do you verify their identity? (Hint, the
       | same way you would when the camera is on.)
        
       | 17amxn17 wrote:
        
       | pojzon wrote:
        | Slowly, companies using this will pop up to help you pass hiring
        | exams.
        | 
        | Someone skilled answers all the questions for you, wearing your
        | face, and you get hired for a few months until they realize you
        | are a con.
        
       | keepquestioning wrote:
        
       | GuB-42 wrote:
       | It reminds me of the meme where guys sing over "Evanescence -
       | Bring Me To Life" with the snapchat gender swap filter on. The
       | female vocals are done facing the camera, showing a female face,
       | the male vocals are done sideways. Turning sideways effectively
       | disables the filter, showing the real (male) face.
       | 
        | Just look it up (or go here if you feel lazy:
        | https://digg.com/2019/bring-me-to-life-gender-swap )
        
       | 3jckd wrote:
       | Source: I work in the field.
       | 
        | This is a current limitation, an artifact of the data and method,
        | but not something that should be relied upon.
       | 
       | If we do some adversary modelling, we can find two ways to work
       | around this:
       | 
        | 1) actively generate and search for such data; perhaps expensive
        | for small actors, but not for well-equipped malicious ones.
       | 
        | 2) wait for deep learning to catch up, e.g. by extending NeRFs
        | (neural radiance fields) to faces; it's a matter of time.
       | 
        | Now, if your company/government is on the bleeding edge of ML-
        | based deception, they can have such a policy, and they will update
        | it in 12-24 months (or whenever (1) or (2) materialises).
       | However, I don't know one organisation that doesn't have some
       | outdated security guideline that they cling to, e.g. old school
       | password rules and rotations.
       | 
       | Will "turning sideways to spot a deepfake" be a valid test in 5
       | years? Prolly no, so don't base your secops around this.
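        | 
        | As a rough sketch of (1): mine profile-view frames from public
        | footage to fill the data gap. This uses OpenCV's stock profile-
        | face Haar cascade (the cascade file ships with OpenCV); the
        | source video path is a placeholder:
        | 
        |     import os
        |     import cv2
        | 
        |     # Stock OpenCV cascade for (left-facing) profile faces.
        |     profile = cv2.CascadeClassifier(
        |         cv2.data.haarcascades + "haarcascade_profileface.xml")
        | 
        |     os.makedirs("profiles", exist_ok=True)
        |     cap = cv2.VideoCapture("target_footage.mp4")  # placeholder
        |     saved = 0
        |     while True:
        |         ok, frame = cap.read()
        |         if not ok:
        |             break
        |         gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        |         # The cascade only finds left-facing profiles; flip the
        |         # frame to catch right-facing ones too.
        |         for img in (gray, cv2.flip(gray, 1)):
        |             for (x, y, w, h) in profile.detectMultiScale(img, 1.1, 5):
        |                 cv2.imwrite(f"profiles/{saved:06d}.png",
        |                             img[y:y + h, x:x + w])
        |                 saved += 1
        |     cap.release()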
        
         | dylan604 wrote:
         | >Will "turning sideways to spot a deepfake" be a valid test in
         | 5 years? Prolly no, so don't base your secops around this.
         | 
         | We'll just ask them to do "the Linda Blair". If they can turn
         | their head 360 degrees, prolly a deepfake ;P
        
         | stcredzero wrote:
         | _1) actively generate and search for such data_
         | 
         | What about doing a bunch of video calls, and asking for callers
         | to show their profile, "to guard against deepfakes?"
        
         | cpach wrote:
         | As far as I can see, secops is an eternal cat-and-mouse game.
        
           | Scoundreller wrote:
           | job security
        
             | cpach wrote:
             | Indeed (:
        
           | guerrilla wrote:
           | Literally an arms race.
        
             | HPsquared wrote:
             | Face race?
        
               | bbarnett wrote:
        
               | dang wrote:
               | Would you please stop posting unsubstantive and/or
               | flamebait comments to HN? You've been doing it
               | repeatedly, it's against the site guidelines, we end up
               | banning such accounts, and we've had to warn you more
               | than once before.
               | 
               | If you'd please review
               | https://news.ycombinator.com/newsguidelines.html and
               | stick to the rules when posting here, we'd appreciate it.
               | 
               | Note this one, just to pick one example:
               | 
               | " _Eschew flamebait. Avoid unrelated controversies and
               | generic tangents._ "
               | 
               | That's one of the most important rules for avoiding what
               | turns internet threads dumb and nasty.
        
               | bbarnett wrote:
               | Not sure what to say to this one. Women can get
               | sensitive, if requests of them can be seen in an
               | unpleasant light. Women have also been historically
               | tricked into posing for cameras, had their images
               | misused, and are often quite sensitive about it.
               | 
               | I thought my comment was legit, and on topic. If one is
               | going to implement a policy where people have to slowly
               | move their camera around their body, there may be severe
               | misunderstandings ... and an inability to clarify if a
                | bad response runs away on twitter and such.
               | 
               | Support persons should be carefully coached on how to
               | handle this.
               | 
               | I guess all I can say here is, I didn't mean this to be
               | so controversial.
               | 
               | Sorry.
        
           | devteambravo wrote:
           | Some see secops as futile until the tools are here. So we're
           | making those tools instead.
        
         | dheera wrote:
         | The other thing is, why is this even important, when you
         | shouldn't be basing decisions off the other person's race or
         | face in general?
         | 
         | Base everything off the work they do, not how they look.
         | Embracing deepfakes is accepting that you don't discriminate on
         | appearances.
         | 
         | Hell, everyone should systematically deepfake themselves into
         | white males for interviews so that there is assured to be zero
         | racial/gender bias in the interview process.
        
         | simonswords82 wrote:
         | Interesting to bump in to somebody that works in this field.
         | 
         | What do you do in this field?
         | 
         | What's the direction of travel on it?
         | 
         | What makes it worth pursuing at a commercial level? In other
         | words - how is this tech going to be abused/monetized?
        
         | bushbaba wrote:
         | Asking for entropy that's easy for a real human to comply with
          | and difficult for a prebuilt AI is at least a short-term
          | measure. Such as: show me the back of your head, turn
          | sideways, then go from head to feet without cutting the feed.
         | 
         | Easy for a human, difficult for ML/AI
        
         | elondaits wrote:
         | "The Impossible Mission Force has the technical capabilities to
         | copy anyone's face and imitate their voice, so don't base your
         | secops around someone's appearance."
         | 
         | ... yes, because that worked well.
        
           | jhardy54 wrote:
           | > ... yes, because that worked well.
           | 
           | Just to be clear, Mission Impossible is not a documentary.
        
             | salawat wrote:
              | It is, however, a lower bound on whether something is a
              | reasonably foreseeable/precedented area of research.
             | 
             | After all, if the artist can imagine and build a story
             | around it, there'll be an engineer somewhere who'll go "Ah,
             | what the hell, I could do that."
             | 
              | *By Goldblum/Finagle's Law, it is guaranteed said engineer
              | will not contemplate whether they should before
              | implementing it and distributing it to the world.
             | 
              | This is another example of why we can't have nice things.
        
         | wildmanx wrote:
         | It saddens me how many smart people are working in such an
         | unethical field.
        
         | SoftTalker wrote:
         | Last time I applied for a credit card online, they asked me to
         | take a video of myself and turn my head from side to side.
        
           | sockaddr wrote:
           | May I ask what card/Institution? This would be an immediate
           | no for me.
        
             | monksy wrote:
             | I want to know so that I can forward this to lawyers that
             | specialize in biometric privacy law (in IL).
             | 
              | Fuck these biometric data farmers.
        
             | reaperducer wrote:
             | I'd trust the data with a (real, not online) bank more than
              | with most other companies, like Google.
             | 
             | I'd be more worried about people hacking into networked
             | security camera DVRs at stores and cafes and extracting
             | image data from there. Multiple angles. Movement. Some are
             | very high resolution these days. Sometimes they're mounted
             | right on the POS, in your face. Sometimes they're actually
             | in the top bezel of the beverage coolers.
             | 
             | Banks are the hardest way to get this data, not the easiest
             | one.
        
               | kevin_thibedeau wrote:
               | No bank is going to run such a system in house. It will
               | be a contracted service whose data is one breach away
               | from giving fraudsters a firehose of data to exploit
               | their victims.
        
               | rapind wrote:
               | > Banks are the hardest way to get this data, not the
               | easiest one.
               | 
               | Is this statement based on data or a hunch? A quick
               | google turns up a lot of bank data breaches.
        
               | reaperducer wrote:
               | _A quick google turns up a lot of bank data breaches._
               | 
               | Because banks have to report data breaches. Do you think
               | every neighborhood Gas-N-Blow is publicizing, or even
               | knows, that it's been hacked?
        
               | rapind wrote:
               | Good point. I'm still wary of just assuming (if that's
               | what we're doing here?) that old established
               | organizations you'd expect to be secure are in fact
               | secure. For example I would have expected credit rating
               | agencies to be secure...
               | 
               | Mandatory reporting certainly helps IMO. Reporting should
               | be mandatory for anyone handling PII.
        
               | hdhdjdjd wrote:
        
               | monksy wrote:
               | You would? You would trust a random number to call you
               | and talk to you about your bank account?
               | 
               | (That's what Chase's fraud department tells you to do..
               | no joke)
        
               | mulmen wrote:
               | "I trust you more than Google" is a pretty low bar in
               | terms of personal data.
        
           | spurgu wrote:
            | Yeah, I've noticed this is quite common with fintech KYC
            | nowadays (stock brokers and crypto, IME).
        
           | golemotron wrote:
           | And now that scan could eventually end up out there
           | someplace.
        
             | sroussey wrote:
             | Agreed. Now they have the data to deep fake you turning
             | your head.
             | 
             | I hope they delete the data immediately after use.
        
               | notahacker wrote:
               | Frankly, of all the personally identifying data I share
               | with my bank, a low resolution phone video of the side of
               | my head is the least worrying. It's like worrying the
               | government knows my mum's maiden name!
               | 
               | In the eventuality that robust deepfake technology to
               | provide fluid real-time animation of my head from limited
               | data sources exists and someone actually wants to use it
               | against me, they can probably find video content
               | involving the side of my head from some freely available
               | social network anyway.
        
               | tacocataco wrote:
               | I've been looking to rent housing and get a new job the
               | last few months. The amount of info I've sent strangers
               | always worries me.
               | 
               | At least with housing they don't ask me to input the
               | information I've already sent them into their crappy
               | website.
        
               | SoftTalker wrote:
               | And, if deepfake technology becomes so easy to use, video
               | of your face will no longer serve to identify you.
        
               | Philadelphia wrote:
               | The implementation I've seen only stores a hash based on
               | the image analysis
        
           | april_22 wrote:
           | Yes I believe this sideways turning thing is mandatory when
           | doing online identifications
        
             | marssaxman wrote:
             | What is an "online identification"? In what context would
             | such a thing occur?
        
           | nanomonkey wrote:
           | This sounds like a great way to get sufficient images/video
           | of you to create a deepfake that could pass this test.
           | Hmmm...
        
             | oconnor663 wrote:
             | New mandatory security rule: Employees must never turn
             | their heads side to side in a meeting.
        
               | cratermoon wrote:
               | Interesting that you bring that up. The most egregiously
               | invasive student and employee monitoring software
               | requires that the subject always face the camera. That
               | seems most ripe for bypassing with the current state of
                | deepfakes. https://www.wired.com/story/student-monitoring-software-priv...
        
               | mirkules wrote:
                | Microsoft Teams developed a feature where, if you're using
               | a background and turn sideways, your nose and the back of
               | your head are automatically cut off.
               | 
               |  _Bug closed, no longer an issue, overcome by events._
        
               | throwaway284534 wrote:
               | I work as a Digital Gardener[1] and we're trained to
               | NEVER use our real name.
               | 
               | - [1] https://youtu.be/XQLdhVpLBVE
        
             | Gigachad wrote:
             | My bank does a much better system where they ask for a
             | photo of you holding your ID and a bit of paper with a
             | number the support person gave you for authorizing larger
             | transactions. It's still not bullet proof but since you
             | already have to be logged in to the app to do this, I'd say
             | it is sufficient.
        
         | kazinator wrote:
         | OK, you passed the yokogao test. Now take a crayon and draw an
         | X on your cheek.
        
         | Strom wrote:
         | > _This is a current limitation_
         | 
         | The thing with any AI/ML tech is that current limitations are
         | always underplayed by proponents. Self-driving cars will come
         | out next year, every year.
         | 
         | I'd say that until the tech actually exists, this is a great
         | way to detect live deepfakes. Not using the technique just
         | because maybe sometime in the future it won't work isn't very
         | sound.
         | 
         | For an extreme opponent you may need additional steps. So this
         | sideways trick probably isn't enough for CIA or whatnot, but
         | that's about as fringe as you can get and very little generic
         | advice applies anyway.
        
           | pclmulqdq wrote:
           | The only person who is promising self driving cars next year
           | (and has done so every year for the past 5 years) is Elon
           | Musk. Most respectable self-driving car companies are both
           | further along than Tesla and more realistic about their
           | timelines.
        
             | Strom wrote:
             | Let's take a look at some of those realistic timelines. A
             | quick googling gave me a very helpful listicle by
             | VentureBeat from 2017, titled _Self-driving car timeline
             | for 11 top automakers_. [1]
             | 
             | Some examples:
             | 
             | Ford - _Level 4 vehicle in 2021, no gas pedal, no steering
             | wheel, and the passenger will never need to take control of
             | the vehicle in a predefined area._
             | 
             | Honda - _production vehicles with automated driving
             | capabilities on highways sometime around 2020_
             | 
              | Toyota - _Self-driving on the highway by 2020_
             | 
             | Renault-Nissan - _2020 for the autonomous car in urban
             | conditions, probably 2025 for the driverless car_
             | 
             | Volvo - _It's our ambition to have a car that can drive
             | fully autonomously on the highway by 2021._
             | 
             | Hyundai - _We are targeting for the highway in 2020 and
             | urban driving in 2030._
             | 
             | Daimler - _large-scale commercial production to take off
             | between 2020 and 2025_
             | 
             | BMW - _highly and fully automated driving into series
             | production by 2021_
             | 
             | Tesla - _End of 2017_
             | 
             | It certainly wasn't just Tesla who was promising self-
             | driving cars any second now. Tesla was definitely the most
              | aggressive, but failed to meet its goals just like every
             | other manufacturer.
             | 
             | --
             | 
              | [1] https://venturebeat.com/2017/06/04/self-driving-car-timeline...
        
               | ghaff wrote:
               | There was definitely a period when everyone (for certain
               | values of same) felt they needed to get into a game of
               | topper with increasingly outlandish claims. Because if
                | they didn't, people on, say, forums like this one (and
               | more importantly the stock market) would see them as
               | hopelessly behind.
        
               | throwawaylinux wrote:
               | Wow they all really got suckered by the AI grifters
               | didn't they?
        
             | anticristi wrote:
              | Self-driving cars have been common in Europe for decades.
              | We just use the less cool term "subway" for them.
             | 
             | Sorry, I couldn't resist. :)
        
               | [deleted]
        
               | deaddodo wrote:
               | Subways are common worldwide.
               | 
               | In fact, the first (practical) one was in Boston; not in
               | Europe.
               | 
               | Sorry, I couldn't resist. ;)
        
             | Gigachad wrote:
             | The problem for self driving cars is the risk tolerance. No
             | one cares if a deep fake tool fails once every 100,000
              | hours because it results in a substandard video instead of
             | someone dying.
        
           | make3 wrote:
            | Self-driving cars are a million times harder than this; it's
            | a terrible comparison.
           | 
           | Getting a model to work with images turned sideways is a few
           | lines of code (just turn image sideways at training time).
        
             | kreeben wrote:
             | >> images turned sideways
             | 
             | Instead of pictures of faces, now they're just vertical
             | lines.
        
           | technothrasher wrote:
           | It sounded to me like the parent poster wasn't saying not to
           | use it, but simply that it cannot be relied upon. In other
           | words, a deepfake could fail a 'turn sideways' test and that
           | would be useful, but you shouldn't rely on a 'passing' test.
        
             | kbenson wrote:
             | Another way to think of it might be that it can be relied
             | on - until it can't. Be ready and wary of that happening,
              | but _until then_ you have what's probably a good
              | mitigation of the problem.
        
               | hosh wrote:
                | I think the concern is complacency, and that the inertia
                | of existing security practices leads to security gaps in
                | the future. "However, I don't know one organisation that
               | doesn't have some outdated security guideline that they
               | cling to, e.g. old school password rules and rotations."
               | 
               | Or put another way, humans can't be ready and wary,
               | constantly and indefinitely. At some point, fatigue sets
               | in. People move in and out of the organization. Periodic
               | reviews of security practices don't always catch
                | everything. Why something was implemented gets forgotten
                | by institutional memory. And then there's the cost of
               | retraining people.
        
               | kbenson wrote:
               | The flip side of that is people feeling/assuming there's
                | nothing they can really do with the resources they have,
               | therefore they choose to do nothing.
               | 
               | Also, those that are actively using mitigations that are
               | going to be outdated at some point are probably far more
               | likely to be aware of how close they are to being
                | outdated, by encountering more ambiguous cases and seeing
               | the state of the art progress right in front of them.
               | 
               | As for people sticking to outdated security practices?
               | That's a problem of people and organizations being
               | introspective and examining themselves, and is not linked
               | to any one thing. We all have that problem to a lesser or
               | greater degree in all aspects of what we do, so either
               | you have systems in place to mitigate it or you don't.
        
               | hosh wrote:
               | Therefore, developing and customizing a proper framework
               | for security and privacy starts by accurately assessing
               | statutory, regulatory, and contractual obligations, and
               | the organization's appetite for risks in balance with the
               | organization's mission and vision, _before_ developing
                | the policies and specific practices that
               | organizational members should be doing.
               | 
               | To use a Go (the game, not the language) metaphor,
               | skilled players always assess the whole board rather than
               | automatically make a local move in response to a local
               | threat. What's right for one organization is not going to
               | be right for another. Asking the caller to turn sideways
               | to protect against deepfakes should be considered within
               | the organization's own framework, along with the various
               | risks involved with deepfakes, and many other risks aside
               | from deep fake video calls.
        
             | williamscales wrote:
             | Exactly. Even the article gave a couple cases of convincing
             | profile deepfakes. Admittedly they're exceptional cases,
             | but in general progress tends to be made.
        
           | jksmith wrote:
           | This may be like a proof of work cryptography issue, except
           | the burden of work is on the deep fake. Just ask a battery of
            | questions, just like out of a Blade Runner scene or whatever.
           | This is still the problem with AI. It depends on tons of
           | datasets and connectivity. Human data and human code are kind
           | of the same. Even individually, we can start with jackshit
           | and still come up with an answer, whether right or wrong. Ah,
           | Lisp.
        
           | esotericimpl wrote:
        
           | Nowado wrote:
            | With this particular tech, the previous obvious limitation,
            | namely no blinking, worked for something like a quarter (of a
            | year) from discovery.
            | 
            | The Venn diagram of people someone wants to trick with this
            | particular tech, those who read any security guidelines, and
            | those worth applying this kind of approach to in the first
            | place is, however, pretty narrow for the foreseeable future.
            | It's more of a narrative framing device to talk about 'what
            | to do to uncover a deepfake video call' as a way to present
            | interesting current tech limitations - not that I
            | particularly mind it.
        
             | anticristi wrote:
             | Exactly! Our SecOps includes seeing people regularly. Until
             | deep fakes can fake accents, tone, body language and jokes,
             | we're safe. :)
        
           | owl57 wrote:
           | > Self-driving cars will come out next year, every year.
           | 
           | "Come out" could mean different things in different contexts.
           | Deepfake defence context is analogous to something like:
           | there are cars on public roads with no driver at the wheel.
           | And this is already true in multiple places in the world.
        
             | verdverm wrote:
             | Waymo in Arizona is an example
        
         | kortex wrote:
         | What about reflections? When I worked on media forensics, the
         | reflection discrepancy detector worked extremely well, but was
         | very situational, as pictures were not guaranteed to have
         | enough of a reflection to analyze.
         | 
         | Asking the subject to hold up a mirror and move it around
         | pushes the matte and inpainting problems to a whole nother
         | level (though it may require automated analysis to detect the
         | discrepancies).
         | 
         | I think that too might be spoofable given enough time and data.
         | Maybe we could have complex optical trains (reflection,
         | distortion, chromatic aberration), possibly even one that
         | modulates in real time...this kind of just devolves into a
         | Byzantine generals problem. Data coming from an untrusted pipe
         | just fundamentally isn't trustable.
        
         | mrandish wrote:
         | > so don't base your secops around this.
         | 
         | If it's a high-threat context I don't think live video should
         | be relied on regardless of deep fakes. Bribing or coercing the
         | person is always an alternative when the stakes are high.
        
         | hugobitola wrote:
         | What if the real person draws something on his face? Does the
          | deepfake algorithm remove it from the resulting image? Can you
         | ask the caller to draw a line on his face with a pen as a test?
        
           | drdec wrote:
           | > Can you ask the caller to draw a line on his face with a
           | pen as a test?
           | 
           | I think if the caller did this without objection that would
           | be a bigger indication that it is a deep fake than the
           | alternative. What real person is going to comply with this?
        
         | peoplefromibiza wrote:
         | > Will "turning sideways to spot a deepfake" be a valid test in
         | 5 years? Prolly no, so don't base your secops around this.
         | 
         | couldn't the same thing be said about passwords, 2FA with SMS
         | or asymmetric cryptography?
         | 
         | meanwhile real IDs have been easy to replicate for decades, but
         | are still good enough for the job.
        
         | neximo64 wrote:
         | But currently, it's pretty much a guarantee that you can pick
          | out a deepfake with this method, as no methods currently in
          | use can account for it.
         | 
         | As with any interaction with more than one adversary, there is
          | an infinite escalation and evolution over time. And similarly,
          | something will then come up that is unaccounted for, and so
          | on, and so on.
        
         | WalterBright wrote:
         | I wonder how good the deepfake would be for things it didn't
         | have training data on. For example, making an extreme grimace.
         | Or have the caller insert a ping pong ball in his cheek to
         | continue, or pull his face with his fingers.
         | 
         | One thing I notice with colorized movies is the color of the
         | actor's teeth tends to flicker between grey and ivory. I wonder
         | if there are similar artifacts with deep fakes.
        
           | bgro wrote:
           | Please drink a verification can to continue, caller.
        
             | robocat wrote:
             | Meme written in 2013(?), set in 2018, playing Halo 2k19:
              | https://gamefaqs.gamespot.com/boards/632877-halo-4/66477630
              | meme branch of https://knowyourmeme.com/memes/doritos-mountain-dew
        
               | Gigachad wrote:
               | If I remember correctly, the context was that Microsoft
               | had made the Kinect mandatory for the Xbox One which
               | wouldn't function without it. And the Kinect was being
               | used for some silly voice/motion control crap.
               | 
                | The extreme reaction and copypastas like this probably
                | led to Microsoft scrapping that idea a few years later.
        
             | scyzoryk_xyz wrote:
             | A can of Ubik please
        
           | antihero wrote:
           | Years and years of having to do increasingly more insane
           | things to log into banking apps until we're fully doing
           | karaoke in our living rooms or stripping nude to reveal our
           | brand tattoos
        
             | notahacker wrote:
             | Plenty of new content for the banks' TikTok followers to
             | enjoy :D
        
           | ErikCorry wrote:
           | "Please put one finger behind each ear and flap them at me."
        
             | anticristi wrote:
             | I had to laugh with tears at this one. :)
        
           | pcrh wrote:
           | Shoe on head?
        
       | roessland wrote:
       | Might be a great article but I had to stop reading since I
       | couldn't bear the scroll hijacking.
        
         | budafish wrote:
         | 100% agree. Made me feel a bit nauseous.
        
         | mdp2021 wrote:
         | No issue here. It appears your system allows it.
        
         | nominusllc wrote:
         | I did not experience this, my system doesn't allow it
        
           | Sohcahtoa82 wrote:
           | How did you configure your browser to not allow websites to
           | hijack how scrolling works?
        
         | vrecan wrote:
         | Agreed, as soon as I scrolled once and I noticed it I was gone.
        
         | jwilk wrote:
         | https://archive.today/6Dis6 may work better.
        
       | ArrayBoundCheck wrote:
        | People asked stuff like this 15 years ago (do bunny ears on
       | yourself or pretend to pick your nose). Usually to see if the
       | other person is catfishing with a prerecorded video. It usually
       | happens if the other person types instead of speaks (because it's
       | "late" and people are sleeping)
       | 
       | The only thing interesting about the title is the possibility of
        | real-time deepfakes for calls. If it's not realtime, then 15
        | years ago called and they want their technique back
        
       | JohnJamesRambo wrote:
       | Is audio harder to fake than video? I was watching the Keanu one
        | and wondered if it is harder to fake Keanu's voice in real time
        | than his face?
        
         | 0xedd wrote:
         | No. Both face the same challenge - quality of data. The rest
         | has already been solved.
        
           | goatlover wrote:
           | Does this mean any possible audio or video a real human can
           | do, current ML can fake with enough quality of data? Like
           | there's no possible test a real human can do which can't be
           | faked, given the relevant data?
        
       | evan_ wrote:
       | It sounds like part of this issue is that it loses tracking if it
       | can't see both of your eyes, which of course could be defeated by
       | using a couple of cameras spaced at 45deg to one another and
       | calibrated to work together in some way.
       | 
       | Instead of a "deep fake" face swap an attacker could send virtual
       | video from a fully-virtual environment using something like an
       | nvidia Metahuman controlled by the camera array. I think that
       | would be pretty easily detectable today but maybe less so with an
       | emulated bad webcam and low-res video link. The models/rigging
       | are only going to improve in the future.
       | 
       | The classic "Put a shoe on your head" verification route would
       | still defeat that, at least until someone invents a very good
       | tool to allow those types of models to spawn and manipulate
       | props.
        
       | 12ian34 wrote:
       | Is this to be an empathy test? Capillary dilation of the so-
       | called blush response, fluctuation of the pupil, involuntary
       | dilation of the iris?
        
         | JorgeGT wrote:
         | We call it Voight-Kampff for short.
        
       | bobkazamakis wrote:
       | shoe on head
        
         | eesmith wrote:
         | Vermin Supreme, the leader in the fight against deepfakes.
         | https://en.wikipedia.org/wiki/Vermin_Supreme
        
       | isusmelj wrote:
       | Deepfake models are trained on very similar data. They don't
       | generalize well, usually. E.g. we take lots of data from YouTube
       | videos of a single person under a specific condition (same time,
        | same day, same haircut, etc.). I know that as I spent quite some
       | time researching these models and worked on a deepfake detection
       | startup. Purely looking at it from a technological side, it's a
        | cat-and-mouse game, similar to antivirus software: a new method
        | to create deepfakes appears, and a new detection method is
        | required.
       | 
        | However, we can also make use of the models' failure to properly
        | generalize and the limitations of the training process. Anything
        | that is out of distribution (a very rare occurrence in the
        | training data) will be hard for the model:
        | 
        | - blinking (if the model has only ever seen single frames, it
        | will create rather random, unusual blinking behavior)
        | 
        | - turn around (as mentioned by the author, side views are rarer
        | on the web)
        | 
        | - take off your glasses
        | 
        | - slap your cheek
        | 
        | - draw something on your cheek
        | 
        | - take scissors and cut a piece of your hair
       | 
       | The last two would be especially difficult and funny (:
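        | 
        | As a toy sketch (my own illustration): a liveness check could
        | draw a random subset of such out-of-distribution actions, in a
        | random order, so that a pre-rendered fake can't comply:
        | 
        |     import random
        | 
        |     # Actions that are rare in any plausible training set.
        |     CHALLENGES = [
        |         "turn your head to show your full profile",
        |         "take off (or put on) glasses",
        |         "slap your cheek",
        |         "draw something on your cheek",
        |         "cut a small piece of your hair",
        |     ]
        | 
        |     def issue_challenges(n=2):
        |         # Fresh randomness per call, so responses can't be
        |         # pre-rendered or replayed.
        |         return random.sample(CHALLENGES, n)
        | 
        |     print(issue_challenges())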
        
         | cypress66 wrote:
         | Looking at how fast dall-e is improving, and how it
         | "understands" concepts even if you mix them in crazy ways, all
         | of your later examples seem solvable in less than a decade.
         | 
         | But I don't know much about ML so I might be wrong.
        
         | eckza wrote:
         | > Put shoe on head
        
           | westmeal wrote:
           | Ah yes the OG method of verification
        
           | octoberfranklin wrote:
           | > Put fish on head
           | 
            | https://www.nytimes.com/2007/07/02/technology/02spam.html#:~...
        
             | prox wrote:
             | Meet in person
        
       | jstummbillig wrote:
       | Aaand they just got better at that.
        
       | jng wrote:
       | Ray Kurzweil: "The day it starts working, we're doomed". Reality:
       | "We got convincing front-facing deep fakes! Sideways? Don't
       | worry, it will be ready in just 24 months!"
        
       | diydsp wrote:
        | 8/9/2022: To protect against uncovering, train your models to
        | generate side views.
        
         | tommoor wrote:
         | Yes, but one of the points of the article is a distinct lack of
         | source material to train models on profile views
        
           | e40 wrote:
           | For people like Tom Cruise that shouldn't be a problem.
        
             | renewiltord wrote:
             | > _...,we need to consider the high availability of data
             | for notable Hollywood TV and movie actors. By itself, the
             | TV show Seinfeld represents 66 hours of available footage,
             | the majority featuring Jerry Seinfeld, with abundant
             | profile footage on display due to the frequent multi-person
             | conversations._
             | 
             | > _Matt Damon's current movie output alone, likewise, has a
             | rough combined runtime of 144 hours, most of it available
             | in high-definition._
             | 
             | > _By contrast, how many profile shots do you have of
             | yourself?_
             | 
             | From the article
        
       | david_draco wrote:
       | "profile view challenge" coming in 3, 2, 1 ...
        
         | basilgohar wrote:
         | It's probably not obvious to many that there's nearly a
         | limitless source of training data on social media at this
         | point. Your comment is eerily prescient and now all trends can
         | become suspect as being a plant for additional training to
         | circumvent, well, known circumventions!
        
         | florbo wrote:
         | multiple pan angle 360 arc shot challenge
        
         | schroeding wrote:
         | "Hey! To make sure you stay secure, we require a short video.
         | Please look straight into the camera and tap the screen."
         | 
         | "You look great! We just need you to blink 5 times, and you're
         | almost done!"
         | 
         | "Almost done! Just show us your best side and turn your head to
         | the left like shown above."
         | 
         | "Of course, you only have best sides. Just turn your head to
         | the right like displayed above, and we can continue."
         | 
         | "You've almost got it! Please open your mouth and show us your
         | teeth."
         | 
         | "Wow, look at you go! Just one step remaining: Tilt your head
         | to the right like shown above."
         | 
         | "Now, to complete your verification, hold your national ID
         | beside your face. Make sure it does not obstruct your head! We
         | need to be able to see your pretty face!"
         | 
         | (Tongue in cheek, of course. But my banking app actually uses
          | this _kind_ of language, even for verification stuff, and I
          | don't like it :D)
        
           | Theodores wrote:
           | I think you also need to add video of occluded areas, so
           | backs of ears and nostrils too. Shouldn't be too invasive but
           | you have got to do this so you don't get deep faked.
        
           | SapporoChris wrote:
           | Absurd requests will increase in absurdity as long as there
           | is not significant push back.
        
         | Traubenfuchs wrote:
         | Show your left side _like this_ and your right side _like this_
         | and let others comment which side looks prettier OwO.
        
       | maerF0x0 wrote:
        | Side thought: I really enjoy how closely some of the suggestions
        | (in TFA and comments) resemble reality checks for lucid dreaming.
        | In general: observing something and asking oneself "is this really
        | how reality behaves?" - which is such an interesting question in
        | itself, questioning the nature of reality beyond our own initial
        | perceptions.
       | 
       | https://lucid.fandom.com/wiki/Reality_check
        
       | DenisM wrote:
       | Patiently waiting for the government(s) to step in and start
       | providing a modern ID service - a driver license with a built in
       | private key, a fingerprint unlock, and a PIN.
       | 
        | The combination of the three can still be defeated by someone
        | following you, stealing the card, lifting your fingerprint from a
        | glass, and spying the PIN, but that's a lot of trouble to go
        | through, and online identity fraud would become extinct.
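        | 
        | As a loose illustration only (assume the private key is sealed in
        | the card and unlocks after fingerprint + PIN; this is not a real
        | spec), online verification could be a simple challenge-response:
        | 
        |     import os
        |     from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        |         Ed25519PrivateKey,
        |     )
        | 
        |     card_key = Ed25519PrivateKey.generate()  # sealed in the ID card
        | 
        |     # Verifier: a fresh nonce, so old signatures can't be replayed.
        |     nonce = os.urandom(32)
        | 
        |     # Card: signs only after fingerprint + PIN unlock (not modeled).
        |     signature = card_key.sign(nonce)
        | 
        |     # Verifier: check against the public key on government record.
        |     card_key.public_key().verify(signature, nonce)  # raises if forged
        |     print("holder possesses the card's private key")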
        
         | xmprt wrote:
         | > a driver license with a built in private key
         | 
         | IDs should never be used as secrets. That's like mixing up your
         | username and password.
        
           | DenisM wrote:
           | What practical problem do you envision with this setup?
        
         | orthoxerox wrote:
         | Or by someone kidnapping you and applying a rubber hose to your
         | kidneys until you tell them the PIN.
        
       | testplzignore wrote:
       | Perhaps could ask the caller to perform some other interaction
       | that would be difficult to fake, like drinking a can of Mountain
       | Dew. Maybe make them sing a jingle and do a dance...
        
       | paparush wrote:
       | Face left. Face right. Recite Asimov's 3rd law.
        
       | londons_explore wrote:
       | Slightly more robust method...
       | 
       | Ask the caller to move out of the frame and then back in again.
       | 
        | You will see a noticeable 'step' as the face that is partially in
       | the frame suddenly gets detected as a face and the deepfake is
       | applied.
       | 
       | The only way around this is to crop the input video quite heavily
       | - by at least one face diameter, which is a lot if the user is
       | near the camera.
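        | 
        | A toy sketch of spotting that 'step' automatically (my own
        | heuristic with an arbitrary threshold - an illustration, not a
        | vetted detector):
        | 
        |     import cv2
        | 
        |     cascade = cv2.CascadeClassifier(
        |         cv2.data.haarcascades
        |         + "haarcascade_frontalface_default.xml")
        | 
        |     cap = cv2.VideoCapture("call_recording.mp4")  # placeholder
        |     prev_gray, had_face, i = None, False, 0
        |     while True:
        |         ok, frame = cap.read()
        |         if not ok:
        |             break
        |         gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        |         faces = cascade.detectMultiScale(gray, 1.1, 5)
        |         if len(faces) and not had_face and prev_gray is not None:
        |             # A face just (re)entered: measure the jump inside
        |             # the new face box versus the previous frame.
        |             x, y, w, h = faces[0]
        |             diff = cv2.absdiff(gray[y:y + h, x:x + w],
        |                                prev_gray[y:y + h, x:x + w])
        |             if diff.mean() > 40:  # arbitrary threshold, tune it
        |                 print(f"frame {i}: suspicious step on re-entry")
        |         prev_gray, had_face, i = gray, len(faces) > 0, i + 1
        |     cap.release()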
        
         | robocat wrote:
         | Or pass a piece of paper or splayed fingered hand slowly in
         | front of your face?
        
         | IshKebab wrote:
         | > The only way around this is to crop the input video quite
         | heavily
         | 
         | I mean that sounds a lot easier than making deep fakes work
         | well with profile data surely?
        
           | Epokhe wrote:
           | Ask the caller to move their hand in front of the camera so
           | the hand fully obstructs the view, and then slowly slide the
           | hand to the side until it completely moves out of the view.
           | Crop-resistant!
        
       | ape4 wrote:
        | Also if Nicolas Cage is calling me it's probably fake.
        
         | rexreed wrote:
         | What if it's your CEO? Or someone from the bank? Or a college
         | professor? Or a political prisoner?
        
           | WalterBright wrote:
           | I suppose the same way people used to deal with getting a
           | letter from someone.
        
             | bencollier49 wrote:
             | This is pretty funny. We're going to run the entire gamut
             | of different verification technologies for them all to
             | become compromised, forcing us to return to in-person
             | transactions for everything.
             | 
             | Time to start investing in closed bank branches.
        
       | wakahiu wrote:
       | I was recently looking for designers for my company when I came
       | across an interesting profile on Dribbble. I reached out and
       | quickly scheduled a time when we could talk over zoom. At the
       | meeting time, in comes this person who seems to have a strange-
       | looking, silicone-like face. I was using my Zoom account (I
        | rarely use other people's Zooms unless I trust them), to avoid
       | situations like this. One thing I noticed is that when the
       | candidate touched their face, their fingers would appear to sink
       | into their skin - almost as if it were made of liquid. Secondly,
       | their face appeared larger, lighter and smoother than their neck.
        | I got spooked and immediately let the candidate know that I was
        | not comfortable moving forward.
        | 
        | More interestingly, what exactly are the mechanics of getting a
        | deep fake into a video call? How is it possible that what seems
        | like a deepfake could make its way into my Zoom? Is Zoom enabling
        | external plugins that alter video details?
       | 
       | https://www.dropbox.com/s/4hf9c9kg52nxal0/Screen%20Shot%2020...
        
         | valarauko wrote:
         | For what it's worth, it looks more like an aggressive filter
         | rather than a deepfake.
        
           | foogazi wrote:
           | My thoughts too - but giving benefit of doubt to gp since
           | it's a still shot vs video
        
             | valarauko wrote:
             | Of course - just my opinion. To me, it looks like the
             | combination of low quality webcam + aggressive skin
             | smoothening "beauty" filter.
        
         | EliotBee wrote:
         | Things like OBS (streaming software) can create a virtual
          | camera. I am guessing it's something like that, where Zoom does
         | not even know the camera is not actually real hardware.
        
         | PullJosh wrote:
         | The live-streaming software OBS has a "virtual webcam" feature
         | that can make a generated video feed behave like a hardware
         | webcam. Perhaps something similar is being used to feed
         | generated video into zoom?
        
         | 0xedd wrote:
         | Input for software can be anything. Camera feed can be a
         | generated one and the software consuming it doesn't have to be
         | aware it isn't a real physical camera.
         | 
         | Zoom isn't aware.
        
         | thrashh wrote:
         | You can just make Zoom use any webcam on your system
         | 
         | And you can write your own webcam drivers to use in any program
         | 
         | Or use existing software with virtual webcam output like OBS or
         | ManyCam and write a plug-in for that
         | 
          | Or emit a network video stream and just play your video in
          | that kind of software instead of writing a plugin
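          | 
          | A minimal sketch of that route using the pyvirtualcam library
          | (it needs a virtual camera backend installed, e.g. the one OBS
          | provides; the noise frames stand in for whatever you generate):
          | 
          |     import numpy as np
          |     import pyvirtualcam
          | 
          |     with pyvirtualcam.Camera(width=1280, height=720, fps=30) as cam:
          |         while True:
          |             # Stand-in for a generated (e.g. deepfaked) frame.
          |             frame = np.random.randint(
          |                 0, 256, (720, 1280, 3), dtype=np.uint8)
          |             cam.send(frame)  # Zoom just sees another webcam
          |             cam.sleep_until_next_frame()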
        
         | Benjammer wrote:
         | It's fairly trivial to have a virtual camera source and point
          | Zoom to that as its input. It has nothing to do with
         | integrating deeply with Zoom or getting "into" your Zoom. Check
         | out Snap Camera[0] for an example.
         | 
         | [0] https://snapcamera.snapchat.com/
        
         | [deleted]
        
         | mathverse wrote:
          | Out of curiosity, was that person Asian?
          | 
          | Maybe they were just using some of those beautifying filters
          | like Chinese streamers do.
        
           | Sohcahtoa82 wrote:
           | Zoom has a beautifying filter built into it.
           | 
           | Admittedly, I use it, but I have it set pretty low. My face
           | isn't lit up very well, and without it, in my webcam, my skin
           | ends up looking a lot rougher than it really is.
           | 
           | If I set it to the max, then it just looks like a blurry
           | mess.
        
         | [deleted]
        
       | jiveturkey42 wrote:
       | Imagine what will happen when they start sending deepfake
       | kidnapping ransom videos
        
       | m3kw9 wrote:
       | Prob will mess up for upside down too
        
       | jjk166 wrote:
       | Probably a more robust test would be asking the caller to run
       | their hand through their hair a few times. Maybe you could pre-
       | render a few samples, but it would be trivial to request the
       | person pass their hands through their hair in a specific way, or
       | simply do it again after their hair is already messed up a bit
       | from the first time. It could still be defeated by the caller
       | having the same hair style (or wearing a good wig) as the person
       | they are imitating, but then making someone look like someone
       | else with practical effects has been a thing forever and it has
       | not been a huge problem.
        
         | robocat wrote:
         | > run their hand through their hair
         | 
         | That would have trouble passing anti-discrimination
         | requirements: disability (no hands), medical (bandanna covering
         | cancer treatment hair loss), religious (burka, rasta, yarmulke,
         | sheitel), racist (cornrow).
         | 
          | And trouble with: dreadlocks (can't run fingers through),
          | bald-headed guys (as mentioned by a sibling comment), and
          | people with hairdos (coiffures, hairspray, topknots, plaits,
          | etcetera).
        
           | jjk166 wrote:
           | It doesn't need to be literally their hand through their
           | hair, it just needs to be some action which is easy to
           | perform but complicated to photo-realistically simulate in
           | real time from an arbitrary starting condition. Have them tug
           | on their clothes to see how the fabric moves, have them or a
           | caretaker turn a nearby light on and off such that their
           | illumination changes, etc.
        
         | 0xJRS wrote:
          | In my career I've personally worked with no fewer than half a
          | dozen bald coworkers. I do think this is a good idea, but it
          | won't work for everyone.
        
           | happyopossum wrote:
            | Why wouldn't it still work? Hands in front of faces are
            | already a huge problem for live deepfakes; whether or not the
            | faker or the faked are bald shouldn't make this much easier.
            | The only scenario this wouldn't be extra difficult for is if
            | both the faker and the person being faked are bald, and even
            | then the presence of a hand will likely cause some artifacts.
        
       | jacobsenscott wrote:
        | You don't need to do silly head movements. You could send the
        | other person an email with a password, or a text, or a Signal
        | message, or ask where you last had a drink together or...
       | 
       | If you are concerned that all methods of communications are
       | compromised you wouldn't suddenly trust zoom if they do some
       | silly head movement.
        
       | shiftpgdn wrote:
       | Tangentially related but a simple way to bust a chatbot is to ask
       | "What is larger, the Eiffel Tower or a shoe box?"
        
         | BitwiseFool wrote:
         | I think you're on to something. The modern day chat-bot/answer
          | engines seem very susceptible to trying to answer fact-
         | based, yet obviously incorrect questions. They seem unable to
         | parse the entire question and instead focus on the most generic
         | terms. For instance, the "What year did Neil Armstrong land on
         | Mars?" example that shows up on HN from time to time.
        
         | elicash wrote:
         | Here's what Meta's blenderbot replied with: "The Tokyo tower is
         | taller than the eiffel tower. Interesting facts like that
         | interest me. Do you know about it?"
        
           | TillE wrote:
           | I'm not surprised that it responded with a random unrelated
           | fact, but it is funny that the second sentence is incredibly
           | awkward, and the last one isn't really coherent English.
           | 
           | Just a total AI meltdown from one simple question.
        
             | partdavid wrote:
             | People need to know about the CAN EAT MORE.
             | 
             | https://www.youtube.com/watch?v=CIoBSYpgYRw
             | 
             | For me, I call these Eliza-isms, since it reminds me of its
             | simple formulas like "Can you tell me more about ___" that
             | people got so much mileage out of.
        
           | [deleted]
        
         | hotpotamus wrote:
         | I remember someone posting a chat thread from one of the more
         | advanced AIs within the last few years wherein they asked it
         | who the president of the US is, and it was not able to answer.
         | 
         | Interestingly, this is a question my father would ask patients
         | as a paramedic who was trying to assess people's consciousness.
         | Another would be, "what day of the week is it?".
         | 
         | I'd say that these technologies are just like magic - they can
         | seem to do things that defy your expectations, but oftentimes
         | they fall apart when looked at from a different angle.
        
           | unsupp0rted wrote:
           | I'm not sure there's a basic question we can ask that a lot
           | of human users wouldn't fail. President of the country? Too
           | difficult.
        
             | bombcar wrote:
             | The point isn't to check if they actually know - it's to
             | gauge the response. If they say "I don't know" that may be
             | a valid answer, but if they say "George Bush" then
             | something is seriously wrong.
        
               | notahacker wrote:
                | Also, if a human has to be _told_ a basic fact, they'll
               | generally provide an indication of embarrassment or an
               | excuse or "why are you asking me these questions", not
               | try to continue the conversation with interesting
               | facts...
        
           | bnt wrote:
           | Depending where the bot is and what time of day it is, it
           | might tell you the wrong day of the week.
        
             | robertlagrant wrote:
             | "I was trained on a Thursday... damn."
        
             | PeterisP wrote:
             | For current mainstream text generation models it doesn't
             | really depend on where the bot is and what time of the day
             | it is, that's kind of the whole point - their text
             | generation process simply doesn't use the current time as a
             | possible input factor, these models would provide the exact
             | same result (or random picks from the exact same
             | distribution of potential results) no matter when and where
             | you run them.
             | 
             | They would be expected to answer with something matching
             | the day/time distribution that was represented in the
             | training data they used; like the answer to various prompts
             | of the "current president" question is dominated by Trump,
             | Obama and a bit of Bush and Clinton, simply because those
             | are the presidents in the training data and the more recent
             | events simply aren't there yet - like the many models who
             | have no idea how to interpret the word 'Covid' simply
             | because they have been trained on pre-2020 data even if the
             | model was built and released later.
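              | 
              | A minimal sketch of this, assuming the Hugging Face
              | transformers library (GPT-2 chosen purely for
              | illustration):
              | 
              |     # Wall-clock time is not an input: the same seed gives
              |     # a byte-identical completion whenever you run it.
              |     from transformers import pipeline, set_seed
              | 
              |     generator = pipeline("text-generation", model="gpt2")
              | 
              |     set_seed(42)
              |     out1 = generator("The president of the United States is",
              |                      max_new_tokens=10)[0]["generated_text"]
              | 
              |     set_seed(42)  # run again tomorrow: same result
              |     out2 = generator("The president of the United States is",
              |                      max_new_tokens=10)[0]["generated_text"]
              | 
              |     assert out1 == out2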
        
           | Jiro wrote:
            | Many contexts in which the president is named in training
            | data are political, and nobody's going to put a chatbot on
            | the web without filtering out political material.
        
       | yalogin wrote:
        | If you really want to verify the other end, and if asking them
        | to do something is allowed, you can ask them to do any number of
        | things, can't you? The key is not to turn it into a protocol;
        | that would just ensure a workaround gets built into the deepfake
        | software.
        
       | deedree wrote:
       | You might wanna argue his characteristic nose is the biggest
       | giveaway.
        
       | vivegi wrote:
        | Hybrid video/audio/semantic captcha, perhaps?
       | 
        | An audio prompt like 'Using your _<right|left>_ hand, repeat
        | the numbers that I am signaling. Use _<a different|the same>_
        | set of fingers from what I am using'.
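        | 
        | A toy generator for such challenges (names and phrasing are
        | mine, purely illustrative):
        | 
        |     import random
        | 
        |     # Randomize the challenge so a pre-rendered fake can't
        |     # anticipate it.
        |     def make_challenge():
        |         hand = random.choice(["right", "left"])
        |         finger_rule = random.choice(["a different", "the same"])
        |         digits = [random.randint(0, 5) for _ in range(4)]
        |         prompt = (f"Using your {hand} hand, repeat the numbers "
        |                   f"I am signaling. Use {finger_rule} set of "
        |                   f"fingers from what I am using.")
        |         return prompt, digits
        | 
        |     prompt, digits = make_challenge()
        |     print(prompt, digits)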
        
         | Scoundreller wrote:
         | This is why I got really upset when my employer said the
         | swimsuit competition segment of the interview was past its
         | time. Its time is now!
        
           | robocat wrote:
           | I can wear a swimsuit, but the person verifying the video is
           | not going to enjoy seeing me in a swimsuit, at all.
        
             | Scoundreller wrote:
             | Don't worry, we won't add it to the training models.
        
       | mike_hearn wrote:
       | Long term, the only robust way to solve this is going to involve
       | a remote attestation chain i.e. video that's being signed by the
       | web cam as it's produced, and then transformed/recompressed
       | inside e.g. SGX enclaves or an SEV protected virtual machine
       | that's sending an RA to the other side. Although hard to set up
       | (you need a lot of people to cooperate and CPU vendors have to
       | bring these features back to consumer hardware), it has a lot of
       | advantages over what you might call trick-based approaches:
       | 
       | 1. Robust to AI improvements.
       | 
       | 2. Blocks all kinds of faking and tampering, not just deepfakes.
       | 
       | 3. With a bit of work can securely timestamp the video such that
       | it can become evidence useful for dispute resolution.
       | 
       | 4. Also applies to audio.
       | 
       | 5. Works in the static/offline scenario where you just get a
       | video file and have to check it.
       | 
        | There are probably other advantages too. The way to do such
        | things has been known for a long time; the issue is not any
        | missing piece of tech but simply building a consensus amongst
        | hardware vendors that there's actual market demand for
        | [deep]fake-proof IO.
        | 
        | In reality, deepfakes have been around for some years now, but
        | have there been any reports of actual real-world attacks using
        | them? I haven't heard of any, though maybe there have been one
        | or two. The problem is that that's not enough to sustain a
        | market: attacks have to become pretty common before it's worth
        | throwing anything more than cheap heuristics at them.
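        | 
        | A sketch of the signing idea only (no real SGX/SEV attestation
        | here; the key handling and names are illustrative), using the
        | Python cryptography package:
        | 
        |     import hashlib, time
        |     from cryptography.exceptions import InvalidSignature
        |     from cryptography.hazmat.primitives.asymmetric import ed25519
        | 
        |     device_key = ed25519.Ed25519PrivateKey.generate()  # in camera
        |     public_key = device_key.public_key()   # attested out of band
        | 
        |     def sign_frame(frame: bytes):
        |         ts = time.time_ns()  # signed timestamp (advantage 3)
        |         digest = hashlib.sha256(ts.to_bytes(8, "big") + frame).digest()
        |         return ts, device_key.sign(digest)
        | 
        |     def verify_frame(frame: bytes, ts: int, sig: bytes) -> bool:
        |         digest = hashlib.sha256(ts.to_bytes(8, "big") + frame).digest()
        |         try:
        |             public_key.verify(sig, digest)  # raises on mismatch
        |             return True
        |         except InvalidSignature:
        |             return False
        | 
        |     ts, sig = sign_frame(b"...raw frame...")
        |     assert verify_frame(b"...raw frame...", ts, sig)
        |     assert not verify_frame(b"...tampered...", ts, sig)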
        
         | feanaro wrote:
         | The solution you propose sounds vastly overengineered. Why
         | would we need remote attestation, tampering resistance and
         | enclaves when this is simply a problem of your peers being
         | unauthenticated?
         | 
         | If you care about the identity of who you are speaking to
         | remotely, the only solution is to cryptographically verify the
         | other end, which just requires plain old key distribution and
         | verification. It's just not widespread enough today for
         | videocalls because up to now, there wasn't much need for this.
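          | 
          | For example, plain key verification needs nothing beyond
          | comparing fingerprints out of band (a sketch; the short
          | fingerprint format is my own choice):
          | 
          |     import hashlib
          |     from cryptography.hazmat.primitives import serialization
          |     from cryptography.hazmat.primitives.asymmetric import ed25519
          | 
          |     key = ed25519.Ed25519PrivateKey.generate()
          | 
          |     def fingerprint(public_key) -> str:
          |         raw = public_key.public_bytes(
          |             encoding=serialization.Encoding.Raw,
          |             format=serialization.PublicFormat.Raw,
          |         )
          |         # Short hex form both sides can read aloud and compare
          |         # against a value exchanged earlier in person.
          |         return hashlib.sha256(raw).hexdigest()[:16]
          | 
          |     print(fingerprint(key.public_key()))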
        
           | judge2020 wrote:
            | How do you verify they are who they say they are, though?
            | And that their picture matches their name?
        
         | IMSAI8080 wrote:
          | I think it would be useful if news outlets signed their video
          | content using watermarking techniques. Then social media sites
          | where news is shared could automatically check for the
          | recognised signatures of major outlets and give the video a
          | checkmark or something. The signature could easily be removed,
          | but video without the checkmark would then be suspicious. It
          | would also be useful to add signed timecodes to frames so it
          | could be checked whether the video has been edited.
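          | 
          | A sketch of that signed-timecode idea (function names
          | invented): chain each frame's hash to the previous one so any
          | cut, splice or reordering breaks verification of the final,
          | signed hash:
          | 
          |     import hashlib
          | 
          |     def chain_hashes(frames, prev=b"\x00" * 32):
          |         # frames: list of (timecode_ms, frame_bytes) pairs
          |         for timecode, frame in frames:
          |             prev = hashlib.sha256(
          |                 prev + timecode.to_bytes(8, "big") + frame
          |             ).digest()
          |         return prev  # the outlet signs/publishes this value
          | 
          |     original = [(0, b"f0"), (40, b"f1"), (80, b"f2")]
          |     edited = [(0, b"f0"), (80, b"f2")]  # middle frame removed
          |     assert chain_hashes(original) != chain_hashes(edited)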
        
         | jedberg wrote:
          | The only solution will be in-person meetings, as it has always
          | been. Faking audio has been possible for a really long time.
          | If you needed to be absolutely sure the person you were
          | talking to was legit, you met them in person (Mission
          | Impossible-style disguises notwithstanding).
          | 
          | Nothing has really changed with deepfakes, other than the fact
          | that for a brief period we could be sure the person we were
          | having a video chat with was legit, because the tech to fake
          | it didn't exist.
        
         | ballenf wrote:
          | Then you just point the webcam at a screen, or the microphone
          | at a speaker?
          | 
          | I really don't think moving our trust to unknown, unnamed
          | manufacturers of hardware in faraway places is a solution.
          | 
          | The solution is not going to be high-tech, imho. Just as we
          | have learned a skepticism resulting from Photoshop, we'll
          | learn a skepticism of live video and audio.
        
           | oliwarner wrote:
            | You could layer on IR depth mapping, available in many
            | camera systems that provide Windows Hello.
            | 
            | I happen to agree with the other voices here saying this is
            | a folly game of cat and mouse, but there are near-term
            | methods of making this harder to fool. And that might be
            | enough for now.
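            | 
            | A toy version of such a depth check (the threshold and the
            | stand-in depth frame are invented for illustration): a face
            | replayed on a flat screen shows almost no depth relief.
            | 
            |     import numpy as np
            | 
            |     def looks_flat(depth_mm, face_box, min_relief_mm=15.0):
            |         x0, y0, x1, y1 = face_box
            |         face = depth_mm[y0:y1, x0:x1]
            |         # Robust depth range across the face region
            |         relief = (np.percentile(face, 95)
            |                   - np.percentile(face, 5))
            |         return relief < min_relief_mm  # flat -> likely screen
            | 
            |     depth = np.random.normal(600, 1, size=(480, 640))
            |     print(looks_flat(depth, (200, 100, 440, 380)))  # True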
        
         | xen2xen1 wrote:
          | So your answer is... more DRM?
        
         | Starlevel001 wrote:
         | Applying technological solutions to social problems hasn't
         | worked a single time before, but SURELY it'll work this time
        
           | shockeychap wrote:
            | Encryption and the use of signed certificates have certainly
            | been a big help against web fraud. No, they're not perfect,
            | and they can't prevent certain kinds of phishing, but they
            | have definitely raised the bar for would-be scammers. They
            | make it nearly impossible to spoof "amazon.com" in the
            | browser, and they prevent passive snooping on open WiFi.
           | 
           | You can't make it impossible, but you can make it very
           | difficult.
           | 
           | My elderly uncle almost gave $10,000 to a scammer who had
           | convinced him that his nephew was sitting in a jail and
           | needed this money to be paid for his bail. Luckily, he
           | reached out to me for help and I was able to confirm that his
           | nephew was at home, not in jail.
           | 
           | I honestly can't imagine some of the scams that are coming,
           | particularly to the tech-vulnerable, if we don't do SOMETHING
           | to make real-time deepfake video harder than it now is.
        
       | geraldwhen wrote:
       | I've heard of interviews where a different person shows up
       | entirely but claims to be the original person.
       | 
        | Why use deepfakes when you can just not bother and get the same
        | result?
        
       | xyzal wrote:
        | I wonder how much work it would entail to swap one actor's face
        | for another's in a movie. I just finished watching Fury Road,
        | and Tom Hardy just feels a bit off to me.
        
         | bsenftner wrote:
         | That's "bread and butter" work in VFX. I used to be a stunt
          | double actor-replacement specialist. These days, ML-enhanced
         | tools make the work for a face replacement shot exponentially
         | faster and easier - as is needed for the huge number of
         | superhero stunts insurance companies will not let the stars
         | perform.
        
           | WalterBright wrote:
           | So do we really know that Tom Cruise is doing his own stunts
           | as claimed?
        
             | lsllc wrote:
             | The fact that Tom Cruise appears to not have aged in 30
             | years might be telling!
        
       | fattybob wrote:
        | Just ask them to zoom in on their ear!
        
       | Arkadin wrote:
       | Why not just use the standard Voight-Kampff test?
        
         | aqw137 wrote:
          | It would be good if we could just ask them to look up and to
          | the left.
        
         | neogodless wrote:
         | The pitfalls have been thoroughly documented.
        
       | night-rider wrote:
       | Signed up for one of those 'neobanks' (that don't have physical
       | branches) and part of the signup required me to turn my head
       | sideways. I wondered why they wanted me to do that. Now I know.
        
         | InCityDreams wrote:
         | Thanks for contributing to the dataset.
         | 
         | www.hownormalami.eu
        
       | baby wrote:
       | Heh, at some point I'm convinced that we'll use both:
       | 
       | * customizable 3D avatars
       | 
       | * customizable voices
       | 
        | to communicate in meetings and in communities (VR Chat style).
        | So the origin won't be associated with your avatar or your
        | voice; it'll be associated with your account (like in good old
        | chat).
        
       | rochak wrote:
        | Sometimes I wonder if we humans even know or care that we are
        | taking things too far. I am all for progress and going beyond,
        | but deepfakes and all these other recent AI developments are
        | taking us toward a dystopian future that I am not super hopeful
        | about.
        
       | 1024core wrote:
       | TIL there are Deepfake video calls... :-(
        
       | EGreg wrote:
        | Whenever a robocall or interactive salesperson calls me, I ask
        | them what today's date is or what time it is. They hang up
        | shortly afterwards, haha.
        
       | AviationAtom wrote:
        | I had an Indian sales rep for a deepfake filter; it creeped the
        | hell out of me when the voice totally did not match up with the
        | pasty white Irish face.
        
       | notum wrote:
       | ...or ask them to stick their tongue out.
        
       | anonu wrote:
       | Is there a "client side" way to detect this? Similar to how we
       | can detect photoshopped still images: checking edges, shadows,
       | pixels, etc...
       | 
       | The benefit is you would not have to rely on issuing commands to
       | the remote party.
        
         | kortex wrote:
          | Media forensics algorithms do work on various forms of
          | rebroadcast, transmission, and compression, so yes, this
          | should be possible (for now). Look up the DARPA MediFor
          | project. Siwei Lyu (in the article) did a bunch of work in
          | this space. Also see Hany Farid and Shruti Agarwal, who have
          | worked specifically on deepfake detection.
         | 
         | https://arxiv.org/abs/2004.14491
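          | 
          | A toy heuristic in that spirit (the cutoff is invented, and
          | the baseline is up to you; real detectors are learned, not
          | hand-tuned): GAN pipelines often leave tell-tale energy in the
          | high-frequency bands of a frame.
          | 
          |     import numpy as np
          | 
          |     def high_freq_ratio(gray, cutoff=0.25):
          |         spec = np.abs(np.fft.fftshift(np.fft.fft2(gray)))
          |         h, w = spec.shape
          |         cy, cx = h // 2, w // 2
          |         ry, rx = int(h * cutoff), int(w * cutoff)
          |         low = spec[cy - ry:cy + ry, cx - rx:cx + rx].sum()
          |         return 1.0 - low / spec.sum()
          | 
          |     # Flag frames whose ratio deviates strongly from the
          |     # baseline measured for a known-genuine camera.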
        
       | bearjaws wrote:
        | Serendipitously, over the weekend I was thinking about a future
        | where, for access to key sensitive data (e.g. production main),
        | you may need a quick five-minute call (a 4th factor, "3D
        | verification") where you would be asked to turn on your camera
        | and answer some simple questions in different positions...
        | 
        | My main thought was how out of control it would get; it would
        | probably end up looking like anti-cheat systems, a constant
        | cat-and-mouse game driven by the growing sophistication of
        | deepfake models.
        
         | function_seven wrote:
         | > _Main thinking was how out of control it would get..._
         | 
         | From a job listing, circa 2024:
         | 
         | - Job may require occasional lifting. (No more than 20kg)
         | 
         | - Expected to travel up to 25% of the year.
         | 
         | - Proprietary access control requires users be able to do
         | handstands and/or simple juggling. (Feats subject to change)
         | 
         | - EEOC employer.
        
         | phpnode wrote:
          | In the UK this is already relatively common for online
          | banking. I asked my bank to raise my daily transfer limit the
          | other day for a property purchase, and part of the process was
          | recording a video of myself in their app.
        
           | ghaff wrote:
           | And, of course, as barriers are raised it makes it very
           | difficult for some portion of the population (and less
           | convenient for everyone else). I have to change addresses for
           | an older person at a couple of banks and I'm sure it's going
           | to be a nightmare.
        
             | couchand wrote:
             | That being said, if instead of "use our custom app you've
             | never seen to record a video" it's "just talk to a person
             | with some standard video chat" then maybe it makes things a
             | whole lot easier? But I don't see that being how it's
             | implemented these days...
        
               | ghaff wrote:
               | Yeah. It's still going to be a barrier for some people
               | but I'm guessing most could get comfortable with it if
               | they were forced to. But getting my dad to do anything
               | that isn't a voice call is pretty much pulling teeth.
                | (Except for using Amazon. I think a lot of things are
                | more don't-want-to than can't.)
        
         | comboy wrote:
          | We have PK cryptography, you know - yubikeys and such.
        
           | couchand wrote:
           | Yubikeys can be stolen... I take it from the GP's description
           | the access is sensitive enough to require more assurance than
           | that.
        
             | dguest wrote:
             | I assume the other problem is that public key
             | infrastructure doesn't exist in a lot of places, whereas
             | (almost) everyone has a webcam.
             | 
             | I had the same thought as many on this thread: all
             | biometric identification is basically an arms race that
             | moves along as new ways of gathering biometrics become
             | convenient and ways of faking them are developed. But as
             | you say, yubikeys also have problems. At some point it will
             | probably be a hybrid, e.g. require a known acquaintance to
             | digitally sign a video where you appear together.
        
           | ErikCorry wrote:
           | You need some way to activate the yubikey. That could well be
           | an online interview.
        
       | kortex wrote:
       | It's a constant cat-and-mouse game. When I worked in this space
       | (2019-2021), the best defense against deep fakes was looking at
       | the microfacial behavior/kinematics of the "puppetmaster" and
       | comparing against known standards of the deepfake subject. Works
       | even if the fake is pixel-perfect (since it looks at the facial
       | "wireframe" rather than the image itself). The obvious downside
       | is you need sample data of the subject (and usually tons of it).
        | I wonder if that general approach can be optimized. E.g.,
        | deepfakes tend to struggle with certain fine movement/detail;
        | if you had a reflection of the subject, the algorithm would
        | have to replicate not just the main face but also the mirror
        | image, and keep the two completely optically consistent.
       | 
       | Was a fun project, but the cat-and-mouse feeling was inescapable.
       | For those curious, look up the DARPA MediFor project. Siwei Lyu
       | (in the article) did a bunch of work in this space. Also see Hany
       | Farid and Shruti Agarwal. They've worked specifically with deep
       | fake detection.
       | 
       | https://arxiv.org/abs/2004.14491
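        | 
        | A crude sketch of the landmark-kinematics idea, assuming the
        | mediapipe face mesh for the facial "wireframe" (the distance
        | metric and reference profile here are simplifications of what
        | real systems do):
        | 
        |     import numpy as np
        |     import mediapipe as mp
        | 
        |     mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=False)
        | 
        |     def landmarks(rgb_frame):
        |         res = mesh.process(rgb_frame)
        |         if not res.multi_face_landmarks:
        |             return None
        |         pts = res.multi_face_landmarks[0].landmark
        |         return np.array([[p.x, p.y, p.z] for p in pts])
        | 
        |     def motion_profile(frames):
        |         tracks = [a for a in map(landmarks, frames)
        |                   if a is not None]
        |         # Per-landmark velocity statistics over the clip
        |         return np.diff(np.stack(tracks), axis=0).std(axis=0).ravel()
        | 
        |     def kinematic_distance(frames, reference_profile):
        |         p = motion_profile(frames)
        |         return np.linalg.norm(p - reference_profile) / p.size
        |     # Large distances suggest the face is driven by someone else.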
        
       | eesmith wrote:
        | I imagine that asking the caller to use a mirror would also be
        | effective, although with a high error rate, as an effective
        | mirror may not be at hand.
        
       | formerkrogemp wrote:
       | I suppose this turning sideways trick will work until it doesn't.
       | 
        | I do appreciate everyone on this site contributing to my
        | knowledge of infosec. I don't work directly in the space, but I
        | feel the contributions here help educate those of us not
        | working directly in the profession.
        
       | darepublic wrote:
       | > Arguably, this approach could be extended to automated systems
       | that ask the user to adopt various poses in order to authenticate
       | their entry into banking and other security-critical systems.
       | 
        | This approach works until it doesn't. How long before deepfakes
        | can handle the 90-degree profile scenario? I'm not saying it's
        | not a valid approach, but you'd have to weigh the time it takes
        | to implement these other checks against the time we expect
        | deepfakes to take to improve in this scenario.
        
       | markus_zhang wrote:
        | I'm wondering if all the agencies are trying to capture as many
        | pictures as possible of other countries' high officials to
        | build the best training sets.
        | 
        | Then they could use them during a war.
        
       | PKop wrote:
       | Tom Cruise is going to hate zoom calls
        
       | msadowski wrote:
        | I had a call with a Polish government agency last year to get
        | access to one of the government portals, and they asked me to
        | move my head to the side and also to move the palm of my hand
        | very slowly in front of my face.
        | 
        | Interesting times.
        
       | [deleted]
        
       | t_mann wrote:
        | Omg, looking forward to yoga-based captchas in the future:
        | "That ain't looking like a proper downward dog to me, pal. No
        | access for you."
        
         | amelius wrote:
         | Or this:
         | 
         | https://419eater.com/html/tope.htm
         | 
         | > On receipt of the form, we will require a photograph of you,
         | or a trusted representative as proof of identity. You will have
         | to get a NEW photograph taken, holding two symbol of ours. The
         | two symbols we need you to hold are a loaf of BREAD and a FISH
         | (the name of our church). This proves that the person in the
         | photograph is genuine. Passport or other photographs will NOT
         | be accepted.
         | 
         | > (...)
         | 
         | > As dumb as he looks, I'm not happy. I asked for the fish to
         | be on his head AND a loaf of bread. I got neither!
        
         | elygre wrote:
         | No no... this _does_ look like a proper downward dog, but _no
         | way_ you could do that!
        
           | tiborsaas wrote:
           | Please say "I'm not a robot" with a Scottish accent 3 times
           | and do a backflip to login.
        
             | notduncansmith wrote:
             | But voice recognition technology... it don't do Scottish
             | accents
             | 
             | https://youtu.be/TqAu-DDlINs
        
         | TheAceOfHearts wrote:
         | Please drink verification can.
         | 
         | https://m.imgur.com/dgGvgKF
        
           | prettyStandard wrote:
           | Needs more jpeg.
        
       | BoredPuffin wrote:
        | Here we go again... there's a rule describing this situation:
        | once a measuring metric becomes the standard, that metric is no
        | longer indicative (Goodhart's law).
        | 
        | But please, I don't want to be pointing at a random bus outside
        | my window to prove that I'm not a robot/deepfake...
        | 
        | The degradation of news article quality > the degradation of
        | fact-checking journalistic scrutiny > the degradation of
        | written article quality > people would rather watch a live-
        | streamed event than read > the degradation of live-stream
        | trustworthiness because of deepfakes...
        | 
        | What's next? Heavily scrutinised journal articles that run
        | checks on videos with anti-deepfake AI-based algorithms?
        | 
        | Oh wait, we've just gone through the full cycle.
        
       | bob_paulson wrote:
        | Deepfakes done in an unethical way are a real threat indeed.
        | This paper shows how to identify some of them. And metaphysic.ai
        | is doing something a bit different. Let's wait and see.
        
       ___________________________________________________________________
       (page generated 2022-08-08 23:00 UTC)