[HN Gopher] Deepfake detector can spot a real or fake video base...
       ___________________________________________________________________
        
       Deepfake detector can spot a real or fake video based on blood flow
       in pixels
        
       Author : sizzle
       Score  : 52 points
       Date   : 2022-11-18 18:47 UTC (4 hours ago)
        
 (HTM) web link (www.zdnet.com)
 (TXT) w3m dump (www.zdnet.com)
        
       | atonse wrote:
       | Love the cat and mouse game!
       | 
       | So then this will be the next target of better deepfake models,
       | right?
       | 
       | We saw that fake pharma tweet that (supposedly, but not really)
       | sent the stock crashing - how long before a fake video of a CEO
       | making an announcement at a fake Davos-like conference stage
       | interview?
       | 
        | As a techie, I wonder whether this is going to make things
        | like digital signatures more important. More realistically,
        | though, most of the audience that would do impulsive things
        | won't care to verify.
        
         | notacoward wrote:
         | HN story a month from now: new deepfake software can evade
         | Intel's detector.
        
       | yieldcrv wrote:
        | That's a great observation for making deepfakes more
        | realistic.
        | 
        | I often think about the subtleties that throw us off a
        | little; too bad that disclosing a subtlety like this reduces
        | our ability to discern fakes.
        
       | dahdum wrote:
       | Can this technology eventually detect their heartbeat, or is it
       | just looking at slower changes over time? If the latter it sounds
       | much simpler to defeat, if the former that would have many
       | repercussions.
       | 
       | Live heart rate by video analysis would make things like
       | televised court proceedings, congressional hearings, and news
       | interviews much more invasive. Elevated heart rate is a sign of
       | stress, and it wouldn't be long before people were jumping to
       | conclusions over whether someone was lying or hiding their true
       | feelings/intentions.
        
         | sbirch wrote:
          | This has actually been done before, a while ago:
         | https://people.csail.mit.edu/mrub/vidmag/
        
           | dahdum wrote:
           | Very cool, thank you. I'm honestly surprised this dark magic
           | hasn't been (ab)used yet, unless it has some strong
           | limitations.
        
           | ehsankia wrote:
           | Not sure if it's quite the same, but Google Fit has a feature
           | that gets your respiratory rate from the selfie camera. They
           | also have one where you put your finger on the camera flash
            | and it uses that to see your blood flow.
           | 
           | https://www.lifewire.com/measure-respiratory-and-heart-
           | rates...
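            | 
            | The underlying trick (remote photoplethysmography) is
            | surprisingly simple in principle. A rough numpy/scipy
            | sketch of the general idea (not Google's implementation;
            | frames and fps here are assumed inputs):
            | 
            |   import numpy as np
            |   from scipy.signal import butter, filtfilt
            |   
            |   # frames: (n, H, W, 3) video of a face or fingertip
            |   # fps: capture frame rate
            |   def estimate_heart_rate(frames, fps):
            |       # mean green-channel intensity per frame is a
            |       # crude proxy for the pulse signal
            |       sig = frames[..., 1].mean(axis=(1, 2))
            |       sig = sig - sig.mean()
            |       # band-pass 0.7-4 Hz (~42-240 bpm)
            |       b, a = butter(3, [0.7, 4.0], btype="band", fs=fps)
            |       filtered = filtfilt(b, a, sig)
            |       # dominant frequency -> beats per minute
            |       spectrum = np.abs(np.fft.rfft(filtered))
            |       freqs = np.fft.rfftfreq(len(filtered), 1.0 / fps)
            |       return freqs[np.argmax(spectrum)] * 60.0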
        
       | lattalayta wrote:
       | Simulating blood flow is a technique currently used in high-end
       | VFX animation for movies.
       | https://www.fxguide.com/fxfeatured/maleficent/
        
       | alteriority wrote:
       | If the filter doesn't notice blood flow on a non-deepfaked
       | subject, run.
        
         | Mountain_Skies wrote:
         | Or we find out that certain population groups have different
         | blood flow patterns, which the system incorrectly identifies as
          | proof of fakery. Or perhaps for some, it's simply not
          | detectable even though they are real live people.
        
           | AustinDev wrote:
           | Or we find out some people have dark skin and the blood flow
           | isn't visible to the camera in these situations.
        
           | maxbond wrote:
           | Yeah, neither deepfakes nor deepfake detectors will end
           | epistemology. We'll need to use a multiplicity of tools, with
           | strengths and weaknesses known and unknown, and come to a
           | conclusion based on the preponderance of evidence knowing
           | full well we will sometimes get it wrong.
        
       | johnwheeler wrote:
       | For now...
        
       | phonebucket wrote:
       | Pet peeve of mine: articles using stats like 96% accuracy.
       | 
       | If the test set had 4% deep fakes, and 96% legitimate videos, a
       | model which always predicts legitimate video would score 96%
       | accuracy, even if it were useless.
       | 
       | Stats like precision, recall, F1 scores etc. are important.
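        | 
        | To make it concrete (illustrative numbers, not the article's
        | test set), the degenerate "everything is real" classifier on
        | a 96/4 split:
        | 
        |   from sklearn.metrics import (accuracy_score,
        |       precision_score, recall_score, f1_score)
        |   
        |   # hypothetical split: 960 real videos (0), 40 fakes (1)
        |   y_true = [0] * 960 + [1] * 40
        |   y_pred = [0] * 1000          # always predicts "real"
        |   
        |   print(accuracy_score(y_true, y_pred))        # 0.96
        |   print(precision_score(y_true, y_pred,
        |                         zero_division=0))      # 0.0
        |   print(recall_score(y_true, y_pred))          # 0.0
        |   print(f1_score(y_true, y_pred,
        |                  zero_division=0))             # 0.0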
        
       | asow92 wrote:
       | enhance blood flow in 3... 2.. 1.
        
       | skunkworker wrote:
       | Won't this be used in the next deepfake as an adversarial network
       | in order to produce more realistic results? It's an endless cat-
       | and-mouse game.
        
         | mumumu wrote:
          | This is probably intended for encoding webcam chats
          | between Intel devices. They could hash the video "frames"
          | to detect interception.
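          | 
          | Something like per-frame authentication, sketched below
          | with Python's standard library (a hypothetical scheme,
          | nothing Intel has actually described): the sender tags
          | each frame, the receiver drops anything modified in
          | transit.
          | 
          |   import hmac, hashlib
          |   
          |   KEY = b"shared-secret-between-endpoints"  # hypothetical
          |   
          |   def sign_frame(frame_bytes: bytes) -> bytes:
          |       return hmac.new(KEY, frame_bytes,
          |                       hashlib.sha256).digest()
          |   
          |   def verify_frame(frame_bytes: bytes, tag: bytes) -> bool:
          |       expected = hmac.new(KEY, frame_bytes,
          |                           hashlib.sha256).digest()
          |       return hmac.compare_digest(expected, tag)
          |   
          |   # sender: send (frame, sign_frame(frame))
          |   # receiver: discard frame if verify_frame(frame, tag)
          |   #           returns False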
        
         | rogers18445 wrote:
         | > It's an endless cat-and-mouse game.
         | 
          | This is often stated, but I think it's obviously wrong.
          | This isn't a traditional interactive game like malware vs.
          | anti-malware.
         | 
         | You have existing sensors which operate under the constraint of
         | [ real world -> theoretic pixel space -> optics & aberrations &
         | sensor noise -> compression ]. And a single adversary which
         | attempts to fake this chain.
         | 
          | The fake detector isn't even an adversary in this game; it
          | merely detects deviations introduced by the faking process.
         | 
          | At some point, probably soon, the faking process will reach
          | a point where any deviation will be drowned out by the
          | noise from optics, sensors, and compression.
        
           | halpmeh wrote:
           | One method of generating things via neural networks is called
           | a generative _adversarial_ network. It works by having two
           | models. One that generates content and one that detects fake
           | content. You train them both in parallel. As the fake
           | detector gets better, so does the generative model at
           | generating fakes. It's literally a cat-and-mouse game. If
           | someone came up with a scheme to reliably detect your fakes,
           | you could add it to your discriminator model and retrain the
           | generator to improve the fake generation.
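            | 
            | A bare-bones GAN loop in PyTorch (a generic toy sketch,
            | not anything from the article) makes the cat-and-mouse
            | explicit: the detector's own output is exactly what the
            | generator trains against.
            | 
            |   import torch
            |   import torch.nn as nn
            |   
            |   # toy setup: "videos" are just 64-dim vectors here
            |   G = nn.Sequential(nn.Linear(16, 64), nn.Tanh())
            |   D = nn.Sequential(nn.Linear(64, 1), nn.Sigmoid())
            |   opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
            |   opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
            |   bce = nn.BCELoss()
            |   
            |   for step in range(1000):
            |       real = torch.randn(32, 64)  # stand-in for real data
            |       fake = G(torch.randn(32, 16))
            |       # train the detector to separate real from fake
            |       d_loss = (bce(D(real), torch.ones(32, 1)) +
            |                 bce(D(fake.detach()), torch.zeros(32, 1)))
            |       opt_d.zero_grad(); d_loss.backward(); opt_d.step()
            |       # train the generator to fool the updated detector
            |       g_loss = bce(D(fake), torch.ones(32, 1))
            |       opt_g.zero_grad(); g_loss.backward(); opt_g.step()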
        
             | rogers18445 wrote:
              | My understanding is that it's not quite that simple.
              | GANs have stability problems (and are, as a result,
              | somewhat out of favor at the moment), and if the fake
              | detection mechanism isn't itself a differentiable
              | function, no training can happen.
        
               | sigmoid10 wrote:
               | The fake detection mechanism (aka discriminator) is
               | usually just another neural network and I bet that's the
               | case here as well. So it must be differentiable and thus,
               | if anyone ever gets a hold of it, it could be easily used
               | to train a generator that will eventually fool the
               | discriminator.
        
         | AustinDev wrote:
         | >It's an endless cat-and-mouse game
         | 
         | Yes, this is the way with anything software based that can earn
         | people money.
         | 
         | See: video game hacks, SEO manipulation, etc
        
           | yreg wrote:
           | But especially so when machine learning is involved since a
           | model can train off its adversary.
        
             | BoorishBears wrote:
             | Not really special in the case of ML.
             | 
             | Before deepfakes, if you wanted to claim a video was
             | doctored in court, you'd find an expert on video editing
             | and have them testify.
             | 
              | But the same knowledge that allowed them to identify a
              | doctored video (like 50 Hz/60 Hz hum) could be used in
              | an adversarial manner to create a very convincing
              | video.
             | 
             | At most deepfakes democratize that "knowledge" in the form
             | of a model, so it still works both ways.
        
           | [deleted]
        
       | squarefoot wrote:
        | I find it hard to believe it could work on media uploaded to
        | YT and similar platforms, and even assuming it does, it would
        | be easily defeated, either by over-compressing the videos so
        | that the subtle chromatic changes are eliminated or by
        | applying smoothing filters before reuploading. Should the
        | technology catch on, it's just a matter of time before
        | filters appear that scramble those subtle differences,
        | masking them for example as a grain effect, rendering the
        | detector useless.
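        | 
        | A few lines of numpy already sketch the idea (untested, just
        | to illustrate the attack): smooth each pixel's color over
        | time to average away the ~1 Hz blood-flow oscillation, then
        | re-add synthetic grain so the result doesn't look filtered.
        | 
        |   import numpy as np
        |   
        |   def scramble_pulse_signal(frames, window=9, grain=3.0):
        |       # frames: (n, H, W, 3) array of video frames
        |       out = frames.astype(float)
        |       kernel = np.ones(window) / window
        |       # temporal box filter on each pixel's time series
        |       for c in range(3):
        |           out[..., c] = np.apply_along_axis(
        |               lambda s: np.convolve(s, kernel, mode="same"),
        |               0, out[..., c])
        |       # synthetic grain masks the smoothing
        |       out += np.random.normal(0.0, grain, size=out.shape)
        |       return np.clip(out, 0, 255)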
        
       | lowbloodsugar wrote:
       | For now.
        
       | pestkranker wrote:
       | I'm sure that one day, most of the things we'll see or hear on
       | the web will be filtered by this kind of software.
        
         | progrus wrote:
         | Not likely IMO, the arms race will continue.
         | 
         | Plus, are you sure you're eager to sign up for even more
         | censorship-by-opaque-algorithm?
        
       | mumumu wrote:
        | This is not new. It is news because it's from Intel.
        | 
        | I looked into this a year or two ago and there were already
        | papers on it.
        | 
        | Anyone who is familiar with Eulerian Video Magnification and
        | with neural networks has likely thought of this.
        | 
        | Does this work on encoded videos? I doubt it. Intel could
        | probably add a feature to the video encoder and sell it as an
        | authentication service for webcam communication on Intel
        | platforms.
        
       ___________________________________________________________________
       (page generated 2022-11-18 23:00 UTC)