[HN Gopher] Stopping deepfake news with an AI algorithm that can...
       ___________________________________________________________________
        
       Stopping deepfake news with an AI algorithm that can tell when a
       face doesn't fit
        
       Author : rustoo
       Score  : 124 points
       Date   : 2020-08-09 12:28 UTC (10 hours ago)
        
 (HTM) web link (spie.org)
 (TXT) w3m dump (spie.org)
        
       | wu-tsy wrote:
       | Can be used as an algorithm for a new discriminator in the GAN.
        
       | peterthehacker wrote:
       | Are deepfake face swaps a real problem yet? I can't recall any
       | major controversies in recent history that were caused by a
       | deepfake face swap.
       | 
       | > This technique can be used to create compromising videos of
       | virtually anyone, including celebrities, politicians, and
       | corporate public figures.
       | 
       | I've read a lot of concerned comments like this but I haven't
       | read about any real world examples of a controversy caused by a
       | deepfake.
        
       | turblety wrote:
       | Now we can use this new AI to train another AI that can defeat
       | it. And so continues the great cat vs mouse chase.
       | 
       | I don't think this is a problem that is ever going to be solved.
       | Deep fakes will become more and more popular, and harder to
       | detect.
        
         | amelius wrote:
          | Antibiotic resistance is also a cat and mouse game. It's still
          | useful to keep playing the game though.
        
           | sildur wrote:
            | The difference is that the bacteria can't all become
            | resistant to the antibiotic soon after it is created.
        
             | dijksterhuis wrote:
             | As can deepfakes. All it takes is an additional step to
             | optimise wrt the model that tries to catch it.
        
               | sildur wrote:
               | Yeah, I was dismissing the analogy with antibiotics
               | because usually it takes quite some time between creating
               | the antibiotic and germs being resistant. But with
                | deepfakes the arms race is almost instantaneous. The
                | moment something appears that can tell deepfakes apart is
                | the moment people can train deepfakes against it.
        
               | amelius wrote:
               | Perhaps the deepfake checking should be a third-party
               | service? So you can only check so many deepfakes per day,
               | limiting these attacks (i.e., you can't realistically put
               | the checking inside a training loop). Just an idea ...
        
         | IgorPartola wrote:
         | I feel like it's time to bring in Heinlein's Fair Witness
         | concept:
         | https://www.urbandictionary.com/define.php?term=Fair%20Witne...
         | 
          | If you think about it, when we had relatively few journalists
          | and their reputation was on the line with every story they
          | wrote, they were in a way fair witnesses. Since we have moved
          | to a hybrid model of professional journalists and a huge caste
          | of YouTubers, podcasters, bloggers, and tweeters, it has become
          | impossible to hold everyone to a high journalistic standard. At
         | the same time Fox News has brought on an era of politicized
         | news so the bias is now inherent. Maybe if we had a few hundred
         | fair witnesses that offered a professional service we could
         | have a way to verify facts. Then again, it's one thing to have
         | a fair witness present facts, it's another to have that
         | witness's likeness be deep faked and broadcast all over the web
         | saying something they didn't actually say.
         | 
          | Another solution may be regulatory: anyone broadcasting a deep
          | fake without a huge disclaimer that it is a fake gets fined 20%
          | of their net worth. Draconian, but I can see it working.
        
           | ben_w wrote:
           | We need something, but Fair Witnesses seem both exploitable
           | and unachievable given the way human memories work.
        
             | dvtrn wrote:
             | "Expert witnesses" then?
        
               | ben_w wrote:
               | "Expert" is a misused term, much like "exponential".
               | 
               | I see the value of the press as being as much of an
               | intelligence agency as the combined CIA, NSA, and FBI,
               | but serving the electorate and investigating all those
               | with power -- politicians, police, military, religious,
               | business, and so on -- so that the electorate can make
               | informed decisions.
               | 
               | I don't know how to get there from here. Heck, I don't
               | even know if we've _ever_ really been there, or if the
               | press has always been comfortably manipulated by those
               | with power.
               | 
               | But yeah, I assume genuine experts are part of the
               | solution.
        
         | nottorp wrote:
         | Ref:
         | 
         | http://www.gutenberg.org/ebooks/29579
         | 
         | Robert Sheckley's Watchbird
        
         | [deleted]
        
       | trott wrote:
       | Looking at the diagram, this appears to be just an LSTM slapped
       | on top of a CNN. If so, I'm failing to see any novelty in this
       | approach. RNNs on top of CNNs have been used before, including
       | for deepfake detection. See for example:
       | https://arxiv.org/abs/1905.00582
       | 
       | The recent DeepFake Detection Challenge threw a lot of manpower
       | at the problem earlier this year, BTW.
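
The "LSTM slapped on top of a CNN" pipeline described above can be sketched in miniature. This is a toy under loud assumptions: `frame_features` stands in for the CNN and `temporal_score` for the LSTM, purely to show per-frame features feeding a temporal-consistency check.

```python
def frame_features(frame):
    """CNN stand-in: collapse a frame (2D list of pixels) to one number."""
    pixels = [p for row in frame for p in row]
    return sum(pixels) / len(pixels)

def temporal_score(features):
    """LSTM stand-in: mean frame-to-frame change of the features."""
    deltas = [abs(b - a) for a, b in zip(features, features[1:])]
    return sum(deltas) / len(deltas)

def looks_fake(video, threshold=10.0):
    feats = [frame_features(f) for f in video]
    return temporal_score(feats) > threshold

# A smooth video drifts gradually; a spliced fake jumps between frames.
smooth = [[[i, i]] for i in range(10)]
jumpy = [[[0, 0]], [[100, 100]], [[0, 0]], [[100, 100]]]
print(looks_fake(smooth), looks_fake(jumpy))  # False True
```

The real detectors in the linked paper operate on learned embeddings rather than pixel means, but the data flow is the same shape.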
        
       | formerly_proven wrote:
       | Isn't this the idea of getting better results using the
       | adversarial network approach? So this would be inherently ill-
       | suited to _stopping_ deepfake news, yeah?
        
         | DarthGhandi wrote:
         | it's great at making them better, yes
        
           | giancarlostoro wrote:
            | If the people trying to stop deepfakes can do it, so can
            | the people trying to produce them. The best we can do is
            | some way of digitally signing videos so software can display
            | them as authentic or not, but then they just make fake
            | youtube-style sites that say it's authentic. The problem
            | will go on and on... Fake news will always spread for one
            | reason or the next.
        
             | the8472 wrote:
              | > but then they just make fake youtube-style sites that
              | say it's authentic.
             | 
             | Even simpler, someone will build an optical setup that
             | sends deepfake pixels onto a signing image sensor.
        
       | nine_k wrote:
       | Now we need to stop believing our eyes _twice:_ first when we see
       | a natural-looking picture but know it is a deepfake, and second
       | when we see a natural-looking picture we believe is not a
       | deepfake, but the computer tells us it is.
        
       | guscost wrote:
       | Cool tech, but most people will have tuned out what we know as
       | "the news" (or become jaded to its purpose beyond entertainment)
       | before a tool like this is necessary.
        
       | Ijumfs wrote:
       | The deepfakes which support the officially blessed narrative will
       | not receive scrutiny, while authentic videos will be "proved" to
       | be deep fakes by some black box machination.
       | 
       | But we've known not to trust anything we didn't personally see
       | ourselves for many decades.
        
       | harryf wrote:
        | Seems like an AI arms race has just begun
        
         | jcims wrote:
         | It's just a slobbering infant trying to roll off the bed at
         | this point.
         | 
         | The battleground of what is real and fake will very quickly
         | move into 'superhuman' realms of sensitivity, leaving us meaty
         | minions as spectators trying to figure out which AI to trust.
        
         | tersers wrote:
         | I used the AI to destroy the AI
        
         | nolaspring wrote:
         | Someone just found the other half of their deep fake GAN
        
       | chiefalchemist wrote:
       | AI novice here. It would seem to me that the detection algorithm
       | can be repurposed into making the original less detectable. A
        | recursion that never ends. An advantage quickly becomes a
        | disadvantage.
       | 
       | Worse. All information - even facts and truth - has been
       | subverted. What happens when there is no trust? Is this not a
       | road to the New Darker Ages?
        
         | 1f60c wrote:
         | That's what I'm thinking. I don't want to diminish the value of
         | this research, but this cat-and-mouse game is like a GAN[0]
         | with extra steps.
         | 
         | [0]:
         | https://en.wikipedia.org/wiki/Generative_adversarial_network
        
       | yes_man wrote:
        | The problem will increasingly be "whose algorithm do we believe".
        | The internet has revealed that people believe mostly what they
        | want to. We have seen that a large subset of people believe Bill
        | Gates is behind the pandemic. Why would the masses somehow be more
       | rational in picking the most rigorous and objective neural
       | network to recognize deepfakes, than they are in making sense of
       | the world in general? In the end we will have multiple competing
       | entities claiming to have the best deepfake recognition, all with
       | their own agenda
        
         | biophysboy wrote:
         | This is absolutely right. I've become frustrated with people
         | trying to fight misinformation by just telling anonymous
         | strangers "facts". Algorithms are not going to help win the
         | argument. There are a lot of people who hate those who they
          | perceive to be "elite technocrats". And it's not completely
          | unwarranted!
          | 
          | It's a really tricky challenge. The "steady state" of
         | conversations online is mutual distrust. Unless we handle this
         | underlying issue, clever algorithms like this will be ignored.
        
           | slg wrote:
           | I'm reminded of the Jonathan Swift quote:
           | 
           | >Reasoning will never make a man correct an ill opinion,
           | which by reasoning he never acquired
           | 
            | The problem that deepfakes specifically, and fake news
            | generally, highlight is that the general public does not have
            | the aptitude, knowledge, time, or motivation to be a true
            | arbiter
           | of facts. We have traditionally outsourced that role to
           | journalists. As a society we have lost faith in journalistic
           | institutions so the onus is now back on the individual. We
           | need a way to offload that truth-finding responsibility back
           | onto a third party that can be trusted. Any system which
           | requires more aptitude, knowledge, time, or motivation from
           | the general public is unlikely to work because lacking those
           | is exactly what got us into this mess.
        
             | biophysboy wrote:
             | The key part of your sentence being "a third party that can
             | be trusted."
             | 
             | A trustworthy third party is NOT the group that is "the
             | most smart". People trust locals that they relate to. If we
             | are going to offload responsibility, I personally think it
             | needs to be in the old fashioned democratic pluralism style
             | that we had half a century ago. Local leaders know the
             | people in the area better, and they have the time and
             | motivation to hear the interests of more people.
        
           | Swizec wrote:
           | > trying to fight misinformation by just telling anonymous
           | strangers "facts"
           | 
           | It's even worse when you realize that most mistruths are
           | factual. You can very easily lie with facts.
           | 
            | The Texas sharpshooter fallacy is a great example.
            | 
            | You get 500 samples, take the 10 good ones, and say "Look!
            | Objective proof of success in 10 out of 10 samples!" You
            | omit the other 490.
           | 
           | You never said a lie, facts only, but the interpretation you
           | lead people towards is a lie.
           | 
           | Or a more common example: Thing Y increases your risk of X by
           | up to ten times!!!
           | 
           | Risk goes from 0.0001% to 0.001%. It's completely irrelevant
           | and you got a nice scary clickbait with objective facts.
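
The last example above, spelled out in numbers (just the arithmetic from the comment, nothing more):

```python
# A 10x relative jump that is a rounding error in absolute terms.
baseline = 0.0001 / 100   # 0.0001% as a fraction
with_y = 0.001 / 100      # 0.001% as a fraction

relative = with_y / baseline        # the scary headline number
absolute = with_y - baseline        # the change that actually matters

print(f"{relative:.0f}x relative, {absolute:.6%} absolute increase")
```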
        
         | nothis wrote:
         | For me, one reason not to freak out about this is Photoshop.
         | 
         | It's been around for decades. _Perfect_ photographic fakes
         | (literally called  "photoshops") are possible. Yet is there a
         | great crisis of fake photographs taking over the news? Not
         | really. The actual "fake news" barely even bother with
         | photoshop (and those for whom it works, don't care about the
         | quality). It's somehow still fairly easy to get context and at
         | some point, you just have to trust a news outlet, just like you
         | had to trust them for text-based news.
         | 
         | All we see is a trend of making it easier for people to
         | subscribe to a bubble of "news" that fits their world view. The
         | quality of the fakeness barely factors into this.
        
           | hombre_fatal wrote:
           | The other reason not to freak out about this is that we're
            | already far down the path that people think deep fakes are
            | creating.
           | 
           | Reddit and Twitter and the internet prove that people will
           | react to an _image of text_ by the tens of thousands, going
           | straight into their brain. All of us have been guilty of this
           | at some point.
           | 
           | To me it's naive to freak out about Photoshop and video
           | deepfakes because it reveals that you're completely unaware
           | of the degree of "shallowfaking". A screenshot of a headline
           | or tweet spreads in a way that a deepfaked video can't, and
           | it apparently goes right past our bullshit detector in a way
           | that video can't.
        
             | mlyle wrote:
             | The issue with deepfakes is this: at least in the other
             | cases you outline, it's possible to be skeptical and weigh
             | evidence-- including direct photographic and video evidence
              | that is unlikely to be fake (while it may have been
             | possible to fake before, it's been labor intensive and
             | presents more risk that it'd have been messed up). In turn,
             | we can evaluate the credibility of media sources based on
             | how they agree with other evidence that we have directly
             | evaluated.
             | 
             | As it becomes possible to fake more and more stuff, we're
             | going to lose that. Different silos are going to have
             | different, meticulously supported versions of the facts
             | with an equivalent degree of direct-evidential and
              | reputational support. And it's not clear to me how an
              | individual ever disentangles this.
        
           | Natsu wrote:
           | Maybe, but I've found that a lot of viral images are doctored
           | in one way or another.
           | 
            | Here are some links to recent examples of doctored photos. I
            | know I've seen at least the deceptive image of cops "pointing
            | a gun at children" (when they actually were not) on the front
            | page of Reddit, so it's not like manipulated images have no
            | effect:
           | 
           | https://www.hackerfactor.com/blog/index.php?/archives/884-Pr.
           | ..
           | 
           | http://hackerfactor.com/blog/index.php?/archives/891-Count-o.
           | ..
           | 
           | That said, remember that not all alterations are digital:
           | 
           | https://www.hackerfactor.com/blog/index.php?/archives/590-Un.
           | ..
        
         | xiphias2 wrote:
         | Aren't digital signatures the best solution we have for this
         | problem?
         | 
          | Commercial entities want to have the privilege of being able
          | to modify the content submitted by content creators, but the
          | culture of trusting e.g. Twitter over client verification needs
          | to change.
        
           | korla wrote:
            | If the technology is too hard for the masses to understand,
            | it's unlikely to provide much resistance to disinformation
            | campaigns.
        
             | TheSpiceIsLife wrote:
             | The technology isn't hard to understand, but even if it
             | were so what?
             | 
             | How many people who watch a streaming video know...
             | anything at all about how that content is delivered, at
             | all, at any layer.
             | 
              | The problem is, as OP noted, the existing gatekeepers
              | profit from being able to manipulate us, so convincing them
              | to deliver signed content is extraordinarily unlikely.
        
               | [deleted]
        
           | gruez wrote:
            | Digital signatures would only match if the contents match bit
           | for bit. This means that you can't recompress/remux/resize
           | the video. This will probably provide a very poor ux for
           | mobile or other bandwidth limited users. Also, if the video
           | was used as part of another video (eg. TV broadcast), you'd
           | either have to splice the digitally signed video into your
           | existing stream (not modifying the bits at all), or provide
           | the digitally signed original as an appendix. In either case,
           | you'll need tooling to make verification easy, which doesn't
           | exist today.
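
The bit-for-bit point above can be shown in a few lines. HMAC stands in here for a real public-key publisher signature (a hypothetical scheme, not anything the platforms actually do): a lossless round-trip still verifies, but the re-encoded bytes a CDN would actually serve do not.

```python
import hashlib
import hmac
import zlib

key = b"publisher-signing-key"
original = b"frame data " * 100

signature = hmac.new(key, original, hashlib.sha256).digest()

def verify(data):
    """Check the signature against the exact bytes presented."""
    return hmac.compare_digest(signature,
                               hmac.new(key, data, hashlib.sha256).digest())

# Lossless round-trip restores the exact bytes: still verifies.
round_trip = zlib.decompress(zlib.compress(original, 9))
# But a re-encoded stream is different bytes, so verification fails.
served = zlib.compress(original, 1)

print(verify(round_trip), verify(served))  # True False
```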
        
             | doomrobo wrote:
             | There's some cryptographic work in this direction.
             | PhotoProof allows a photographer to prove that the image
             | they're presenting is, e.g., the cropped version of the
             | true original image they took. Video is still way out
             | there, but at least people are thinking about this.
             | 
             | https://www.cs.tau.ac.il/~tromer/papers/photoproof-
             | oakland16...
        
             | lallysingh wrote:
             | Signature chain for the transformations. The video host
             | signs the pre-recompressed video and provides the original
             | signature.
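
One way that signature chain could look (a hypothetical scheme): each transformation appends a link hashing the previous link, the operation name, and the resulting bytes. Plain SHA-256 links stand in here for the public-key signatures a real system would use.

```python
import hashlib
import json

def link(prev, op, payload):
    """Hash-chain link over (previous link, operation, payload digest)."""
    record = json.dumps({"prev": prev, "op": op,
                         "payload": hashlib.sha256(payload).hexdigest()},
                        sort_keys=True)
    return hashlib.sha256(record.encode()).hexdigest()

camera_bytes = b"raw sensor frames"
served_bytes = b"recompressed frames"

root = link("", "capture", camera_bytes)            # "signed" by the camera
head = link(root, "recompress:vp9", served_bytes)   # "signed" by the host

# A verifier holding the records recomputes the same head...
assert head == link(root, "recompress:vp9", served_bytes)
# ...while a swapped-in payload breaks the chain.
assert head != link(root, "recompress:vp9", b"deepfaked frames")
print("chain verified")
```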
        
             | vlovich123 wrote:
             | Why can't you have digital signatures of the original and
             | have an ML model that can evaluate whether or not it
             | matches? You could run a cloud service to host the signed
             | originals and continuously update the model.
        
             | xiphias2 wrote:
             | Fingerprinting can take care of the recompression problems.
             | 
             | Remuxing is a hard problem, as it can easily take people
             | out of context, at the same time it's very important to
             | summarize the videos, as people have limited time.
             | 
             | Right now what I see though is that Facebook/Google/Twitter
             | aren't even trying to do the bare minimum end-to-end
             | authentication that Whatsapp/Telegram/Signal already does
             | (create a private key on the end devices, and sign the
             | content to verify the authenticity of the publisher).
             | 
             | Requiring HTTPS was a great first step for tech companies
              | to protect people from ISPs. But they do nothing to
             | protect people from themselves being compromised (the
             | Twitter incident was a great proof for this).
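
The fingerprinting mentioned above is usually a perceptual hash. A toy average-hash (assuming images arrive as 2D lists of grayscale values) shows why it tolerates recompression noise where a cryptographic hash would not: it only records which pixels are brighter than the image's mean.

```python
def average_hash(image):
    """Bit string: 1 where a pixel is brighter than the image mean."""
    pixels = [p for row in image for p in row]
    mean = sum(pixels) / len(pixels)
    return "".join("1" if p > mean else "0" for p in pixels)

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return sum(x != y for x, y in zip(a, b))

original = [[200, 200, 10, 10], [200, 200, 10, 10]]
recompressed = [[198, 203, 12, 9], [201, 199, 11, 13]]  # slight noise
different = [[10, 10, 200, 200], [10, 10, 200, 200]]

print(hamming(average_hash(original), average_hash(recompressed)))  # 0
print(hamming(average_hash(original), average_hash(different)))     # 8
```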
        
               | gruez wrote:
               | > Fingerprinting can take care of the recompression
               | problems.
               | 
                | That only opens another problem: adversarial attacks
                | that look totally different but have a similar
                | fingerprint.
               | 
               | >Right now what I see though is that
               | Facebook/Google/Twitter aren't even trying to do the bare
               | minimum end-to-end authentication that
               | Whatsapp/Telegram/Signal already does (create a private
               | key on the end devices, and sign the content to verify
               | the authenticity of the publisher).
               | 
               | >Requiring HTTPS was a great first step for tech
                | companies to protect people from ISPs. But they do
               | nothing to protect people from themselves being
               | compromised (the Twitter incident was a great proof for
               | this).
               | 
               | Mainly because to 99.9% of users, there's no difference
               | between a message that's signed by the author, and a link
               | to a tweet that's made by the author. Even if you're the
               | 0.1% that do care about public key cryptography it
               | doesn't matter because you're trusting the site to do the
               | verification and key management. It's not like PGP where
               | you can get the public keys and verify yourself.
               | 
               | There's also the problem that people simply don't care.
               | Have you seen how many photoshopped tweets end up on
                | social media? If people are willing to believe a screenshot
               | without a link (which is trivially easy to add and
               | verify), what makes you think they won't believe a
               | screenshot without a signature?
        
           | Nasrudith wrote:
            | Signing it doesn't mean it wasn't altered at some point and
            | does nothing to actually verify the content - only that a
            | source is who they say they are. And key distribution is
           | another issue.
        
         | ericlewis wrote:
         | I think religion might have proved that people will believe
         | whatever they want to first. The book Sapiens really opened my
         | eyes to people.
        
           | justforyou wrote:
           | >> people will believe whatever they want to first.
           | 
           | Including Harari. There's more wrong in that book than there
           | is right, aside from basic facts that people should already
           | know.
           | 
            | Christopher Ryan (Civilized to Death & Sex at Dawn) explores
            | the few compelling points made by Sapiens in depth and
            | without all of the disingenuousness and naive philosophy
            | present in Sapiens.
        
           | paulryanrogers wrote:
           | Well it took me 35 years but I escaped religion despite being
           | indoctrinated from a young age. And leaving was a difficult
           | choice but education and evidence were the keys that freed
           | me.
           | 
           | Piercing cognitive biases is hard, maybe harder if those most
           | in need feel forced.
        
             | elliekelly wrote:
             | > Piercing cognitive biases is hard, maybe harder if those
             | most in need feel forced.
             | 
             | I think about this article (post?) called "How to Change a
             | Mind"[1] a lot. It's about a woman whose husband was in a
             | religious cult and how he finally realized it.
             | 
             | [1]https://forge.medium.com/how-to-change-a-
             | mind-1774681b9369
        
             | TheSpiceIsLife wrote:
             | Be kind to yourself and know that a fair swathe of the
             | early years of life are necessarily of low agency.
        
             | ericlewis wrote:
              | I could have clarified more - I'm saying that even with
              | these sorts of deep fakes, breaking people out might be
              | just as hard. Especially if objective proof becomes
              | "malleable", similar to how religion can lack objective
              | proof, then manipulating people in a new way means we need
              | new methods.
        
               | paulryanrogers wrote:
               | Your point seemed clear enough to me. Perhaps my response
               | drifted too far. I see education as a common solution to
               | the "it's all relative" and "no one can know for sure"
               | shortcuts some people take to hold on to what's more
               | comfortable to believe.
               | 
               | And just as religion may often teach people to turn off
               | their critical thinking skills they can be taught to use
                | them again. And human brains evolved to recognize
                | unhealthy or unnatural facial imagery over millennia. So
               | unmasking the more successful techniques may be enough to
               | break us out of the "anything can be fake" malaise.
        
           | TheSpiceIsLife wrote:
           | This is on point, but we need to be clear on the direction of
           | causality, which I believe you have correct, though I'll try
           | to state it more clearly:
           | 
           | The human mind and body is predisposed for social cohesion
           | through shared narrative, and this exposed a weakness in our
           | psychology: we are, as groups, extraordinarily gullible.
           | 
           | As a result we have religion, conspiracy theories, politics,
           | and crime.
           | 
           | It remains to be seen whether this is an evolutionarily fit
           | strategy.
        
           | Erlich_Bachman wrote:
           | This is a very low-resolution understanding of what a
           | religion is. It has much more depth to it. (The writer of
           | Sapiens actively tries to ignore this by reducing every
           | single belief humans have (including religions) to random
           | meme propagation.)
        
         | fny wrote:
          | Because YouTube and Facebook will flag these videos before
         | become too widespread.
        
           | puranjay wrote:
           | A video being taken down just corroborates their belief that
           | there is some deep conspiracy behind X event
        
           | pornel wrote:
           | You're missing a sarcasm mark.
           | 
           | But seriously: at best it's going to be a big "maybe, it
           | depends". These services bounce between PR disasters of
           | "complicit purveyors of propaganda" and "oppressive
           | algorithms destroying free speech", so you can't rely on them
           | to be on either side.
           | 
           | As deepfakes get better, the algorithms will have worse
           | detection rate, so it'll end with Zuck testifying to congress
           | that Facebook doesn't want to decide what is real, so
           | everyone can post deepfakes all day long.
        
             | Nasrudith wrote:
             | Perhaps the wild zigzag is a hint that the people casting
             | accusations are utterly full of shit and just want them to
             | bend the knee to their unpleasable whims?
        
             | hellofunk wrote:
             | > You're missing a sarcasm mark.
             | 
             | I don't see any sarcasm there.
        
               | [deleted]
        
           | travisoneill1 wrote:
           | "I trust Google and FB, and I would like them to decide what
           | is true or not for me instead of deciding myself"
        
             | puranjay wrote:
             | Amazes me how people are okay with giving these massive
             | corporations so much power and control over our lives -
             | voluntarily. Most would be rightfully horrified if Walmart
             | or Citibank was to decide what you get to watch or not
             | tomorrow. But somehow, if Alphabet or Apple does it, it's
             | all okay.
             | 
             | The tech world in general needs to wake up about the
             | overreaching power and influence tech companies have on our
             | lives.
        
               | Nasrudith wrote:
                | That dumb meme about "big tech controlling your mind"
                | ironically seems to be doing a far better job of what it
                | accuses others of. It has made people /unlearn/ how the
                | internet and private property work.
               | 
                | Even more egregious examples of moral wrong and
                | stupidity, like shutting down child molestation survivor
                | support groups, or just bad decisions like buying up
                | Tumblr and trying to sanitize it, weren't liked but were
                | accepted as fundamentally how it works when it is their
                | servers. There is no viable alternative to stop them from
                | setting their rules. Even if they die, there is nothing
                | stopping them from taking their ball and going home.
        
               | tomxor wrote:
               | I can only imagine it's because what seems obvious to us
               | is really not noticeable to 99% of people.
               | 
                | You have to remember we are in a little bubble of our
                | own, for better or worse; most HN readers' principles are
                | generally aligned against this kind of centralisation, and
                | we highlight and amplify anything we see that matches;
               | I'm guessing your average FB user is more focused on the
               | value proposition of these services they are consuming
               | and wont give the corporations behind them much of a
               | thought until it hits big news.
        
               | puranjay wrote:
               | Much of it depends on the way people interact with
               | businesses. Big Banks are bad because people can
               | literally see overdraft fees and all sorts of surcharges
               | being tacked onto their accounts, often in shady ways.
               | Big Oil is bad because they literally pollute the
               | environment and routinely spill millions of gallons of
               | oil in the ocean.
               | 
               | In contrast, for the average person, Google is just the
               | free little search engine that shows you all the answers
               | and also gives you the free phone software. Whatever
               | egregious privacy violations they do are wrapped up in a
               | layer of abstraction that, if you don't get technology,
               | can be hard to grasp.
               | 
               | The next generation is smarter though. They understand
               | technology fundamentally and know that there's no such
               | thing as "free".
        
       | JimiofEden wrote:
       | This seems like test driven development, but applied on a much
       | larger scale.
        
       | bsaul wrote:
       | Maybe advanced DRMs will be a way forward? Have the camera
       | fingerprint / sign the video, then have every editing tool
       | fingerprint & sign the changes it performs, and send everything
       | to a ledger? That may be the only way to make sure a video is
       | actually coming from the real world...
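       | A minimal sketch of that hash-chain ledger idea, with made-up
       | payloads (a real system would use camera-held signing keys and
       | public-key signatures, not bare hashes):

```python
import hashlib

def record(prev_hash: str, payload: bytes) -> str:
    """Append one step to the chain: the hash covers the previous hash
    plus this step's payload, so tampering anywhere breaks every later
    link."""
    return hashlib.sha256(prev_hash.encode() + payload).hexdigest()

# The camera "signs" the raw footage, then each editor appends its change.
h0 = record("genesis", b"raw-footage-bytes")
h1 = record(h0, b"edit: color correction")
h2 = record(h1, b"edit: crop to 16:9")

# A verifier replays the chain; any altered step yields a different tip.
replayed = record(record(record("genesis", b"raw-footage-bytes"),
                         b"edit: color correction"),
                  b"edit: crop to 16:9")
assert replayed == h2

# A forged edit history does not reproduce the published tip hash.
forged = record(record("genesis", b"raw-footage-bytes"),
                b"edit: crop to 16:9")
assert forged != h2
```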
        
         | ed25519FUUU wrote:
         | We don't need this for photographs. Why do we need it for
         | video?
        
       | 29athrowaway wrote:
       | Deepfakes are generated by a generative adversarial network.
       | 
       | There are two networks: a generator, and a discriminator.
       | 
       | The generator generates a result, the discriminator evaluates
       | that result.
       | 
       | This AI that detects fake faces could be used as the
       | discriminator, so that the GAN generates even better results.
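       | A toy, purely illustrative sketch of that loop: a fixed "looks
       | real" score plays the discriminator, and a one-parameter
       | generator climbs that score by gradient ascent until the
       | detector is fooled (names and numbers are made up):

```python
import math

def detector(x):
    """Toy 'looks real' score in (0, 1]: highest when the sample sits
    at the pretend real-data mean of 1.0, falling off smoothly."""
    return math.exp(-(x - 1.0) ** 2)

def fool_detector(steps=500, lr=0.05, eps=1e-5):
    g = 0.0  # the generator's single parameter: the value it outputs
    for _ in range(steps):
        # Estimate d(score)/dg by finite differences (no autograd here)
        # and take a gradient-ascent step toward "looks more real".
        grad = (detector(g + eps) - detector(g - eps)) / (2 * eps)
        g += lr * grad
    return g

g = fool_detector()
# The generator's output now sits at the detector's maximum, i.e. the
# fixed detector is fully fooled.
print(round(g, 3), round(detector(g), 3))
```

In a real GAN both networks update in alternation; this sketch freezes the discriminator, which is exactly the "optimise against the deployed detector" attack mentioned elsewhere in the thread.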
        
       | adfhnionio wrote:
       | I find all this concern a little bit overblown. Yes, it is a
       | problem that we can make extremely convincing fakes, but we've
       | had fakes that can fool non-experts for a very long time. The
       | Soviet Union doctored a great many photographs in a way invisible
       | to me (examples:
       | https://en.wikipedia.org/wiki/Censorship_of_images_in_the_So...).
       | Why are we more concerned about this fakery than about
       | airbrushing?
       | 
       | The solution is the same as it's always been: stick to
       | trustworthy sources and insist that all evidence is traced and
       | corroborated. It remains easy to learn the truth as long as you
       | make a good faith effort to do so. In the worst case we can just
       | stop trusting photographs altogether. We got by just fine before
       | the camera was invented; we can do fine after it becomes
       | obsolete.
        
         | mooneater wrote:
         | Because good fake video is now possible for the first time;
         | until now, good video was nearly impossible to fake. Because
         | people believe certain leaders who are willing to use false
         | evidence. Because the populace is not prepared to be that
         | discerning.
        
       | codecamper wrote:
       | This is getting stupid. People.. aging, cancer, Alzheimer's still
       | exist. Maybe time to reprioritize & organize efforts?
        
         | Nasrudith wrote:
         | Those are completely different deep specialties, and they're
         | best funded in parallel anyway given diminishing returns.
         | Complaining that this work is irrelevant to those problems is
         | like saying brain surgeons suck at inventing green energy.
        
       | mbrumlow wrote:
       | This will just be like antivirus. People will start to train
       | their algos to fool the current algo that is being used. It
       | will be a never-ending fight.
        
         | 3wolf wrote:
         | That's GAN training in a nutshell.
        
       | ThomPete wrote:
       | This is the antivirus war all over again.
        
       | arielbaz wrote:
       | see also "Training a deep learning model for deepfake detection"
       | ( https://news.ycombinator.com/item?id=22433711 )
        
       | hairofadog wrote:
       | Can god create a fake so deep that they themselves can't detect
       | it?
        
         | neutrallinked wrote:
         | DNA will set them apart
        
       | neutrallinked wrote:
       | We should also look towards simpler alternatives. (For example,
       | some days back I read about a profile made on a social media
       | platform with a "generated" face.)
       | 
       | 1) To prevent "generated" profiles with fake faces on image
       | platforms, maybe request 2 profile pictures instead of one, a
       | consistency that is difficult to fake at high resolution.
       | 
       | 2) For deepfake videos, maybe the idea is to fight the "video"
       | part and not the "deep fake" part. By that I mean "signed"
       | content: ABC News should "sign" the videos it produces, and so
       | should other publishing houses, so that other sources pushing
       | forged voices or other "faked" content can't pass it off as
       | genuine.
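       | A stdlib-only sketch of the signing idea, using HMAC as a
       | stand-in (a real deployment would use public-key signatures,
       | e.g. Ed25519, so anyone can verify without holding the secret;
       | the key and payloads here are made up):

```python
import hmac
import hashlib

# Hypothetical publisher secret; with public-key signing this would be
# a private key, and verification would need only the public half.
PUBLISHER_KEY = b"example-news-secret-key"

def sign(video_bytes: bytes) -> str:
    """Produce an authentication tag the publisher attaches to a video."""
    return hmac.new(PUBLISHER_KEY, video_bytes, hashlib.sha256).hexdigest()

def verify(video_bytes: bytes, tag: str) -> bool:
    """Check a tag in constant time; fails for any altered content."""
    return hmac.compare_digest(sign(video_bytes), tag)

tag = sign(b"broadcast-2020-08-09.mp4 contents")
assert verify(b"broadcast-2020-08-09.mp4 contents", tag)   # genuine
assert not verify(b"deepfaked contents", tag)              # forgery
```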
        
       | dependenttypes wrote:
       | There is a simpler alternative imo: consider any news story
       | that does not list all of its sources to be fake news. Treat
       | news stories as if they were scientific papers.
        
       | [deleted]
        
       | olivermarks wrote:
       | The challenge with algos like this is that they could be used to
       | claim events that actually did happen didn't. So as an example
       | credible/convincing footage of Jeffrey Epstein by a pool in
       | Paraguay last week could be 'identified as a deep fake' and
       | discredited, despite other supporting facts and information
       | that lend credence to its veracity.
        
       | alkonaut wrote:
       | This seems like it's "adversarial" to deep fakes. I wonder what
       | that can be used for.
        
       | varbhat wrote:
       | What if it gives false positives? What if it tags non-deepfake
       | news as deepfake news?
        
       | Erlich_Bachman wrote:
       | Hey, we all know that this will not be the end of deepfakes. It
       | will just be another channel of information to think about. Now
       | we have to care about whether this algorithm is correct, or
       | whether it has been manipulated by one political party to claim
       | that the other's picture is fake...
       | 
       | But it had to be created at some point, it had to exist. This is
       | just an inevitable next step of the progress.
        
       ___________________________________________________________________
       (page generated 2020-08-09 23:00 UTC)