[HN Gopher] The road to realistic full-body deepfakes
       ___________________________________________________________________
        
       The road to realistic full-body deepfakes
        
       Author : Hard_Space
       Score  : 164 points
       Date   : 2022-09-22 14:15 UTC (8 hours ago)
        
 (HTM) web link (metaphysic.ai)
 (TXT) w3m dump (metaphysic.ai)
        
       | prox wrote:
        | I have been wondering if a human 3d model (which is quite
        | realistic, but not 100% there yet) can be improved by better
        | texturing - after the render - for complete immersion. So you
        | use a motion tracked animation of a 3d model (or a static one
        | for a picture) and then apply a way to make the last bit more
        | convincing with better texture and lighting.
        
         | echelon wrote:
         | I have year old demos on https://storyteller.io.
         | 
          | Some of the others in this space have great results:
         | https://imgur.io/seBTPG8
         | 
         | We've perfected voice replacement and I'll have more to show
         | soon.
        
           | prox wrote:
           | Cool!
           | 
            | The animation - is that a 3d actor with the face replaced
            | by AI? Could you explain what you did there?
        
             | echelon wrote:
             | We're using mocap - both computer vision based and full
             | body. We're also exploring text/audio -> animation, which
             | will be good for quick animation workflows.
        
       | alexose wrote:
       | Maybe a dumb idea, but I wonder if there's a future in
       | cryptographically signing videos in order to prove provenance.
       | I'm imagining a G7 meeting, for instance, where each participant
       | signs the video before it's released. Future propagandists, in
       | theory, wouldn't be able to alter the video without invalidating
       | the keys. And public figures couldn't just use the "altered
        | video" excuse as a get-out-of-jail-free card.
       | 
       | It wouldn't solve any of the fundamental problems of trust, of
       | course (namely, the issue of people cargo-culting a specific
       | point of view and only trusting the people that reinforce it).
       | But, it would at least allow people to opt out of sketchy
       | "unsigned" videos showing up on their feeds.
       | 
       | I guess it would also allow people to get out of embarrassing
       | situations by refusing to sign. But, maybe that's a good thing?
        | We already have too much "gotcha" stuff that doesn't advance the
       | discourse.
        
         | rpmisms wrote:
         | Just hash the original video
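
The parent's signing idea, plus this "just hash it" step, can be
sketched in a few lines of Python (stdlib only). The HMAC "signature"
below is only a stand-in for illustration: a real provenance scheme
would use a public-key signature such as Ed25519, so anyone could
verify without holding a secret key.

```python
import hashlib
import hmac

def video_digest(path, chunk_size=1 << 20):
    """SHA-256 of a file, read in chunks so large videos fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def sign_digest(digest, key):
    """Stand-in 'signature': HMAC over the digest. A real scheme would
    use a public-key signature (e.g. Ed25519) for public verifiability."""
    return hmac.new(key, digest.encode(), hashlib.sha256).hexdigest()

def verify_digest(digest, key, tag):
    """Check a tag against a digest without leaking timing information."""
    return hmac.compare_digest(sign_digest(digest, key), tag)
```

Chunked reading keeps memory flat for multi-gigabyte videos, and any
single-bit edit to the file changes the digest and invalidates the tag.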
        
         | kyleplum wrote:
         | As I mentioned in another comment - there is such an effort
         | underway https://contentauthenticity.org/
         | 
         | They don't intend to dictate who can authorize media, only
         | provide a verification mechanism that the media was sourced
         | from the place it claims to have been sourced from and is
         | unaltered.
         | 
         | I think of it as https but for media content.
        
           | tomrod wrote:
           | Had this idea a few years ago. Great to see it getting legs.
        
             | Frost1x wrote:
             | Ditto, although I was thinking of a slightly different
             | approach. I didn't think anyone was actively doing anything
             | but I love when I have ideas that seem independent and they
             | magically become 'realized' from my ignorance because
             | someone else did it already.
        
           | alexose wrote:
           | Woah! I had no idea something like this would be so far
           | along.
           | 
           | It seems like they're on the right track. I think the key is
           | to keep scope creep to a minimum. As soon as someone tries to
           | add DRM, for instance, the whole effort will go up in flames.
        
         | r3trohack3r wrote:
         | I hear this ethical concern raised a lot, usually as some
         | variation of AI being used to distribute "fake news."
         | 
         | The inverse is equally problematic and harder to solve: those
         | in power discrediting real photos/videos/phone-calls as "deep
         | fakes."
         | 
         | Not releasing AI models doesn't stop this. The technology being
         | possible is sufficient for its use in discrediting evidence.
         | 
         | Signing real footage isn't sufficient. You can get G7 to sign
         | an official conference recording, but could you get someone to
         | sign the recording of them taking a bribe?
         | 
         | Generating deep fakes that hold up to intense scrutiny doesn't
         | appear to be technically feasible with anything available to
         | the public today. But that isn't necessary to discredit real
         | footage as a deep fake. It being feasible that nation state
         | level funding could have secretly developed this tech is
         | sufficient. It seems we are quickly approaching that point, if
         | not already past it.
        
         | rlpb wrote:
         | I imagine realtime cryptographic timestamping services combined
         | with multiple videos of the same event taken from various
         | perspectives, by multiple witnesses connected to viewers by a
         | web of trust, with good discoverability of the different
         | authenticated viewpoints.
         | 
         | Combining all of those things would make it impractically
         | difficult to fake a scene without knowing what you want to fake
         | in advance as well as developing credible witness reputations
         | even further in advance.
         | 
         | For example, imagine a car accident caught by dashcams. You'd
         | not only have your own dashcam footage certified to have been
         | produced no later than the event by a timestamping service, but
         | also corroborating footage from all other nearby traffic also
         | certified in the same way but by other, competing services.
         | 
         | It'd be the future equivalent of having many independent
         | witnesses to some event.
         | 
         | Maybe it won't be necessary to go quite as far, but I think it
         | would be possible for recordings to remain credible in this
         | way, should the need arise.
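
The timestamping piece above could look roughly like this sketch. The
service key, token layout, and HMAC are invented stand-ins; real
timestamping authorities (e.g. RFC 3161 TSAs) sign with X.509 keys so
that verification needs no shared secret.

```python
import hashlib
import hmac
import json
import time

# Hypothetical service secret; a real timestamping authority would hold
# a private signing key and publish the matching public key.
SERVICE_KEY = b"timestamp-service-secret"

def issue_token(content_hash, now=None):
    """Service side: attest that content_hash existed no later than now."""
    token = {"hash": content_hash,
             "time": int(time.time()) if now is None else now}
    payload = json.dumps(token, sort_keys=True).encode()
    token["sig"] = hmac.new(SERVICE_KEY, payload, hashlib.sha256).hexdigest()
    return token

def check_token(token):
    """Verifier side: recompute the signature over (hash, time)."""
    payload = json.dumps({"hash": token["hash"], "time": token["time"]},
                         sort_keys=True).encode()
    expected = hmac.new(SERVICE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["sig"])
```

Corroboration then comes from collecting tokens for the same footage
hash from several independent services - the "many independent
witnesses" property described above.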
        
       | superkuh wrote:
       | None of the videos on this page really look convincing. In terms
       | of generating static photos existing "photoshops" people have
       | been making for 25 years are far better. I don't see the need to
       | clutch pearls and call for new laws to put people in prison quite
       | yet.
       | 
        | But even the failures at temporal coherence have their own
        | aesthetic appeal. Like all of this stuff has been, it's very
        | "dreamy" the way the clothing subtly shifts forms.
       | 
        | Beyond the coolness I'm glad that individual people are getting
        | access to digital manipulation capabilities that have previously
        | only been available to corporations, institutions, and
        | governments.
        
         | staticassertion wrote:
          | I imagine that photoshopping videos at this quality or higher
          | is going to take way longer and be a much more specialized
          | skill.
        
       | beders wrote:
       | Now everyone can build their own Star Wars sequel movies! I was
       | wondering about that after the disaster that was TROS.
       | 
       | I didn't think it would be possible to do in this decade, but we
       | seem to be making progress fast now. Very impressive to see. (and
       | scary)
        
       | runeks wrote:
       | How about first making deepfake faces actually believable?
       | 
       | Seems like every AI project does something halfheartedly, ponders
       | _what the world will be like_ once it's perfected, and then
       | starts the next project long before the first project is actually
       | useful for anything but meme videos.
        
         | armchairhacker wrote:
         | Even AIs which have existed for years and been "perfected" are
         | very noticeably not-human. Though they do look believable from
         | far away, up close they are still in the uncanny valley.
         | 
         | For instance Siri and Google Voice: they are clearly
         | _understandable_ but they sound noticeably different than real
         | people.
         | 
         | Or Stable Diffusion which will supposedly put real artists out
         | of business. It is definitely viable for stock photos, but I
         | can usually tell when an image was made by Stable Diffusion
         | (artifacts, incomplete objects, excessive patterns).
         | 
         | thispersondoesnotexist.com faces can also be spotted, though
         | only if I look closely. If they are a profile pic I would
         | probably gloss over them.
         | 
         | In fact, I bet you can make an ML model which very accurately
         | detects whether something was made by another ML model.
         | Actually that's a good area of research, because then you can
         | make a deepfake model which tries to evade this model and it
         | may get even more realistic outputs...
         | 
         | Ultimately I think we will see a lot more AI before we start
         | seeing truly indistinguishable AI. It's still close enough that
         | the ethical concerns are real, as people who don't really know
          | AI can be fooled. But I predict it will take at least a while
          | before even a consensus of trained "AI experts" can no longer
          | determine authenticity.
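
The detector-vs-generator idea in the last paragraph can be shown with
a deliberately tiny toy: no neural nets, just one invented scalar
"feature" per image and a threshold detector. Every name and number
here is made up for illustration.

```python
import random

random.seed(0)

def best_threshold_acc(real, fake):
    """Accuracy of the best single threshold separating the two sample
    sets - a stand-in for training a detector model."""
    n = len(real) + len(fake)
    best = 0.0
    for t in sorted(real + fake):
        acc = (sum(r > t for r in real) + sum(f <= t for f in fake)) / n
        best = max(best, acc)
    return best

# "Real" images reduced to a single invented feature clustered near 1.0.
real = [random.gauss(1.0, 0.05) for _ in range(300)]

accs = []
fake_mean = 0.7                # generator starts clearly distinguishable
for _ in range(15):
    fake = [random.gauss(fake_mean, 0.05) for _ in range(300)]
    accs.append(best_threshold_acc(real, fake))
    fake_mean += 0.5 * (1.0 - fake_mean)  # generator update: evade detector

# Early rounds: detector near-perfect; late rounds: barely above chance.
```

A real version of this arms race is GAN training; the point of the toy
is just that each detector improvement hands the generator a sharper
signal about what to fix.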
        
         | black_puppydog wrote:
          | That's what goes to media. "Engineers scrape the last little
          | artifacts off deep-fake still images" just doesn't make for
         | "good" headlines.
         | 
         | Somewhere, someone is working hard to perfect these. In this
         | particular case probably under NDA... le sigh
        
         | BudaDude wrote:
         | I didn't know me and AI had so much in common.
        
         | walls wrote:
         | They're believable enough for video calls:
         | https://www.dw.com/en/vitali-klitschko-fake-tricks-berlin-ma...
        
           | kleiba wrote:
           | As far as I remember those calls were actually not made with
           | deep fake tech, but by reusing video material from a previous
           | call, skillfully edited to be believable enough.
        
         | kabes wrote:
          | The company behind this post recently got into the America's
          | Got Talent finals with a deepfake act. It looked pretty
          | convincing to me. Especially compared to the state of the art
          | of just 2-3 years ago.
        
           | thrown_22 wrote:
            | >It looked pretty convincing to me. Especially compared to
            | the state of the art of just 2-3 years ago.
           | 
           | This has been the case for decades now. Much more realistic
           | than x isn't a good enough metric. It needs to be
           | indistinguishable from the real thing.
           | 
           | I'm old enough to remember this being called photo realistic:
           | https://static1.thegamerimages.com/wordpress/wp-
           | content/uplo...
           | 
           | And it was, compared to everything that had come before. Now
           | ... not so much.
        
             | kabes wrote:
             | https://youtu.be/TVezHTlPMw8
             | 
             | This is the act I meant. Judge for yourself, but I believe
             | we're close to bridging the uncanny valley
        
         | jcims wrote:
          | I think the problem we have with deepfake believability today
          | is that it just takes one weak link to spoil it. It turns out
          | that we're somehow still pretty bad at believable audio and
          | not even close with deepfaked 'presence' in the form of
          | persona of motion. But if you pair a believable impersonation
          | with something even remotely state of the art in the visual,
          | you end up with something pretty compelling:
         | 
         | https://www.tiktok.com/@deeptomcruise
         | 
         | https://www.youtube.com/watch?v=kjI-JaRWG7s
         | 
         | https://www.youtube.com/watch?v=VWrhRBb-1Ig
         | 
         | https://www.youtube.com/watch?v=bPhUhypV27w (Not the greatest
         | visually but funny nonetheless, esp the end)
        
       | overthemoon wrote:
       | That the two splashy examples are hot people in their underwear
       | is pretty telling for what one major use of this will be. Makes
       | me feel weird. I find takes on deepfakes fraying shared
        | epistemology alarmist - people will continue to believe whatever
        | they want to believe, and falsifying evidence is still a crime -
       | but the ability to conjure moving images of whatever human body
       | you want without that person's permission feels bad. DALL-E
       | adding protections against sexual or violent imagery is a short
       | term solution, at best, IMO. Maybe I'm being alarmist, too.
       | Perhaps it won't be as easy as toggling a switch next to your
       | friend's photo to take their clothes off.
        
         | r3trohack3r wrote:
         | > Perhaps it won't be as easy as toggling a switch next to your
         | friend's photo to take their clothes off.
         | 
          | Unless a reference image already exists, whatever the switch
          | does will be a guess. Many motivated folks already do this with
         | photoshop; it's all over 4chan and similar message boards
         | (request threads) and has been that way for at least a decade.
         | 
         | This is already the reality for celebrities with photoshop -
         | their likeness is returned unclothed in image search.
         | 
         | That's not their body
        
           | AlecSchueler wrote:
            | That's not really comparable though, as it's basically
            | composite work. The AI has the ability to infer and then
            | "imagine" with photorealistic results.
           | 
           | There would be small details kept intact between the source
           | image and the output that would make it feel much more
           | personal than even the best manual fakes of today.
        
             | r3trohack3r wrote:
             | I'm not sure I'm convinced.
             | 
             | There is a lot of variation in details between human bodies
             | that are covered by clothing.
             | 
             | You can infer some things, like skin tone and hair color,
             | from other parts of the exposed body with pretty decent
             | accuracy. You can infer general body shape from how the
             | clothes fit. But for things like size, shape, color, hair,
             | birth marks, moles, surgical modifications, etc. of various
             | concealed body parts? All those vary wildly from person to
             | person. Unless you have a reference image that you can use
             | to answer those questions - I can't imagine that you will
             | be able to infer those. If you can't infer those, you
             | aren't getting the real body of the person you are trying
             | to undress. You're getting a dream of what that person
             | might look like if they were to remove their clothes - a
             | dream that is not accurate.
             | 
             | Not to discredit what you are saying: those dream images
             | are definitely going to cause an entire generation of
             | discomfort. But the cat is out of the bag and has been for
             | some time. Artists were already capable of creating images
             | like this without consent - but it required more talent
              | than most humans possess to get that onto paper. Photoshop
             | made it possible too. AI is making it even easier.
             | 
             | Society is weird about nudity. To be fair, I am too. We
             | have all of these constructs built around the human body
             | and concealing it that many of us have bought into.
             | 
              | At its core, I think the fear of this tech and nudity is
             | that it will be used to "steal dignity" from folks. The
             | question is: can you steal dignity from someone with pencil
             | and paper? Is a photorealistic sketch of your friend
             | unclothed sufficient for them to have lost their dignity?
             | What about photoshop? How about passing your photorealistic
             | sketch through an AI to make it even more photorealistic?
             | At what point have you robbed someone of dignity? Robbing
             | someone of dignity is a social construct, in some ways this
             | form of dignity stealing is something we _allow_ people to
             | do to one another by buying into that construct. I do feel
             | like the narrative we should be pushing is "that isn't my
             | body." If we invest in breaking the construct, my hope is
             | that we can remove the power this holds over people.
        
             | russdill wrote:
             | We all have the ability to infer and then "imagine" the
             | results.
        
               | AlecSchueler wrote:
               | But we don't all have the ability to render our
               | imaginations as photorealistic jpegs
        
               | autoexec wrote:
               | What harm would it cause if we did? If I could imagine
               | you naked and produce a JPG of my fantasy it would still
               | only be fantasy. It doesn't matter if I'm making JPGs,
               | cutting your head out of photos and gluing them to
               | catalogue models, or if I've got a supercomputer making
               | deepfakes. It's still just fantasy... speculative
               | fiction.
        
         | xwdv wrote:
         | That's what you find alarming? Some fake photos of people's
         | clothes coming off? You can already take a bikini photo of
         | someone and make a plausible estimate of their naked body.
         | 
         | What's incredibly alarming is how this tech will eventually be
         | twisted with evil to create child pornography at scale, leading
         | to "conflict-free" porn generated on the fly that pedophiles
         | (aka _minor-attracted person_ for the politically correct here)
         | will use to fuel arguments for acceptance of their sick habits.
        
           | overthemoon wrote:
           | I feel like that was implied by my comment, but yes, believe
           | it or not, I do find that also alarming.
        
             | xwdv wrote:
             | I think you should have been more clear: the most alarming
             | use case of this tech would be some sick pedo taking
             | pictures of your child then using that source imagery to
             | generate fake porn and pleasuring himself all over it.
             | 
             | This should be very illegal.
        
               | maxbond wrote:
               | How about you express your views as an addition to the
               | conversation instead of as a criticism for other people
               | not expressing the particular variety of concern that you
               | have...?
        
               | SanderNL wrote:
               | (Warning: I'm having a very hard time determining if you
               | are trolling or are for real..)
               | 
               | That's not the most alarming use case of this tech. By
               | far. (IMHO)
               | 
               | Also, I find this reasoning very off-putting. Putting
               | child porn into a discussion kills it. All participants
               | are (mostly) willing and basically required to agree and
               | "let's not talk about this further".
               | 
               | The fundamental technology that underpins these
               | achievements is more than capable of destroying
               | civilization if things start to go south - which I
               | believe they will, sooner or later. I find that to be
               | more worthy of discussion than moral jousting about
               | things people do in their private lives that I will -
               | hopefully - never know about.
               | 
               | Let's all use our imagination and see where these kinds
               | of models, both diffusion and transformers can take us.
               | Sure they can generate plausible visual information, but
               | that's not all they can do. Some days ago someone posted
               | about ACT-1, a transformer for actions. People can and
               | will hook up these things in all sorts of complicated
               | pipelines and boy, generating some insensitive imagery is
               | way, way down on the list of things to worry about.
        
               | triyambakam wrote:
               | So you've thoroughly defended against the point about
                | talking about porn, but you give no examples of what you
                | say we should "truly worry about". Can you at least explain
               | further? Sounds too hand wavy
        
               | SanderNL wrote:
               | Good point. I _am_ being handwavey, sorry about that.
               | 
               | First, I see "AGI" as a real problem we'll have to face
               | at some point. I believe we will be too late by the time
               | we recognize it as a problem, so let's ignore that
               | "threat" for now.
               | 
               | The more pressing problem IMO is that, to use technical
               | terms, a _shitload_ of people will have to face the
               | reality that a software system is outperforming them on
               | just about anything they are capable of doing
               | professionally. I believe this will happen sooner than
               | later and I am totally not seeing society being ready for
               | that. Already I am seeing these models outperforming me -
                | and my colleagues - on quite a few important axes, which
                | worries me, as does the fact that they almost universally
                | dismiss it because it's not "perfect". I know it's hot
               | these days to either under- or overestimate AI, but I do
               | feel we have crossed a certain line. I don't see this
               | genie going back into its bottle.
               | 
               | Perhaps I'm still handwavey. I guess I am a handwavey
               | person and I'm sorry about that, but when I see GPT3
               | finishing texts with such grace I can't help but see a
               | transformer also being capable of finishing "motor
               | movements" or something else entirely like "chemical
               | compounds", "electrical schematics" or even "legal
               | judgements". I just found out about computational law
               | BTW, might interest someone. Even just the "common sense"
               | aspect of GPT3 is (IMO) amazing. Stuff like: we make eye
               | contact during conversation, but we don't when driving.
               | Why not? But also stuff like detecting in which room of
               | the house we are based on which objects we see. That sort
               | of stuff is amazing and it's a very general model too.
               | Not trained on anything specific.
               | 
               | I guess the core of what I'm saying is that "predicting
               | the next token" and getting it right often enough is
                | frighteningly close to what makes a large percentage of the
               | human populace productive in a capitalist sense. I know
               | I'm not connecting a lot of dots here, but I clearly lack
               | the space, time and perhaps more importantly, the
               | intelligence to actually do that. I fear I might be a
               | handwavey individual - in fact easily replaced by GPT#.
               | Do you now see why am I so worried? :)
        
               | triyambakam wrote:
               | Thanks for explaining, I appreciate it. And it makes
               | sense what you've shared.
        
               | jefftk wrote:
               | I agree it's extremely distasteful, but why should it be
               | illegal? Who is being harmed?
        
               | triyambakam wrote:
               | What if those photos then were shared? Someone might
               | accuse the parents
        
         | mikotodomo wrote:
        
         | londons_explore wrote:
         | > Perhaps it won't be as easy as toggling a switch next to your
         | friend's photo to take their clothes off.
         | 
          | That's totally a browser extension next year... Right click,
          | remove clothes...
          | 
          | When you think about it, ethically it's in the same ballpark
          | as right click, copy - something you'd probably also be doing
          | without asking the subject of the image.
        
           | [deleted]
        
       | devenson wrote:
       | Reminds me of the Michael Chrichton movie named "Looker"
       | https://www.imdb.com/title/tt0082677/
        
       | AlecSchueler wrote:
       | It's quite frightening to imagine what this could do when
       | weaponised against women, used for harassment and the creation of
       | nonconsensual pornography based on people's likeness. I wonder if
       | this is one of the first things we'll start seeing legislation
       | relating to.
       | 
       | It's also concerning to imagine the social impact this could have
       | on young boys as well, in a climate where pornography addiction
       | issues become more visible each year.
        
         | welshwelsh wrote:
         | I'm more concerned about censorship. China justifies their mass
         | internet censorship with pornography bans, which have high
         | public support. Will deepfakes push the US over the edge,
         | bringing the free Internet to an end?
         | 
         | I'm not concerned at all about pornography addiction, I don't
         | think that's real. On the contrary, pornography promotes
         | autonomy and independence by making people less dependent on
         | others for sexual stimulation. It's a massive social good, and
         | unrestricted pornography is the sign of a modern society.
        
           | PuppyTailWags wrote:
           | I don't think it's so simple and none of this is black and
           | white. Stigma around pornography is bad because it
           | unnecessarily restricts what adults may freely do with their
           | bodies, but not all pornography is produced with full and
           | uncoerced consent. Making an excuse to ban free speech by
           | banning unharmful pornography is bad, but unrestrictedly
           | producing fake porn of someone without their consent is also
           | bad.
        
           | AlecSchueler wrote:
           | I'm surprised that you so quickly equate the free internet
           | with the internet we have today. We already have widespread
           | suppression of certain types of pornography, most notably
           | that involving children.
           | 
           | The internet we have today is not free. The society we have
           | is not a wholly free one but we rightfully make trade-offs to
           | protect people.
           | 
           | We know that today there is already a huge issue of
            | nonconsensual pornography, revenge porn etc. Why is the line
            | of what is "free" drawn short of protecting these groups?
            | Why do we tolerate open abuse against women but not against
            | children? I
           | wonder if our outlook on women's safety as a society is
           | really as forward thinking as we would hope when we look
           | around the world today.
           | 
           | > "unrestricted pornography" is the sign of a modern society.
           | 
           | Is it though? In another world you could say the same thing
           | about drugs. Some people in America today might say it about
           | gun freedoms.
           | 
           | I don't know. I think there are lines to be drawn and I think
           | we can be open to discussing those without falling
           | immediately into hysterics about state overreach.
        
           | api wrote:
           | Banning porn in the US is a huge issue in the national
           | conservative camp, at least if you listen to a little bit of
           | their discourse around long term goals. If that camp comes to
           | power expect restrictions in the US which would probably
           | require enormous scale Internet clampdowns.
        
             | AlecSchueler wrote:
             | Banning pornography is the most extreme view. The question
             | is should there be regulations to ensure consent from those
             | whose likenesses are involved?
        
               | autoexec wrote:
               | Commercially, I'd agree someone should have compensation
               | for use of their likenesses, but what people choose to
               | draw, imagine, photoshop, or deepfake for non-commercial
               | use is their business and any state that regulates that
               | would be a dystopian nightmare.
        
       | wcoenen wrote:
       | What's going on with the scrolling behavior of this page? I'm
       | getting a very annoying "scrolling with inertia" behavior in
       | Chrome for desktop.
        
       | macrolime wrote:
       | I don't think you need videos with extreme levels of annotations
       | as this article suggests.
       | 
       | If a model is already trained on lots of images and captions, it
       | would probably be possible to just feed it tons of whatever video
       | and let it figure out the rest itself.
        
       | slfnflctd wrote:
       | Funny thing, as a clueless little kid in the 80s whose mind was
       | shaped by popular fiction, I often suspected this kind of thing
       | already existed back then. One of my 'gotcha' questions for
       | adults was, "I've only ever seen him on TV, so how do I know
       | Ronald Reagan is even real?"
       | 
       | Over 30 years later, while I would've never anticipated
       | smartphones... I really thought impersonation technology through
       | video & audio editing (not dependent upon look-alike actors)
       | would've been here sooner. Another example of wildly
       | underestimating the complexity of what might seem like a simple
       | problem.
        
         | whatshisface wrote:
          | In a sense, Ronald Reagan was not real. All of his speeches were
         | written by someone else, and he relied heavily on advisors. He
         | was a figurehead for his administration to a greater extent
         | than most presidents before and after. He was one of the few
         | presidents that may have actually been innocent of the bad
          | stuff that went on in the White House during his presidency
         | (Iran-Contra), because he never showed any indication of really
         | understanding it the way Nixon understood Watergate or LBJ
         | understood Vietnam.
        
           | munk-a wrote:
           | Can I briefly and humorously boil your statement above down
           | to "Ronald Reagan was probably innocent by way of sheer
           | ignorance"?
        
           | advantager wrote:
           | Ignorantia juris non excusat
        
         | squarefoot wrote:
         | > "I've only ever seen him on TV, so how do I know Ronald
         | Reagan is even real?"
         | 
         | This made me wonder how many of the newer generations of
         | social media addicts would think along the lines of "I've only
         | ever seen him in person, so how do I know he is real?".
        
           | NavinF wrote:
           | Am I on HN or /r/oldpeoplefacebook?
        
             | [deleted]
        
             | deejaaymac wrote:
             | made me LOL
        
         | [deleted]
        
         | robot9000 wrote:
         | Remove the "as a kid" part and you're now a conspiracy
         | theorist, or one of _those_ people.
        
       | skilled wrote:
       | Hmm, this does make me wonder what kind of effect deepfakes
       | will have on people's general perception of the world.
       | 
       | I might be reaching here, but wouldn't this lead to people
       | being more mindful of what they watch and interact with? I think
       | all it will take is a few "state of the art" deepfakes to
       | cause a ruckus, and the domino effect should do the rest.
       | 
       | Has anyone in the field spent time thinking about this, or had
       | similar notions?
        
         | drc500free wrote:
         | The ability to use this to plausibly deny any real evidence is
         | more chilling than the fake evidence that could be created.
        
         | showerst wrote:
         | Photoshop has been common knowledge for years, and people still
         | buy some very dumb edits.
         | 
         | I imagine that deepfakes will follow a similar path to edited
         | photos -- lots of deception, followed by trustworthy sources
         | gaining a little more cachet, but with many people still
         | getting fleeced. Skepticism will ramp up in direct relation to
         | youth, wealth, and tech-savvy.
        
           | smrtinsert wrote:
           | Even simple video fakes, such as slowing down a politician's
           | speech to make them look slow or indecisive, have gone viral.
           | It doesn't take state of the art to lie to those who prefer
           | their own echo chamber.
        
             | autoexec wrote:
             | Plenty of people have misled others online with nothing
             | but text! Ultimately we're going to have to just accept the
             | fact that you can't believe everything you see on the
             | internet.
        
               | kadoban wrote:
               | Video is more effective than text, because people think
               | they've seen whatever event and formed their own
               | conclusions. Those are much stronger than just being told
               | what happened.
        
               | autoexec wrote:
               | I've seen it argued that text is worse because it forces
               | people to read the words with their own inner voice.
               | Somewhere in this discussion is a guy who linked to
               | studies saying you are incapable of reading anything
               | without believing it. (Do you believe me?)
               | 
               | Text, photoshop, special effects, deepfakes they're all
               | just tools for spreading ideas, but we've been dealing
               | (to some degree of success) with folks telling lies for
               | as long as we've had language. I just can't see this
               | fundamentally changing anything except the level of
               | skepticism we give to video which (considering what
               | hollywood has been capable of for some time) we should
               | have been developing already.
        
         | jsty wrote:
         | If you have access to BBC iPlayer, "The Capture" is a really
         | good fictional programme / drama exploring the possible
         | implications re. justice and politics
        
           | wingspar wrote:
           | In the US, it's on Peacock. Enjoyed it very much. I think we
           | had watched it on PBS.
           | 
           | It's a surveillance thriller.
        
         | aimor wrote:
         | We (people) already accept the lies we perceive. I think we
         | choose to accept these fantasies because they're often outside
         | our direct influence and when something is distant from us we
         | have the luxury of turning it into entertainment. I think of
         | beauty in media, every video we see today is processed to make
         | people look pretty. Most play along with the fantasy: admire
         | old celebrities for not aging, compliment friends for clear
         | skin. But when a friend says, "I feel so ugly" we move closer
         | to reality and acknowledge the makeup, beauty filters, etc. The
         | same effect happens in politics, news, business, technology:
         | people indulge in fantasy at their convenience.
         | 
         | I don't think people will be more mindful of what they watch
         | and believe, I think the opposite will happen: an attraction to
         | fake content. People will embrace the fantasy and share
         | deepfakes at a scale so large governments will be running
         | campaigns to alert the public that such-and-such video is fake,
         | possibly attempting to regulate how content shared online must
         | be labeled.
         | 
         | That said I still believe when these lies are closer to us,
         | enough for us to care either as professionals or friends and
         | family, that we will be more discerning about reality.
        
         | Swizec wrote:
         | As Abe Lincoln always said: Don't believe everything you read
         | on the internet.
         | 
         | But we do.
         | 
         | There's research. Even if you read something that you know is
         | wrong _you still believe it_. Especially when distracted or not
         | taking the time to analyze. As we rarely do.
         | 
         | https://techcrunch.com/2013/01/24/study-finds-that-we-still-...
         | 
         | https://www.businessinsider.com/why-you-believe-everything-y...
         | 
         | https://pubmed.ncbi.nlm.nih.gov/8366418/
        
           | autoexec wrote:
           | > Even if you read something that you know is wrong you still
           | believe it.
           | 
           | That seems like bullshit to me. I read your words (you even
           | posted links!) so how come I don't just instantly believe
           | you? If it were true, wouldn't it make all fiction inherently
           | dangerous?
           | 
           | Let's see how it holds up in real life... here's a lie: "My
           | uncle works at Nintendo and he told me that Mario (Jumpman at
           | the time) was originally intended to only have one testicle,
           | but the NES (famicom) didn't have powerful enough graphics to
           | show that, so they scrapped that part of his official character
           | design and have left the number of testicles unspecified ever
           | since."
           | 
           | Somewhere, secretly deep inside you, do you believe that now?
           | 
           | Nah. I think we don't have to worry about people believing
           | everything just because they read it. Reading things can put
           | ideas into your head (have you ever even considered Mario's
           | testicles before today?) but at this point we're straining
           | the hell out of "belief" and going into philosophical
           | arguments. In real life though, we are capable as a species
           | of separating fact from fiction some of the time.
        
           | tsol wrote:
           | I have suspected similarly. Skepticism and critical thinking
           | are useful devices, but they can't always tell a truth from a
           | lie. And even if they could-- humans aren't totally rational
           | beings. Sometimes we believe lies because we want them to be
           | true or because everyone around us does. Hell, sometimes
           | people believe things just to win an argument.
        
         | bsenftner wrote:
         | Back in '02-'04 I was a games/graphics programmer
         | working as a digital artist in feature film VFX. One area I
         | specialized in was stunt double actor replacements. Working on
         | Disney's "Ice Princess" I fixed a stunt double replacement shot
         | and realized a method of making the entire process generic, at
         | feature film quality.
         | 
         | By '06 I had an MBA with a Masters Thesis on the creation of a
         | new Advertising format where the viewer, their family and
         | friends are inserted into brand advertising for online
         | advertising. By '08 I had global patents and an operating
         | demonstration VFX pipeline specific for actor replacements at
         | scale. However, it was the financial crisis of '08 and nobody
         | in the general public had ever conceived of automated actor
         | replacements. This was 5-7 years before the term deep fake
         | became known. VCs simply disbelieved the technology was
         | possible, even when demonstrated before their eyes.
         | 
         | Going the angel investor route, I formed an investor pool 3
         | different times, only to have them realize at some point what
         | the technology could do with pornography, and then insist the
         | company pursue porn. However, we had Academy Award
         | winning people in the company, why would they do porn? We
         | refused and that was the end of those investors. With an agency
         | for full motion video actor replacement advertising not getting
         | financing, the award winning VFX people left and the company
         | pivoted to the games industry - making realistic 3D avatars of
         | game players. That effort was fully built out by '10, but the
         | global patents were expensive to maintain and the games
         | industry producers and studios I met simply wanted the service
         | for free. Struggled for a few years. We closed, sold the
         | patents, and I went into facial recognition.
         | 
         | https://patents.justia.com/inventor/blake-senftner
         | https://www.youtube.com/watch?v=lELORWgaudU
         | 
         | I was bitter about this all for a long time.
        
         | e40 wrote:
         | I think it will make it far easier to manipulate dumb people.
         | The same 30% (?) of the people who think the last US
         | Presidential election was stolen. These people will be easier
         | to whip into a frenzy. I worry this will increase the
         | likelihood of violence, above what is already happening.
        
           | tryauuum wrote:
           | (as a non-american) I don't think it was stolen, but the
           | whole "vote via mail" thing made me really suspicious
        
             | fknorangesite wrote:
             | > the whole "vote via mail" thing made me really suspicious
             | 
             | Why? Mail-in voting is hardly unique to the US; what made
             | you suspicious?
        
               | tryauuum wrote:
               | quite unique for my country (Russia). Though they
               | recently started to do some remote "blockchain-based"
               | voting in Moscow, which is widely considered to be a
               | fraud
        
               | foobiekr wrote:
               | I mean.. most things blockchain are.
        
             | diputsmonro wrote:
             | It really is nothing to be suspicious about. Full vote by
             | mail had already been the norm in some US states for years,
             | and most states allowed for it in specific circumstances.
             | The infrastructure, laws, etc., were already there, they
             | just needed to be expanded. Expanding it has always been in
             | the national conversation, it has just been a matter of
             | figuring it out and priority.
             | 
             | So when a global pandemic occurs and we're trying
             | everything we can to isolate and socially distance, that
             | priority changes real quick. People get talking and
             | problems get solved.
             | 
             | Of course, sore losers will complain about anything to
             | justify their loss, and this "new thing" was a prime
             | scapegoat. It was also well known ahead of time that the
             | mail in votes would be largely Democratic (because COVID
             | was VERY politicized and democrats were more likely to
             | follow quarantine guidance and therefore vote by mail). So
             | when the votes came in, they pointed to that imbalance and
             | called it "fraud".
             | 
             | Besides all that, there's no reason to be more suspicious
             | of mail-in ballots than in-person ones. In-person, you mark
             | a paper ballot and then put it in a stack... which then
             | gets mailed somewhere else. If someone is going to be
             | changing mail-in ballots, then they're already in a
             | position to be changing regular ones as well (and every
             | election security professional will tell you that paper
             | ballots are more secure than electronic ones).
        
               | tryauuum wrote:
               | It's true that the one who counts the votes matters and
               | this doesn't change with mail voting / in-person voting
               | 
               | The one advantage of physical voting I can think of is
               | the ability to just be close to voting station on voting
               | day, counting people who go in there, asking people (who
               | are willing to share) for whom they voted. This allows one
               | to independently check whether fraud exists.
        
               | gcanyon wrote:
               | Exit polls are notoriously inaccurate. Given the level of
               | fraud thus far demonstrated (minimal) there is zero
               | likelihood of "checking" by exit polls.
        
           | Turing_Machine wrote:
           | As opposed to the dumb people who spent 4 years claiming the
           | last-but-one presidential election was stolen, you mean?
        
             | capitalsigma wrote:
             | I never heard that claim. Only "the electoral college is a
             | bad system" or "voters were influenced by Russian
             | propaganda." Never "votes were impacted by direct fraud."
        
               | notdonspaulding wrote:
               | In terms of claiming the results of the election is
               | illegitimate, "voters were influenced by Russian
               | propaganda" instead of "votes were impacted by direct
               | fraud" seems like a distinction without a difference to
               | me.
               | 
               | https://news.yahoo.com/hillary-clinton-
               | maintains-2016-electi...
               | 
               | In 2020, Hillary Clinton was still casting aspersions
               | regarding the outcome of the 2016 election, sowing
               | discontent about the electoral college, preparing
               | Democrat voters to ignore the results until Joe Biden was
               | declared the winner.
               | 
               | Portraying this game as if it's only being played by one
               | team does not help restore any trust in the federal
               | election process.
        
               | sjsdaiuasgdia wrote:
               | There were fraud claims on the fringe just after the 2016
               | election. The evidence was sparse. It didn't take long
               | for even those pretty angry about the election to realize
               | fraud probably didn't happen, and if it did it was at too
               | small a scale to meaningfully affect the results.
               | 
               | Unfortunately in 2020 the fringe became the GOP
               | mainstream, treating equally soft claims as fact.
        
               | Turing_Machine wrote:
               | No, it wasn't "on the fringe". Note that this poll was
               | taken in 2020, a full four years later.
               | 
               | "Seventy-two percent (72%) of Democrats believe it's
               | likely the 2016 election outcome was changed by Russian
               | interference, but that opinion is shared by only 30% of
               | Republicans and 39% of voters not affiliated with either
               | major party."
               | 
               | https://www.rasmussenreports.com/public_content/politics/
               | gen...
        
               | sjsdaiuasgdia wrote:
               | By fraud I mean actual voter fraud. As in, effort was
               | made to cause invalid votes to be counted or valid votes
               | to not be counted.
               | 
               | Russia absolutely did and continues to push propaganda
               | into elections in the USA and elsewhere. That's not
               | really in dispute at this point so I'm not surprised it
               | polls that high.
               | 
               | Got a poll that shows similar numbers for fraud? I would
               | be genuinely surprised to see that.
        
               | Turing_Machine wrote:
               | There were many claims that voters were illegitimately
               | purged from the rolls, which is pretty much the
               | equivalent.
               | 
               | I should actually note here that I didn't vote for Trump,
               | either time, nor did I vote for Clinton or Biden.
               | 
               | I just hate hypocrisy.
        
               | [deleted]
        
               | Turing_Machine wrote:
               | Then you weren't listening. People were screaming "Russia
               | stole the election" from Day 1, not "just voters were
               | influenced by Russian propaganda". You're spinning.
        
               | bdowling wrote:
               | In 2019, Hillary Clinton, in a CBS News interview, called
               | Trump "illegitimate", claimed that Trump "stole" the
               | election, and accused him of voter manipulation,
               | including "hacking".
               | 
               | https://www.washingtonpost.com/politics/hillary-clinton-
               | trum...
        
               | gcanyon wrote:
               | I don't think she means what you think she means by
               | "hacking." I think she means this:
               | https://www.nytimes.com/2016/12/09/us/obama-russia-
               | election-...
        
             | thedorkknight wrote:
             | I live in a liberal city and didn't hear this from anyone.
             | There was initially a decent bit of "not my president"
             | attitude, but just in a philosophical sense, and even that
             | petered out pretty fast.
        
               | Turing_Machine wrote:
               | Hillary Clinton herself claimed that the election was
               | stolen and that Trump was an "illegitimate President".
               | 
               | But she doesn't count as "anyone", I guess?
        
             | e40 wrote:
             | Did you hear that from Fox News? Because I never heard it
             | once.
        
               | Turing_Machine wrote:
               | Then you weren't listening. Note the quote above from
               | _Hillary Clinton herself_.
        
               | costigan wrote:
               | Stolen is a vague word. If there's evidence she believes
               | there was sufficient fraud to have changed the result, I
               | would be interested. If she was referring to the stolen
               | Podesta emails and Comey's statement right before the
               | election, then those things happened. You may think those
               | things didn't matter, but it's no surprise she does. And
               | then there's the whole storming the Capitol thing she
               | didn't do.
        
               | gcanyon wrote:
               | I think Clinton was referring to this:
               | https://www.nytimes.com/2016/12/09/us/obama-russia-
               | election-...
               | 
               | In other words: not saying that there was actual fraud
               | sufficient to change the election, not saying the
               | election was "stolen" in the sense people seem to be
               | saying here.
        
             | jacobolus wrote:
             | The last-but-one presidential election was affected by
             | various states illegally throwing large numbers of legal
             | voters off their voter rolls, but it's impossible to say
             | whether it would have made enough difference to alter the
             | outcome, and there's no convincing evidence votes were
             | directly changed. (It would be a good thing to have a
             | verifiable paper trail for every election; in some parts of
             | the USA it is impossible to effectively investigate any
             | alleged shenanigans.)
             | 
             | The bigger problem in that election was Russian-
             | intelligence-stolen (and possibly tampered with) documents
             | being released to the press in the lead up to the election
             | in coordination with the Trump campaign (with the FBI
             | keeping its investigation of that secret), and then the FBI
             | director making an unprecedented and (we found out only
             | afterward) unsupportable statement attacking Clinton
             | immediately before the election, after being pressured into
             | it by a handful of rogue FBI agents who were friends of
             | Trump's campaign threatening insubordination.
             | 
             | And perhaps the biggest problem of all, an entirely too
             | credulous mainstream media who didn't put those
             | developments in context, leaving voters to draw mistaken
             | inferences, and giving oodles of free airtime to Trump's
             | rallies without making any effort to dispute outright lying
             | in real time.
        
         | 2OEH8eoCRo0 wrote:
         | I can't wait to just generate pornography on the fly while
         | wearing a body monitor so that it can fine tune female body
         | proportions to my exact specifications.
        
           | AlecSchueler wrote:
           | Does that sound healthy? Personally or socially? I'd worry
           | about how that would affect my view of the real women around
           | me and, in turn, my behaviors towards others.
        
             | 2OEH8eoCRo0 wrote:
             | Definitely not. It actually scares me what future we are
             | heading towards. Supernormal stimulus. Better than a human
             | partner could ever be. Super addicting in a primordial way.
        
             | autoexec wrote:
             | I'm guessing that most people will have very little trouble
             | separating reality from fantasy.
        
         | aaroninsf wrote:
         | Why yes.
         | 
         | Let me quote myself from a discussion I was having this morning
         | with a friend who is a tenured professor of philosophy working
         | on AI (as an ethics specialist his work is in oversight),
         | 
         | we were discussing the work shared on HN this week showing a
         | proof-of-concept of Stable Diffusion as better at image
         | "compression" than existing web standards.
         | 
         | I was very provoked by commentary here about the high "quality"
         | images produced: it was clear that they could in theory contain
         | arbitrary levels of detail--but detail that was confabulated,
         | not encoded in any sense except diffusely in the model training
         | set.
         | 
         | "I'm definitely inclined to push hard on the confabulation vs
         | compression distinction, and by extension the ramifications.
         | 
         | I see there a very meaningful qualitative distinction [between
         | state of the art "compression" techniques, and confabulation by
         | ML] and, an instrumental consequence which has a long shadow.
         | 
         | The thing I am focused on is whether the fact that a media
         | object is lossy can be determined at all, even under forensic
         | scrutiny.
         | 
         | There was a story I saw this week about the arms race in
         | detection of 'deep fake' reproduction of voice... which now
         | requires some pretty sophisticated models itself. Naturally I
         | think this is an arms race in which the cost of detection is
         | going to rapidly become infeasible except to the NSA. And maybe
         | ultimately, infeasible full stop.
         | 
         | So yeah, I think we're at a phase change already, which
         | absolutely has been approaching, back to Soviet photo
         | retouching and before, forgery and spycraft since forever... so
         | many examples e.g. the story that went around a couple years
         | ago about historians being up in arms about the fad for
         | "restoring" and upscaling antique film and photographs, the
         | issue of concern being that so much of that kind of restoration
         | is confabulation and the presumptive dangers of mistaking
         | compelling restoration for truth in some critical detail. Which
         | at the time mostly seemed a concern for people who use the word
         | hermeneutics unironically...
         | 
         | ...but we now reach a critical inflection point where society
         | as a whole integrates the notion that no media object, no
         | matter how "convincing", can be trusted,
         | 
         | and the consequent really hard problems about how we find
         | consensus, and how we defend ourselves against bad actors who
         | actively seek their Orbis Tertius Christofascist kingdom of
         | rewritten history and alternative facts.
         | 
         | The derisive "fake news" married to undetectably confabulated
         | media is a really potent admixture!"
        
           | gcanyon wrote:
           | Once you accept lossy compression, it becomes a question of
           | what level and type of "lossy" you're willing to accept, and
           | how clever the "compression" algorithm can be.
           | 
           | If I want to compress the movie Thunderball -- a sufficiently
           | clever "compression" algorithm could start with the synopsis
           | at https://en.wikipedia.org/wiki/Thunderball_(film) add in
           | some images of Sean Connery, and generate the film.
           | That's...maybe a 100K to 1 compression ratio?
           | 
           | If the algorithm itself understands "Sean Connery" then you
           | could (theoretically) literally feed in the text description
           | and achieve a reasonable result. I've seen Thunderball, but
           | it was years ago and I don't remember the plot (boats?). I'd
           | know the result was different, but I likely wouldn't be able
           | to point to anything specific.
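The "100K to 1" figure above can be sanity-checked with back-of-envelope arithmetic. All sizes here are rough assumptions for illustration (a ~2 GB HD encode regenerated from a few tens of KB of synopsis text and reference stills), not measurements:

```python
# Back-of-envelope check of the "synopsis as compression" ratio.
# Every size below is an assumption, not a measurement.
movie_bytes = 2 * 10**9        # ~2 GB for an HD encode of the film
synopsis_bytes = 10 * 1024     # ~10 KB of plot text
stills_bytes = 20 * 1024       # a few small reference images of the actor

ratio = movie_bytes / (synopsis_bytes + stills_bytes)
print(f"compression ratio ~ {ratio:,.0f} : 1")
```

Under these assumptions the ratio lands in the tens of thousands to one, consistent with the comment's order-of-magnitude estimate.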
        
         | hombre_fatal wrote:
         | > wouldn't this lead to people being more mindful of what they
         | watch and interact with?
         | 
         | No. We have already run the case study where people on Reddit,
         | Twitter, and other social media will seethe at mere screenshots
         | of headlines and captions under a picture with zero need for
         | verification.
         | 
         | Here on HN we will pile into the comments to react to the title
         | without even clicking the link to read it ourselves.
         | 
         | Deepfakes feel like a drop in the bucket. What does it matter
         | that you can deepfake a president when people will simply
         | believe a claim about the president that spreads around social
         | media? I don't see it.
        
         | wussboy wrote:
         | I think we will get to a point where trust will only come from
         | face-to-face physical meetings. We won't be able to believe in
         | Zoom calls, phone calls, nothing except face-to-face.
         | 
         | Just like it was for millions of years before now.
        
           | MonkeyMalarky wrote:
           | It's already become an arms race. KYC identity services are
           | already adding liveness and deep fake detection features.
        
           | intrasight wrote:
           | In the future, TVs and monitors and smartphones will have
           | built-in "truth meters".
           | 
           | Face-to-face is only applicable within your small social
           | network.
        
         | ilaksh wrote:
         | I actually think that live, full body AI-generated realistic
         | avatars (sometimes imitating celebrities to one degree or
         | another) will become an everyday part of life for many people
         | within the next 5-10 years.
         | 
         | I assume that full-on impersonation will still be illegal, but
         | certain looks that are sometimes quite similar to a real
         | celebrity will trend now and then.
         | 
         | The context for this is the continual improvement in the
         | capabilities and comfort of VR/AR devices. The biggest one I
         | think is going to be lightweight goggles and eventually
         | glasses. But also the ability to stream realistic 3d scenes and
         | people using AI compression (including erasing the goggles or
         | glasses if desired) could make the concept of going to a
         | physical place for an event or even looking exactly like
         | yourself feel somewhat quaint.
        
         | thrown_22 wrote:
         | >Anyone in the field spent time thinking on this or has had
         | similar notions?
         | 
         | Skepticism in general will only be applied to people we don't
         | like and ignored for people we do.
         | 
         | The continued lapping up of blatant Ukrainian propaganda in
         | mainstream media, for example, doesn't even need photoshop to
         | be believed, just the vague 'sources said'.
        
         | tshaddox wrote:
         | I don't think it will change much.
         | 
         | I think for claims that you think are important to determine an
         | objective truth value for (like who the President of the U.S.
         | is), your determination mechanism is based on trusting sources
         | you deem reliable and looking for broad agreement among many
         | sources you deem to be independent. You're probably not just
         | looking at a single sourceless video of Ronald Reagan behaving
         | as if he's the president and believing that claim because the
         | video couldn't possibly have been faked.
         | 
         | And for other claims whose objective truth value you _don't_
         | think is important to determine, I don't think you need very
         | high-fidelity evidence anyway. For example, people have no
         | trouble believing claims that corroborate their closely-held
         | ideologies even with very low-fidelity fraudulent evidence, or
         | even claims made with no attempt whatsoever to provide even
         | fraudulent evidence!
        
         | 6gvONxR4sf7o wrote:
         | I'd love for that to be true, but I think we can use text on
         | social media as a guide here. It's already as easy to type a
         | lie as the truth, and I'm pretty sure lots of made-up
         | comments/posts on reddit get taken as truth by tons of people,
         | for example.
        
         | cortesoft wrote:
         | No, people will continue to believe videos that match their
         | expectations as true and disbelieve those that don't.
        
         | NavinF wrote:
         | https://xkcd.com/2650/
        
         | kyleplum wrote:
         | There are ongoing efforts to enable digital signatures of
         | online media. The idea is that you (or your browser) can
         | validate that an image or video is unmodified from the source
         | that produced it.
         | 
         | https://contentauthenticity.org/
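A minimal sketch of the integrity check such signing schemes provide. This is illustrative only: a real C2PA-style system uses X.509 certificate chains and asymmetric signatures bound to the capture device, whereas this sketch substitutes a standard-library HMAC, and the key and payload names are hypothetical.

```python
import hashlib
import hmac

# Assumption: a secret key provisioned on the capture device at
# manufacture time (real systems use per-device asymmetric key pairs).
SIGNING_KEY = b"camera-device-secret"

def sign_media(data: bytes) -> str:
    """Produce a tag binding the media bytes to the signing key."""
    return hmac.new(SIGNING_KEY, data, hashlib.sha256).hexdigest()

def verify_media(data: bytes, tag: str) -> bool:
    """Check the media bytes are unmodified since they were signed."""
    expected = hmac.new(SIGNING_KEY, data, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(expected, tag)

original = b"\x00\x01raw video frames..."
tag = sign_media(original)
assert verify_media(original, tag)              # untouched media validates
assert not verify_media(original + b"x", tag)   # any alteration fails
```

The point russdill raises below the original comment still stands: the hard parts are key provisioning, certificate management, and revocation on the capture side, not the verification math shown here.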
        
           | russdill wrote:
           | There are so many technical hurdles to something like this
           | that I don't see it as a solution anytime soon, if ever.
        
             | kyleplum wrote:
             | It's not as far away as one might think.
             | 
             | There was a demo earlier this year (Jan) showcasing the
             | proposed 1.0 spec working in Microsoft Edge:
             | https://c2pa.org/jan-2022_event/
        
               | russdill wrote:
               | The browser side is not where all the hurdles occur. It's
               | on the capture side and the key/certificate
               | management/revocation side.
        
         | bregma wrote:
         | > wouldn't this lead to people being more mindful of what they
         | watch and interact with?
         | 
         | Have you been paying any attention to what's been going on
         | over the last several years?
        
         | alecfreudenberg wrote:
         | Yeah, a little more skepticism would be nice, but I still
         | personally see myself getting fleeced every now and then.
        
           | jjoonathan wrote:
           | It will also lead to "that's a deepfake!" as an excuse given
           | after getting caught on camera.
        
             | dagurp wrote:
             | Let's just hope that fake detection technology stays ahead
             | of any innovations in this field
        
             | Loughla wrote:
             | Bingo.
             | 
             | It's less about using fakes to push your agenda, and more
             | about being able to (plausibly or implausibly it doesn't
             | matter) claim that whatever video is a deepfake.
             | 
             | The truth is meaningless, and as tools like deepfakes
             | become more and more sophisticated, it's harder and harder
             | to establish baseline realities.
             | 
             | And someone is benefiting from that shift away from
             | reality, I just don't know who.
        
             | autoexec wrote:
             | which will lead to people trusting forensic experts and
             | corroborating data/witnesses. If you were a Karen caught in
             | an embarrassing public meltdown you could absolutely say
             | that the video was deepfaked and you were really just home
             | alone sleeping at the time, but when 7 different people's
             | cell phone videos, multiple security cameras, two dashcams,
             | 14 ring cams, GPS data captured from your mobile device,
             | and one police surveillance drone all agree it was you
             | that's not going to work out so well.
             | 
             | People made the same arguments about photoshop, but it's
             | really not a problem. Almost never is a single video the
             | only evidence of anything and in the cases where it is and
             | that video can't be verified it's probably best not to ruin
             | someone's life over it.
        
       | novaRom wrote:
       | The Great Dictator 2023 with Charlie Chaplin would be great!
        
       | OscarCunningham wrote:
       | TV shows won't need to do casting for extras any more, they'll
       | just have the main cast and then one person who plays all the
       | other characters.
        
       | theptip wrote:
       | > But if you want to describe human activities in a text-to-video
       | prompt (instead of using footage of real people as a guideline),
       | and you're expecting convincing and photoreal results that last
       | more than 2-3 seconds, the system in question is going to need an
       | extraordinary, almost Akashic knowledge about many more things
       | than Stable Diffusion (or any other existing or planned deepfake
       | system) knows anything about.
       | 
       | > These include anatomy, psychology, basic anthropology,
       | probability, gravity, kinematics, inverse kinematics, and
       | physics, to name but a few. Worse, the system will need temporal
       | understanding of such events and concepts...
       | 
       | I wonder if unsupervised learning (as could be achieved by just
       | pointing a video camera at people walking around a mall) will
       | become more useful for these sorts of models; one could imagine
       | training an unsupervised first pass that simply learns what
       | kinds of constraints physics, IK, temporality, and so on
       | provide. Then, given that foundation model, one could layer
       | supervised training on labels to get the "script-to-video"
       | translation.
       | 
       | Basically it seems to me (not a specialist!) that a lot of the
       | "new complexity" involved in going from static to dynamic, and
       | image to video, doesn't necessarily require supervision in the
       | same way that the existing conceptual mappings for text-to-image
       | do.
       | 
       | Combined with the insights from the recent Chinchilla paper[1]
       | from DeepMind (which suggested current models could achieve equal
       | performance if trained with more data and fewer parameters),
       | perhaps we don't actually need multiple OOMs of parameter
       | increases to achieve the leap to video.
       | 
       | Again, this is not my field, so the above is just idle
       | speculation.
       | 
       | [1]: https://arxiv.org/abs/2203.15556
        
       | finnh wrote:
       | This might be an outlier opinion, but I think the benefit of
       | completely outlawing deepfakes is worth the "but freedom!" harm.
       | 
       | I think deepfakes have the power to do much more real, immediate
       | damage to society than the "threat" of AGI does.
        
         | ilaksh wrote:
         | What we need are digital identity verification strategies for
         | content, such as associating cryptographic signatures with
         | videos.
        
         | bdowling wrote:
         | Appropriation of name or likeness is already a tort that
         | defendants can be held civilly liable for. Would you also make
         | it a crime?
        
         | mixedCase wrote:
         | I don't see them challenging the veracity of media any more
         | than Photoshop and video editing already do, especially since
         | ML can be used to automatically detect tampering. So, what's
         | the damage to society you fear?
        
         | aszantu wrote:
         | I think the kids have got this; they will learn how to live
         | with it and adapt. But yes, the older generation, who still
         | depend on what they see on the internet, will suffer for a
         | while.
        
       | btbuildem wrote:
       | It's interesting to consider the "full body" deepfakes, but
       | wouldn't the limitation of face deepfakes be even more
       | constraining here? The proportions of limbs' length vs torso, hip
       | / shoulder ratio etc -- it seems like a more effective approach
       | (and something already in commercial use) would be mocap + models
       | -- and that's just for still images.
       | 
       | For motion, there's yet another layer of fakery required (and
       | this is something security / identity detection systems tackle
       | nowadays) -- stuff like gait, typical motions or gestures or even
       | poses. To deepfake a Tom Cruise clone, you need to not just look
       | like the actor, but project the same manic energy and signature
       | movements.
        
         | [deleted]
        
       | Minor49er wrote:
       | The Jennifer Connelly and Henry Cavill demo on that page makes me
       | think of the Scramble Suit from A Scanner Darkly
       | 
       | https://www.youtube.com/watch?v=2aS4xhTaIPc
        
       | munk-a wrote:
       | The road to realistic full-body deepfakes will be through the
       | adult entertainment industry because of course it will. Some
       | academics may begin the discussion but at the end of the day this
       | is one part of AI image generation that has a clear and extremely
       | large profit motive and won't struggle to find funding in any
       | way.
       | 
       | I'm pretty sure Slashdot is willing to put up the money for
       | thousands of renders of "Natalie Portman pours Hot Grits over
       | <thing>" alone.
        
         | bsenftner wrote:
         | No, it is economically infeasible because any such professional
         | service would be a lawsuit engine.
        
       | mochomocha wrote:
       | In a soon-approaching world where all movies have deep-fake
       | actors, popular music is generated etc. how do you approach the
       | economics of creativity and content generation?
       | 
       | Should Tom Cruise's heirs receive a perpetual rent 200 years
       | from now when Mission Impossible 57 starring their ancestor is
       | airing?
       | 
       | What regulation should be put in place / would be effective in a
       | world where any teen with the latest trending social media app
       | on their phone can realistically impersonate a celebrity in
       | real-time for likes?
        
         | [deleted]
        
         | nightski wrote:
         | Technology is an enabler. Your hypothetical scenario of Tom
         | Cruise's legacy lasting to Mission Impossible 57 is not
         | probable imo. People get bored.
         | 
         | Instead we'll probably see a bunch of crap, but on top of that
         | crap it will allow people with true talent who never would
         | have had a chance before (no connections, money, etc.) to be
         | discovered. It lowers the bar to content creation
         | significantly.
        
           | intrasight wrote:
           | I think that we will have some immortal actors - but not too
           | many. I don't think Tom Cruise will be one of them.
           | 
           | > to be discovered who have true talent.
           | 
           | How so?
        
       ___________________________________________________________________
       (page generated 2022-09-22 23:00 UTC)