[HN Gopher] Show HN: This Food Does Not Exist
       ___________________________________________________________________
        
       Show HN: This Food Does Not Exist
        
       Author : MasterScrat
       Score  : 164 points
       Date   : 2022-07-20 16:02 UTC (6 hours ago)
        
 (HTM) web link (nyx-ai.github.io)
 (TXT) w3m dump (nyx-ai.github.io)
        
       | rkagerer wrote:
       | My partner is very impressionable when she sees food on a TV
       | show. She immediately has a craving for it. This thing is
       | like limitless porn for her gluttony.
        
       | fxtentacle wrote:
       | I'm honestly surprised that they trained a StyleGAN. Recently,
       | the Imagen architecture has been shown to be simpler in
       | structure, easier to train, and even faster at producing good
       | results. Combined with the "Elucidating" paper by NVIDIA's Tero
       | Karras, you can train a 256px Imagen* to tolerable quality
       | within an hour on an RTX 3090.
       | 
       | Here's a PyTorch implementation by the LAION people:
       | 
       | https://github.com/lucidrains/imagen-pytorch
       | 
       | And here are two images I sampled after training it for a few
       | hours, roughly 2 hours for the base model + 4 hours for the
       | upscaler:
       | 
       | https://imgur.com/a/46EZsJo
       | 
       | * = Only the unconditional Imagen variant, meaning what they show
       | off here. The variant with a T5 text embedding takes longer to
       | train.
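       | 
       | For anyone who wants to try that route, here is a rough sketch
       | of the unconditional setup with that repo. It roughly follows
       | the README at the time of writing; the argument names, the
       | two-Unet cascade sizing and the `food_batch` placeholder are my
       | assumptions, so treat it as a starting point rather than a
       | recipe:
       | 
       |     import torch
       |     from imagen_pytorch import Unet, Imagen, ImagenTrainer
       | 
       |     # base 64px unet plus one 256px upscaler unet
       |     # (dims/mults here are illustrative, not tuned)
       |     unet1 = Unet(dim = 128, dim_mults = (1, 2, 4))
       |     unet2 = Unet(dim = 128, dim_mults = (1, 2, 4),
       |                  num_resnet_blocks = (2, 4, 8))
       | 
       |     imagen = Imagen(
       |         condition_on_text = False,  # unconditional, no T5 text
       |         unets = (unet1, unet2),
       |         image_sizes = (64, 256),
       |         timesteps = 1000,
       |     )
       | 
       |     trainer = ImagenTrainer(imagen).cuda()
       | 
       |     # stand-in for a real DataLoader batch: (B, 3, 256, 256)
       |     # with pixel values in [0, 1]
       |     food_batch = torch.rand(8, 3, 256, 256).cuda()
       |     for unet_number in (1, 2):
       |         loss = trainer(food_batch, unet_number = unet_number)
       |         trainer.update(unet_number = unet_number)
       | 
       |     samples = trainer.sample(batch_size = 4)  # sample 4 images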
        
         | gwern wrote:
         | Or, since they are comparing to Craiyon, why not just finetune
         | Craiyon itself? Craiyon already exists; just take it off the
         | shelf. You don't need to retrain it from scratch, so the cost
         | of training it from scratch on everything (which is indeed
         | quite large) is not relevant to someone who just wants to
         | generate great food photos.
        
       | Animats wrote:
       | Coming soon to the restaurant site generator of some large
       | delivery service.
       | 
       | ("Picture is only for illustration purposes")
        
       | derbOac wrote:
       | The Cake is a Lie meme was never so relevant.
        
         | mike_hock wrote:
         | And the Science gets done and you make a neat gun.
        
       | beej71 wrote:
       | We're looking at the complete collapse of the stock photography
       | market.
        
         | kerblang wrote:
         | To what extent are we ripping off the photographers? Weren't
         | the models trained on their hard work?
         | 
         | Have we reached a point where we've bounded art within the data
         | models are trained on?
         | 
         | Have we imposed a limit on ideas as a realm of "what came
         | before" and implicitly decided that any "after" is a pointless
         | exercise without knowing whether that's even true?
        
           | nathanaldensr wrote:
           | Excellent questions, and I was thinking the same thing. In my
           | opinion, AI-generated art or images are not as impressive as
           | they might seem at first purely because _there is no real
           | imagination involved_. It's an _art simulacrum_.
           | 
           | A more accurate title would be "This picture of food does not
           | exist."
        
         | gnicholas wrote:
         | This will also democratize the market for comics. It used to
         | be that you needed to be able to draw to make a comic. Now you
         | can just have ideas, and use This Comic Does Not Exist (which
         | does not yet exist) to generate the imagery.
        
         | echelon wrote:
         | It's way more than that.
         | 
         | Anyone can be an artist, musician, photographer, writer.
         | 
         | It's going to result in more content being created, which will
         | change the economies of content. Rate, scale, and volume of
         | production will increase by orders of magnitude.
         | 
         | Disney thinks IP is a war chest. That's an old way of thinking.
         | 
         | Star Wars won't be special to the new kids growing up that can
         | generate "Space Samurai" and "Galaxy Brouhaha" in an afternoon.
         | 
         | We're going to hit a Cambrian explosion of content.
        
           | smaudet wrote:
           | "It's going to result in more content being created"
           | 
           | Is it, though? This model took over a month, on extremely fit
           | hardware, to even create.
           | 
            | Let's say, for a second, that in some hypothetical future
            | anyone can access/use/update these models (by anyone, I
            | mean someone with both few resources and little to no
            | programming skill) - why are they creating content?
           | 
           | "Rate, scale, and volume of production will increase by
           | orders of magnitude."
           | 
            | If by production you mean "paid creation", I'm not so sure
            | about that. In this world where everyone creates content
            | from thin air, 1) there is little to no monetary value left
            | in the content (as monetary value inversely correlates with
            | scarcity), and so 2) there is less incentive to create
            | anything, because there is no monetary value in doing so.
           | 
           | In fact, by definition we can pretty much prove that not much
           | of anything will happen in this regard, because content is
           | already limited by budget - the budget has not gone up, and
           | the return has only gotten worse (in this hypothetical
           | scenario).
           | 
           | What I think is more likely to happen - a few, "blessed"
           | individuals will have out-sized content creation
           | capabilities, without much need to innovate. The rest of us
           | will have almost no incentive to create anything as a result.
           | 
            | Disney will use these systems, and they will use them to
            | churn out more garbage, faster. On average, most kids will
            | not be generating any movies in an afternoon.
        
           | nixpulvis wrote:
           | Friggles and bop, produce nothing.
        
            | Victerius wrote:
            | I have some extremely detailed imaginary images and clips
            | in my head, but I just don't want to devote the thousands
            | of hours it would require to become proficient enough in
            | drawing and visual effects to create them.
        
             | mysterydip wrote:
             | Agreed. If I can even get close with some of these
             | generators, and hand-modify from there, I'll be happy.
        
           | jeffreygoesto wrote:
           | It's just that they don't "create" anything and pressing some
           | buttons to get more and more of the same, biased crap quickly
           | gets boring.
        
         | fxtentacle wrote:
         | Partially, yes. I certainly predict that DALLE-like models will
         | ruin the prices for some stock photos.
         | 
         | But on the other hand, Adobe is pushing their CAI hard:
         | 
         | https://helpx.adobe.com/photoshop/using/content-credentials....
         | 
         | And the core benefit of "authentic" content is that it can't be
         | generated by an AI. Only humans can own copyright.
        
           | imachine1980_ wrote:
            | DALL-E is starting to grant full commercial licenses. I
            | think in the end the two will converge: you will prompt
            | something, the AI will make 100 prototypes, and you will
            | improve the one you like with the AI's help. The line will
            | blur; it's not "against the machine" but "with the
            | machine". The problem, maybe, is that it will take fewer
            | people to do the same work. It's "funny": maybe it's better
            | to be a construction worker than an artist, because the
            | latter will become obsolete in most cases.
        
         | gffrd wrote:
         | "Smiling happy family holding cell phones above their heads
         | standing in a field of grass wearing all white"
        
           | dalmo3 wrote:
           | That's actually a great example. Just think of the aggregated
           | human-hours wasted to bring those people together, create
           | that setting, photograph, edit, publish... All for a
           | meaningless flyer or landing page.
        
           | Cockbrand wrote:
           | Yields subtly nightmarish results:
           | 
           | [1] https://i.imgur.com/YqNkaGj.png
           | 
           | [2] https://i.imgur.com/EQL7pqw.png
        
         | gffrd wrote:
         | "Boardroom full of attractive business people gathered around a
         | laptop with one of them pointing at the screen, all wearing
         | suits, whiteboard in the background"
        
           | robotresearcher wrote:
           | "Woman laughing alone with salad"
        
             | gffrd wrote:
             | "Elderly man sitting at laptop looking puzzled, holding
             | credit card"
        
               | mysterydip wrote:
               | "teen in hooded sweatshirt wearing sunglasses and gloves
               | typing on a laptop in a dark room"
        
               | lelandfe wrote:
               | similarly, "criminal emerging from computer monitor
               | holding crowbar"
        
               | DonHopkins wrote:
               | "The feeling of drifting slowly through a field of moving
               | vehicles."
               | 
               | "Once there were parking lots, now it's a peaceful oasis.
               | This was a Pizza Hut, now it's all covered with daisies."
               | 
               | "Green grass grows around the backyard shit house. And
               | that is where the sweetest flowers bloom."
               | 
               | "This ain't no party, this ain't no disco, this ain't no
               | fooling around."
        
         | corrral wrote:
         | Thinking bigger: I'm pretty sure the combo of a relatively free
         | global Internet, liberal democracy on large (much bigger than
         | city-state) scales, and cheap, customized, on-demand generation
         | of totally fake text + photo + video propaganda based on simple
         | prompts, cannot all co-exist. At least one of these isn't going
         | to survive alongside the others. If we just let things keep
         | going the way they are, I expect "liberal democracy on large
         | scales" is the one we'll lose--and whatever follows probably
         | won't let the fairly-free, global Internet keep existing,
         | either, so we'll lose that too.
        
           | whatshisface wrote:
           | I have heard this before, but I have several reasons to think
           | it is not going to be a problem above what already happens:
           | 
           | 1. AI lets you generate an enormous number of lies, but what
           | is really dangerous is one well-placed lie within a trusted
           | stream of many truths. CNN will retain a power to mislead far
           | in excess of Twitter bots.
           | 
           | 2. Democracy averages over everyone's confusion, which means
           | lies are only dangerous when large numbers of people believe
           | them at the same time. Hordes of bots generating spurious
           | lies won't move a democracy in any specific direction, but
           | again, mass media will retain its power to mislead everyone
           | at once in the same, effectual, direction.
           | 
           | 3. People have never respected the veracity of random tweets.
           | In the same way that trust in mainstream media outlets is
           | reaching record lows due to their consistently biased
           | reporting (they might not all have the same bias, but I can't
           | think of any I'd consider free of bias), everyone will learn
           | to adjust their incredulity to match the true quality of
           | random tweets.
           | 
           | 4. Companies like Twitter and Google are known to be shaping
           | their results and algorithms according to their own
           | "political views" (broadly construed) so at worst this would
           | represent a partial shift of power from the old masters to
           | new masters quite like them (social media companies). In many
           | ways trimming the front page to reflect editorial opinion is
           | echoed in the way Twitter trims its feeds to reflect their
           | own editorial opinions.
           | 
           | All taken together, it seems like the media is afraid that
           | equally large companies with similar business models
           | (content, attention, advertising) might end up eclipsing
           | them. The same old model where the TV station is afraid to
           | upset its advertisers, thereby giving a voice to business
           | interests, is well-recorded in the recent history of YouTube.
           | Not so much will change, although seeing it in its old and
           | newer forms might shed light on how it works.
        
           | blueflow wrote:
            | What's your reasoning on this? Because I don't see why
            | liberal democracy would cease to exist... life would go on
            | if we all knew that any picture can be fabricated. I think
            | this is already the case without AI.
        
             | smaudet wrote:
             | Apparently you missed the problem Deep Fakes posed...
             | 
             | If you cannot distinguish reality (well), and in fact it
             | becomes possible that most things you see do not exist,
             | then there is nothing to stop a bad actor from producing a
             | fake version of events in which they are elected, control
             | everything, etc.
             | 
             | So, democracy would cease to exist, because democracy
             | relies ultimately on a choice - if you have no choice then
             | you do not have democracy, only a dictatorship.
        
               | vehementi wrote:
               | Right but today we have that problem already. We know
               | that a bad actor journalist can write a fake story. We
               | therefore require sources. If deepfakes come along we
               | will know videos can be fake and so we will be skeptical,
               | as we are today, and look to proper sources. We will
               | easily come up with some way to validate sources via
               | cryptography or org reputation (e.g. we might trust the
               | NY Times to not just fabricate things)
        
               | corrral wrote:
               | This is already barely holding together with mostly human
               | actors doing the astroturfing and creating bullshit "news
               | organizations" expressly to spread propaganda. Automation
               | is going to overwhelm a system that's already teetering.
        
               | vehementi wrote:
               | Yeah but we will get the word out that none of that is
               | trustworthy, then. There will be countermeasures and
               | reactions to this just like previous things. It will
               | certainly be effective to some degree - propaganda is
               | effective for sure - but it won't just be, oh, there are
               | deepfakes, everyone will now just unthinkingly accept
               | them.
        
               | corrral wrote:
               | 1) This effective backlash/education-campaign has _not_
               | already happened despite there already being significant
               | problems with this kind of thing, and most of it not
               | being _that_ hard to spot, even, and 2) I think the more
               | likely effect is the destruction of shared trust in _any_
                | set of news sources--we're already pretty damn close to
               | this being the case, in fact. "It's all lies anyway" is a
               | sentiment that favors dictators more than it does
               | democracy.
        
               | phpnode wrote:
               | This is already possible today and we don't need AI
               | generated stock photos to do it. A bad actor can already
               | spin events to fit their narrative, suppress dissent and
               | control their population. Dictators have been doing it
               | for centuries and we're seeing it in real time in the
               | form of Putin's Russia right now.
        
               | mythrwy wrote:
               | If it were only Putin's Russia making stuff up we'd be in
               | good shape.
               | 
               | Otherwise, I fully agree with your point.
        
               | phpnode wrote:
               | indeed, it's just the most prominent example at the
               | moment
        
               | corrral wrote:
               | Sure, but being able to do the same thing at 100,000x the
               | scale for the same price seems like a pretty big
               | difference. Throw in the ability to target narrow
               | constituencies with custom messages via modern ad
               | networks, automation-assisted astroturfing, et c., and
               | the whole thing looks like a powderkeg to me.
        
               | mythrwy wrote:
               | People already produce all kinds of fake news and
               | doctored photos and false flags and all kinds of things.
               | This has been going on since we developed language and
               | photography I suspect.
               | 
               | People already have trouble telling propaganda from fact.
               | That has been going on since forever.
               | 
               | At the end of the day I don't see this being a game
               | changer. If anything, now video and photos are less
               | evidence for/against something as the potential falseness
               | becomes well known. Congressman X: "no, that wasn't me
               | you saw leaving the hotel with the prostitute, my slimy
               | opponent obviously is deep faking stuff".
               | 
               | And people will continue to believe what they want to
               | believe, in spite of all evidence to the contrary, just
               | like they do right now.
        
               | corrral wrote:
               | There seems to me a huge difference between a few
               | organizations being able to produce & distribute a total
               | of X amount of self-serving bullshit with some limited
               | reach, and anyone with a bit of money being able to
               | produce 100,000 * X amount of self-serving bullshit and
               | deliver it to exactly the people most likely to respond
               | to it the way they want, anywhere in the world (save,
               | notably, China and North Korea and such) while making it
               | very hard to tell who it's coming from.
               | 
               | An environment in which 90% of the information is
               | adversarial is really bad. It's a severe problem and very
               | challenging to navigate. An environment where 99.9999% of
               | it's adversarial and it's even harder than before to sort
               | truth from fiction, functionally _no longer has any flow
               | of real information whatsoever_.
        
               | mythrwy wrote:
               | Another thought:
               | 
               | Maybe liberal democracy is not the final outcome of human
               | civilization. You like it and I like it (presumably we
               | were both raised to believe this way) but perhaps it's
               | not really true.
               | 
               | Just to question a base assumption here.
               | 
               | It seems to me, if all the things that are claimed to
               | threaten liberal democracy actually do, liberal democracy
                | might be much less robust and long-lived than previously
               | believed.
        
               | corrral wrote:
               | Oh, absolutely. I've even come around to thinking that's
               | _likely_. But one can hope.
               | 
               | [EDIT] One thing I no longer think has any realistic
               | future is the open, semi-anonymous Internet. We're either
               | losing it because despots take over and definitely won't
               | permit that threat to remain unfettered, or we're losing
               | it (in perhaps a gentler-touch way) because we _have to_
               | to prevent authoritarian take-over and vast civil strife.
                | I don't think we're getting to keep that no matter what
               | happens.
        
               | mythrwy wrote:
               | Yep I think you might be right. It's ultimately too much
               | of a risk to all sorts of powers to have open unfettered
               | real time communication and mass dissemination.
               | 
                | Even the "good guys" will declare an emergency that will
                | never end.
               | 
               | Oh well, it was nice while it lasted. An intellectual
               | Cambrian explosion. And all that porn!
        
               | yellowapple wrote:
               | > And all that porn!
               | 
               | On that note: I can't wait for the resulting
               | proliferation of photorealistic tentacle hentai. Imagine
               | the possibilities!
        
               | mysterydip wrote:
               | Take it a step further: Can you be arrested for having
               | porn that would be illegal in your country if it was
               | real, but instead it's a thousand generated
               | images/videos? How blurred will those lines get?
        
               | unfunco wrote:
               | Photoshop has existed for years and humans have been
               | manipulating photos for longer, what's the difference,
               | really?
               | 
               | If I see a photo in the Guardian newspaper (or any other
               | reputable news outfit) I'm going to presume it's real,
               | and I expect journalists to verify that for me. If I see
                | a random photo that doesn't look quite right on 4chan,
               | I'm not going to immediately assume it's news.
        
               | gadtfly wrote:
               | > For a Linux user, you can already build such a system
               | yourself quite trivially by getting an FTP account,
               | mounting it locally with curlftpfs, and then using SVN or
               | CVS on the mounted filesystem. From Windows or Mac, this
               | FTP account could be accessed through built-in software.
        
               | corrral wrote:
               | > Photoshop has existed for years and humans have been
               | manipulating photos for longer, what's the difference,
               | really?
               | 
               | Scale, cost, and reach.
        
               | unfunco wrote:
                | Reach is no different: bots and humans are both able to
                | post to social media. Cost is probably no different at
                | the moment either, since AI isn't perfect and some human
                | interaction is probably needed to make it believable;
                | because of that, scale is the same too. I think we're
               | approaching all of those things but it's probably still
               | quite some time away until a machine can be trusted to
               | manipulate the public on its own.
        
           | konschubert wrote:
           | You can put letters in any order you want and make them say
           | any damn lie.
           | 
           | This was not an impediment to liberal democracy.
           | 
            | I am as concerned as the next guy, but throwing in the
            | towel already seems a bit premature?
        
             | corrral wrote:
             | > You can put letters in any order you want and make them
             | say any damn lie.
             | 
             | You can run a web server by responding to every request by
             | hand-typing the response, too. But you couldn't
             | realistically run one-one-millionth the modern Web that
             | way. You can't have global-scale e-commerce that way, et c.
              | Some things that _technically could_ work that way, can't
             | actually--it's too slow, too expensive. This is very much
             | one of those "quantity has a quality all its own" things.
             | Increase the productivity of every astroturf-poster or
             | propaganda-front-news-site manager a few hundred times and
             | that's a _big_ difference.
             | 
             | > I am as concerned as the next guy but throwing the towel
             | already seems a bit premature?
             | 
             | Where'd you get throwing in the towel? I do think we're
             | (especially the US) really unlikely to do what we need to
             | in time, in part because measures that are _probably_
             | necessary to defend against this are themselves risky and
             | rather unappealing. But we might.
        
         | Kye wrote:
         | Shrinking microstock rates already killed it.
        
       | notamy wrote:
       | I was really hoping that this would be never-before-seen, AI-
       | generated recipes or something similar ):
        
         | sovok wrote:
         | That would be the OpenAI Recipe creator (eat at your own risk)
         | https://beta.openai.com/examples/default-recipe-generator
        
       | croes wrote:
       | You are killing Instagram influencers
        
         | scifibestfi wrote:
         | Seriously, won't this combined with GPT-3 flood the influencer
         | market?
        
           | bergenty wrote:
           | Yes. Images will lose all authenticity.
        
             | xbar wrote:
             | I think they have, some time ago. It seems like motion
             | video is now on the chopping block.
        
         | minimaxir wrote:
         | If this doesn't, DALL-E 2 will:
         | https://twitter.com/minimaxir/status/1549761827969544192
        
           | raphar wrote:
            | One tweet there made me happy: "This is going to break
            | Pinterest"
        
             | Kye wrote:
             | Pinterest is great and useful and rad. Whoever's pushing
             | them to chase KPIs and ruin search is not.
        
         | Spivak wrote:
         | You mean supplying. Imagine running a food IG that didn't even
         | need to make the food.
        
           | croes wrote:
           | Imagine being one of millions, your food IG will look fake
           | even if the photos are real
        
           | ch4s3 wrote:
           | Hold my "beer".
        
             | ge96 wrote:
              | Now we just need to connect this to ffmpeg, add some fake
              | recipe scripts, upload a video to YT, multiply by 100
              | videos and 100 channels, and make about $2.00. Nice.
        
       | hammycheesy wrote:
       | I tried to use the linked Colab notebook to generate my own, and
       | it appears to have been successful, but I don't see any way to
       | view the generated images via the notebook interface. I'm not
       | familiar with the notebook tool - have I missed something?
        
         | sireat wrote:
         | If the result is a standard numpy 3D array, then something
         | like Pillow or matplotlib should be able to display the
         | images.
         | 
         | Something like:
         | 
         |     from matplotlib import pyplot as plt
         |     plt.imshow(matrix)  # matrix: (H, W, 3) image array
         |     plt.show()
        
       | gus_massa wrote:
       | Are you using the same model for cookies and cheesecakes? Do you
       | sometimes get a cookiecake?
        
         | MasterScrat wrote:
         | We currently train each model independently, i.e. we first
         | gather a cookie dataset, train a cookie model, then restart
         | from scratch for the next one.
         | 
         | That's actually something we're investigating: can we train a
         | single class-conditional model for multiple types of food? Or,
         | can we finetune cheesecakes from cookies?
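         | 
         | For context, a class-conditional StyleGAN2 generator simply
         | takes a label vector alongside the latent code. A minimal
         | sampling sketch, assuming an NVIDIA stylegan2-ada-pytorch
         | style pickle (not necessarily the checkpoints released here;
         | the file name is hypothetical):
         | 
         |     # run from inside the stylegan2-ada-pytorch repo, since
         |     # the pickle references its torch_utils modules
         |     import pickle
         |     import torch
         | 
         |     # class-conditional networks expose G.c_dim > 0
         |     with open('network-snapshot.pkl', 'rb') as f:
         |         G = pickle.load(f)['G_ema'].cuda()
         | 
         |     z = torch.randn([1, G.z_dim]).cuda()  # latent code
         |     c = torch.zeros([1, G.c_dim]).cuda()  # one-hot label
         |     if G.c_dim > 0:
         |         c[0, 0] = 1  # e.g. class 0 = "cookie"
         | 
         |     img = G(z, c)  # NCHW float32, roughly [-1, 1]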
        
           | TuringNYC wrote:
           | >> ie we first gather a cookie dataset,
           | 
           | Is there a chance your dataset provider makes a claim that
           | they have derived data rights over your model generated
           | images? Would you have sufficient confidence, say, to sell
           | your images on a stock image site?
        
             | zorgmonkey wrote:
             | It is still somewhat unclear, but it seems that images
             | generated by a machine learning model are not copyrightable
             | (to quote the US Copyright Office, generated images "lack
             | the human authorship necessary to support a copyright
             | claim"). Whether the model itself is copyrightable is less
              | clear to me, but [0] seems to suggest that it might be. All
              | of this depends on the country, but much of the world tends
              | to eventually mimic US copyright law.
             | 
             | [0] https://law.stackexchange.com/questions/19981/who-can-
             | claim-...
        
         | dylan604 wrote:
         | Well, now _I_ want a cookiecake.
        
           | ch4s3 wrote:
           | The dream of the 90s is alive in StyleGAN!
        
             | dylan604 wrote:
             | I think the 90s version would be icecreamcookiecake
        
               | ch4s3 wrote:
               | I think both. I strongly remember those giant pizza sized
               | cookies at the mall in the early 90s.
        
       | MasterScrat wrote:
       | We have trained four StyleGAN2 image generation models and are
       | releasing checkpoints and training code. We are exploring how to
       | improve/scale up StyleGAN training, particularly when leveraging
       | TPUs.
       | 
       | While everyone is excited about DALL·E/diffusion models, training
       | those is currently out of reach for most practitioners. Craiyon
       | (formerly DALL·E mega) has been training for months on a huge
       | TPU-256 machine. In comparison, our models were each trained in
       | less than 10h on a machine 32x smaller. StyleGAN models also
       | still offer unrivaled photorealism when trained on narrow domains
       | (e.g. thispersondoesnotexist.com), even though diffusion models
       | are catching up due to massive cash investments in that
       | direction.
        
         | goldemerald wrote:
         | I don't suppose you have a way of converting these models into
         | a PyTorch-usable version, do you?
        
       | andrewmcwatters wrote:
       | Darn! I was hoping for other-worldly foods that don't actually
       | exist being generated from real food attributes. I suppose I
       | should have known better.
        
       | JadoJodo wrote:
       | OP: Forgive me if this is out of place. Also, please know that my
       | question is genuine, not at all a reflection on the author/their
       | project, and most certainly born out of my own ignorance:
       | 
       | Why are these kinds of things impressive?
       | 
       | I think part of my issue is that I don't really "get" these ML
       | projects ("This X does not exist" or perhaps ML in general).
       | 
       | My understanding is that, in layman's terms, workers are shown
       | many, many examples of X and then are asked to "draw"/create X,
       | which they then do. The analogy I can think of is: if I were to
       | draw over and over for a billion, billion years, and each time a
       | drawing "failed" to capture the essence of a prompt (as deemed by
       | some outside entity) both my drawing and my memory of it were
       | erased. At the end of that time, my skill in drawing X would be
       | amazing.
       | 
       | _If_ that understanding is correct, it would seem unimpressive?
       | It's not as though I can pass a prompt of "cookie" to an
       | untrained generator and have it pop out a drawing of one. And
       | likewise, any cookie "drawing" generated by a trained model is
       | simply an amalgam of every example cookie.
       | 
       | What am I missing?
        
         | [deleted]
        
         | [deleted]
        
         | bee_rider wrote:
         | For the longest time it was assumed that creativity was an
         | almost magically human trait. The fact that somebody can, with
         | a straight face, say "I don't get why it is impressive, I could
         | draw these images too" is actually indicative of the wild
         | change that has occurred over these last couple years.
         | 
         | I guess it is true that more than a couple demos like this have
         | been shown, so some of the awe might have worn off, but it is
         | still pretty shocking to lots of us that you can describe the
         | general idea of something to a computer and it can figure out
         | and produce "what you mean," fuzzy as that is.
        
           | mikkergp wrote:
            | I will say that the images included haven't been shown to be
            | particularly creative, unless I missed a wider galaxy of non-
           | existent food items. It's not entirely convincing that the
           | generated images aren't just glued together pieces of other
           | images with some fading between them.
        
           | JadoJodo wrote:
           | > The fact that somebody can, with a straight face, say ...
           | 
           | To be clear, I'm not trying to devalue this at all; In fact,
           | as I noted above, I am certain I'm missing something and that
           | was what my comment was aimed at. In any case, thank you for
           | taking the time to reply (seriously).
        
             | bee_rider wrote:
              | Probably the expression "with a straight face" has been
              | used sarcastically too often, so maybe it looks sarcastic
              | in my comment too. In this case I should have picked a
              | more unambiguous phrase. I wasn't using it sarcastically or
             | anything, "with a straight face" = in good faith/honest in
             | this case.
        
       | herpderperator wrote:
       | Is there a way to trigger a fresh image on demand? That's kind of
       | what I expect when I see a does-not-exist site.
        
         | CSMastermind wrote:
         | There's a link on the page:
         | 
         | https://colab.research.google.com/github/nyx-ai/stylegan2-fl...
        
       | georgeburdell wrote:
       | This is the most disturbing "does not exist" yet. A food blog
       | could write itself
        
         | hbn wrote:
         | They already pretty much are. Top recipe hits on Google seem to
         | always be from like "Southern Mama Cooking Tips" or something
         | generic like that, and you have to scroll past 8 paragraphs of
         | context for why this person is writing a recipe and why they
         | like it so much, totally not to hit all the SEO sweet spots,
         | and the full life story of this "Southern Mama" that's totally
         | not a guy in India or a robot scraping together blurbs of text
         | from other websites.
        
       | ComputerCat wrote:
       | Everything looks delish!
        
       | wyldfire wrote:
       | Are there any analysis techniques that can easily distinguish
       | between these and real photographs? Do simple things like edge
       | detection or histograms reveal any anomalies?
        
         | daveguy wrote:
         | Neural networks can be trained to identify the difference, but
         | I don't know how specific that is to the generating model. In
         | fact, the GAN technique, at a high level, is two networks -- one
         | trying to distinguish the difference and one trying to create
         | images that cannot be distinguished. That is the "adversarial"
         | aspect.
         | 
         | It is an interesting question whether there are some simple
         | pre-processing techniques (edge detection, Fourier transform,
         | etc.) that more easily reveal an image as a fake -- something
         | like a shortcut compared to training a network to make the
         | distinction.
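         | 
         | A quick way to poke at that idea -- purely a heuristic sketch,
         | with hypothetical file names, and not a reliable detector --
         | is to compare log-magnitude frequency spectra, since GAN
         | upsampling has been reported to leave periodic artifacts in
         | the high frequencies:
         | 
         |     import numpy as np
         |     from PIL import Image
         | 
         |     def log_spectrum(path):
         |         # grayscale -> 2D FFT -> centered log-magnitude
         |         gray = np.asarray(Image.open(path).convert('L'),
         |                           dtype=np.float64)
         |         spec = np.fft.fftshift(np.fft.fft2(gray))
         |         return np.log1p(np.abs(spec))
         | 
         |     real = log_spectrum('real_cookie.jpg')
         |     fake = log_spectrum('generated_cookie.jpg')
         | 
         |     # compare average energy in an outer high-frequency band
         |     print(real[-32:, :].mean(), fake[-32:, :].mean())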
        
       | golergka wrote:
       | And here I was, hoping for new, never seen before dishes.
        
       | gffrd wrote:
       | I like the thought that, years from now, we're all eating
       | weirdly-presented food / drinking weird cocktails because
       | AI synthesized the images of drinks around the web and decided
       | `cocktails always include fruit` and `all food must be piled high
       | on plate`
        
       | twic wrote:
       | This computer has pretty poor taste in cocktails.
        
       | spacemanmatt wrote:
       | Somewhere in that data set is found an Eigencookie. I want the
       | recipe.
        
       | tmountain wrote:
       | Aggregate "does not exist" website for anyone who's interested.
       | 
       | https://thisxdoesnotexist.com/
        
       | forgotusername6 wrote:
       | I wonder how close the nearest match from the training data is.
       | Was there a cheesecake that looked almost like these generated
       | images?
        
         | layer8 wrote:
         | Maybe the ML model effectively implements a lossy image
         | database with minor randomization. :)
        
           | fxtentacle wrote:
           | Since GANs are effectively one class of denoising auto-
           | encoders, your summary is spot-on. This type of ML model
           | learns to effectively compress and decompress natural images
            | by representing them as a hierarchy of convolutional features
            | = shape templates.
        
         | waynesonfire wrote:
         | exactly.. where is the chocolate chip cheesecake?
        
         | nh23423fefe wrote:
         | This feels like when I made a drawing in elementary school, and
         | someone asks if I traced it. It just feels like looking for a
         | way to downplay what was made by making an appeal to
         | "creativity" or "originality".
         | 
         | But the tide never goes out on AI and computing. The
         | capabilities will only grow more and more impressive and
         | unassailable.
         | 
         | When the chatbot is completely convincing, is someone going to
         | ask "I wonder how close the responses are to the training
         | text?", even though no one even blinks when fathers and sons
         | act alike? No one demands children invent new languages to
         | prove they aren't just "randomizing samplers".
        
           | mikkergp wrote:
           | Is this just a search engine to find relevant content and
            | remix it a bit, or can you actually create new content? These
           | two things don't solve the same problem, and you may run into
           | lots of copyright problems.
        
           | yellowapple wrote:
           | > No one demands children invent new languages to prove they
           | aren't just "randomizing samplers"
           | 
           | I sure as hell do. No son of _mine_ will be comprehensible to
            | other humans until he's at _least_ two years old.
        
       | WalterSear wrote:
       | No hot dogs?
       | 
       | Nice work.
        
       | Sebbecking wrote:
       | How big was your training dataset?
        
       | jdthedisciple wrote:
       | Looks impressive but I can't escape the notion that surely some
       | of the generated images will be very close to some of the
       | training images?
       | 
       | How am I to assess how original the generated results really are?
        
         | danuker wrote:
         | Image search, I guess. No results, it's original enough.
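         | 
         | If you have the training set locally, a cruder but automatable
         | check is a perceptual-hash nearest neighbour. A rough sketch
         | (the imagehash library and the file paths here are my own
         | assumptions, and a small Hamming distance is only a hint of
         | similarity, not proof of copying):
         | 
         |     from pathlib import Path
         | 
         |     import imagehash
         |     from PIL import Image
         | 
         |     gen = imagehash.phash(Image.open('gen_cake.png'))
         | 
         |     # smallest Hamming distance = most visually similar
         |     nearest = min(
         |         Path('training_set').glob('*.jpg'),
         |         key=lambda p: gen - imagehash.phash(Image.open(p)),
         |     )
         |     print(nearest,
         |           gen - imagehash.phash(Image.open(nearest)))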
        
       | dylan604 wrote:
       | When will we see this as a contestant on Is It Cake?
        
       | xg15 wrote:
       | At least with DALL-E you can be sure the food has a name. For a
       | moment I was worried this would produce vaguely food-like images
       | where on closer look you realise you have no idea what you're
       | looking at - like a lot of other "this X does not exist" projects
       | seem to do.
       | 
       | Also, a bit of cultural bias shows in the training data, I think.
       | The "pile of cookies" prompt seems to mostly generate American
       | cookies, while e.g. a German user might be disappointed they
       | didn't get this:
       | https://groceryeshop.us/image/cache/data/new_image_2019/ABSB...
       | :)
        
         | fxtentacle wrote:
         | I thought DALL-E uses a sentence-piece encoder for the text
         | that goes into CLIP, which would suggest that you can recombine
         | the syllables from existing words and it'll "understand" that.
         | 
         | So both "banana chocolate cookies" and "banacoochoconakieslade"
         | should work.
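         | 
         | You can see the subword behaviour with the CLIP tokenizer on
         | Hugging Face. A rough check (I'm assuming the
         | openai/clip-vit-base-patch32 tokenizer here, which uses BPE
         | rather than SentencePiece, but the made-up word still splits
         | into known pieces):
         | 
         |     from transformers import CLIPTokenizer
         | 
         |     tok = CLIPTokenizer.from_pretrained(
         |         "openai/clip-vit-base-patch32")
         | 
         |     # both the real phrase and the invented word decompose
         |     # into known subword tokens
         |     print(tok.tokenize("banana chocolate cookies"))
         |     print(tok.tokenize("banacoochoconakieslade"))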
        
         | dalmo3 wrote:
         | I don't think a German user writing "pile of cookies", in
         | English, would be disappointed with "English" results. Is that
         | any different than what you get on, say, Google?
         | 
         | Try prompting craiyon for "Ein Stapel Kekse"* :)
         | 
         | * Google-translated
        
       | munificent wrote:
       | I love cheesecake with strawraspcherries on top.
        
       | waynesonfire wrote:
       | and what was the licensing for the training data that you used?
        
       | n4bz0r wrote:
       | The food looks great! I suppose these models could use some extra
       | training with dishes, though. The plates and glasses look wobbly,
       | which is an instant giveaway. Otherwise, I can see this being
       | used by food posters! Maybe not as a primary source, but as a
       | "filler" -- for sure.
        
       ___________________________________________________________________
       (page generated 2022-07-20 23:00 UTC)