[HN Gopher] Bunny AI
       ___________________________________________________________________
        
       Bunny AI
        
       Author : oedmarap
       Score  : 116 points
       Date   : 2022-12-22 19:22 UTC (3 hours ago)
        
 (HTM) web link (bunny.net)
 (TXT) w3m dump (bunny.net)
        
       | ilrwbwrkhv wrote:
        | Fantastic. I use bunny's services across all of my companies
        | and can vouch for the excellent service they provide at the
        | best cost. Use them blindly for all your needs.
        
       | ilaksh wrote:
        | Dumb question... can you only generate images of bunnies?
        
         | wellthisisgreat wrote:
         | I saw 2 images of pandas in the examples
        
           | fancyPantsZero wrote:
           | It's well-known that pandas are just large bunnies, though.
        
         | twelvechairs wrote:
          | No. You can change the wording in the URL of the example
          | images [0] from 'rabbit' to something else and it will
          | generate for you on the fly; a rough sketch of the pattern is
          | below.
          | 
          | [0] https://bunnynet-avatars.b-cdn.net/.ai/img/dalle-256/avatar/...
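          | 
          | (A minimal sketch in Python, based on the full URL in the
          | wget example further down the thread; the seed segment and
          | query parameter here are placeholders, not documented API.)
          | 
          | # Fetch a generated image by swapping the keyword in the URL.
          | # Path segments are inferred from the thread, not from docs.
          | import requests
          | 
          | seed = "email-1234"   # arbitrary cache-key segment (assumed)
          | keyword = "panda"     # anything other than 'rabbit'
          | url = ("https://bunnynet-avatars.b-cdn.net"
          |        f"/.ai/img/dalle-256/avatar/{seed}/{keyword}.jpg?width=128")
          | 
          | resp = requests.get(url, timeout=120)  # first hit may be slow
          | resp.raise_for_status()
          | with open(f"{keyword}.jpg", "wb") as f:
          |     f.write(resp.content)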
        
       | magic_hamster wrote:
        | I can't understand the need for this kind of thing, as there
        | are so many options for using Stable Diffusion very cheaply (or
        | for free), and of course DALL-E has its own UI. What's the point of
       | using a service like this (besides getting free compute while
       | they are launching)? Do we really need another service
       | aggregator?
        
       | etaioinshrdlu wrote:
       | The commoditization of image generation has been shockingly fast.
        | Now even our CDN provider offers low-cost generation.
        
         | forrestthewoods wrote:
          | Kind of. They all run Stable Diffusion because it was
          | released fully open source.
         | 
         | There's still competitive advantage to owning, training, and
          | gatekeeping access to models. Midjourney and DALL-E are both
          | superior to Stable Diffusion along many axes.
         | 
         | Monetizing models is tricky because it's so cheap to run
          | locally but so expensive in the cloud. But if you release
          | your model so that it can run locally, all advantage is lost.
         | 
         | I wonder if there is a way to split compute such that only the
         | last 10% runs in the cloud?
        
           | fshbbdssbbgdd wrote:
           | Why is it expensive to run in the cloud and cheap to run on a
           | device?
           | 
           | 1. Commodity hardware can do the inference on a single
           | instance (must be true if a user device can do it).
           | 
           | 2. It's apparently possible to run a video game streaming
           | service for $10/month/user.
           | 
           | 3. So users should be able to generate unlimited images (one
           | at a time) for $10/month?
           | 
            | Maybe the answer is that the DALL-E/Midjourney models
            | running in the cloud are super inefficient and Stable
            | Diffusion is better. In that case, those services will need
            | to care about optimizing to get that kind of performance.
            | But it's not inherently expensive just because it runs in
            | the cloud.
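            | 
            | (A back-of-envelope check on point 3; every number below is
            | a rough assumption, not a measurement.)
            | 
            | # Can $10/month cover one-at-a-time image generation?
            | gpu_cost_per_hour = 0.50   # commodity cloud GPU, assumed
            | seconds_per_image = 30     # rough Stable Diffusion latency
            | 
            | cost_per_image = gpu_cost_per_hour * seconds_per_image / 3600
            | images_for_ten_dollars = 10 / cost_per_image
            | print(f"${cost_per_image:.4f}/image, "
            |       f"{images_for_ten_dollars:,.0f} images for $10")
            | # ~$0.0042/image, ~2,400 images for $10 -- plausible, as
            | # long as utilization stays high enough to amortize the GPU.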
        
             | rileyphone wrote:
             | Nvidia's business model makes it inherently more expensive
             | to run on the cloud.
        
               | fshbbdssbbgdd wrote:
                | Ah, when you buy the cheap GPUs, do you have to agree
                | by contract that you might use them for game streaming
                | but won't do AI inference?
               | 
               | Makes me wonder if you could first-sale-doctrine your way
               | out of that problem by buying the GPUs on eBay and not
               | making any agreement with Nvidia.
        
               | sneak wrote:
               | The software is proprietary and is governed by the
               | license. It's not the hardware.
        
             | forrestthewoods wrote:
             | I wouldn't assume those $10/mo gaming services are
             | profitable.
             | 
             | It's not that running in the cloud is more expensive. It's
              | that people already have a $2000 laptop or maybe even a
              | $1600 RTX 4090. If I've got that, I don't want to pay
              | $20/month to 6 different AI services.
             | 
             | Sam Altman said ChatGPT costs like 2 cents per message. I'm
             | sure they can get that way down. Their bills are
             | astronomical. But the data they're collecting is more
             | valuable than the money they're spending.
             | 
              | Stable Diffusion isn't super fast. It takes 30 to 60
              | GPU-seconds per image. There's minimal consumer advantage
              | to running in the cloud. I'd run them all locally if I
              | could.
        
           | TuringNYC wrote:
           | >> Monetizing models is tricky because it's so cheap to run
           | locally but so expensive in the cloud.
           | 
            | Can you expand on this a bit? The way I'm thinking, that is
            | only the case if you need low latency. And in that case, it
            | seems you just need to charge enough to cover compute.
           | 
            | We're running Stable Diffusion on an EKS cluster, which
            | evens out the load across calls and prevents
            | over-provisioning.
           | 
            | If latency isn't an issue, it can be run on non-GPU
            | machines. If you're looking for something under $300 or
            | $400/mo, then I agree it may be an issue.
           | 
            | On that note, I haven't checked whether there are
            | Lambda/Fargate-style options which provide GPU power, to
            | achieve consumption-based pricing tied to usage, but that
           | might be a route. Can anyone speak to this?
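            | 
            | (On the non-GPU point, for reference: a minimal run with
            | the open-source diffusers library looks something like
            | this. The model name and step count are common defaults,
            | not anything bunny.net has documented.)
            | 
            | # Minimal Stable Diffusion inference; runs on CPU too,
            | # just much more slowly than on a GPU.
            | import torch
            | from diffusers import StableDiffusionPipeline
            | 
            | pipe = StableDiffusionPipeline.from_pretrained(
            |     "runwayml/stable-diffusion-v1-5")
            | pipe = pipe.to("cuda" if torch.cuda.is_available() else "cpu")
            | 
            | image = pipe("a rabbit in a server room",
            |              num_inference_steps=30).images[0]
            | image.save("rabbit.png")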
        
             | slig wrote:
              | > On that note, I haven't checked whether there are
              | Lambda/Fargate-style options which provide GPU power, to
              | achieve consumption-based pricing tied to usage, but that
             | might be a route. Can anyone speak to this?
             | 
             | https://lambdalabs.com/service/gpu-cloud
        
               | TuringNYC wrote:
                | Thanks for this. This is nice and the prices are
                | great... but I was specifically curious about something
                | where consumption can be tied to cost (e.g.
                | Lambda/Fargate style, where you pay by the call).
        
       | KaoruAoiShiho wrote:
        | I'm a bunny user, and I'm kinda confused: where is the
        | documentation for using this? There's no link from the blog
        | post.
        | 
        | Edit: Found it; it really should've been in the blog post...
       | 
       | https://docs.bunny.net/docs/bunny-ai-image-generation
        
       | seqizz wrote:
        | Looks like it is generating on the fly, no? The second request
        | for each generated image (unique number) takes no time.
        | 
        | for a in `seq 1000 2000`; do
        |   wget "https://bunnynet-avatars.b-cdn.net/.ai/img/dalle-256/avatar/email-${a}/rabbit.jpg?width=128&hiEbunny=is_this_secure_though"
        | done
        
         | jchw wrote:
         | Almost assuredly, it is generating on the fly, then caching.
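          | 
          | (Easy to check by timing the same URL twice; a sketch, with
          | the URL following the pattern above and a made-up seed.)
          | 
          | # If the second request returns near-instantly, the image was
          | # generated once and then served from the CDN cache.
          | import time, requests
          | 
          | url = ("https://bunnynet-avatars.b-cdn.net"
          |        "/.ai/img/dalle-256/avatar/email-9999/rabbit.jpg"
          |        "?width=128")
          | for attempt in ("cold", "warm"):
          |     t0 = time.perf_counter()
          |     requests.get(url, timeout=120)
          |     print(f"{attempt}: {time.perf_counter() - t0:.2f}s")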
        
         | [deleted]
        
       | sieabahlpark wrote:
       | [dead]
        
       | Havoc wrote:
        | Thinking the next logical step, ChatGPT at the edge, could be
        | even more useful.
        | 
        | Though I guess that still has the underlying limitation of the
        | compute shortage, so it could take a while.
        
         | ilaksh wrote:
         | OpenAI has very similar models available in their API.
        
         | m00x wrote:
         | There's a huge difference between diffusion models that were
         | built to be run on commodity hardware and the huge
          | autoregressive models like GPT. You can't even run GPT-3 in
          | the cloud without some specialized interconnect.
        
           | birdyrooster wrote:
            | Wait, you have to peer directly with their network or
           | something?
        
       | jonplackett wrote:
        | Anyone know the pricing? When you go to their pricing page, it
        | only has info about standard CDN stuff.
        
         | slig wrote:
         | https://docs.bunny.net/docs/bunny-ai-image-generation#suppor...
        
         | nbgoodall wrote:
         | Towards the end:
         | 
         | > Bunny AI is currently available free of charge during the
         | experimental preview release and is enabled for every bunny.net
         | user. We want to invite everyone to have a look and play
         | around, and share the results with us. Bunny AI is released as
         | an experimental feature, and we would love to hear your
         | feedback.
        
       | gingerlime wrote:
       | Feels a bit gimmicky to me, but maybe I'm missing some need in
       | the market.
       | 
        | I wonder about auto-generated CAPTCHAs, perhaps? Or are those
        | going to be easy to reverse?
       | 
       | On a side note: I'd love to switch from Cloudflare to bunny, but
        | it's missing a WAF. bunny has promised one for a long while,
        | but we haven't seen it yet. Personally, I would imagine it being
       | a more core feature for a CDN than AI bunnies on the edge, but I
       | guess I'm old and boring.
        
       | Beefin wrote:
       | If you need a method of indexing and searching these pictures,
       | give mixpeek a try: https://mixpeek.com/
        
       | zzzeek wrote:
       | Saw a great toot yesterday.
       | 
        | Startups and media businesses are looking to make a windfall on
        | AI-generated art, music, code, writing, and other services. The
       | payment models will be subscriptions, pay per use, and other
       | models that make more money the more content is produced.
       | 
       | But there's still no AI (with associated mechanics) that can fold
       | laundry.
       | 
       | (I think the latter would be really useful.)
        
         | [deleted]
        
         | kache_ wrote:
          | > There's still no AI that can fold laundry
         | 
          | We're actually really close to general robot agents that
          | operate in your home. Check out Google AI's SayCan & RT-1
          | systems:
          | 
          | https://ai.googleblog.com/2022/12/rt-1-robotics-transformer-...
        
           | mnutt wrote:
           | Also: https://kottke.org/10/04/the-robot-who-considers-towels
        
           | causality0 wrote:
           | That's gonna be great until someone hacks it and has it stab
           | me to death in my sleep.
        
             | seanw444 wrote:
             | More worried about government backdoors.
        
             | danielheath wrote:
              | Much like my fears about Bluetooth-connected cars being
              | hacked to crash on the highway, it turns out that, by and
              | large, nobody wants to kill me (or at least, not badly
             | enough to do anything about it).
        
       | MinaMe wrote:
       | [flagged]
        
         | adenozine wrote:
         | What in the world...
         | 
         | This is such a schizo comment. I feel sorry for you and your
          | cousin. I hope it works out somehow in life.
        
       | swamp40 wrote:
        | 10,000 Unique Bunny NFTs in 3, 2, 1...
        
       ___________________________________________________________________
       (page generated 2022-12-22 23:00 UTC)