[HN Gopher] AI Seamless Texture Generator Built-In to Blender
       ___________________________________________________________________
        
       AI Seamless Texture Generator Built-In to Blender
        
       Author : myth_drannon
       Score  : 234 points
       Date   : 2022-09-19 16:50 UTC (6 hours ago)
        
 (HTM) web link (github.com)
 (TXT) w3m dump (github.com)
        
       | skykooler wrote:
       | Is there a way to run things like this with an AMD graphics card?
       | Every Stable Diffusion project I've seen seems to be
       | CUDA-focused.
        
         | habibur wrote:
         | That's because Stable Diffusion is built on PyTorch, which
         | isn't well optimized for anything but CUDA. Even the CPU is
         | a second-class citizen there, let alone AMD or other GPUs.
         | 
         | That's not to say PyTorch doesn't run on anything else. It
         | does, but the other backends lag behind and some are
         | hackish.
         | 
         | Looks like Nvidia is on its way to becoming the next Intel.
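         | 
         | To make that CUDA-first assumption concrete, here is a
         | minimal sketch (not from any particular project) of the
         | device-selection dance most SD scripts skip by hardcoding
         | "cuda"; the MPS branch assumes a recent PyTorch build:
         | 
         |   import torch
         | 
         |   # Pick the best available backend; many Stable Diffusion
         |   # scripts just hardcode "cuda" and fail on anything else.
         |   if torch.cuda.is_available():      # NVIDIA (and ROCm builds)
         |       device = torch.device("cuda")
         |   elif torch.backends.mps.is_available():  # Apple Silicon
         |       device = torch.device("mps")
         |   else:
         |       device = torch.device("cpu")   # works, but far slower
         | 
         |   x = torch.randn(4, device=device)  # model/tensors go here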
        
           | mrguyorama wrote:
           | Part of this is simply that AMD does a TERRIBLE job of
           | writing tooling and software for anything that isn't just
           | "render these triangles for this videogame". Doing raw
           | compute on AMD GPUs seems limited to people building
           | supercomputers. Their promised "cross GPU" solution, ROCm,
           | is only available on a tiny fraction of the GPUs they
           | make, seemingly with no architectural excuse for why it
           | isn't available on 5000-series cards; it took them YEARS
           | to provide a backend that Blender could actually use
           | productively, without crashes and bugs; and their drivers
           | are in general very fragile.
           | 
           | It's weird to me how much lip service AMD pays to making
           | cross-platform, developer-friendly, free and open GPU
           | compute standards, only to turn around and just not do
           | that.
        
           | westurner wrote:
           | From the Arch wiki, which has a list of GPU runtimes (but
           | not TPU or QPU runtimes) and Arch package names for
           | OpenCL, SYCL, ROCm, and HIP:
           | https://wiki.archlinux.org/title/GPGPU :
           | 
           | > _GPGPU stands for General-purpose computing on graphics
           | processing units._
           | 
           | - "PyTorch OpenCL Support"
           | https://github.com/pytorch/pytorch/issues/488
           | 
           | - Blender, re: the removal of OpenCL support in 2021:
           | 
           | > _The combination of the limited Cycles split kernel
           | implementation, driver bugs, and stalled OpenCL standard has
           | made maintenance too difficult. We can only make the kinds of
           | bigger changes we are working on now by starting from a clean
           | slate. We are working with AMD and Intel to get the new
           | kernels working on their GPUs, possibly using different APIs
           | (such as CYCL, HIP, Metal, ...)._
           | 
           | - https://gitlab.com/illwieckz/i-love-compute
           | 
           | - https://github.com/vosen/ZLUDA
           | 
           | - https://github.com/RadeonOpenCompute/clang-ocl
           | 
           | AMD ROCm: https://en.wikipedia.org/wiki/ROCm
           | 
           | AMD ROCm supports PyTorch, TensorFlow, MIOpen, and rocBLAS
           | NVIDIA and AMD GPUs:
           | https://rocmdocs.amd.com/en/latest/Deep_learning/Deep-
           | learni...
           | 
           | RadeonOpenCompute/ROCm_Documentation:
           | https://github.com/RadeonOpenCompute/ROCm_Documentation
           | 
           | ROCm-Developer-Tools/HIPIFY https://github.com/ROCm-
           | Developer-Tools/HIPIFY :
           | 
           | > _hipify-clang is a clang-based tool for translating CUDA
           | sources into HIP sources. It translates CUDA source into an
           | abstract syntax tree, which is traversed by transformation
           | matchers. After applying all the matchers, the output HIP
           | source is produced._
           | 
           | ROCmSoftwarePlatform/gpufort:
           | https://github.com/ROCmSoftwarePlatform/gpufort :
           | 
           | > _GPUFORT: S2S translation tool for CUDA Fortran and
           | Fortran+X in the spirit of hipify_
           | 
           | ROCm-Developer-Tools/HIP https://github.com/ROCm-Developer-
           | Tools/HIP:
           | 
           | > _HIP is a C++ Runtime API and Kernel Language that allows
           | developers to create portable applications for AMD and NVIDIA
           | GPUs from single source code. [...] Key features include:_
           | 
           | > - _HIP is very thin and has little or no performance impact
           | over coding directly in CUDA mode._
           | 
           | > - _HIP allows coding in a single-source C++ programming
           | language including features such as templates, C++11 lambdas,
           | classes, namespaces, and more._
           | 
           | > - _HIP allows developers to use the "best" development
           | environment and tools on each target platform._
           | 
           | > - _The [HIPIFY] tools automatically convert source from
           | CUDA to HIP._
           | 
           | > - _Developers can specialize for the platform (CUDA or
           | AMD) to tune for performance or handle tricky cases._
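           | 
           | A practical upshot for PyTorch users: ROCm builds of
           | PyTorch expose AMD GPUs through the same "cuda" device
           | API, so CUDA-targeting Python code can run unmodified. A
           | small check, assuming a ROCm build is installed:
           | 
           |   import torch
           | 
           |   # Set on ROCm builds of PyTorch, None on CUDA builds:
           |   print(torch.version.hip)
           |   if torch.cuda.is_available():
           |       x = torch.randn(8, device="cuda")
           |       print(x.device)  # cuda:0, even on an AMD GPU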
        
         | chpatrick wrote:
         | You can run it on an Intel CPU if that helps:
         | https://github.com/bes-dev/stable_diffusion.openvino
        
       | cdiamand wrote:
       | Very cool. There are some really interesting opportunities to
       | integrate stable-diffusion into many creative apps right now.
       | It's neat to see it all happening at once.
       | 
       | Another interesting example of how stable-diffusion could be
       | integrated into a workflow:
       | https://www.reddit.com/r/StableDiffusion/comments/wys3w5/app...
       | 
       | So many applications...
        
       | owenpalmer wrote:
       | Love this idea!
        
       | Severian wrote:
       | Ha, I knew this would come out sooner or later based on my own
       | experiments with Stable Diffusion. It does very well with
       | textures.
        
       | jcmontx wrote:
       | It makes me so happy to see FOSS providing cutting edge tech for
       | all of us :)
       | 
       | This is absolute gold for indie game devs.
        
       | shabbatt wrote:
       | holy cow! this is insane. should be possible to create a
       | mesh and get Stable Diffusion to generate a UV texture map
       | too!
       | 
       | later we should be able to use prompts to generate 3d meshes
       | with full UV texture maps as photogrammetry picks up pace.
       | 
       | first they came for 2D, then they will come for 3D.
        
         | spyder wrote:
         | Yep:
         | 
         | https://github.com/NasirKhalid24/CLIP-Mesh
        
         | JamesBarney wrote:
         | Yeah, I'm excited about what this means for indie games.
        
           | zactato wrote:
           | Aren't indie games already dangerously close to commodity
           | status? Steam is overwhelming these days. There are dozens
           | of games where I can build bridges, or new interesting
           | strategy games. How is any one developer supposed to
           | capture enough market share to make any money off their
           | work? I worry that tools like this will just lower the
           | barrier even more.
           | 
           | Maybe it's a good thing, because it will let indie devs
           | spend less time/money on art.
        
             | [deleted]
        
             | JellyBeanThief wrote:
             | > How is any one developer supposed to capture enough
             | market share to make any money of their work? I am worried
             | that tools like this will just lower the barrier even more.
             | 
             | In an ideal society, everyone has time, energy, and
             | resources to create art themselves just because it makes
             | them happy, as opposed to having to turn a profit.
        
             | myth_drannon wrote:
             | I don't think indie devs are in it for the money. That
             | ship sailed a decade ago (the '90s were the shareware
             | era, the '00s were the indie dev era).
        
               | somenameforme wrote:
               | Yip. I think a large motivation for many games is not
               | to make money but to make something you personally
               | want that isn't already out there, where the money is
               | just a nice perk.
               | 
               | "UnReal World", for the most extreme example I know of,
               | was released and has been in development for more than 3
               | decades. It's still receiving regular updates, with the
               | dev kind of mixing game and life. It's a game about
               | surviving in the Finnish wilds, by a dev who lives out in
               | the middle of the Finnish wilds.
        
               | postsantum wrote:
               | And the '10s were the SaaS era.
        
               | [deleted]
        
             | wongarsu wrote:
             | The barrier to entry has been on the floor ever since
             | Steam discontinued Greenlight and started letting
             | everyone onto the platform. But at the same time they
             | invested a lot in better content discovery: personalized
             | recommendations, the discovery queue, curators you can
             | follow, etc.
             | 
             | If you're building the next rehash-of-a-popular-concept,
             | this asset generator at best saves you a couple of
             | minutes shopping the Unity Asset Store and selecting the
             | right store-bought texture in Blender. But it will raise
             | the bar of what's possible for new, innovative settings,
             | which I'm really looking forward to.
        
           | moogly wrote:
           | Maybe finally the pretending-my-programmer-art-is-a-super-
           | opinionated-stylistic-choice-to-go-with-retro-pixel-art-and-
           | not-just-because-it's-so-much-easier-not-to-hire-an-artist
           | fad can be -- if not laid to rest -- perhaps toned down a
           | bit.
        
             | badsectoracula wrote:
             | These tools won't replace artists or the need for some
             | sort of artistic sense - there are several indie games
             | that had professional artists working on the assets, but
             | the developers behind them completely massacred their
             | art.
             | 
             | As an example, check out Frayed Knights on Steam - I
             | really like the game and think it is both very fun and a
             | very competent freeform blobber RPG, but despite the
             | author having help from artists (and he even worked on
             | some beloved PS1 games himself, so he wasn't an amateur
             | at it), the visuals are downright ugly - the UI even
             | looks worse than the default Torque theme! The fact that
             | the game shipped with what looks like a rough
             | placeholder made in MS Paint for the inventory
             | background tells me that the developer (whom, do not get
             | me wrong, I otherwise respect, just not when it comes to
             | visuals) is blind when it comes to aesthetics. That is a
             | shame, because the actual game is both very humorous and
             | has a genuinely deep character system - but due to the
             | visuals it was largely ignored.
             | 
             | This won't be solved by AI; at the end of the day
             | someone will have to decide that something looks good,
             | and someone will have to integrate whatever output the
             | AI creates with the game.
             | 
             | What will actually happen is that people with some
             | artistic skill will be able to do things faster than
             | they could before - it will improve several types of
             | games (i.e. those whose art styles fit whatever the AI
             | creates), but it won't let someone without artistic
             | skills suddenly make high quality art assets.
        
       | BobbyJo wrote:
       | I can't wait until these kinds of tools are usable live. I'd love
       | open worlds with unique character interactions and scenery. I'm
       | always incredibly disappointed when I've exhausted a game's
       | content or when portions of content are obviously built on some
       | simplistic pattern, either visual or interactive.
        
       | nullc wrote:
       | Now, someone figure out how to set up the boundary conditions
       | so that it can fill in Penrose or Jarkko Kari's aperiodic
       | Wang tilings, to efficiently get aperiodic textures.
       | 
       | If you fill in a set of these tiles with the right edge
       | rules, then you can just formulaically fill a plane and get a
       | non-repeating texture without generating unreasonable amounts
       | of SD output.
        
       | jonplackett wrote:
       | Anyone know if there are non-Blender-specific versions /
       | prompt hacks to get seamless textures out of Stable
       | Diffusion?
       | 
       | Whenever I ask for something like 'seamless tiling xxxxxx' it
       | kinda sorta gets the idea, but the resulting texture doesn't
       | _quite_ tile right.
        
         | duskwuff wrote:
         | If you're using a recent version of stable-diffusion, it's
         | exposed as the --seamless option.
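         | 
         | The usual trick behind such options is to switch every
         | convolution in the model to circular padding. A minimal
         | sketch for a PyTorch-based pipeline (the pipe.unet and
         | pipe.vae names are illustrative and vary by implementation):
         | 
         |   import torch
         | 
         |   def make_seamless(model: torch.nn.Module) -> None:
         |       # Circular ("wrap-around") padding makes features at
         |       # one edge continue onto the opposite edge, so the
         |       # generated image tiles seamlessly.
         |       for m in model.modules():
         |           if isinstance(m, torch.nn.Conv2d):
         |               m.padding_mode = "circular"
         | 
         |   # e.g.: make_seamless(pipe.unet); make_seamless(pipe.vae)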
        
       | khangaroo wrote:
       | I wonder if it would be viable to have a model that generates
       | other components like normal maps based on the generated texture
       | too.
        
       | anon012012 wrote:
       | I'm currently trying to put 1000 seamless wallpaper textures
       | onto the UE5 Marketplace. I'm saddened to see this news. ^^
       | Well, fuck money anyway, right? Here's a tip: you can produce
       | all you need if you follow this guide:
       | 
       | https://rentry.org/voldy#-guide-
       | 
       | Just check what this stuff can do:
       | 
       | https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki...
       | 
       | This is the best page on the internet right now. The hottest
       | stuff. Better than Bitcoin.
       | 
       | You can get guidance and copy business ideas here:
       | https://lexica.art/ https://laion-aesthetic.datasette.io/laion-
       | aesthetic-6pls/im... http://stable-diffusion-guide.s3-website-us-
       | west-2.amazonaws...
       | 
       | For textures: once you have generated the color (diffuse) map
       | with Stable Diffusion, you can use CrazyBump to batch-create
       | the normal map and displacement map. I'm currently at my
       | 200th file iteration. http://www.crazybump.com/ CrazyBump,
       | all the cool kids are using it.
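       | 
       | For anyone without CrazyBump, the core of that step is a
       | gradient trick: treat luminance as height, take x/y
       | derivatives, and pack the normals into RGB. A rough sketch (a
       | hypothetical helper, not CrazyBump's actual algorithm):
       | 
       |   import numpy as np
       |   from PIL import Image
       | 
       |   def normal_map_from_height(path, strength=2.0):
       |       # Luminance as a heightfield, gradients as surface slope.
       |       h = np.asarray(Image.open(path).convert("L"),
       |                      dtype=np.float32) / 255.0
       |       dy, dx = np.gradient(h)
       |       nx, ny = -dx * strength, -dy * strength
       |       nz = np.ones_like(h)
       |       length = np.sqrt(nx**2 + ny**2 + nz**2)
       |       rgb = np.stack([nx, ny, nz], axis=-1) / length[..., None]
       |       # Remap [-1, 1] normals to the usual [0, 255] encoding.
       |       img = ((rgb * 0.5 + 0.5) * 255).astype(np.uint8)
       |       return Image.fromarray(img)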
       | 
       | Now this is where I'm at. Call me crazy. I'm surely
       | forgetting things, but it's the best I can do. Go and change
       | the world.
       | 
       | PS: You can batch-upscale the 512px outputs to a beautiful
       | 2K+ with this: https://github.com/xinntao/Real-ESRGAN
        
       | DonHopkins wrote:
       | Blender is the perfect platform for this kind of stuff, since
       | it's all scripted in Python, which is the lingua franca of
       | machine learning.
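       | 
       | As an illustration of how little glue is needed, a minimal
       | sketch using Blender's bpy API to wire a generated image into
       | a material (the file path is hypothetical):
       | 
       |   import bpy
       | 
       |   # Load a generated image and plug it into a new material's
       |   # Principled BSDF base color.
       |   img = bpy.data.images.load("/tmp/generated_texture.png")
       |   mat = bpy.data.materials.new(name="AI_Texture")
       |   mat.use_nodes = True
       |   nodes = mat.node_tree.nodes
       |   tex = nodes.new("ShaderNodeTexImage")
       |   tex.image = img
       |   bsdf = nodes["Principled BSDF"]
       |   mat.node_tree.links.new(tex.outputs["Color"],
       |                           bsdf.inputs["Base Color"])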
        
         | robertlagrant wrote:
         | It makes sense that it is, as Python has if statements.
         | 
         |  _ducks_
        
       | Animats wrote:
       | Oh, it generates from a text prompt, not a sample texture. I
       | thought this was a tool to generate wrapped textures from
       | non-wrapped ones.
       | 
       | The licensing is a mess. The Blender plug-in is GPL 3, the
       | Stable Diffusion code is MIT, and the weights for the model
       | have a very restrictive custom license.[1] Whether the
       | weights, which are program-generated, are copyrightable at
       | all is a serious legal question.
       | 
       | [1] https://github.com/lstein/stable-
       | diffusion/blob/61f46cac31b5...
        
         | nullc wrote:
         | Pretty ironic to assert copyright on the weights while
         | ignoring it on the training data that produces them. Are AI
         | practitioners foolish enough to tempt that fate?
        
         | naillo wrote:
         | The weights are not particularly restrictive. You're totally
         | allowed to use them to generate things you sell for instance.
        
         | kache_ wrote:
         | >very restrictive custom license
         | 
         | restrictive in what sense? Doesn't seem restrictive to me
        
         | wongarsu wrote:
         | Shouldn't the arguments for applying copyright to photographs
         | apply nicely to applying copyright to ML weights too? Sure, the
         | output is generated by a machine, but the output is also
         | created by the creative choices of the machine's user.
         | 
         | If anything, it would seem to me that photographs have a
         | much better case for being uncopyrightable, since they are
         | mechanistic reproductions of existing reality.
        
         | brailsafe wrote:
         | It does seem to allow for providing a sample texture
        
         | htrp wrote:
         | You can assert any license you want, but good luck enforcing it
         | in court.
        
       | yesimahuman wrote:
       | As a developer and past indie dev who creates awful art for
       | any project I take on, I find this incredibly exciting. We're
       | getting closer to a reality where an engineer can build a
       | full game or experience completely on their own and maintain
       | a high level of artistic quality.
        
         | jlundberg wrote:
         | Another recommendation for any aspiring solo or small-team
         | game developer is the Megascans library:
         | 
         | https://quixel.com/megascans/
         | 
         | Included in the Unreal Engine license after Epic purchased
         | Quixel. :)
        
       | ByThyGrace wrote:
       | Unfortunately none of the three textures shown as examples in the
       | README are seamless pattern textures. That would have completely
       | driven the point home. I really like the idea though.
        
       | ramesh31 wrote:
       | I'm kind of overwhelmed by this stuff at the moment.
       | 
        | On the one hand, it's very clear by now that this new
        | generation of AI is absolutely game-changing. But for what?
        | It feels a bit like discovering oil in 1850.
        
       | camjw wrote:
       | I think it's so interesting and positive that Stable Diffusion
       | has come out and absolutely destroyed DALL-E as a product. What
       | are the best examples of DALL-E being integrated into a product?
       | Are there any at all?
        
         | mrtksn wrote:
         | OpenAI decided to keep it to themselves. Their tech was
         | impressive, but they didn't have a killer app, and they
         | tried to prevent the inevitable by restricting the use of
         | their machine.
         | 
         | Stable Diffusion might be inferior to DALL-E in some
         | respects, but they built a community with full access to
         | the tooling, and that community is much more likely to find
         | a killer app for this impressive tech.
         | 
         | It's kind of ironic that OpenAI is losing out due to their
         | closed ways and desire for control.
        
         | skybrian wrote:
         | We haven't seen many integrations, but that doesn't mean
         | DALL-E lacks users - we have no idea how many it has. Stuff
         | showing up on Hacker News isn't a good proxy for this.
        
           | O__________O wrote:
           | Stable Diffusion easily has 100x more active users than
           | DALL-E; this is based on stats OpenAI released and private
           | sources I've been able to dig up. Stability AI is rumored
           | to be in the process of raising additional funding at over
           | a billion-dollar valuation. Unless OpenAI rapidly changes
           | course, it's only a matter of time before they're a
           | footnote in the history of AI, in my opinion, since
           | Stability will likely go after every current offering they
           | have, including GPT-3/Copilot.
        
             | skybrian wrote:
             | Well, I guess it's fortunate that OpenAI the company is
             | owned by a nonprofit that's devoted to research. As long as
             | they get funding they can keep doing research.
        
         | kache_ wrote:
         | ""Open""
         | 
         | ""AI""
        
         | thorum wrote:
         | OpenAI is planning to release an API for DALL-E. Once they do,
         | you will see more applications being built with it.
         | 
         | https://help.openai.com/en/articles/6402865-is-dall-e-availa...
        
           | jowday wrote:
           | Whatever API they release is going to be way more
           | restrictive than what people can do with Stable Diffusion.
           | I doubt we'll see anywhere near the same number of
           | integrations unless OpenAI just lets you download the
           | weights and run DALL-E locally.
        
         | peppertree wrote:
         | This is like BitKeeper vs. Git all over again. DALL-E is
         | the early innovator, but Stable Diffusion is going to
         | clean up.
        
           | naillo wrote:
           | The people who originated the CLIP-guided diffusion
           | approach (RiversHaveWings, around this time last year)
           | are now working at Stability AI, so it's somewhat
           | arguable that DALL-E wasn't actually first (just the
           | first to make a user-friendly SaaS for it).
        
             | GrantS wrote:
             | OpenAI announced and released CLIP on GitHub on January 5,
             | 2021: https://github.com/openai/CLIP
             | 
             | You need CLIP to have CLIP guided diffusion. So the current
             | situation seems to trace back to OpenAI and the MIT-
             | licensed code they released the day DALL-E was announced. I
             | would love to be corrected if I've misunderstood the
             | situation.
        
               | naillo wrote:
               | You're totally right, OpenAI released CLIP in
               | January. But CLIP isn't an image generator, it's just
               | a classifier. If we restrict the question to actual
               | text-to-image generators (ignoring Deep Dream and
               | others that are kinda cool but far from the coherency
               | of post-2021 generators), then CLIP-guided diffusion
               | is kinda the first.
        
         | pdntspa wrote:
         | I am soooooooooo glad this happened. OpenAI has defaulted
         | on their promise of openness, and it seems a lot of models
         | are gatekept by profiteering and paternalistic moralism.
         | 
         | All it takes is one good, _actually open_ project to
         | sidestep all of their chicanery.
        
           | scifibestfi wrote:
           | It raises the question: did they believe there would be
           | no competition?
        
             | capableweb wrote:
             | That's usually how _innovation stagnation_ happens.
             | There are tons of examples all around. Intel and AMD
             | were fighting; at one point Intel got a solid lead on
             | AMD, but eventually they became overconfident and lazy.
             | The same will probably happen to AMD eventually; it's
             | just a matter of how long the cycles are.
        
           | yazzku wrote:
           | ActuallyOpenAI
        
           | mhuffman wrote:
             | Yes, and the end of the world they predicted if
             | text-to-image technology got into the hands of bigots,
             | or if celebrities/politicians were deep-faked into odd
             | situations, etc., just has not materialized. I really
             | think (and always thought) that the whole ethical
             | reason for withholding access was bullshit! Same with
             | generative text from prompts.
        
             | serf wrote:
             | > ... just has not materialized.
             | 
             | I don't think we know that comfortably yet.
             | 
             | For one, software takes a while to diffuse into public
             | usage. Secondly, if you were the victim of blackmail or
             | some other criminal activity perpetrated with such a
             | system, you wouldn't raise your hand immediately --
             | you'd get clear of the problem -- and only afterwards,
             | depending on how embarrassing the situation is, would
             | you speak out publicly. Many victims will never
             | identify the methods used, and most will never speak
             | out publicly.
             | 
             | In other words, systems like this _could_ be being used
             | to harass people, and it would take some time before
             | 'the public' became aware of it as an ongoing problem.
        
             | TigeriusKirk wrote:
             | There are some people making images that almost anyone
             | would find offensive. But I haven't noticed those images
             | having any impact at all, at least not yet.
        
             | yoyohello13 wrote:
             | I think those ethical concerns are very real. It's just
             | inevitable that this technology will be used for
             | nefarious purposes. But withholding access to the tech
             | can't do anything to avoid the inevitable.
        
       ___________________________________________________________________
       (page generated 2022-09-19 23:00 UTC)