The Human Supremacist Position [1]
Date: 2023-12-17

A fairly common statement regarding AI goes something like: "AI cannot be creative; all they do is remix existing stuff probabilistically." I characterize that as "The Human Supremacist" position, because it implies that there is some sort of "true creativity" of which only humans are capable. There are a lot of problems with that position, which I will attempt to explore herein. After that, I will describe some of my thoughts on how I experience the creative act, and compare it with how AIs implement creativity.

Midjourney AI and mjr: "a paper napkin diagram of how generative adversarial networks function."

The human supremacist position is fairly easy to refute, from several lines of argument. I'll try two here. The first is perhaps a bit of common sense and the other is a bit more philosophical.

First off, have you ever known a dog, a horse, or (arguably) a cat? (Let alone an otter, a crow, or a badger.) If you have spent a significant amount of time around some of these creatures, you'll notice that, like humans, they display creativity. I don't mean that a horse is going to grab a paintbrush in its teeth and kick out a masterpiece in the Renaissance style, but rather that these creatures exhibit a sense of fun, perhaps a sense of humor, and they solve problems.

Philosophically, the human supremacist position fails if there is even one credible instance of an AI or a non-human exhibiting creativity, so the argument is almost done right there. But examples are fun: I once saw two crows sitting on a jersey barrier, and whenever a semi came by they'd set their wings, levitate a bit into the air, then settle back down when the blast of air from the truck was gone. Those crows were exhibiting creativity: they had encountered a phenomenon, understood it, and figured out a way to use it for their amusement.

I knew a horse, once, who literally lived to create trouble for his owner. The horse would do things like nuzzle his mommy while simultaneously stealing things out of her jacket pockets. Again, the animal was encountering a phenomenon, understanding it, and figuring out a way to use it for its amusement.

My dog, one time, communicated quite clearly to me regarding a novel problem: he had a rabbit's skull stuck where he couldn't pass it, after swallowing a rabbit whole. He would run a little ahead of me, crouch like he was trying to defecate, look at me, and whine. I was able to figure out what he was saying, got him to the vet, and a vet tech removed the skull by crushing it with needle-nose pliers, thereby earning a significant tip. The point of that is not to be disgusting, but to illustrate that a non-human was able to encounter a new situation (that was the first and only time that ever happened to a dog I know, which is why I used that example), understand it, and figure out a way to resolve the problem by communicating with his person. In that example, I believe the dog exercised creativity in several ways.

I suppose I should also add that I've seen humans who are less effective at problem-solving than some dogs, and problem-solving is the ultimate creative process. You don't need to make a big leap from "solving a problem" to "painting a painting in the Renaissance style": painting a painting is a collection of smaller problems, solved together.
So is sending a rocket to the moon, or inventing firearms.

Next, we argue against the human supremacist position philosophically. Consider the great game of Chess. Having said that, I could stop right there, but I won't. Chess is one of mankind's great achievements: a day to learn and a lifetime to master. The game is a distillation of strategy: players sit there and analyze the board, think about their opponent's moves, and try to come up with the best counter-moves that lead toward a victory. Strategizing is a creative process: you have to assemble a sequence of moves many moves ahead, rejecting the bad ones and choosing the good ones.

Of course I am not playing fair: I used chess as an example because the human supremacist position used to be that humans could still beat computers at chess. Until they couldn't. There will never be another human chess champion. Then it was Go, etc. We are forced either to reject the idea that playing chess is a creative process, or to come up with an argument that somehow differentiates human chess from AI chess. Be careful, that's a trap!

In discussions with human supremacists, I've often heard the rejoinder that chess-playing computers are not creative because they a) look farther ahead in moves than a human can and b) embed massive lookup tables of the best move for a given situation. The AI is not creative, they say; it's exhaustive rote learning. Unfortunately, that ignores the fact that human chess masters spend a huge amount of time studying opening moves and prior games by other masters, and it's widely understood among masters that being able to look more moves ahead makes a critical difference. If human chess masters are not showing creativity, then what are they doing? One possible thing that they are doing is eliminating all the most-bad moves before them. If you had, say, a way of eliminating the most-bad moves, then scoring and weighting the likely value of the remaining candidates, you'd wind up with a pretty well-played chess game! You know, like the old joke: "how to sculpt Michelangelo's David - take a big piece of marble and cut away everything that does not look like Michelangelo's David." That, by the way, is a fair high-level description of how AI image generators work.

Midjourney AI and mjr: "a paper napkin diagram of how generative adversarial networks function."

Suppose the human supremacist wishes to argue that an AI art generator is not showing creativity because it is simply probabilistically regurgitating a deep and vast model of existing art; the rejoinder must be, "Michelangelo studied art-making and practiced and was deeply familiar with art, too!" David, after all, was not just some guy: Michelangelo was referencing thousands of years of mythology and a whole lot of Greco-Roman statuary. He no more created David out of whole cloth than an AI does. If an AI is merely derivative, so is Michelangelo. We don't really know what creativity is, and our lack of knowledge is showing.

One final part of the philosophical argument is that, perhaps, creativity is something that can surprise us. That's a fairly typical reaction to a masterpiece, in fact: "how the heck...? wow!" Well, if you look at some of Michelangelo's early sculpture, it's not great, but it shows talent that will eventually refine into master class. [The Virgin of the Stairs] But if you ask a human or an AI to create something (with rough direction) and are surprised by what it comes up with, you are possibly experiencing creativity.
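Back to the chess argument for a moment: here is a toy sketch of the "eliminate the most-bad moves, then score and weight what remains" idea. It uses the python-chess library and a deliberately crude material-count evaluation of my own; it is meant to show the shape of the prune-and-score loop, not to play good chess.

    # Toy "prune the worst, score the rest" move chooser.
    # Requires the python-chess library: pip install chess
    import chess

    PIECE_VALUE = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                   chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

    def material(board, color):
        # Sum of piece values for one side.
        return sum(PIECE_VALUE[p.piece_type]
                   for p in board.piece_map().values() if p.color == color)

    def score_move(board, move):
        # Cheap one-ply score: our material minus theirs after making the move.
        us = board.turn
        board.push(move)
        score = material(board, us) - material(board, not us)
        board.pop()
        return score

    def choose_move(board, keep_fraction=0.5):
        # Sort candidates by score, throw away the most-bad half,
        # then pick the best of the survivors.
        moves = list(board.legal_moves)
        moves.sort(key=lambda m: score_move(board, m))
        keep = max(1, int(len(moves) * keep_fraction))
        survivors = moves[-keep:]
        return survivors[-1]

    board = chess.Board()
    print(choose_move(board))

A real engine replaces the crude material count with much deeper search and evaluation, but the shape of the loop stays the same: generate candidates, prune the most-bad, score the rest, choose.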
Midjourney AI and mjr: "michaelangelo's david hip hop version"

The way that masters like Michelangelo happen is a generative feedback loop; they are not born knowing how to be insanely creative. They make something, like The Virgin of the Stairs, and critics, paymasters, and other artists offer helpful advice like "her hands look like boxing mitts," etc. The chess player who will eventually be a master is also in a generative feedback loop: they try different moves and lose, or sometimes someone explains something better and gradually they stop losing. Their desire to succeed keeps them trying, remixing their past efforts to create future work, and for every Michelangelo there are thousands of humans who just never made it as sculptors. That is also a generative feedback loop: some artists never blossom to their full potential because they got too much negative feedback, like "for a sculptor, you're a pretty good bricklayer."

It's the concept of generative feedback loops that is crucial to this whole problem, and now it's time to take a look at the experience of being creative. I'm going to have to assume that you've had the experience of being creative, even if only in problem-solving. In fact, to make the argument simple, let's consider problem-solving and artistic creativity to be the same thing: you're trying to do something - perhaps you have direction from outside ("can you fix the wiring in the network closet?") or from inside ("I'm going to sculpt a figure based on the Bible"). Your first question to yourself is, "Self? Where do I start?" and that is the beginning of the generative feedback loop. "Maybe I should rip all the wires out and replace them" or "Maybe I should do a sculpture of the farmer who owned the manger that Jesus' family took over" - something from our cultural background or existing knowledge bubbles up a few suggestions. Where do they come from? I submit that this is the beginning of the great engine of creativity.

When we learn to problem-solve or sculpt or whatever, we learn to rapidly hypothesize candidate ideas, which we accept or shoot down just as rapidly as we consider the information we have about the problem before us. So, if you're the sculptor considering investing months of your life to produce a marble statue of a grumpy farmer, you might shrug "that's a bad idea" and then realize "how about Mary Magdalene?" And then, "hey, what if she's the only person crying and everyone else is just dealing with the politics of the incident?" Similarly, the wiring closet: you look inside and realize that there are critical systems in there you can't take offline, so "just rip it out" won't work. And then, "hey, maybe I just trace out the critical systems and replace everything else?" The point is that past knowledge and experience are applied against our first candidate ideas, and we refine them iteratively. That's the generative feedback loop.

Let's generalize the example: suppose that you have a "hypothesizer" that takes your past experience and information about what you are doing, and rapidly burps out a whole bunch of half-baked ideas. I used the term "half-baked" right there because, in fact, we humans have terms like that which expose how we think about our own creativity - that tells us something. So the half-baked ideas begin to swarm out of the hypothesizer and they hit our first-order bullshit filter. We may not even pay much attention to it as it flits through the sea of options, going "nope," "nope," "nope," "hey, maybe."
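A minimal sketch, purely to make that hypothesizer-plus-filter loop concrete; the candidate ideas, the scores, and the threshold below are all invented for illustration, and nothing here comes from a real model.

    # A made-up "hypothesizer plus first-order filter" loop.
    import random

    EXPERIENCE = {  # crude stand-in for past knowledge: idea -> how promising it feels
        "a grumpy farmer in marble": 0.1,
        "yet another standing David": 0.3,
        "Mary Magdalene, the only one weeping": 0.8,
        "rip out every wire in the closet": 0.2,
        "trace the critical runs, replace the rest": 0.7,
    }

    def hypothesizer(n=5):
        # Burp out half-baked candidate ideas, in no particular order.
        return random.sample(list(EXPERIENCE), k=min(n, len(EXPERIENCE)))

    def first_order_filter(idea, threshold=0.6):
        # "Nope" anything our past experience scores below the threshold.
        return EXPERIENCE[idea] >= threshold

    for idea in hypothesizer():
        verdict = "hey maybe" if first_order_filter(idea) else "nope"
        print(f"{verdict:9s} {idea}")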
The "hey maybe" suggestion is the "how about Mary Magdalene?" or some other approach. It's my suspicion that if you assign yourself, right now, a problem, and assess how you think about it, it may match the experience I am describing. If it does, congratulations, you're being creative. But don't pat yourself on the back. Midjourney AI and mjr: "a marble statue of Mary Magdalene by Michaelangelo" AIs knowledge-bases are big trees of probabilities that are contingent on other probabilities, etc. For example, an AI art knowledge-base might give certain activation levels if you match for "biblical art", with a certain probability that "biblical art" is followed by "Michaelangelo's David" but it also has other probabilities for "biblical" that have different activation levels around "Mary Magdalene" - as the human supremacists point out, it's a probability game. The technique worked well in the 1980s for optical character recognition - computers could learn to recognize the pixels that make up an '9' or whatever. It's just a couple steps more bigger technology and knowledge-bases that allow for a recognizer that is good at probabalistically recognizing pretty much any image. The AI art generator does not have pixel maps in it somewhere that define how it sees Mary Magdalene. It actually uses the same approach Michaelangelo used: it takes a bitfield full of noise and starts iteratively reducing the values of pixels that don't match "Mary Magdalene by Michaelangelo." If you keep iteratively removing marble, you wind up with a sculpture and the quality is governed by the precision that can be achieved in the match. So, that's how that's done. But where's the creativity? Imagine you have a generative feedback loop: there's a hypothesizer that throws out 'ideas' (or call them something else) and there's another trained matching knowledge-base that has passing familiarity with all human art and also what people like. Where might that knowldge-base come from? Well, one proxy for "what is good art?" would be if it's in a human museum. Or, maybe if it's highly-rated on an art site like deviantart. But it may not even be necessary to have a proxy for "what is good art?" because the knowledge-base is going to get more training versions of Michaelangelo's David than of Ferd Burfle's Lego version of David. No offense, Ferd, it's just a numbers game. So the hypothesizer throws out ideas at a speed that only a computer can manage, while the art critic, shoots down the worst ideas (the ones resulting in lower activations) and perhaps it iteratively keeps pruning away bad ideas until it has one remaining idea. Is the result a good idea? Well, by definition, there's a high probability that someone will like it. What I have just described is a Generative Adversarial Network or GAN. I argue that that's how computers can be creative, and that is how humans are creative. I think that if you consider your own creative experience honestly, you'll see that this is how it is. We absorb feedback into our creative processes approximately the same way the AI does. We see this all the time, everywhere. Have you ever listened to Stevie Ray Vaughn and thought, "wow, you can see how Jimi Hendrix' playing influenced him" (even when he's not outright playing Jimi Hendrix songs)? Have you ever seen a photo by some big name photographer and thought, "that's reminiscent of some other big name photographer"? AIs are not just probabalistically regurgitating from Caravaggio any more than you are. 
In fact, just as I wrote that bit about Caravaggio, I thought, "it'd be fun to add an illustration there of, um... what about Yousuf Karsh's portrait of Churchill as painted by Caravaggio?" Where is the creativity in that scenario? I have been told I am a highly creative person (I agree, I think, though age has slowed my creativity down and perhaps sharpened it) and I am able to do the generative adversarial part of having an idea so fast that it's almost subliminal. But as I was coming up with that idea I distinctly thought, "I hate Churchill," and then, "I love Karsh's portrait of Churchill." And, of course, I am a huge fan of Caravaggio. The fact that I worked Caravaggio into this example at all is because I am a huge fan of Caravaggio, and my creative engine's hypothesizer is likely to burp out Caravaggio whenever I am reaching for an idea involving great art. I sure as hell wouldn't think of Jackson Pollock for a portrait of Churchill, because the first-order bullshit filter goes "nope."

Midjourney AI and mjr: "a jackson pollock version of karsh's portrait of winston churchill"

It's all just probabilities in action.

My argument, then, is that Generative Adversarial Networks are how humans and AIs both create. Ours is a knowledge base curated by our growing up in a particular civilization, and the AI's is a knowledge base curated by ... growing up in a particular civilization. We share the same proxies for "what is good," too: the fact that the AI sees more pictures of Jimi Hendrix than of Stevie Ray Vaughan affects its creative output in somewhat the same manner as it does our own.

The human supremacist has a serious problem here. If they wish to maintain that there is something, somehow special, about human creativity, they have to be able to define "creativity" in some way that an AI cannot satisfy. Obviously, I'm a reductionist who does not believe in "souls" or spirits or anything magically special about humans, and I certainly don't think humans are different from, say, orangutans - our near cousins, who are also self-aware and creative. That brings me to a final point, which is that AI networks were not creative until fairly recently, when some researchers sat down and implemented a process of human creativity in software.

It may be tempting to hold out that humans are special because we're self-aware, but I predict that won't be a particularly high bar to jump over. Like creativity, we don't fully understand self-awareness, but if an AI is programmed to react indistinguishably from a self-aware being, it's going to become a point of pointless contention whether it's really self-aware or not. Human supremacists will have to face the inevitable challenging question: "if you don't know what self-awareness is, why do you think that you are self-aware?"

[A note to anyone who wishes to assert humans have a soul: please go ahead, but present evidence that there is such a thing, in the context of a semi-complete theory of ensoulment.]

This is cross-posted from my regular blog over at freethoughtblogs. [stderr]

[1] https://www.dailykos.com/stories/2023/12/17/2212295/-The-Human-Supremacist-Position?pm_campaign=front_page&pm_source=more_community&pm_medium=web