Clippy is Not Skynet [1]
Daily Kos, 2023-12-14

Vox has another entry in the argument between AI accelerationists and AI doomers, this time through the lens of the Biden administration’s AI regulations. This kind of argument is both silly and actively harmful. It would be nice if reporters stopped focusing on it and started focusing on the real harms AI and machine learning are doing today.

The argument is silly because both kinds of alleged harm — that by regulating AI we prevent a glorious future for all mankind, or that by not regulating it enough in the right way we’ll all be turned into paperclips by our evil AI overlords — are overwrought and highly, highly unlikely. Neither argument is based in reality.

Doomers claim that if we aren’t careful, systems like ChatGPT will evolve into general intelligences — intelligences that can learn in a general fashion, just like humans. This isn’t really a serious argument. Maybe it is because I am old enough to remember all of the cycles of hype around AI — I had a professor back in the dark ages when I went to college, when you could work full time and pay the tuition out of your pocket and did not need to mortgage your organs to pay for class, who was gleeful over the Java language because it was better than LISP and would inevitably lead to general intelligence in a mere couple of decades. (Reader, it did no such thing. It didn’t even get off my lawn.) — but I am deeply skeptical of these claims. The idea that ChatGPT or similar systems are going to bootstrap themselves into general intelligence isn’t really all that plausible. This round of AI, remember, copies things. It doesn’t learn in any meaningful sense, and its horizons are limited by what it has seen before.
Not to mention it takes oodles (a technical term) more data than a human to be able to imitate anything. ChatGPT and similar systems have shown no ability to do what humans do — create something new from their experiences and training. If you look at the great Impressionists, for example, you can see them start as great classical painters and then gradually transform their art. Imitative AI has shown no such ability. In fact, it may already be polluting itself to the point of uselessness. Grok, Musk’s alt-right ChatGPT, was recently caught quoting OpenAI’s terms of service as a reason it wouldn’t generate something nasty. This happened, per the Grok team, because there is so much garbage generated by OpenAI’s products on the internet that Grok’s training data became infected with it, and, voila, it imitated what it saw.

Imitation is not, by itself, learning. And since what these systems do is try to generate something that matches the expectations of their training data, the idea that you can imitate your way to real intelligence via these systems is not plausible. Perhaps there is some other kind of AI system out there with an approach that can leverage itself into general intelligence. But given that we haven’t seen anything like that yet, and given that we don’t even really understand how general intelligence works in humans, well, color me skeptical.

And color me skeptical that we will inevitably be overrun by a general intelligence. The idea that people would not notice that the paperclip-making AI is trying to turn their children into paperclips is far-fetched. It assumes a level of freedom given over to a system that is simply not plausible. Call me a Pollyanna if you want, but God invented off switches for a reason. No business is going to let a system that can go bad in a way that costs it money run without supervision. Someone’s gonna notice when it tries to build death robots.
I find arguments that assume everyone is a moron all the time to be somewhat less than compelling.

The history of AI poses a similar problem for accelerationists. Supposedly, any limits on AI will result in DEATH and DOOM and HORROR because general intelligence AI will save us all from everything, everywhere. Now, that is a nice dream, and as a firm believer in fully automated luxury communism, I wish they were right. But they haven’t shown that they are. ChatGPT and its ilk lie all the time — a lawyer was recently sanctioned by the court for creating a brief via ChatGPT that cited made-up cases. They have been working on self-driving cars for well over a decade, and when they set them loose in San Francisco, the cars interfered with first responders, couldn’t handle traffic cones placed on their hoods, and dragged a person through the street. The idea that these cars are safer is unproved at best, largely relying on self-reporting that we know is not entirely accurate. Oh, and they needed significant human help to drive at all. A lot of human in the artificial intelligence, it seems.

Normally, I don’t get worked up about silly arguments. I am a sports fan — at least 60% of sports fandom is silly arguments. (The loser point is an abomination that prevents teams from making up ground in the standings and thus cheapens the NHL regular season. Upon this hill I will die. Well, maybe not die. But I will argue with you for a bit before wandering down off the hill.) But these arguments are not just silly — they are distracting. AI and similar systems are not like crypto — they can actually do things, and they do have real effects. Some of those effects are bad. They can put you in jail longer than you should be because of the color of your skin. They can deny you welfare benefits on false pretexts.
Since they can produce boilerplate to some degree of accuracy, they likely will impact writers, artists, and programmers — which is especially ironic given that they are trained on the output of those very people, who never consented to such use of their material. The article pays no attention to these real harms, because acknowledging them gets in the way of pretending that regulation has held back progress. That is another silly argument — the last fifty years have seen regulation almost disappear as a concern for large companies, and Silicon Valley in particular has had decades of getting essentially everything it wants.

But that is the point of these arguments, intentional or not. They distract from the real harm that these systems are doing today. They distract from the fact that these systems are merely an extension of the automation effect, and like all automation, they tend to concentrate wealth in the hands of a few and force society at large to absorb the costs of their introduction. But what matter a little discrimination, a little destruction of people’s livelihoods, a little blocking of emergency vehicles from getting where they are needed, in the face of either doom or godhood?

Marc Andreessen recently wrote a manifesto whining about how no one respects tech enough anymore and how we should all just shut up and get out of their way, as if that hasn’t been the default position for the last thirty or forty years. Andreessen, who is a significant backer of AI companies in the same way he was a significant backer of crypto firms, is obviously on the accelerationist side of this imaginary issue. One of the things that was especially interesting in his manifesto was his labeling of people who want to regulate AI in any way as enemies, and his outright rejection of programs, such as universal basic income, that would alleviate the damage done by the companies he backs.
Apparently, Andreessen gives no flying fscks (another technical term) about the people who make up society — he appears to want to accumulate wealth and the associated political power with no oversight or responsibility to the society that makes his wealth accumulation possible.

Every time an outlet like Vox spends time taking the doomer vs. accelerationist argument seriously, it elevates the distraction tactics of people who generally just want to make themselves obscenely rich, the rest of humanity be damned. Machine learning systems, including imitative AI, can and have provided some benefits to society. But as with all new technologies, they must be regulated so that the benefits don’t accrue only to the top while the pain accrues to the rest of us. Arguing about whether our new Clippy is going to turn into Skynet or a silicon god takes attention away from the fact that these companies cannot be allowed to run amok. We, as a society, need to ensure that the harms they are doing today are controlled and that they are incentivized toward activities that benefit all of society today. As a better writer than me once said: “Yes, but in a hundred years we will all be dead.” We owe the people being affected and harmed by these systems relief today, not by and by. Anything that distracts from that basic truth is serving the worst people amongst us, not society at large.

[END]

---
[1] Url: https://www.dailykos.com/stories/2023/12/14/2211524/-Clippy-is-Not-Skynet?pm_campaign=front_page&pm_source=more_community&pm_medium=web