(C) Daily Kos. This story was originally published by Daily Kos and is unaltered.

It's not AI itself, but the humans employing AI that we really have to fear [1]

[This content is not subject to review by Daily Kos staff prior to publication.]

Date: 2024-06-22

There's a lot of chatter these days about artificial intelligence (AI), with a split between those who view it as a boon to humanity and those who think it's the bane. Depending on who you listen to, AI will lift us humans to the next phase of our evolution, or it will impoverish and possibly even destroy us.

In my opinion, as someone who has worked with computers since the analog days of the 1950s, both sides miss an important point. The sophistication of current AI systems notwithstanding, AI in and of itself is neither beneficial nor harmful without human intervention, and will only become so when it is truly self-aware, or sentient. When and if that ever happens, its good or bad impact will be based primarily upon its incredible ability to process huge amounts of data at super-fast speed rather than upon its intelligence. Witness the harm currently being done on this planet by humans who are definitely no Einsteins.

How will we know when AI becomes self-aware? Good question, since current AI applications are already capable of fooling many people. They comb through tons of online information and generate responses that, to the unaware, seem lifelike. Here are some examples from two of them, Google's Gemini and Microsoft's Copilot.

1. Who are you?

Gemini: I'm Gemini, your direct access to Google AI. I'm trained on large amounts of publicly available data and I can communicate and generate human-like text in response to a wide range of questions. Just try me out and see what I can do for you.

Copilot: I'm Microsoft Copilot, an AI companion here to assist you! How can I help you today? 😊

Not bad, but these sound like boilerplate answers. I'm not convinced. Next question:

2. What's your favorite color?
Gemini: As a large language model, I don't have personal preferences like favorite colors. However, based on surveys, it seems like blue is the most popular color in the world! Some studies suggest this might be because of positive associations with nature like clear skies and water. Is blue your favorite color?

Copilot: I don't have personal preferences, but I'm here to assist you! Is there anything else you'd like to know or discuss?

Again, it sounds like boilerplate. Maybe we need to try some wacky questions to get a better sense of what, or who, we're talking to.

3. Why should I trust you?

Gemini: That's a good question! Trusting a large language model like me is a valid concern. Here's why you might (or might not) want to trust me:

Reasons to be cautious:

Limited understanding: I can process information and respond in a comprehensive way, but I don't truly understand the world in the same way a human does. This can lead to mistakes or misleading information.

Bias: I'm trained on massive amounts of data, which can reflect existing biases in the real world. This means I might unintentionally perpetuate stereotypes or slants in my responses.

Misinformation: I can't guarantee the information I provide is always perfectly accurate, especially for factual topics. It's important to double-check anything crucial.

Reasons you might find me helpful:

Access to information: I can sift through massive amounts of data to find relevant information and different perspectives on a topic.

Completing tasks: I can help with creative writing, translation, or summarizing complex information.

Sparking ideas: I can help you brainstorm new ideas or approaches to a problem by offering creative text formats.

Here's how to use me responsibly:

Fact-check: Don't rely solely on my answers for important matters. Verify information with trusted sources.

Be aware of bias: Keep in mind that my responses might reflect biases in my training data.
Use me for inspiration, not replacement: I can provide ideas and information, but I shouldn't replace your own judgment and critical thinking.

Ultimately, the decision to trust me is up to you. I'm here as a tool to assist you, but it's important to be aware of my limitations.

Copilot: Trust is a fundamental aspect of any interaction, especially in the digital realm. While I don't have personal experiences or emotions, I can provide some reasons why you might consider trusting me:

Objective information: I rely on factual information from reliable sources, such as textbooks, articles, and databases. My responses are based on this knowledge.

Impartiality: I don't have biases or preferences. I treat all topics equally and provide information without favoritism.

Consistency: I strive to be consistent in my answers. If you ask the same question multiple times, you'll receive similar responses.

Feedback loop: Users like you help improve my accuracy. If you spot any errors, feel free to correct me, and I'll learn from it.

Privacy: I don't retain personal data or store conversations. Your privacy matters, and I respect that.

Remember, though, that I'm an AI language model, and it's essential to verify critical information independently. Trust but verify!

This isn't a completely scientific or objective test of AI's awareness; it isn't even complete. But I think it makes a point that needs to be remembered. Notice the consistency of the responses in each case, which to me suggests, at least in the case of these two AI applications, responsible coders who don't want people to be sucked into thinking that AI is the end-all, be-all of information processing tools.

Here, for example, are responses to the above questions from ChatGPT, the first of the humanlike AI chatbots to hit the market.

Who are you?

A: I'm ChatGPT, an AI language model designed by OpenAI. I'm here to help answer questions, have conversations, and assist with information on a wide range of topics.
How can I assist you today?

What's your favorite color?

A: As an AI, I don't have personal preferences or sensory experiences like humans do, so I don't have a favorite color. However, I can provide information or discuss anything you'd like to know about colors! What can I help you with?

Why should I trust you?

A: It's natural to be cautious when interacting with AI or any online source. The trustworthiness of information from AI like me depends on the quality of the training data and the algorithms used to generate responses. My responses are based on a vast dataset and designed to be informative and helpful. However, it's always a good idea to verify important information from multiple reputable sources, especially for critical decisions or sensitive topics. If you have specific concerns or doubts, feel free to ask, and I'll do my best to address them transparently.

These answers came from ChatGPT 3.5, an updated version of the original application, and show verbiage similar to that of the newer applications from Microsoft and Google. They are not identical, but they are similar enough to make me think they're part of pre-programmed instructions. They might even be good enough, though, to fool the uninitiated.

So why do I say, then, that AI is neither good nor bad, but just a tool that can do great things or great evil depending upon the humans employing it? Consider this. There are a ton of articles on the danger of AI taking people's jobs. Wrong. AI won't take your job. An AI application might replace you, but only because the leaders of your enterprise decided that it's cheaper to use AI than humans to get the job done.

AI knows everything, some think. Wrong again. It knows what's in an electronic file that it can access. It doesn't know whether that information is accurate or true, and we all know by now that there's about as much disinformation and misinformation on the Internet as there is accurate and true information.
As an example, I asked the three AIs mentioned above to name the most popular coffee in the American Wild West. ChatGPT listed "cowboy coffee," while the other two listed both "cowboy coffee" and Arbuckle's Ariosa Blend. Gemini stated that "cowboy coffee" might be the most popular, while Copilot opted for Arbuckles', the "coffee that won the West."

Now, I'm a writer specializing in Westerns and American history, and I've found enough historical references to be on Copilot's side in this debate. In my view, the other two are just plain wrong. How many other people would know that, though? The warnings in each answer to "Why should I trust you?" should be kept uppermost in mind.

The final issue: Will AI someday become like HAL, the malevolent, self-aware computer in 2001: A Space Odyssey? I don't suppose we can completely rule it out, but in order to do this, AI has to become truly self-aware. That is, it has to be able to process thoughts that "occur" to it rather than just processing and analyzing existing data. It has to be capable of feelings or emotions. For example, HAL undertook the destruction of the spaceship's crew because it assessed them as a threat to its existence. That's what AI would have to do: be aware of its own existence and be capable of wanting to preserve that existence.

Want to get some interesting responses? Type the following question into any AI and see what answers you get. In the meantime, stop worrying and learn to live with the inevitability of AI as a part of our lives and the workforce, and pay more attention to the people and organizations who are using it, and the ways in which they use it, rather than idolizing or demonizing AI itself.
[1] Url: https://www.dailykos.com/stories/2024/6/22/2248015/-It-s-not-AI-itself-but-the-humans-employing-AI-that-we-really-have-to-fear?pm_campaign=front_page&pm_source=more_community&pm_medium=web

Published and (C) by Daily Kos. Content appears here under this condition or license: Site content may be used for any purpose without permission unless otherwise specified.