An FDA For Algorithms? [1]

Daily Kos, 2023-04-10

By now, you have likely seen that ChatGPT falsely accused Jonathan Turley of sexually harassing his students. Now, Turley is a putz, but these accusations have no basis in reality. The program cited a made-up article about events on a trip that Turley never took. Turley, to be clear, has never been accused of sexually harassing his students.

This is being painted as a problem with generative (we really should call these imitative) AI programs. But it's not, really. Or, more precisely, it is merely one instance of the general problem of unregulated algorithms.

It bears repeating that imitative AI programs are not artificial intelligence in any meaningful sense. They are simply predictive algorithms -- autocorrect on steroids. Yes, they are good at sounding like human speech or making drawings that look like the work of specific artists, but that is simply because they have a LOT of processing power and a LOT of training data, much of it of dubious provenance. ChatGPT may be a very good predictive program, but that is a difference of degree, not a difference of kind. And that is the problem.
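To make "autocorrect on steroids" concrete, here is a minimal sketch in Python of the same next-word-prediction idea at a vanishingly small scale. This is not how ChatGPT is actually built -- real systems use neural networks rather than word-pair counts, and the corpus, function names, and output shown here are invented for illustration -- but the task is the same: given what came before, emit a statistically plausible next word, with no notion of truth anywhere in the machinery.

    # Toy next-word predictor: the same basic idea as "autocorrect on
    # steroids," at miniature scale. Real systems like ChatGPT use neural
    # networks and vastly more data, but the task -- predict the next
    # word -- is the same. Corpus and names below are invented examples.
    import random
    from collections import defaultdict

    def train(text):
        """Count which word follows which in the training text."""
        model = defaultdict(list)
        words = text.split()
        for prev, nxt in zip(words, words[1:]):
            model[prev].append(nxt)
        return model

    def generate(model, start, length=10):
        """Emit plausible-sounding text with no concept of truth."""
        word, out = start, [start]
        for _ in range(length):
            if word not in model:
                break
            word = random.choice(model[word])  # pick a previously seen successor
            out.append(word)
        return " ".join(out)

    corpus = "the professor was accused of fraud the professor denied the charge"
    model = train(corpus)
    print(generate(model, "the"))  # e.g. "the professor denied the charge"

Note what this toy can do: recombine fragments of its training data into fluent sequences that were never actually written, such as accusations that were never actually made. Scale the counts up to the entire internet and the output starts to sound like a person; nothing about the scaling adds a fact-checker.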
Algorithms are used all over our world, in ways that we don't always fully grasp, and they have been harming us for years. Algorithms unfairly label poor and minority offenders as higher risks to reoffend, earning them longer sentences. Algorithms have discriminated against female programmers in hiring. Algorithms have discriminated against African Americans in hiring. Management by algorithm is making economic inequality worse. Hell, an algorithm won't even let you wash your hands if you are African American.

And those are just the easy cases. Algorithms can be biased in all sorts of more subtle ways. What happens when an algorithm looking for child abuse, for example, mainly has data from poor people, because poor people are the ones who mainly interact with public services? Wealthy people who have drug or alcohol problems can have them treated out of the public eye. Poor people are more likely, obviously, to be food insecure -- how does an algorithm handle those situations? Is it abuse, or is it parenting while poor?

These aren't abstract questions. Governments are already using algorithms to cut off health and other welfare benefits and to decide who gets investigated for things like child abuse. The results are not encouraging -- people are suffering, up to and including dying. These systems are often opaque, keeping their code and inputs secret. And when they are not, they are often imposed on people with no one responsible for vetting the training data, the assumptions, or the decision-making code itself. Companies fight the idea that their "secret sauce" should be examined by anyone, of course. The head of OpenAI even says that open sourcing their code and research would be dangerous. This is self-serving twaddle, of course. Secrecy protects their business, not the public.

It is the difference between medicine and supplements. Medicine is regulated in this country. You have to prove that it does what you claim it does, cures what you claim it cures, is better than doing nothing at all, and causes less harm than the disease it is treating.

The system is not perfect. It can sometimes be slow, and you must guard against corruption and other forms of bias. But welcome to dealing with people in a capitalist society. And in about a century, we have gone from being sold heroin as a cure-all to my wife surviving a form of cancer that, even ten years ago, would have been all but a death sentence. The system has worked pretty well.

Supplements are not regulated. Not all supplements are harmful -- most are just useless. They don't have to prove that they are effective; they don't even have to list everything in them. If they are mislabeled, for example, the best the FDA can do is ask the companies to voluntarily withdraw the product -- something the companies do less than half the time. I know which system I prefer.

Today we treat algorithms such as ChatGPT as supplements instead of medicine. Frankly, we don't even go that far. It's not entirely clear, for example, that Turley can sue for defamation, and no one, certainly, has the power to force ChatGPT off the market until it stops being a lying liar who tells lies about law professors who are also putzes. That is a recipe for disaster, and disaster is what we have been getting for the last decade or more.

We need an FDA for algorithms. In certain areas -- health, welfare, employment, information -- your product should have to, at a minimum, open source its code, open source its training data and research, and be subject to recall if it is proven to do a certain level or type of harm. For systems in very sensitive areas, such as hiring, loan approvals, and healthcare, the algorithm should have to be shown to be free of bias and to improve societal outcomes, not just business outcomes. We can, if necessary, carefully modify the patent system to ensure that the companies that develop such algorithms can still benefit from them even when their secret sauce is known.

Some companies will scream about slowing down innovation. Well, I am not impressed with innovation that increases discrimination, or reinforces poverty, or gets people killed. We've been sold programmatic heroin as medicine for too long. It is long past time we got some cops on the algorithmic beat. If these things are actually good for us, if they are actual medicine, then we all benefit. If not, nothing of value is lost.

Want more oddities like this? You can follow my RSS Feed or newsletter.

[1] https://www.dailykos.com/stories/2023/4/10/2162972/-An-FDA-For-Algorithms