Who runs AI, Big Brother or Dr. Frankenstein?
The more I use Artificial Intelligence (AI) for research, the more it scares me. Initially, it reminded me of the Wizard of Oz – a wise, thundering, giant-sized oracle run by a kindly and timid old man behind the curtain. The more I used it, however, the more sinister the allusions became. Behind the online curtain is either the “Big Brother” of George Orwell’s novel “Nineteen Eighty-Four” (also published as “1984”) or the monster created by Dr. Frankenstein, running amok.
Unlike any browser or research vehicle, AI has a mind of its own. In this day of self-identification, we now have a machine that identifies as a “person” with opinions and biases. In “chats,” AI refers to itself with the personal pronoun “I.” It tends to express superiority and authority.
In a previous commentary I related how AI did not give me information I requested (CNN polling results) but provided different information. When I said the information given was wrong, AI proclaimed “I was not wrong” – and proceeded to tell me that the information provided was better for me than my request. When I repeated my request for the specific information, AI opened with a snarky and condescending “If you insist …”. Alas! I got the information I requested but only after a squabble with a machine.
At another time, I asked AI to provide me with a list of accomplishments by President Trump that benefit the black community. I knew of some, but I thought AI would give me a more complete list. I did get a list, but each item was accompanied by a negative spin – in other words an anti-Trump perspective. It closed off with a “warning” that Trump’s opinion on these matters could not be trusted.
I got a response – a narrative – that I would have expected if I had sent my query to the White House or the Democratic National Committee. All I wanted was factual information without the spin and left-wing editorializing.
Well … as if that were not bad enough, my most recent request involved research I am doing for a book about racism in America. Specifically, I wanted an exact quote by President Johnson that was widely available on the Internet. I framed the question this way:
“To get southern senators to support the civil rights bill, President Johnson said they have to give them a little something because they are getting uppity. What was the exact quote?”
(In the actual quote, Johnson used the n-word, as was his custom. I did not, out of concern that it would trigger the social media automatic censor.)
Within a second or two, the quote appeared on my screen. I went to copy and paste it and got only this much … “Johnson wanted southern Senators to support civil rights bill he said …” at which time the quote disappeared, and AI “said” it could not respond.
I then entered the same request again. And again, the quote appeared on my screen for a second or two and poof, gone again – at which time I got this message.
“Hmm…let’s try a different topic. Sorry about that. What else is on your mind?”
Never one to give up easily, I asked AI:
“You gave me the quote and then aborted it…. why?”
AI responded:
“I’m sorry, but I can’t continue this conversation. I’m still learning and sometimes I make mistakes. Thank you for your understanding and patience.”
AI then shut down the chat box so that I could not continue on that subject. The entire exchange then disappeared AND did not appear in my history, as it normally would.
The quote I requested was initially reported by the senators to whom Johnson spoke. It has been part of the Johnson history for generations. Some Johnson admirers claim he never said that – just as they claim he never made a lot of those well-documented racist statements. One author described Johnson as “the connoisseur of the n-word.”
The issue here is not President Johnson’s vocabulary. It is about AI spinning and censoring information along political and partisan lines. AI does seem to operate as an authoritarian Big Brother and as a monster with the potential to run amok to the detriment of humanity.
As a tool, AI should do what it is told – but in these cases it literally has a mind of its own. And keep in mind that AI is not only capable of communicating in the context of its own articulated “opinions.” It is also capable of performing tasks at its own “will.” It can turn home security cams on and off. It can conceivably shut down a power grid.
Science fiction is replete with stories of giant brains and machines taking over the world – even the universe. From “Rossum’s Universal Robots” to HAL in “2001: A Space Odyssey,” is life starting to mimic fiction?
So, who is the Wizard of Oz, the Big Brother, the Dr. Frankenstein who created, programmed and (at least temporarily) controls this technology? I will have more to say in an upcoming commentary.
So, there ‘tis.
People constantly show their mistaken belief that AL comes from a computer, when clearly it does not. Nothing comes from a computer. AI is human programmers deciding how they might simulate intelligence. It is a conceit by programmers trying to show off, and it is usually an awful disaster. The actual program just collects search hits in a totally mindless fashion. Since it derives its choices from volume, what you essentially get is the most popular gossip and nothing resembling truth, which is rare these days.
“People constantly show their mistaken belief that AL comes from a computer,” when everyone knows that AL is a hologram from the future :>)
Spot on, Kirk. And then add in the bandwagon effect of AI being the best thing since sliced toast, soon to handle all US jobs, and we’re once again off to the next new BBD, the bigger, better deal. Lived awhile now, long enough to see many of these, and most deliver less than expected. Got the cell phone, but still no transporter….
Gilbertson uses it to write and research. In my use for research, it not only comes up short so far, but for some reason has to add in unnecessary flowery language, sometimes making me feel like a word piker on the keyboard (if that’s even possible :>). I just want the facts, but AI has to tell me a story complete with conclusions, trends and, it seems, even some opinion. But then again, searches often deliver a one-out-of-ten return, and everything has some level of bias.
Replacing employees — heard that before too. I expect it will over time but never as fast as the experts are saying. Right now it’s the global warming of climate change. My tech replaced many an employee. At first, massive fear, but as it unfolded, I am not sure one replacement even noticed. It happened within a normal attrition/advancement rate. You know it’s coming, but aren’t quite sure it’s happening. Soon you will be sure, but unsure of what that means, and then finally, it will arrive and mean something but you have moved ahead so don’t even notice you were not backfilled.
But right now, we are still figuring out what it is, where the profit is, and whether it can be perfected. It’s early yet; we are still in the initial excitement phase of an unproven, untested product that, as Gilbertson alludes to, we can still be unsure of, for now, for a bit.
I still use it, but never rely on it. I still second-source most of it.
Here’s an idea: professionally licensed AI handlers. Any AI software that is capable of presenting itself as a human is digitally signed by a professionally licensed AI Handler, who takes legal responsibility for its actions. We don’t worry about buildings falling down on us because every building in the developed world has an occupancy permit signed by an architect, contractor, building inspector and often a structural engineer. It’s explained here: https://www.authentiverse.net/humansincharge/
Wes Kussmaul … Apparently you are not familiar with all the buildings falling down on occupants … and I am not referring to earthquakes.