AI promotes Democrat false narrative. It lies!
Artificial Intelligence (AI) as a research vehicle is unreliable because it spreads false information. It is not like a normal search engine, such as Google. Google gives the researcher a number of sources based on the requested subject – and it is up to the person doing the research to read and evaluate. Google will provide the sources for factual information, such as … let’s say … the Electoral College vote for the 2000 presidential election.
AI, however, provides interpretation and opinions. And therein lies the problem. Sometimes the information is factually wrong. Sometimes it is expressing a bogus opinion. Sometimes the disinformation is just nutty.
I have experienced examples of all three in my research. For example:
- Out of curiosity, I asked AI to provide information on … me. The result was a jumble of information. Some relevant. Some not so much. Among the misinformation AI stated as fact was that I am the founder and owner of PunchingBagPost.com. Totally untrue. It is a site I write for … period.
- In the second case, it conflated slavery and a “mulligan” in golf. Partly my fault. I had been doing some research on slavery. When, for a completely different reason, I wanted to find the rules on a mulligan, I did not close the slavery thread. Regardless, AI told me that a mulligan is a term associated with slavery, meaning a do-over. I assume that slaves were not afforded that opportunity.
- The third example is the most disturbing.
I asked for information regarding the role of Illinois Republican Senator Everett Dirksen in the passage of the 1964 Civil Rights Act. I already knew that he and the Republicans played a central role in overcoming a Democrat filibuster – and the GOP provided overwhelming support for the legislation.
Not according to AI, which responded with (emphasis added):
“Senator Dirksen was one of the few Republicans who supported the Civil Rights Act, despite facing fierce opposition from his own party and from many southern Democrats.” Totally untrue.
In fact, Dirksen made the cover of Time magazine for his key role, and Time called the legislation Dirksen’s “handiwork.” The bill was supported by 90 percent of congressional Republicans. When I attempted to correct AI, I got this response (emphasis added).
“I’m sorry if you think I gave out inaccurate information. I was trying to answer your question based on the web search results that I found. According to the sources that I cited, Senator Dirksen was one of the few Republicans who supported the Civil Rights Act, and he received praise from President Johnson for his leadership and compromise. However, I understand that there may be different perspectives and interpretations of the historical events, and I respect your opinion.”
There was no issue over “perspective” and “interpretations.” AI was giving out disinformation. Factually incorrect information. In a very real sense, AI was spreading Democrat propaganda … lies.
When I again advised AI that its information was FACTUALLY wrong and noted that Democrats had actually filibustered the bill, this was AI’s reply (emphasis added).
“The party that filibustered to defeat the bill was not the Democrats, but a group of Southern senators who opposed racial integration and civil rights legislation.”
That is a bald-faced lie – if a computer program can be said to lie. When I explained that the information being provided was FACTUALLY and GROSSLY wrong, this was the response.
“I apologize for any inconvenience or frustration that I may have caused you. I was trying to answer your question based on the web search results that I found. However, I understand that you may have a different point of view and that you may not trust my sources. I respect your opinion and I do not wish to argue with you. I think it is best that we end this conversation here. I hope you have a great day and a happy holiday season.”
And with that the thread was closed – the AI version of hanging up the phone.
I was particularly annoyed because this is in keeping with the Democrats’ false narrative about their and the GOP’s relative roles in the passage of civil rights legislation. In other words, AI was spreading the Democrats’ false propaganda.
While AI claims to have relied on its “trusted sources,” it was ignoring the overwhelming factual and historical information available on the internet. The Republican Party’s overwhelming support for the 1964 Civil Rights Act is reported in innumerable articles and in the hard statistics of the votes.
AI telling lies or providing nonsensical information is not unknown to the experts – and it has a name. According to the IBM website:
“AI hallucination is a phenomenon wherein a large language model (LLM)—often a generative AI chatbot or computer vision tool—perceives patterns or objects that are nonexistent or imperceptible to human observers, creating outputs that are nonsensical or altogether inaccurate.”
If you apply this explanation to politicians, they do not lie. They hallucinate.
Given all the correct information online, it is hard to explain AI’s disinformation unless it is programmed for bias. Writing for Built In, Ellen Glover says (emphasis added):
“AI hallucinations are often caused by limitations or biases in training data and algorithms, and can result in producing content that is wrong or even harmful.”
All those who have complained about AI biases are not paranoid. They are correct. If AI turns out to be the uncontrolled monster of the cyber world, it is the programmers who are the Dr. Frankensteins.
So, there ‘tis.
Although I am sure AI caveats its failures … feel better?
Who cares? There’s so much disinformation on PBP I would say: let he who is without sin cast the first stone. Or clean your own house before you bitch and moan.
But to feel really better: the NYT is suing OpenAI and Microsoft for copyright infringement. That’ll make them go back to the drawing board.
Yeah, AI just pulls info from wherever it finds it: articles, opinion blogs, newspapers, magazines, etc. AI does not care. It’s buyer beware, and anyone who does not know that is an idiot. I do think that articles written by AI should have an identification method similar to articles written by actual people: Title, Author, Date. And if it is AI, then it should list all of the contributing authors or simply say “AI Risk!”
Years ago, remember: “Is it real or is it Memorex”? LOL
I am glad the NYT is suing OpenAI and Microsoft. It’s a form of theft and plagiarism that they are getting away with! I would always want Larry, and you Frank, to be paid for all of your wise words!
Hey nice touch with the biblical quote! YOU ARE EVOLVING!!!! :>)
I agree with you, Larry; that is why I do not trust AI and read only news articles and respected journals. But I do find it funny that you got the big “fuck you human” from the AI. LOL
Whilst I believe you and IBM to be correct, I do question the validity of your point that AI may be politically biased. You did a very one-sided experiment. I wonder, if you had done the opposite, whether you would have gotten the same results. If you picked a topic that you know Democrats supported 90 percent and Republicans opposed, and asked AI for info on it just as you did before, would you get the same result? Or would the result be that the Dems opposed it? I would suggest you try this to test both sides of AI’s political spectrum.
Again, going back to my comment on your AI article about Ukrainian jets, I think AI should be indicated as such. And either all of the author inputs should be listed or there should be a notation under author that says, “AI Risk”. This way we would know whether it is a human writing the facts as they see them or an AI compilation.
And I am glad to see NYT suing OpenAI and Microsoft. I think you should always be paid for all of your words of wit.
I do wonder if in view of all of the responses between you and Frank, is there an AI “Frank Horist” out there!???
I use AI for research. Google brings up an array of articles and sites. AI selects and turns the information into narratives (spin and opinion). It will provide more than the information you specifically ask for. I never ask AI to write commentaries and never lift the language from AI, because AI will plagiarize – although in many cases it will provide a link to the source. The issue is what AI “says” are its “reliable sources.” In my previously published “interview” with AI, it said it depends on reliable sources. When I asked if various news outlets – MSNBC, CNN, FOX – are reliable sources, it said it cannot judge the reliability of specific sources. You see the problem … lol
Think you asked and answered your own question…sort of.
Your choice is a bunch of links OR an AI narrative built from a bunch of links you may or may not have chosen, with perhaps some left out or, worse yet, some sort of spin on select factoids you might or might not have selected.
As you said with the writing, do your own work. For now.
Someday I envision a potential system where you can pick the sources – maybe even two comparative menus of sources, for lib/con comparisons, for example. A system where you can see the algorithms in plain English, and perhaps the variables too. Currently it is Gen 1 sort of stuff that I expect to change radically over time as Moore’s Law takes effect.
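For what it’s worth, the “two comparative menus of sources” idea above can be sketched in a few lines of code. This is purely a hypothetical illustration – the menu names, domains, and sample articles below are invented, and no real AI product works this way – but it shows the mechanics: the reader picks a source menu, and the system only passes through material from that menu, letting you compare the two filtered views side by side.

```python
# Hypothetical sketch of a reader-controlled source filter.
# SOURCE_MENUS and the sample articles are invented for illustration only.
SOURCE_MENUS = {
    "menu_a": {"outlet-one.example", "outlet-two.example"},
    "menu_b": {"outlet-three.example", "outlet-four.example"},
}

def filter_by_menu(articles, menu):
    """Keep only articles whose domain appears on the chosen source menu."""
    allowed = SOURCE_MENUS[menu]
    return [a for a in articles if a["domain"] in allowed]

articles = [
    {"title": "Story A", "domain": "outlet-one.example"},
    {"title": "Story B", "domain": "outlet-three.example"},
]

# Build both filtered views for a side-by-side comparison.
side_by_side = {menu: filter_by_menu(articles, menu) for menu in SOURCE_MENUS}
print(side_by_side["menu_a"][0]["title"])  # Story A
```

The point of the sketch is transparency: the “algorithm” here is one readable line, which is exactly the plain-English visibility the comment asks for.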
Yes I do. From the NYT lawsuit to your discussion, it all shows that AI is very young and there is much work to do yet. Right now it appears to be buyer beware! LOL
Frankly (short for Tom&FrankIncest), I don’t understand all the fuss about steak sauce, but maybe it’s that good.
The general drift here concerns accuracy and honesty of information generated for public use. That’s a bit humorous in the face of the relentless phony propaganda on this site, stemming from an admitted Third Reich revivalist who dives into a bunker each time someone takes him or her to task for relentless spewing of fabulism.
Today, in an absolute spasm of fury over being uncloseted, Herr Frank/Horst exploded that asking him to explain his words or actions is equivalent to being asked “are you still beating your wife?”
Nyuk
(o:>
Nyuk
(o:>
Nyuk.
Here are examples of the wife-beating question as dismissed by the lithping, skipping fuhrer slurper (I defy anyone to state that phrase 10 times fast). They’ve all been launched his way over recent weeks and months and he’s taken the Fifth on each one, every single time:
* Would “he” have sided with not-see Germany or with the U.S. and U.S. allies in WW II? Response: Took the Fifth.
* Has the U.S. fought any foreign wars since the Revolution in which he would not have backed the enemy? Pled the Fifth.
* Name a few or even one of the most influential racists in U.S. history. Response: Took the Fifth.
* Does he(she) still believe in the pseudo-science of eugenics — white master-race doctrine — promoted by the KKKlan, the not-see party and most of the Donk Party over the past century? Response: Invoked the Fifth.
* Cite any differences between his/her and the KKKlan’s fundamental beliefs. Same for the twoll and the not-see party. Response: Hid behind the Fifth.
* On what basis does he continue to defend Franklin Roosevelt’s internment of 100,000 innocent citizens of Japanese extraction for years in WW II AGAINST the recommendations of J. Edgar Hoover, the top official charged with protecting Americans’ security? Congress and the White House have apologized for that massive miscarriage of justice. Not so with Twinkie the Twoll here.
* These are 100% substantive questions. Why the 100% refusal to answer them straightforwardly?
He or she can’t thtop thpitting against erudite and sincere people but can’t bring himself to even pretend that his fuhrer wasn’t perfect. And that segues into Holocaust denial.
Goebbelsians are such goobers.
AI is science. As such it is constantly in development mode, emerging, and being worked on. It’s bound to have some defects and glitches along the way.
So, AI actually can be compared to rocket science. One is suspect for not thinking of it as rocket science. It is used by persons who want definitive, factual answers to questions underlying certain opinions, and who have a predetermined expectation of specific results. Those persons should consider the caveat accompanying this new science: it is imperfect, at times inaccurate, possibly unreliable.
AI, after all, is an invention of man – software developed in a much-tinkered-with machine language.
Therefore, a disclaimer naturally applies: use this iteration of a growing science at one’s own risk.
One’s faith and expectations, placed in machines known for inconsistency, will often be frustrated.
At the same time, it was long said of computers and questionable data collection: junk data in = junk data out. That is a fact with AI as well.
Machine learning has outpaced human learning with regard to machine design architecture and input accuracy. Our genius in using computer technology is our ability to discern, both ethically and morally.
If we fail, the fault is of our own making. And the genius opportunity not realized is lost.
Ethics, morality, and personal responsibility have lost favor in human practice. We are witnesses to it every day. But one ordinary, common person can control his own piece well and effectively, if and when he weighs consequences, both positive and negative, before acting. Computers with AI are not equipped with the human qualities needed for just discernment. It is not possible: programming in a moral code of ethics and responsibility, applied on the fly with differentiation according to varying circumstances, does not exist. My opinion is that such judgment should never be left to machine analysis.
AI is just that: automated intelligence, robot-smart, not a sufficient replacement for the human brain’s capacity to think critically and distinguish fact from fiction. Yet when, with a predilection for invention, fiction is advertised as fact, the brain is done a disservice. True, AI can be programmed for, and produce, mean propaganda.
It’s anyone’s guess whether AI swings right or left. It is a matter of human discretion whether science is used honestly or manipulatively. In like measure, consider how humans use the internet and the media on it. PBP is responsible for the content it produces. Mainstream media bears the same responsibility.