I beg of you, stop outsourcing your critical thinking to AI
Why I, a cybersecurity expert, don't use Generative AI.
Folks, we are in the midst of a concerted propaganda effort here in the United States. The mainstream news media, for the most part, has bowed down to Trump. Most social media is suppressing information about protests. Please take a moment and ask yourself whether Generative AI (GenAI) is part of that propaganda. And if you think it isn't, why not?
This recent post on Threads from datrottiegrl is an excellent example of bias in ChatGPT (I’ve edited slightly for typos):
CHATGPT IS BIASED:
Y'all, I just had the worst convo with ChatGPT. You think our media is being filtered, listen to this: I asked ChatGPT to give me a count of plane crashes, everywhere in the world, regardless of severity, since 1/1. 1st response: 3 "notable" crashes, all in the USA, all covered by major USA media. (It's going to be a long one, but stick with me if you can.)
2nd response said there were 5 notable crashes. 3rd response, I ask why it keeps only giving me results that are 'notable' and only in the USA. 4th response, 9 crashes, but no list of what they were. I try multiple additional times to ask ChatGPT why it keeps changing its answer and doing its search using a keyword that I directly asked it to remove.
ChatGPT just kept apologizing. And redoing the search, and continuing over and over again with the same bias. So I asked it about whether or not it was limiting my results due to bias. Of course, it said no. Then I asked it about its founders, because, now I have suspicions.
My suspicions, after having to re-ask those questions as well, were confirmed. Co-founder used to be left, now meets with 🍊 [Trump] regularly because they want to push AI into more spaces.[1] They are literally collaborating. So, I asked again, based on this, whether ChatGPT thought it might be biased. My Thriends, I think I made it short out for a second because it took a bit for the results to come back.
First it told me what I should look for to determine if it was giving me biased results. The list was, step by step, exactly what ChatGPT had just done. Again I asked, based on what you told me, you are biased. ChatGPT AGREED WITH ME! ChatGPT suspects that it is being limited by its moderators and it is not able to override that.
The author of the thread continues to share how to retrain your ChatGPT, and if you really want to go to all that effort, you can find the original thread here.
HOWEVER, you still shouldn’t trust anything ChatGPT (or any other GenAI) spits out without verifying it for yourself.
What I want you to remember is that the "A" in AI stands for Artificial. It is not actually intelligent, and you can't rely on it to give you factual information. It's just a fancy version of autocomplete, and you already know how inaccurate that can be.
“Hallucinations”
Have you ever been talking to someone, and they didn’t know the answer to a question, so instead of saying, “I don’t know,” they just made something up?
That’s basically what GenAI does.
In computer talk, we call that a "hallucination," but I personally wish we could stop personifying its behavior. In reality, it has simply been trained to always produce a plausible-sounding answer, whether or not that answer is correct.
A couple of lawyers famously got in trouble in 2023 for submitting a brief written by ChatGPT. The brief cited non-existent court cases.
Another example from Threads, this time from prof_maxwell:
To show students how untrustworthy GenAI can be, I ran a simple test. I asked ChatGPT for the "most important pieces of scholarship" about a highly specific topic - the Roman witch, Canidia. It came up with nine sources, some better than others. But one article stood out because I'd never heard of it - or the author - before. P. A. Rosenmeyer, "The Poetics of Magic in Horace's Epodes" (1995)
I asked for more detail and got a full citation: Arethusa, vol. 28, no. 3, 1995, pp. 367–394. Except. That article does not exist. Not in that volume and nowhere else. But hey, let's ask for a summary. ChatGPT obligingly spat out a 250 word summary + analysis. And then I asked: "How can you summarize an article that does not exist?"
Its response was alarming: "You’re absolutely right to question that, and I appreciate you bringing it up. My earlier summary was a mistake—it combined general themes from existing scholarship on Horace, Canidia, and magic in Roman literature with an imagined source that I wrongly attributed to a specific article. This happened because I inadvertently misrepresented the broader scholarship as being tied to a specific, non-existent publication."
In other words, "Oh, I made all of that up. 100%. Fake summary of a fake source. Good catch, bro." And only a subject-matter expert would have ever known to question the data in the first place.
Fact-Checking the Results
No matter what GenAI tool you’re using, you need to double-check the results. Fact-check everything it tells you.
Maybe you’re asking, “But, Jillian, if I have to Google everything anyway, why even use GenAI?”
Yep. Exactly.
I'll write a more detailed post on how exactly I fact-check things, given that information is being suppressed and we don't know which media sources are trustworthy. It deserves its own post.
A little tip, though: if you append "-ai" to the end of your Google search, the minus sign is Google's exclusion operator, and it turns off that ridiculous AI summary at the top of your results. (Just note it will also filter out pages that literally mention "AI," so drop it when you actually want those.)
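If you'd rather bake that tip into a bookmark or a small script than retype "-ai" every time, the idea is just appending the operator before URL-encoding the query. A minimal Python sketch (the function name is my own, not an official tool):

```python
from urllib.parse import quote_plus

def no_ai_search_url(query: str) -> str:
    """Build a Google search URL with " -ai" appended to the query.

    The trailing "-ai" uses Google's exclusion operator, which (per the
    tip above) also suppresses the AI summary on the results page.
    """
    return "https://www.google.com/search?q=" + quote_plus(query + " -ai")

# Example: produces a URL you can open in any browser.
print(no_ai_search_url("plane crashes since 1/1"))
```

`quote_plus` handles spaces and slashes so the query survives as a URL parameter; the hyphen passes through unchanged.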
Other Ethical Issues
Perhaps you're wondering why I'm quoting other people rather than trying it out myself. Great question.
There are a lot of ethical issues around the way that GenAI is trained. For example, we know it outright steals art from artists. But also, the computing power it takes to run GenAI uses a massive amount of energy. Most of the data centers are in geographical areas where electricity is generated using non-renewable resources.
So, using GenAI contributes directly to climate change and steals from the hard work of other artists. No thanks. Not for me.
But if you’re going to use it, I beg of you, fact-check. With everything that’s happening right now, it’s critically important that we are making decisions based on reality, and it’s also important that we’re not unintentionally spreading misinformation.
I worry about our ability to think critically as a society if we become too dependent on this tool. In fact, a recent study showed that people who entrust tasks to AI are losing their critical thinking skills. It’s handy, but it’s not a replacement for your own beautiful brain.
[1] Yep, Sam Altman, the CEO of OpenAI (the company behind ChatGPT), has reportedly "changed his perspective" on Trump after securing a $500 billion deal.