Fallacies

Are people actually too trusting of AI? Or is this just clickbait? (Maybe both.)

News article reading:
“92 Percent of People Don’t Check Their AI Answers, a New Report Warns”

Discussion Questions

The main point this article attempts to make is stated broadly in its introduction: "despite people's knowledge of AI hallucination problems and their own skepticism regarding these tools, only a measly 8 percent actually check the answers they get from AI themselves."

This article stood out to me immediately. I thought it was fairly common knowledge that AI isn't the most trustworthy. But after reading the article, it's pretty clear that the data is presented in a very specific way:

P1: AI is notorious for hallucinating false answers to questions, even including fake links and blog posts.
P2: There are multiple instances of companies losing customers, or lawyers getting cases thrown out, because consumers don't fact-check the information they get from AI.
P3: Over 40% of people rarely or never click through AI overviews to verify the source material, and instead take it at face value.
C: People almost never verify the information AI gives them.

I think the way the author framed this article is very misleading, and the points themselves feel unrelated to the title. It frustrates me especially, since I'm also someone frequently frustrated by the rate at which people consume misinformation online. Although a large percentage of users don't verify AI's sources, that doesn't necessarily mean they actually believe what the AI is presenting. At least, that's my interpretation of how the author explains the situation.

The author of this article doesn't fully explain the implications of the statistics they choose to present. This leads to the two problems the original article relies on: ambiguity and appeal to fear. So I offer this rebuttal to the original argument:

P1: AI doesn't gather information the same way humans do when researching. AI generates responses based on whatever information it's given, and much of the time it can't distinguish right from wrong.
P2: 42.1% of users find AI overviews present "inaccurate or misleading information," 35.8% find they "miss important context," and 16.8% have found "unsafe or harmful advice" in AI overviews (according to the same article above).
P3: There are many cases where AI "hallucinations" have caused real-world problems for companies, legal experts, politicians, and others.
R: AI misinformation has real-world consequences, regardless of how few (or many) people are fooled by it.

I'm still very sick as I'm writing this, so I'm not entirely sure how best to phrase my alternative to the original article's conclusion. But I would propose something along the lines of:

C2: Over 75% of users report flaws with AI overviews. Given how easy AI is to access, and the varying levels of trust people place in it, this can lead to a wave of misinformation that harms both companies and individuals.