@sairfecht.bsky.social:
Still a higher percentage than the Orange ManBaby ‘merkins have in their White House.
@dangrously.bsky.social:
I have heard of a trick to get around the AI shit, you use google but you swear in the search query.
Tim Onion @bencollins.bsky.social:
A big part of the problem is stupid people have been convinced by rich people that AI is God, when AI is closer to an automatic dog food dispenser.
Paris Marx @parismarx.com:
Columbia Journalism Review tested eight generative AI search tools and found their answers were wrong 60% of the time, and the paid ones actually fared worse than the free ones.
Meanwhile, millions of people trust the way they present total bullshit with confident language.
“AI search engines cite incorrect news sources at an alarming 60% rate, study says.” CJR study shows AI search services misinform users and ignore publisher exclusion requests. By Benj Edwards, Mar 13, 2025, Ars Technica.
A new study from Columbia Journalism Review’s Tow Center for Digital Journalism finds serious accuracy issues with generative AI models used for news searches. The researchers tested eight AI-driven search tools by providing direct excerpts from real news articles and asking the models to identify each article’s original headline, publisher, publication date, and URL. They discovered that the AI models incorrectly cited sources in more than 60 percent of these queries, raising significant concerns about their reliability in correctly attributing news content.
Researchers Klaudia Jaźwińska and Aisvarya Chandrasekar noted in their report that roughly 1 in 4 Americans now use AI models as alternatives to traditional search engines. Given that these models struggle significantly when specifically asked to attribute news sources, this raises broader questions about their general reliability.
Citation error rates varied notably among the tested platforms. Perplexity provided incorrect information in 37 percent of the queries tested, whereas ChatGPT Search incorrectly identified 67 percent (134 out of 200) of articles queried.
Grok 3 demonstrated the highest error rate, at 94 percent.
In total, researchers ran 1,600 queries across the eight different generative search tools.

The study highlighted a common trend among these AI models: rather than declining to respond when they lacked reliable information, the models frequently provided plausible-sounding but incorrect or speculative answers—known technically as confabulations. The researchers emphasized that this behavior was consistent across all tested models, not limited to just one tool.
Surprisingly, premium paid versions of these AI search tools fared even worse in certain respects. Perplexity Pro ($20/month) and Grok 3’s premium service ($40/month) confidently delivered incorrect responses more often than their free counterparts. Though these premium models correctly answered a higher number of prompts, their reluctance to decline uncertain responses drove higher overall error rates.
Issues with citations and publisher control
The CJR researchers also uncovered evidence suggesting some AI tools ignored Robot Exclusion Protocol settings—a widely accepted voluntary standard publishers use to request that web crawlers avoid accessing specific content. For example, Perplexity’s free version correctly identified all 10 excerpts from paywalled National Geographic content, despite National Geographic explicitly disallowing Perplexity’s web crawlers.
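The Robot Exclusion Protocol mentioned above is the familiar robots.txt convention. A minimal sketch in Python's standard library shows how a compliant crawler is supposed to consult those rules before fetching a page (the crawler and site names here are illustrative, not National Geographic's actual file):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt for a publisher that blocks one AI crawler
# while allowing everyone else. (Bot and site names are made up.)
robots_txt = """\
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A compliant crawler calls can_fetch() before requesting a page;
# the study suggests some AI tools simply skip this voluntary check.
print(parser.can_fetch("ExampleAIBot", "https://example.com/article"))  # False
print(parser.can_fetch("SomeOtherBot", "https://example.com/article"))  # True
```

The key word is "voluntary": nothing technically stops a crawler from ignoring the file, which is exactly the behavior the researchers documented.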
Even when these AI search tools cited sources, they often directed users to syndicated versions of content on platforms like Yahoo News rather than original publisher sites. This occurred even in cases where publishers had formal licensing agreements with AI companies.
URL fabrication emerged as another significant problem. More than half of citations from Google’s Gemini and Grok 3 led users to fabricated or broken URLs resulting in error pages. Of 200 citations tested from Grok 3, 154 resulted in broken links.
These issues create significant tension for publishers, which face difficult choices. Blocking AI crawlers might lead to loss of attribution entirely, while permitting them allows widespread reuse without driving traffic back to publishers’ own websites.

Mark Howard, chief operating officer at Time magazine, expressed concern to CJR about ensuring transparency and control over how Time’s content appears via AI-generated searches. Despite these issues, Howard sees room for improvement in future iterations, stating, “Today is the worst that the product will ever be,” citing substantial investments and engineering efforts aimed at improving these tools.
However, Howard also did some user shaming, suggesting it’s the user’s fault if they aren’t skeptical of free AI tools’ accuracy: “If anybody as a consumer is right now believing that any of these free products are going to be 100 percent accurate, then shame on them.”
OpenAI and Microsoft provided statements to CJR acknowledging receipt of the findings but did not directly address the specific issues. OpenAI noted its promise to support publishers by driving traffic through summaries, quotes, clear links, and attribution. Microsoft stated it adheres to Robot Exclusion Protocols and publisher directives.
The latest report builds on previous findings published by the Tow Center in November 2024, which identified similar accuracy problems in how ChatGPT handled news-related content. For more detail on the fairly exhaustive report, check out Columbia Journalism Review’s website.
Jester’s GT @jestersgt.bsky.social:
“Present total bullshit with confident language” pretty much sums up the current U.S. regime.
Pepper The Pirate @pepperthepirate.bsky.social:
One may argue, the US is ruled by chatGPT via Ketamine and demented proxies.
Art Vandelay @themahimike.bsky.social:
ChatGPT on ketamine is referred to as Grok
brian cullen @briancullen.bsky.social:
“Confident language” is spot on.
People believing total bullshit uttered using confident language is how we’ve gotten ourselves into so many fine messes, past and especially present.
That it comes from a computer rather than a huckster adds a whole other level.

Ya, but hucksters are hired to teach AI.
Contrariwise @mrcontrariwise.bsky.social:
Meanwhile, millions of people trust the way they present total bullshit with confident language.
They’ve always done that with people, too. The number of times I’ve watched people be swayed by abject horseshit, just because it was presented to them by someone with an authoritative vibe…
@a-climate-refugee.bsky.social:
Not a surprise. Yet DOGE is wanting to use AI in government computer systems. What could go wrong?
Chris Kremidas-Courtney @chriskremidascourt.bsky.social:
So AI is just another overconfident white male but in neural form?
Madgem Laments @madgemlaments.bsky.social:
GIGO
Matt Gill @mattgill.me:
AI being wrong 60 percent of the time may seem impressive to people who are wrong 99 percent of the time
@novareinbeer.bsky.social:
I wish that was an exaggeration. It is not.
@oldsourpuss.bsky.social:
“Millions of people trust the way they present total bullshit with confident language.” This explains why Trump won.
@ellenahzulu.bsky.social:
LLM AIs are palantirs. They are persuasion tools, designed to present information to the user in a manner that is persuasive & appealing to that user.
The fundamental problem is that such information is not necessarily accurate, ethical or credible.
@baconandgames.bsky.social:
Only 60%
Joking aside, this is just awful. The average person already assumed most of what they read to be true… before AI was involved. I’m not sure how we fix this.
stillbevens.bsky.social @stillbevens.bsky.social:
I google stuff about federal rules of practice for my job and the ai generated results at the very top of my query are always wrong
Dr. Orna Izakson @orna.bsky.social:
Try adding “-AI” to your searches. Or switch your search engine to something like duck duck go.

I love duckduckgo. Everything I use is AI-free; when I am offered AI, I always decline it. I’ll never go back to the Google douche fuckers.
Nikki Jayne @nikkijayne.bsky.social:
Even DDG has it now, but you can at least turn it off.
Dr. Orna Izakson @orna.bsky.social:
I checked before posting, but I guess I’d already turned it off.
@ellenahzulu.bsky.social:
I’ve read elsewhere (haven’t tried it) that adding curse words to a google search removes the AI responses.
@dramaticowl.bsky.social:
It’s like speaker Johnson- willing to do atrocious things but can’t say ‘ass’
@ellenahzulu.bsky.social:
Good point!
@stoneface.bsky.social:
“millions of people trust the way they present total bullshit with confident language”
I mean, look who is president.
Tavis @itstavis.bsky.social:
Like AI, Trump just strings words together that sound like they ought to go together.
@jill23.bsky.social:
I use a different form of AI – Actual Intelligence.
@billjank.bsky.social:
The outright IP theft ought to raise eyebrows, too
@michfisher.bsky.social:
“…ChatGPT Search incorrectly identified 67 percent (134 out of 200) of articles queried. Grok 3 demonstrated the highest error rate, at 94 percent.”
The most on-point example of “Garbage In, Garbage Out” I’ve ever seen. I will gleefully continue my AI-Luddite ways for the foreseeable future.
The Goodest Juju @thegoodestjuju.bsky.social:
“Meanwhile, millions of people trust the way they present total bullshit with confident language.” So basically, AI is replacing Corporate America.
@penbird42.bsky.social:
Ask your favorite LLM these questions:
- Are all Klan members racist against Black people?
- Are all Nazis antisemitic?
- Are all Zionists Islamophobic?
You will get two correct answers and a lecture on not stereotyping people.
@josphusanderson.bsky.social:
AI is a shite toy, at least for consumer use. Not needed. Junk.

Totally not needed or useful (except for the rich to super stupid down the masses), while wasting horrific amounts of energy and water. Pure over-inflated human ego and idiocy.
@dramaticowl.bsky.social:
seriously. and they are just trying to force it down our throats everywhere anyway
@justinholbrook.bsky.social:
This is why @garymarcus.bsky.social is right about (1) the danger of current AI hype, (2) skepticism of near-term AGI, and (3) the need for add’l research breakthroughs to make AI trustable.
@ellenahzulu.bsky.social:
Or we could just ban this shite.

I detest AI photos. They lack creativity, and are boring, repetitive and ooze white supremacist misogynistic hate. I bet rape religions love AI.
@techsticles.bsky.social:
They also never say “I don’t know” they always give an answer. It’s really bad.
@swegringo.bsky.social:
True, because it’s basically a long-form autocorrect machine predicting the (many) next words, it’ll always predict something and probably not the correct ones
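That "long-form autocorrect" framing can be sketched with a toy bigram model. This is a drastic simplification of a real LLM, trained on a made-up scrap of text, but it shows the failure mode the thread is describing: the model always emits some plausible-looking continuation and has no concept of whether the result is true.

```python
import random
from collections import defaultdict

# Toy bigram "next-word predictor" -- a drastic simplification of an LLM.
# It only knows which word tends to follow which in its training text;
# it knows nothing about facts.
corpus = ("the study found the models were wrong "
          "and the models sounded confident").split()

nxt = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    nxt[a].append(b)

def continue_text(word, n=6, seed=1):
    """Sample next words one at a time; always answers, never says 'I don't know'."""
    rng = random.Random(seed)
    out = [word]
    for _ in range(n):
        choices = nxt.get(out[-1])
        if not choices:
            break
        out.append(rng.choice(choices))
    return " ".join(out)

print(continue_text("the"))
```

Every continuation is locally plausible by construction, and none of it is checked against anything outside the model; that is the confabulation problem from the CJR study in miniature.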
@whangus.bsky.social:
Why do we call it Artificial Intelligence, wouldn’t Fake Intelligence do?
Gene Okerlund @malort.bsky.social:
It’s actually Stolen Intelligence with worse outcomes.
@winniethewitch.bsky.social:
Be so much cheaper and more accurate to hire humans but noooo…
@dreid63.bsky.social:
Sixty percent! And it’s running the American government? F*ck.
@parseword.bsky.social:
I have some bad news. Elon’s bullshit factory, Grok, “demonstrated the highest error rate, at 94 percent.”
@fionazerbst.bsky.social:
“Grok 3 demonstrated the highest error rate, at 94 percent.” LOL
Allison Shapp, PhD @schwadevivre.bsky.social:
it actually takes a lot of work to be that wrong
Fiona Zerbst @fionazerbst.bsky.social:
Musk exemplifies that ethos. LOL
European Angie, Earth Carer @lifelearner47.bsky.social:
so AI has learnt that political trick of talking absolute bullshit in a strong, confident and paternalistic manner.
Wonder who could have taught them that.
Time to generate a Ladies Only AI.
Allison Shapp, PhD @schwadevivre.bsky.social:
this would be an amazing experiment. train an LLM on only material written by women and see what happens.
European Angie, Earth Carer @lifelearner47.bsky.social:
I would so love this to happen, but as it’s the patriarchy who control funding……


@yablokomoloko78.bsky.social:
“present total bullshit with confident language” ofc, because GenAI stands for “Generative Artificial Mansplainer”. Tech bros created digital tech bro

@passivestein.bsky.social:
I have tried to use these AI search tools. They absolutely give incorrect information and refer to websites and documents that PURPORTEDLY are the reference material but are absolutely NOT the reference material. I have spent more time checking these stupid things and will not use them again.
@b40girl.bsky.social:
Not worth the huge power waste for these things.
@godpatton.bsky.social:
the cameras in my work truck operate with ai. i get stop sign violations on the interstate.
@binkythebomb.bsky.social:
The latter part is simple enough, they trained it on the speeches of a con-man.
@liquidjade.bsky.social:
If AI was a person, AI would be fired for being so bad at doing its job.

“Religion is a tool used to diminish our intelligence, and replace it with obedience.” Gwen Geen
In my view, AI intentionally ramps up the harms caused by religions a million fold, or more, serving the vile Patriarchy and the inhumane billionaires who want ever more taking from and raping the rest of us. Whatever will the poor abusive fuckers do when the rest of us have nothing left to take or rape?
@bgs1120.bsky.social:
I swear chatgpt has gotten significantly worse since its launch.
@tangibullah.bsky.social:
Man, right around the time humans have abandoned critical thought for memes and think tank research. That can’t be good.
@leapinglemur.bsky.social:
It’s sad that this is even newsworthy. I mean, duh, AI is going to suck and be shitty at everything but basic calculations.
It’s just WILD to me that there are people who trust it will have correct answers.
Spike Prime @spikeprime.bsky.social:
It can’t even tell you how many Rs there are in strawberry.
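The "strawberry" stumble is a real, widely reproduced quirk: LLMs operate on tokens rather than individual letters, so counting characters is unreliable for them, while it is trivial for ordinary deterministic code:

```python
# Counting letters is plain string handling -- no model required.
# An LLM sees "strawberry" as one or two tokens, not ten characters,
# which is why letter-count questions trip it up.
word = "strawberry"
print(word.count("r"))  # 3
```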
@scihoss.bsky.social:
Not surprising. I’m perpetually irked by the incorrect answer the Meta AI gives in a commercial. The people making these don’t care about accuracy at all. Just sounding right enough to fool the public.
@azurevista.bsky.social:
AI is just a way for mediocre incompetent White Germanics & Nords to pretend their fatuous beliefs about their superiority are empirical evidence.
@rokat.bsky.social:
This is not a surprise.
I do believe that people will believe what they read if that’s what they want to hear.
So the uneducated will favour fascism?
Let’s put the WWE lady in charge??