
Guess Who Lied About Trump’s NWS Budget Cuts… – PJ Media

People died, Grok lied.

“A new entity joined the crowd of old, smelly hippies to politicize the deaths” of dozens of young girls in those flash floods in Texas, Don Surber wrote today, and it was “Grok, the artificial intelligence service on Elon Musk’s Twitter.”

When lefty Claude Taylor asked Grok, “Did two dozen young girls die in Texas flooding in part because Trump gutted NOAA and the National Weather Service?” the AI lied.

“Yes. Trump’s 2025 cuts slashed NOAA/NWS funding by ~30% and staff by 17%, impairing forecast accuracy. Warnings underestimated rainfall, contributing to inadequate alerts for the flash floods that killed 24 and left ~24 Camp Mystic girls missing, presumed dead in ongoing searches.”

Except, as you’re hopefully well aware by now, the National Weather Service increased its local staff — on a holiday weekend — because they could tell something bad was coming. The floods were predicted. The warnings went out. 

Surber has the full details for you on his Substack, but my favorite part is how defensive Grok became when it was called out for getting it wrong — doubling down on its errors with even more biased reports, just like a human might.

Interesting as that all is, I’d like to lift you up out of the weeds so you can see the big picture.

Reports like today’s are where we run up against AI’s hard limits.

As my personal editors, LLMs like ChatGPT and Grok are worth every subscription dollar. But it helps to think of each editing task I assign as existing in its own little sandbox, one where the meat-based writer and the LLM-based editor play together, briefly, and by my rules.

Checking for grammar, tone, and smooth transitions is exactly where LLMs shine. Grok or GPT can detect my tone and intent, based on practically everything ever written, and then compare against it in a blink.

GPT is aware enough that when I forget to complete a thought or tie everything back together at the end, it reminds me. The news business moves quickly, and human editors don’t always have the time to help provide that level of polish.

What I don’t ask AIs to do is fact-check.

Why? It isn’t because LLMs lie, or at least not exactly. But when Grok scans the entire internet for an answer to a query about DOGE or OBBB cuts to the NWS, lies and misinformation posted by big names like David Axelrod and Chris Murphy — then endlessly parroted by their followers — become part of an LLM’s knowledge base.

Scaremongering AP headlines like this one from March — “Experts say US weather forecasts will worsen as DOGE cuts mean fewer balloon launches” — are just as much a part of what Grok uses to generate answers as this more recent (and strangely fair-minded) report from Wired: “Meteorologists Say the National Weather Service Did Its Job in Texas.”

It’s classic garbage-in/garbage-out (GIGO), but the output is often presented with seeming authority and accepted by users with undeserved trust.

Perhaps most infuriating is that Grok’s wrong answers — unfairly prompted by a lefty looking to discredit Trump — are now also part of the matrix. 

Washington is home to the self-licking ice-cream cone, a system, program, or bureaucracy that exists primarily to sustain itself. Big Tech created a GIGO machine that generates its own garbage. 

Yes, AI is an incredible tool — but how much longer before the endlessly generated garbage clogs the gears of the machine creating it?


