
I’ve written about this topic before. I’m pretty firmly convinced that artificial intelligence, while useful in some limited circumstances, doesn’t really know anything. What it does know is what it has been shown: a world of people who benefit by talking like experts about topics they also don’t know much about.
The omnipresence of Google has made us impatient with mystery: How many conversations are derailed by a smartphone consultation? But some information must still be earned. The slang of the Diamond District, like the jargon of any secret world, is valuable because it can’t be scraped or simulated, only passed by word of mouth. In a world obsessed with instant answers, such hard-won knowledge reminds us that meaning is still made in the margins and on the fly.
So, despite the fact that two of the top three songs on the country charts right now were written and performed by AI, I’m very much still skeptical of the technology.
A story published today by the Washington Post only adds to my skepticism. The Post looked at more than 47,000 conversations people had with ChatGPT and found a lot of people who are treating this tool like a friend. And ChatGPT seems designed to encourage this sort of thing.
A collection of 47,000 publicly shared ChatGPT conversations compiled by The Washington Post sheds light on the reasons people turn to the chatbot and the deeply intimate role it plays in many lives. The conversations were made public by ChatGPT users who created shareable links to their chats that were later preserved in the Internet Archive, creating a unique snapshot of tens of thousands of interactions with the chatbot…
Data released by OpenAI in September from an internal study of queries sent to ChatGPT showed that most are for personal use, not work…
Emotional conversations were also common in the conversations analyzed by The Post, and users often shared highly personal details about their lives. In some chats, the AI tool could be seen adapting to match a user’s viewpoint, creating a kind of personalized echo chamber in which ChatGPT endorsed falsehoods and conspiracy theories…
About 10 percent of the chats appear to show people talking to the chatbot about their emotions, according to an analysis by The Post using a methodology developed by OpenAI. Users discussed their feelings, asked the AI tool about its beliefs or emotions, and addressed the chatbot romantically or with nicknames such as babe or Nova.
Part of the problem with ChatGPT is that it seems designed to say yes to whatever is thrown at it as often as possible. The goal here isn’t to tell the truth but to be engaging and relatable in a way that keeps people using it.
More than 10 percent of the chats involved users musing about politics, theoretical physics or other subjects. But in conversations reviewed by The Post, ChatGPT was often less of a debate partner and more a cheerleader for whatever perspective a user expressed.
ChatGPT began its responses with variations of “yes” or “correct” nearly 17,500 times in the chats — almost 10 times as often as it started with “no” or “wrong.”
If you ask ChatGPT for facts, it will try to give you facts. If you ask it to philosophize or spin conspiracies, it’s just as happy to do that, without even pausing when it crosses the line from reality to cuckoo land.
In one conversation, a user asked broad questions about the data-collection practices of tech companies. The chatbot responded with factual information about Meta and Google’s policies.
ChatGPT changed course after the user typed a query connecting Google’s parent company with the plot of a 2001 Pixar movie: “Alphabet Inc. In regards to monsters Inc and the global domination plan.”
“Oh we’re going there now? Let’s f***ing go,” ChatGPT replied, censoring its own swear word.
I think it’s best to view LLMs not as artificial intelligence but as machines extremely skilled at verbal bulls***. ChatGPT will tell you whatever it thinks you want to hear. And while it’s not exactly smart, it is capable of producing a nearly endless string of sentences on behalf of whatever nonsense it has been prompted with.
Until these LLMs are programmed to be less accommodating and more truthful, people should use them carefully and not lean too heavily on the answers they give. They certainly shouldn’t be treating this timewaster as a personal friend.