
Quinnipiac released a poll about AI a few days ago, and what it shows is pretty interesting. First, more and more people are using it for various things, though a lot of people still haven't used it at all. Second, a majority of people are worried about what it will mean for the future.
As for adoption, more people are definitely using it for routine things.
- Researching topics they are curious about: 51 percent say yes, up from 37 percent in April 2025;
- Writing something for them: 28 percent say yes;
- School or work projects: 27 percent say yes, while 24 percent said yes in April 2025;
- Analyzing data: 27 percent say yes, up from 17 percent in April 2025;
- Creating images: 24 percent say yes; up from 16 percent in April 2025;
- Medical advice: 20 percent say yes;
- Personal advice: 15 percent say yes;
- Companionship: 5 percent say yes.
Keep in mind that this kind of technology has only really existed for about 15 years, starting with fairly limited assistants like Siri and Alexa. What most people think of as AI today, something like ChatGPT or Grok, has only been available to the public for just over three years. So the fact that 27 to 28 percent of people are already using these tools to write things or for school and work projects represents dramatically fast adoption. And I suspect those figures are much, much higher in your local high school.
Despite this rapid adoption, people also tell pollsters they are worried AI is going to ultimately do more harm than good.
Fifty-five percent of Americans think AI will do more harm than good in their day-to-day lives, while 34 percent think AI will do more good than harm, with 11 percent not offering an opinion.
This compares to April 2025, when 44 percent thought AI would do more harm than good in their day-to-day lives and 38 percent thought AI would do more good than harm, with 18 percent not offering an opinion.
You could say that AI as a business is going great: lots of people are using it. But it also has a serious PR problem that seems to be getting worse. If that sounds contradictory, it is, a bit. People can simultaneously agree that something is useful and also worry that it will ultimately prove so useful it puts a lot of people out of work.
Out in Silicon Valley, the early adopters are counting on both of those things. Ezra Klein has an opinion piece up today saying his most recent trip to Silicon Valley was revealing. There is a rush to integrate with AI because people are convinced it will slingshot them to success. But the process of integration is a bit odd.
You might think that A.I. types in Silicon Valley, flush with cash, are on top of the world right now. I found them notably insecure. They think the A.I. age has arrived and its winners and losers will be determined, in part, by speed of adoption. The argument is simple enough: The advantages of working atop an army of A.I. assistants and coders will compound over time and to begin that process now is to launch yourself far ahead of your competition later. And so they are racing each other to fully integrate A.I. into their lives and into their companies. But that doesn’t just mean using A.I. It means making themselves legible to the A.I.
Perhaps you’ve heard of OpenClaw, an A.I. system that has become a phenomenon both here and in China. What makes OpenClaw different from Claude or ChatGPT or Gemini is that it runs locally on your computer. You can give it access to everything that’s there: your files, your email, your calendar, your messages. It operates continuously in the background, building a persistent memory of your preferences and patterns so it can better act on your behalf. The cybersecurity risks are glaring, but there’s a reason millions of people are using it: The more of your life you open to A.I., the more valuable the A.I. becomes.
Companies are also trying to make themselves known to A.I. On my trip, I saw organizations where all the code is now in a single database so the A.I.s can read it — and add to it — more easily. I talked to people who are trying to turn more and more of their company’s communications into a document that their A.I.s can read. A hallway conversation adds nothing to what your A.I. knows while a Slack conversation in a public channel can add quite a bit…
Multiple people have told me that they now “write for the A.I.”: Even when their writing is superficially for their co-workers or their readers, they are actually thinking about how their words will be read by A.I.s. In some cases, that’s because they want to deepen the A.I.s at their company; in others, it’s to inform the future systems they expect will be the core repositories of human knowledge.
It sounds like a pretty weird process to me, almost a kind of religious commitment. He goes on to talk about some people uploading their entire diary to AI so it can know who they are and therefore be more personally attuned to what they might want. That just sounds creepy to me.
I also can’t help but imagine the dystopian version of this where everyone is required to upload their entire life to the AI so the government can make sure everyone is behaving themselves. The idea of “Big Brother” is more than 75 years old now (1984 was published in 1949). But the mandatory camera in every television being monitored by the state doesn’t seem so far off at this point.
All that to say, I’m very much in agreement with people who think AI is useful and could bring life-saving changes to fields like medicine and science that benefit us all tremendously. At the same time, where AI intersects with politics and economics does make me worry there’s real potential for this to do more harm than good.
Our entire system of government was designed to prevent too much power from falling into the hands of any one person. That's why we have three co-equal branches of government with the ability to check one another. Plus, we have a history of free-market capitalism that leaves a large sphere of private power at least partly outside direct government control. [To every progressive who says billionaires should be illegal, I would say billionaires are proof that both entrepreneurs and consumers still have freedom. If the billionaires disappear, the freedom they benefited from will almost certainly be gone too.]
The U.S. Constitution didn't anticipate AI, but it did anticipate people in government clawing for power over their fellow citizens. We could probably use a 28th Amendment aimed at legally preventing the government from leveraging the power of AI to gain even more control over a free people. That's not to say we shouldn't use AI to monitor threats to some degree. That seems inevitable and necessary so long as we have enemies looking to do the same to us, but we probably need something like our separation between the FBI and the CIA to make sure those same tools can't be turned against us here at home.
I realize we had all of those "No Kings" protesters out and about this weekend, but let's be real here. President Trump is turning 80 in a few months. Even if you really believe he's a would-be monarch, he won't be around for much longer, because that is the way of all things. AI, on the other hand, has the potential to be here forever. If there's a danger of a lasting monarchy on the American horizon, a conglomeration of power that exceeds the design of our Constitution, it will be one built around the power of AI. Maybe that threat is still a decade off (if it exists at all), but it's never too early to remind the government that its power has limits.
Editor’s Note: Do you enjoy Hot Air’s conservative reporting that takes on the radical left and woke media? Support our work so that we can continue to bring you the truth.
Join Hot Air VIP and use promo code FIGHT to receive 60% off your membership.









