It had to happen sooner or later. OpenAI has suspended the account of a developer that used ChatGPT to build a bot for a political campaign promoting Rep. Dean Phillips, a candidate for president.
Silicon Valley entrepreneurs Matt Krisiloff and Jed Somers created Dean.Bot as an interactive tool for voters looking for information on Phillips. The two started a SuperPAC, We Deserve Better, ahead of the New Hampshire primary next Tuesday.
The SuperPAC had contracted with AI start-up Delphi to build the bot. OpenAI suspended Delphi’s account late Friday after a story in The Washington Post on the SuperPAC revealed the bot’s existence.
“Anyone who builds with our tools must follow our usage policies,” OpenAI spokeswoman Lindsey Held said in a statement. “We recently removed a developer account that was knowingly violating our API usage policies which disallow political campaigning, or impersonating an individual without consent.”
After The Post asked We Deserve Better about OpenAI’s prohibitions on Thursday, Krisiloff said he had asked Delphi to remove ChatGPT from the bot and rely instead on the open-source conversational technologies that had also gone into its design.
The bot remained available to the public without ChatGPT until late Friday, when Delphi took it down in response to the suspension, Krisiloff said.
It’s commendable that Delphi took down the bot. In these early days of AI mixing with politics, it is especially important that there be no confusion about what is real and what is computer-generated.
Voters have no idea what is about to be unleashed. Imagine a bot-generated Donald Trump or Joe Biden appearing to make false and misleading claims about minorities or other key constituencies. The adage that a lie can travel halfway around the world while the truth is putting on its shoes is even more true today.
The bot included a disclaimer explaining that it was an AI tool and not the real Dean Phillips, and required that voters consent to its use. But researchers told The Post that such technologies could lull people into accepting a dangerous tool, even when disclaimers are in place.
Proponents, including We Deserve Better, argue that the bots, when used appropriately, can educate voters by giving them an entertaining way to learn more about a candidate.
“Used appropriately” is a slippery phrase. Who determines what’s “appropriate”? Besides, the notion that there are no bad actors who will use the technology to enrich themselves or promote a political cause is ridiculous.
The AI genie is out of the bottle. It’s never going back in. We’d best find a way to live with it rather than try to stifle the technology and forgo the truly remarkable benefits that will become available.