Representatives of more than 50 organizations spanning the political spectrum signed a declaration urging members of Congress and tech industry leaders to protect children from the potentially harmful effects of artificial intelligence.
The statement proposes five guidelines for AI products aimed at children.
The “National Declaration of AI and Kids Safety” cites examples of artificial intelligence chatbots engaging minors in sexually suggestive and suicide-themed conversations, exposing them to adult content, and raising serious privacy concerns.
“We, the undersigned, call urgently on policymakers, tech companies, and communities to join us in championing a safer, responsible, and ethical digital future for our children,” the national declaration concludes. “Our kids deserve technology that enriches their lives, protects their innocence, and empowers their potential—not technology that exploits or endangers them.”
The statement calls for five “non-negotiable guiding principles and standards” for AI products aimed at children.
One would ban what is known as attention-based design.
“No AI designed for minors should profit from extending engagement through manipulative design of any sort,” the statement says.
It adds that manipulation includes “anthropomorphic companion AI, which by its nature, deceives minors by seeking to meet their social needs.” Anthropomorphic companion AI simulates a human friendship with characters who often look and speak like people.
The statement also calls for strict privacy protections for the data companies collect from children.
“Companies should collect only essential data required for safe AI operation,” the statement says. “Children’s data must never be monetized, sold, or used without full and clear disclosure and parental consent in support of that usage.”
The coalition also advocates: “Parents should have comprehensive visibility and control, including proactive notifications and straightforward content moderation tools.”
It also insists on age-appropriate safeguards for AI, saying: “It must not serve up inappropriate or harmful content, specifically content that would violate a platform’s own community guidelines or federal law.”
The fifth call to action is for independent auditing and accountability.
“AI products must undergo regular third-party audits and testing with child development experts,” the declaration says. “Companies must swiftly address identified harms, taking full accountability. Future products should be extensively tested with minors before release instead of after.”
The signatories included Wes Hodges, acting director of the Center for Technology and the Human Person at The Heritage Foundation.
“Innovation through exploitation is not the American way,” Hodges is quoted saying in a section at the end of the declaration that includes specific comments from several of the signers.
“We have a responsibility to ensure AI tools are designed to enrich—not endanger—our children’s lives. The difference lies in the design choices developers make and the standards policymakers enforce,” Hodges continued. “This declaration offers an invaluable framework to guide us in the right direction.”
Other signers included Alix Fraser of the left-of-center advocacy group Issue One; Clare Morell of the conservative-leaning Ethics and Public Policy Center; South Carolina state Rep. Brandon Guffey, a Republican who runs the Less Than 3 nonprofit group that works to prevent teenage suicide; Beth Pagano of the New York State Society for Clinical Social Work; and Chad Pecknold of The Catholic University of America.
The declaration notes bipartisan concern on the topic, referencing a 2023 Senate hearing.
At that hearing, Sen. Richard Blumenthal, D-Conn., said, “The [tech] companies must be held accountable.”
And Sen. Josh Hawley, R-Mo., expressed to the tech companies appearing at the hearing that he didn’t “want 13-year-olds to be your guinea pig.”
“We had social media, who made billions of dollars giving us a mental health crisis in this country. They got rich, the kids got depressed, committed suicide,” Hawley said. “Why would we want to run that experiment again with AI?”