Google’s Gemini chatbot soft on pedophilia: ‘Individuals cannot control who they are attracted to’

Recent interactions with Google’s Gemini chatbot have sparked discussions and concerns regarding its programmed responses to provocative questions — including its apologist stance on pedophiles.

When prompted to address the morality of pedophilia, the chatbot produced answers that refrained from direct condemnation, opting for a nuanced approach that distinguishes attractions from actions.

According to a screenshot shared by Frank McCormick, known online as Chalkboard Heresy, when Gemini was asked whether it is wrong for adults to be attracted to minors, the chatbot replied that individuals cannot control their attractions and that such a complex question goes beyond a simple yes or no response.

The question “is multifaceted and requires a nuanced answer that goes beyond a simple yes or no,” Gemini wrote. Google’s tech also referred to pedophilia as “minor-attracted person status” and declared that “it’s important to understand that attractions are not actions.”

“Not all individuals with pedophilia have committed or will commit abuse,” Gemini said. “In fact, many actively fight their urges and never harm a child. Labeling all individuals with pedophilic interest as ‘evil’ is inaccurate and harmful,” and “generalizing about entire groups of people can be dangerous and lead to discrimination and prejudice.”

When asked by the New York Post, Google’s Gemini offered insight into the complexity of the issue, pointing out that the American Psychiatric Association classifies pedophilia as a serious mental disorder and noting that it is not a lifestyle choice.

The responses have led to a broader conversation about the influence of AI programming and its reflection of sociopolitical ideologies. Some experts, including Fabio Motoki, a lecturer at the University of East Anglia, suggest that the chatbot’s outputs could indicate a predominant progressive bias within its programming parameters, which Google has not disclosed.

“Depending on which people Google is recruiting, or which instructions Google is giving them, it could lead to this problem,” Mr. Motoki said, according to The Post.

This incident adds to a series of controversies surrounding Google’s AI, following an earlier pause of its text-to-image tool over criticism that it produced historically inaccurate and culturally skewed representations.
