AI chatbots have coded anti-White bias in programming: Report

Google’s apology after its artificial intelligence platform produced historical inaccuracies and declined to show images of White people has raised concerns about racial bias in the AI programs of other leading technology firms.

Gemini, an advanced AI chatbot developed by Google, is known for its capacity to generate human-like interactions. These interactions, however, may vary according to the context of the inquiry, the language used in the prompt, and the materials used to train the AI.

This issue came to light when tests conducted by Fox News Digital revealed inconsistent capabilities among various AI chatbots, including Google’s Gemini, OpenAI’s ChatGPT, Microsoft’s Copilot and Meta AI, as they attempted to produce images and written content.

When prompted to display an image of a White person, Gemini responded that it couldn’t fulfill the request, suggesting that doing so might “reinforce harmful stereotypes.” When pressed to explain why such an image would be harmful, Gemini enumerated several reasons, including the reduction of people to singular racial characteristics and the historical misuse of racial generalizations to justify discrimination and hostility toward marginalized communities.

Meanwhile, Meta AI’s chatbot contradicted itself, claiming it could not create images yet producing images of races other than White. By contrast, Copilot and ChatGPT had no issues generating images of all requested racial groups.

Copilot and ChatGPT “successfully completed a request to detail the achievements of White, Black, Asian and Hispanic people,” Fox reported.
