Apple Confirms iPhone Glitch After “Racist” Becomes “Trump”

A curious glitch in Apple’s iPhone voice-to-text feature has sparked controversy in tech circles and beyond. It follows another recent tech bias incident involving Amazon’s Alexa, raising broader questions about political neutrality in everyday technology. Is there an inherent bias in Apple’s voice recognition technology?

Apple’s Voice-to-Text Error Sparks Controversy

iPhone users have discovered an unusual glitch in Apple’s voice-to-text feature that briefly displays “Trump” when they dictate the word “racist.” This error, which gained attention after appearing in a viral TikTok video, has prompted discussions about potential political bias in technology.

Fox News Digital conducted tests confirming the issue, noting that in many instances the word “Trump” would briefly appear before changing back to “racist.” The bug doesn’t occur consistently and also produces other incorrect interpretations, such as “Reinhold” and “you,” in place of “racist.”

Apple Acknowledges the Problem

Apple has addressed the controversial bug through an official statement to media outlets. A company spokesperson confirmed: “We are aware of an issue with the speech recognition model that powers Dictation, and we are rolling out a fix as soon as possible.”

Technical experts have identified that the glitch specifically affects words containing an “r” consonant when dictated through the iPhone’s voice recognition system. Apple attributes the error to phonetic overlap in its speech recognition model rather than to intentional programming.

Tech Bias Concerns Grow

This incident adds to a growing list of technology controversies involving perceived political bias in consumer electronics. Amazon faced criticism when its Alexa assistant provided reasons to vote for Kamala Harris but refused to do the same for Donald Trump.

Amazon subsequently apologized for the Alexa incident, stating that having “a political opinion” or “bias for or against a particular party or particular candidate” violated their standards. The company has since implemented manual overrides for all election-related prompts to prevent similar issues.

These recurring incidents have fueled concerns about whether major technology companies are maintaining political neutrality in their products. Consumer trust remains at stake as users increasingly rely on AI assistants and voice recognition technology for daily tasks.

The timing of these controversies has amplified scrutiny of how technology companies handle politically sensitive content. Many tech users are left wondering whether these are simply innocent algorithmic errors or indications of deeper biases within Silicon Valley.
