It all began, as these things often do, with an Instagram ad. “No one tells you this if you’re an immigrant, but accent discrimination is a real thing,” said a woman in the video. Her own accent was faintly Eastern European, so subtle that it took me a few playbacks to notice.
The ad was for BoldVoice, an AI-powered “accent training” app. A few clicks led me to its “Accent Oracle,” which promised to guess my native language. After I read a lengthy phrase, the algorithm declared: “Your accent is Korean, my friend.” Smug. But impressive. I am, in fact, Korean.
I’ve lived in the US for more than a decade, and my English isn’t just fluent. You could say it’s hyperfluent—my diction, for one, is probably two standard deviations above the national average. But that still doesn’t mean “native.” I learned English just late enough to miss the critical window for acquiring a native accent. It’s a distinction that, depending on the era, could lead to certain complications. In the Book of Judges, the Gileadites are said to have used the word “shibboleth” to identify and slaughter fleeing Ephraimites, who couldn’t pronounce the sh sound and said “sibboleth” instead. In 1937, the Dominican dictator Rafael Trujillo ordered the death of any Haitian who couldn’t pronounce the Spanish word perejil (parsley) in what became known as the Parsley Massacre.
So the stakes felt high as the Accent Oracle kept listening to me talk, at one point scoring me 89 percent (“Lightly Accented”), another time 92 percent (“Native or Near-native”). The spread was unsettling. On a bad day, I could have been slaughtered. To improve my odds of survival, I signed up for a free, one-week trial.
There is a medium-is-the-message quality to accents. How you say something often reveals more—about your origin, class, education, interests—than what you say. In most societies, phonetic mastery becomes a form of social capital.
As it has for everything else, AI has now come for the accent. Companies like Krisp and Sanas sell real-time accent “neutralization” for call center workers, smoothing a Filipino agent’s voice into something more palatable for a customer in Ohio. The immediate reaction from the anti-AI camp is that this is “digital whitewashing,” a capitulation to an imperial, monolithic English. This is often framed as a racial issue, perhaps because ads for these services feature people of color and the call centers are in places like India and the Philippines.
But that’d be too hasty. Modulating speech for social advantage is an old story. Remember that George Bernard Shaw’s Pygmalion—and its musical adaptation, My Fair Lady—hinges on Henry Higgins reshaping Eliza Doolittle’s Cockney accent. Even the eminent German philosopher Johann Gottlieb Fichte shed his Saxon accent when he moved to Jena, fearing people would not take him seriously if he sounded rural.
This is no relic of the past. A 2022 British study found that a “hierarchy of accent prestige” persists and has changed little since 1969, with a quarter of working adults reporting some form of accent discrimination on the job, and nearly half of respondents saying they were mocked or singled out in social contexts.
In a Hacker News thread announcing BoldVoice’s launch, one commenter wrote, “I’d rather strive toward a world where accents matter less than fixing accents.” Well, tell that to countless Koreans in this country navigating the treacherous phonetic gulf between beach and bitch or coke and cock. That online comment was characteristic of the usual sanctimonious pablum, the kind of casual moral high ground afforded only to a native English speaker or to someone willfully ignorant of the daily indignities non-native speakers face.
Besides, the harshest judgments don’t always come from native speakers. A subtle hierarchy of assimilation often plays out in ESL classrooms and among immigrants, where an accent can delineate the more settled from the freshly arrived. “All accents are equal” is linguistically true but sociologically false.
Why do people have accents? For one thing, units of sound in one language—its phonemes—don’t map neatly onto those of another. A language’s sound catalog can have as few as 11 phonemes or more than 100 (as in Southern Africa’s Taa language, famous for its click consonants). English has about 44, while Standard Korean has a similar 40. But many of them get lost in translation. Koreans struggle with th, which the brain often substitutes with d or s. Conversely, English speakers trip over vowels like eu (으), which don’t exist in the English phonemic inventory. (It’s part of the reason I shortened my Korean name, Seung Heon, to Sheon—pronounced like Sean.)
After its diagnostic, BoldVoice flagged some “Top Focus Areas” for me: the th sound (no surprise); my habit of devoicing final consonants, so that the d in did came out as dit; and the long ee vowel in words like seat, which I let collapse into the short i of sit.
So, did my accent change? I’ll never know. I lasted three lessons. Sitting at home repeating “think, thought, thirty” into my phone felt both ignominious and a touch absurd, reminding me of a sad evening I spent in front of my mirror in an ungainly attempt to replicate dance moves I saw on YouTube.
Perhaps I would have been more eager for such a tool when I first learned English, back when I was self-conscious about what must have been a thicker accent. But as it softened over the years, it hardened into something else: a convenient sonic shorthand that telegraphs my identity. The app’s exercise clarified what I stood to lose: to sand down every last foreign edge of my speech would be to erase the vocal fingerprint that is recognizably me.
Turns out the woman in that first Instagram ad, an Albanian immigrant, is a cofounder of BoldVoice. To her, and to anyone who finds the app useful, I’d say Godspeed. We never know when we’ll be asked to pronounce “parsley.”


