Live voice translation is one of the most tantalizing prospects of an AI-fueled world. It’s also one of its scariest.
Top-of-mind worries we drummed up include:
- More convincing deep fakes
- More effective scams
- Misinterpretation leading to an international incident
That all may seem like a big leap from this week’s harmless podcast-specific news — that Spotify and OpenAI are teaming up to clone podcasters’ voices and translate English episodes into other languages.
But there’s already been trouble
The developing field of AI translation has already seen its fair share of issues:
- The US immigration system has used AI-powered translation tools in place of human interpreters, at great cost to asylum seekers. Advocates say the tools aren’t ready for high-stakes situations, and are especially ineffective across language systems.
- Meta’s AI translation model, SeamlessM4T, is the latest to show bias (it overrepresents the masculine form), following a study that found other models were twice as likely to incorrectly transcribe Black speakers.
- These tools could ultimately supplant trained professionals who bring critical nuance to important communications. (Gizmodo’s parent company firing its Spanish translation staff in favor of a faulty AI program was the first warning shot here.)
AI translation’s output can also feel inauthentic
Machine-aided translation tools are worth testing, but an overreliance on them is another potential problem to track.
- Journalist Izadeli Montalvo warns brands against relying on AI translation, saying connection isn’t just about language, but culture and emotions as well: “It’s about crafting messages that resonate, not just translate.”
Spotify’s biggest shows morphing into Spanish represents the largest test of this technology to date, though not the best one.
That honor will always go to the Japanese researchers using AI translation tools on chickens’ clucking; they now think the machines can interpret chicken emotions, including hunger, fear, anger, contentment, excitement, and distress.