Machines’ ability to translate natural language is progressing, but there is still room for improvement. Will machine translators always be inferior to human ones?
My answer is no, of course not. That position is backed up by Alec Ross, a former senior adviser for innovation to the U.S. Secretary of State. Writing January 29 in the Wall Street Journal, he suggests that in 10 years, you’ll be able to wear a small earpiece that will provide real-time simultaneous translation—in a voice sounding not like Apple’s Siri but like the foreign-language speaker with whom you are conversing, thanks to advances in bioacoustic engineering and the measurement of frequency, sound intensity, and other voice properties.
And conversations won’t be limited to two people. You’ll be able to host an eight-person dinner party with each guest speaking a different language and everyone able to understand one another. On a larger scale, he writes, investors—currently hesitant to contend with the 850 languages spoken in Papua New Guinea—will be able to rely on machine translation to do business in that resource-rich country.
Like Siri, machine translation technology has roots in the defense and intelligence communities. Ross cites in particular the work of the U.S. National Security Agency and the Israeli National Sigint Unit, which conduct basic research in voice biometrics and translation, the results of which will cross over into the public domain. The technology will be useful beyond business, too. He writes, “Machines will also reduce the social isolation of tens of millions of people around the world who have severe hearing and speech impairments.” He cites as an example Enable Talk, sensor-equipped robotic gloves developed by four Ukrainian students that recognize sign language and can translate it into text or speech.
Ultimately, machine translators could become so effective there will be no need to study foreign languages in school. Writing March 7 in The New Yorker, Rebecca Mead quotes educator Max Ventilla, founder of the AltSchool in Brooklyn, as questioning the value of studying a foreign language as a means of communication in the era of live-translation apps. As an aside, Mead adds that AltSchool does seem to require fluency in “the jargon of Silicon Valley—English 2.0.” Perhaps there will be an app for that.
Machine translation, concludes Ross, “will take economically isolated parts of the world and help fold them into the global economy. It will make any of us, in principle, a master of the Tower of Babel.”
Not everyone agrees. I’ll look at the opposing viewpoint in a future post.
About the Author

Rick Nelson
Contributing Editor
Rick is currently a Contributing Technical Editor. He was Executive Editor for EE from 2011 to 2018. Previously he served on several publications, including EDN and Vision Systems Design, and has received awards for signed editorials from the American Society of Business Publication Editors. He began as a design engineer at General Electric and Litton Industries and earned a BSEE degree from Penn State.