

At Karlsruhe Institute of Technology, where he is a professor of computer science, Waibel and his colleagues already give lectures in German that their students can follow in English via an electronic translator. The system generates text that students can read on their laptops or phones, so the process is somewhat akin to subtitling. It helps that lecturers speak clearly, don’t have to compete with background chatter, and say much the same thing each year.

The idea of artificial speech translation has been around for a long time. Waibel, who is also a professor of computer science at Carnegie Mellon University in Pittsburgh, “sort of invented it. Douglas Adams sort of invented it around the same time too. I proposed it at MIT in 1978.”

The Hitchhiker’s Guide to the Galaxy featured a life form called the Babel fish which, when placed in the ear, enabled a listener to understand any language in the universe. It came to represent one of those devices that technology enthusiasts dream of long before they become practically realisable, like portable voice communicators and TVs flat enough to hang on walls: a thing that ought to exist, and so one day surely will.

Waibel’s first speech translation system, assembled in 1991, had a 500-word vocabulary, ran on large workstations and took several minutes to process what it heard. “It wasn’t ready for prime time,” he acknowledges. Now devices that look like prototype Babel fish have started to appear, riding a wave of advances in artificial translation and voice recognition. Google has incorporated a translation feature into its Pixel earbuds, using Google Translate, which can also deliver voice translation via its smartphone app. Skype has a Translator feature that handles speech in 10 languages. A number of smaller outfits, such as Waverly Labs, a Brooklyn-based startup, have developed earpiece translators. Reviews in the tech media could reasonably be summarised as “not bad, actually”. The systems currently available offer proof of the concept, but at this stage they seem to be regarded as eye-catching novelties rather than steps towards what Waibel calls “making a language-transparent society”.

One of the main developments driving artificial speech translation is the vogue for encouraging people to talk to their technology. Last month, Google introduced interpreter mode for its home devices. Saying: “Hey, Google, be my French interpreter” will activate spoken and, on smart displays, text translation. “We’re generally very early in the paradigm of voice-enabled devices,” says Barak Turovsky, Google Translate’s director of product, “but it’s growing very rapidly, and translation will be one of the key parts of this journey.”
