With grant, USC researchers hope to help break language barrier

A multimillion-dollar grant awarded to a team of USC researchers could make communication with doctors across language barriers a lot easier, thanks to a new advanced translation system.

The National Science Foundation grant of $2.2 million, awarded Aug. 15, will fund four years of research on SpeechLinks, the translation system that researchers hope can go beyond basic word recognition to interpret emotion and intonation in speech.

“There’s much more going on in human speech than what we say,” said Shrikanth Narayanan, an engineering professor and the SpeechLinks project director. “What we speak and how we translate depends a lot on the context … You can say the same set of words and by changing one set’s intonation, the two can have a different meaning.”

Although voice recognition and basic emotion recognition software already exist, Narayanan said the marriage of the two is what makes the SpeechLinks project unique.

Most speech-to-speech translation devices follow a pipeline approach, in which each element of the process (speech recognition, conversion of words to text and translation into a foreign language) is developed separately.

Narayanan, however, wants to integrate these elements. He said he wants to “capture the rich information in speech,” such as emotions.
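
For readers curious about the distinction, the following is a minimal Python sketch of the conventional pipeline approach described above. Every function name and value in it is a hypothetical stub standing in for a separately built component; none of it is actual SpeechLinks code.

```python
# Hypothetical sketch of a pipeline speech-to-speech translator.
# Each stage is a stub representing a separately developed component.

def recognize_speech(audio: bytes) -> str:
    """Stage 1 (stub): turn audio into a plain-text transcript.
    Intonation and emotion in the audio are discarded at this step."""
    return "where does it hurt"  # placeholder transcript

def translate_text(text: str, target_lang: str) -> str:
    """Stage 2 (stub): translate the transcript text."""
    return f"[{target_lang}] {text}"  # placeholder translation

def synthesize_speech(text: str) -> bytes:
    """Stage 3 (stub): render the translated text as audio."""
    return text.encode("utf-8")  # placeholder waveform

def pipeline_translate(audio: bytes, target_lang: str) -> bytes:
    # Each stage sees only the previous stage's text output, so the
    # "how it was said" information never reaches the translation step.
    transcript = recognize_speech(audio)
    translated = translate_text(transcript, target_lang)
    return synthesize_speech(translated)

if __name__ == "__main__":
    print(pipeline_translate(b"\x00\x01", "es"))
```

An integrated system of the kind Narayanan describes would instead carry cues such as intonation and emotion alongside the text from stage to stage, rather than reducing the speech to a bare transcript at the outset.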

He added that he hoped the software would be able to fulfill a need for “cross-language [and] cross-cultural communication” in many professional settings, including in the health care industry.

Researchers will first test SpeechLinks in hospitals, in an effort to improve doctor-patient relationships when one party speaks English as a second language.

“Take an urban center like Los Angeles,” Narayanan said. “There are a lot of people here who have limited or no proficiency in English — a language barrier can compromise health care treatment, so can we build a set of tools that can work with human translation abilities?”

Win May, an associate professor of clinical pediatrics at the Keck School of Medicine and a collaborator on the SpeechLinks project, said SpeechLinks would help health care facilities, many of which are required to provide access to translation services for patients who speak little or no English.

“However, these [translation] services may not be readily available,” May said. “So SpeechLinks will allow health care providers to communicate effectively with patients in lieu of a human interpreter.”

As director of the Clinical Skills, Education and Evaluation Center at the Keck School of Medicine, May will coordinate the medical students, actors and volunteers portraying real patients during the project’s test runs.

“We want to make sure the goal of being understood is met on both sides,” said Margaret McLaughlin, a collaborator on the project and a professor of communication at the Annenberg School for Communication.

McLaughlin, who has a background in conversation analysis and computer-mediated communication, became involved with the project after Narayanan approached her.

“The patient will be sensitive to whether or not the [human] interpreter is accurately relaying what they’re saying to the provider,” McLaughlin said. “We want them to have confidence that SpeechLinks is accurate in doing so.”

Narayanan also said SpeechLinks will be cost-effective because it will not depend on special equipment.

“Since it’s software-based, it can be run on any laptop computer and can potentially serve any number of users,” Narayanan said.

Narayanan added that he hopes to present the SpeechLinks technology at engineering, medicine and health care forums because of its interdisciplinary approach.

“This is a very exciting but also tremendously challenging problem,” Narayanan said. “But even small steps can result in big impacts in our communities.”