With grant, USC researchers hope to help break language barrier
Posted September 27, 2009 at 2:30 pm in News
A multimillion-dollar grant awarded to a team of USC researchers could make communication with doctors across language barriers much easier, thanks to a new advanced translation system.
The National Science Foundation grant of $2.2 million, awarded Aug. 15, will fund four years of research on SpeechLinks, the translation system that researchers hope can go beyond basic word recognition to interpret emotion and intonation in speech.
"There's much more going on in human speech than what we say," said Shrikanth Narayanan, an engineering professor and the SpeechLinks project director. "What we speak and how we translate depends a lot on the context … You can say the same set of words and by changing one set's intonation, the two can have a different meaning."
Although voice recognition and basic emotion recognition software already exist, Narayanan said that combining the two is what makes the SpeechLinks project unique.
Most speech-to-speech translation devices follow a pipeline approach, in which each element (speech recognition, conversion of words to text and translation into a foreign language) is developed separately.
Narayanan, however, wants to integrate these elements. He said he wants to "capture the rich information in speech," such as emotions.
He added that he hoped the software would be able to fulfill a need for "cross-language [and] cross-cultural communication" in many professional settings, including in the health care industry.
Researchers will first test SpeechLinks in hospitals, in an effort to improve doctor-patient relationships when one party speaks English as a second language.
"Take an urban center like Los Angeles," Narayanan said. "There are a lot of people here who have limited or no proficiency in English. A language barrier can compromise health care treatment, so can we build a set of tools that can work with human translation abilities?"
Win May, an associate professor of clinical pediatrics at the Keck School of Medicine and a collaborator on the SpeechLinks project, said SpeechLinks would help health care facilities, many of which are required to provide access to translation services for patients who speak little or no English.
"However, these [translation] services may not be readily available," May said. "So SpeechLinks will allow health care providers to communicate effectively with patients in lieu of a human interpreter."
As director of the Clinical Skills, Education and Evaluation Center at the Keck School of Medicine, May will coordinate the medical students, actors and volunteers portraying real patients during the project's test runs.
"We want to make sure the goal of being understood is met on both sides," said Margaret McLaughlin, a collaborator on the project and a professor of communication at the Annenberg School for Communication.
McLaughlin, who has a background in conversation analysis and computer-mediated communication, became involved with the project after Narayanan approached her.
"The patient will be sensitive to whether or not the [human] interpreter is accurately relaying what they're saying to the provider," McLaughlin said. "We want them to have confidence that SpeechLinks is accurate in doing so."
Narayanan also said SpeechLinks will be cost-effective because it will not depend on special equipment.
"Since it's software-based, it can be run on any laptop computer and can potentially serve any number of users," Narayanan said.
Narayanan added that he hopes to present the SpeechLinks technology at engineering, medicine and health care forums because of its interdisciplinary approach.
"This is a very exciting but also tremendously challenging problem," Narayanan said. "But even small steps can result in big impacts in our communities."