UW researchers have developed a way for people with disabilities to navigate a computer without using a mouse.
Say “ahh” and the cursor zips toward the northeast corner of the computer screen. “Ooo” sends it shooting straight south. Want it to head southeast? Say “ohh.” To make the cursor do a circle or figure 8, let vowel sounds bleed into one another, like eee into ahh into aww and so on. You can make it hurry or slow by regulating the volume of your voice. To open a link, make a soft clicking sound.
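The scheme described above can be sketched in a few lines of code. This is only an illustration of the idea, not the Vocal Joystick's actual implementation; the vowel labels, direction vectors, and function names here are all hypothetical.

```python
import math

# Hypothetical vowel-to-direction table, following the examples above.
# Screen coordinates: x grows rightward, y grows downward.
VOWEL_DIRECTIONS = {
    "ahh": (1, -1),   # toward the northeast corner
    "ooo": (0, 1),    # straight south
    "ohh": (1, 1),    # southeast
}

def cursor_velocity(vowel, loudness):
    """Return an (x, y) cursor velocity: direction comes from the
    vowel, speed from the voice's loudness (0.0 to 1.0)."""
    dx, dy = VOWEL_DIRECTIONS[vowel]
    norm = math.hypot(dx, dy)          # normalize to unit length
    return (dx / norm * loudness, dy / norm * loudness)
```

Blending one vowel into another would correspond to the direction vector rotating smoothly between table entries, which is what lets the cursor trace circles and figure 8s.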
So goes the University of Washington’s “Vocal Joystick” software, which uses sounds to help people with disabilities use their computers.
Its development has been a multidisciplinary task with faculty and students from several university departments — electrical engineering, linguistics, computer science, as well as the Information School — blending their expertise. (It is just one of a series of UW-generated assistive-technology projects ranging from enabling the blind to use touch screens to developing an alternative to the point-and-click method of computer navigation).
Tests with patients
Researchers have tested the joystick with spinal-cord-injury patients at the UW Medical Center and just finished another round of testing with 10 participants with varying levels of disabilities.
Susumu Harada, a computer-science and engineering graduate student, administered the tests, putting each subject through 12 hours of training. He evaluated how well they learned to produce the correct vowel sounds, memorize the directional patterns and control cursor speed.
Sometimes, moving the mouse by voice seemed frustrating, even a bit tiring. If the operator was out of sync with his own sounds as recorded by the software, the cursor might speed past a target in one direction and go so slowly in the other that the subject would have to take a break to catch his breath.
Some sounds came easily. Some seemed a bit unnatural and strained. But when a subject caught the rhythm, the task was easy and natural.
There are several options for people who need accommodations in using computers, but the UW software is distinguished on several levels. For one, it doesn’t use standard voice-recognition technology. Instead, it detects basic sounds at about 100 times a second and harnesses them to generate fluid, adaptive cursor movement.
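The continuous-movement idea can be sketched as a simple update loop: classify the incoming sound many times a second and accumulate small cursor steps into fluid motion. This is a minimal sketch of the concept, not the UW engine; `classify_frame` and `move_cursor` are hypothetical stand-ins for the sound classifier and the windowing system.

```python
FRAME_RATE = 100  # acoustic analyses per second, as described above

def run_cursor_loop(classify_frame, move_cursor, frames):
    """Poll the sound classifier ~100 times a second and turn each
    tiny velocity estimate into a small cursor displacement."""
    x, y = 0.0, 0.0
    for _ in range(frames):
        dx, dy = classify_frame()   # velocity from the current sound
        x += dx / FRAME_RATE        # small per-frame step
        y += dy / FRAME_RATE
        move_cursor(x, y)
    return x, y
```

Because each step is tiny and the estimate is refreshed 100 times a second, the cursor's path tracks the voice continuously rather than jumping between discrete commands, which is what distinguishes this from standard voice recognition.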
Vocal-joystick researchers maintain the system is easier to use because it allows users to exploit a large set of sounds for both continuous and discrete movement and to make visual adjustments on the fly.
Kurt L. Johnson, a professor in the Department of Rehabilitation Medicine at the UW, says he believes the software has great potential because it is easy to both learn and use.
“A lot of assistive software doesn’t get used because it is too complicated,” he says.
“But I think they’ve created something intuitive here. We had some of our higher-level spinal-cord-injury patients test it, and one of them learned to use it in about 90 seconds.”
The Vocal Joystick requires only a microphone, a computer with a standard sound card and a user who can vocalize. The team behind the study, funded by the National Science Foundation, hopes to make a prototype available online this fall.
Vocal Joystick began in the electrical-engineering department. Professor Jeff Bilmes and his students, especially Jon Malkin and Xiao Li, created the underlying sound-recognition engine. From there, computer-science and engineering professor James Landay and Information School professor Jacob Wobbrock, along with Harada, developed creative ways to apply the technology.
Various offshoots of the Vocal Joystick technology, from playing a video game to operating a robotic arm, have been developed. Researchers would ultimately like to apply the technology to a range of home devices, even electronic wheelchairs.
One of the other applications was “VoiceDraw,” which allows hands-free computer drawing. Harada used it to “paint” a portrait of Mount Fuji by sounds alone, and he won second place in a national competition for workplace innovation and design.
Wobbrock, who has been mentoring Harada and is a former first-place winner of the national award, leads a group he calls AIM, which stands for accessibility, interaction and mobility. He is working on a software project that slows the mouse, making it more accurate, as the user moves onto a target to click it (think of how you decelerate when your car takes a tight curve).
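One common way to implement the slow-near-target idea is to reduce pointer gain inside a zone around the target. The sketch below illustrates that general technique; the radius, the gain floor, and the linear falloff are illustrative assumptions, not details of Wobbrock's project.

```python
def adjusted_step(dx, dy, distance_to_target, slow_radius=50.0, min_gain=0.2):
    """Scale a raw pointer movement (dx, dy) down as the cursor
    nears a target, so fine positioning gets easier."""
    if distance_to_target >= slow_radius:
        gain = 1.0                   # full speed in the open
    else:
        # inside the slow zone, gain falls off linearly with distance,
        # but never below a floor so the cursor keeps moving
        gain = max(min_gain, distance_to_target / slow_radius)
    return dx * gain, dy * gain
```

The car-on-a-curve analogy maps directly: raw hand motion is the accelerator, and proximity to the target applies the brake.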
Another of his projects seeks an alternative to mouse clicking by triggering functions when the device crosses a “goal line.” You don’t click the mouse; you just cross a threshold.
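The goal-crossing idea reduces to detecting when the pointer's path crosses a line between two samples. A minimal sketch for a vertical goal line, with hypothetical names and assuming strict crossing (landing exactly on the line does not trigger):

```python
def crossed_goal(prev_x, cur_x, goal_x):
    """Return True if the pointer moved from one side of a vertical
    goal line to the other between two position samples, which would
    trigger the action instead of a mouse click."""
    # the product is negative only when the two samples straddle the line
    return (prev_x - goal_x) * (cur_x - goal_x) < 0
```

Because crossing requires no button press, no dwell, and no precise stop inside a target, it suits users who can steer a pointer but cannot click.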
Before coming to the UW, Wobbrock did cutting-edge work in assistive technology at Carnegie Mellon University in Pittsburgh. He says a parallel goal behind all these projects is to make technology work better for everyone, not just those with physical difficulties.
“Think of sidewalk curb cuts,” he says. “They help people in wheelchairs, but they also help me pushing a stroller or a grocery cart or riding a 10-speed bicycle.”
Richard Seven: 206-464-2241 or email@example.com