CHAMPAIGN, Ill. - If only Fred Astaire and Ginger Rogers were around today to take a spin with new technology being developed and tested by a team of computer scientists in Illinois and California.
If they were, they'd be dancing circles around each other - only from a considerable distance. That's the beauty of Tele-immersive Environments for EVErybody, or TEEVE, a system that's being test-driven simultaneously across thousands of miles this spring in the labs of Klara Nahrstedt, a computer science professor at the University of Illinois at Urbana-Champaign, and Ruzena Bajcsy, a professor of computer science at the University of California at Berkeley.
In technical terms, TEEVE is a distributed, multi-tier application: it captures images with clusters of 3-D cameras, compresses the 3-D video streams and distributes them over Internet2 (the network reserved for research and corporate clients), then decompresses the streams at the receiving site, renders them into immersive video and displays it on one or more large screens.
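For readers who think in code, the sketch below simply restates that capture-compress-transport-decompress-render-display flow. It is a minimal, hypothetical illustration with made-up function names and dummy data standing in for cameras and display walls; the article does not describe TEEVE's actual implementation.

import zlib


def capture(camera_cluster):
    """Capture tier: one raw 3-D frame (bytes) per camera in a cluster."""
    return [camera() for camera in camera_cluster]


def compress(frames):
    """Compress each 3-D stream before it crosses the wide-area network."""
    return [zlib.compress(frame) for frame in frames]


def transport(packets):
    """Stand-in for streaming compressed frames between sites over Internet2."""
    return list(packets)  # a real system would use network sockets here


def render(packets):
    """Rendering tier: decompress the views and fuse them into one immersive frame."""
    views = [zlib.decompress(p) for p in packets]
    return b"".join(views)  # a real system would reconstruct 3-D geometry from the views


def display(scene, screens):
    """Display tier: show the rendered scene on one or more large screens."""
    for screen in screens:
        screen(scene)


if __name__ == "__main__":
    # Dummy cameras and a dummy display wall so the sketch runs end to end.
    cameras = [lambda i=i: bytes(f"depth-frame-from-camera-{i}", "ascii")
               for i in range(3)]
    screens = [lambda scene: print(f"display wall shows {len(scene)} bytes")]

    display(render(transport(compress(capture(cameras)))), screens)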
In layman's terms, think of TEEVE as a turbocharged version of videoconferencing, but with some very fancy new bells and whistles. Most notably, Nahrstedt said, TEEVE makes it possible for people to view their counterparts at remote sites from all angles.
And an important feature that sets it apart from other tele-immersive videoconferencing systems currently being developed or used elsewhere is its potential to deliver high-quality images and communications using relatively inexpensive, commercial off-the-shelf (COTS) products and equipment.
"TEEVE is a great technology because it allows for more cost-effective cyberspace communication of people in their full body size," Nahrstedt said.
"This system is especially suited for learning new activities, training and meeting in cyberspace if a physical activity is to be performed," she said.
The researchers also believe the technology is ideally suited for a variety of entertainment-related purposes.
"With TEEVE we want to allow distributed artists such as dancers to train, design new choreography and experiment with different movements in the cyberspace," she said, noting that TEEVE's relatively low price tag would be of special interest to artists, who typically struggle to produce their work with limited funding.
This spring, Nahrstedt, Bajcsy and their research teams have been testing the technology with the aid of two performers: U. of I. dance student Renata Sheppard and UC Berkeley dance professor Lisa Wymore. In each experimental pas de deux, Sheppard stretched and spun before semi-circular clusters of 3-D cameras in Nahrstedt's lab on the Urbana-Champaign campus, while Wymore executed her moves in a similar setup in Berkeley.
To date, Nahrstedt has been pleased with the results, which she pronounced "exciting and excellent."
"Both dancers met in the cyberspace, danced together and also synchronized when dancing," she said.
Among other potential applications, Nahrstedt expects TEEVE will, in the not-too-distant future, allow for the following scenarios to take place:
• Patients recovering from accidents meet with physiotherapists in cyberspace, where the therapist demonstrates muscle-strengthening exercises.
• Students are able to learn new sports or movement activities, such as tai chi, even when living in remote locations where no local teacher is available.
• While communicating with an elderly parent, adult children living far away can more accurately assess a parent's physical condition.
Nahrstedt predicts that it will be at least five to six years before TEEVE and other tele-immersive 3-D multi-camera collaborative environments are routinely used in university or corporate settings.
"Videoconferencing equipment - 2-D, single view - has been available for the last eight to 10 years, and only maybe in the last three years has it become common to have a conference room equipped with polycom conferencing equipment or access grid or net-meeting on a more regular basis."
In the meantime, Nahrstedt said, she and her colleagues will continue to design new software systems, protocols and hardware capabilities as common new platforms, such as multi-core processors and better cameras, become available.
A number of interesting research issues remain ripe for exploration as well, she said. They include simplifying human-computer interfaces so that people can customize their content and displays and more easily process multiple views and large amounts of information, and achieving real-time processing and communication that would bring 3-D tele-immersive content closer to the quality of current television and radio.
Another big goal for the researchers is to focus on ways to automate the technology and make it more user-friendly.
Ultimately, she said, "the environments should be set up with a push of a button - this is absolutely impossible at this point."
Research on TEEVE was presented earlier this year at the Multimedia Computing and Networking conference in San Jose, Calif., and featured in the February issue of ProAV magazine. At Illinois, Nahrstedt's research team consists of graduate students Jigar Doshi, Jin Liang, Wanmin Wu, Zhenyu Yang and Bin Yu. Working with Bajcsy at Berkeley are Ross Diankov and Samuel Morris Johnston.
The research was funded by a grant from the National Science Foundation.