We tour several of the leading tech research labs in the US to discover which 10 research projects could soon change the world
Supercomputing power, via the internet: the IBM World Community Grid
With the human genome now well and truly cracked, and disease research increasingly performed by computer simulation, the challenge is finding enough processing power without having to build your own supercomputer. One answer is distributed computing: the FightAIDS@Home project, which started in 2000, uses the spare processing power of home computers when the PC isn't in use.
According to Arthur Olson, the lab director of the project at Scripps Research Institute, the experiments involve immense calculations to find a drug compound for HIV.
The process is complex because the virus is constantly mutating, so the tests are "embarrassingly parallel" - they need to be run over and over again until the scientists "understand the mechanisms well enough to design new drugs to box the virus into a corner," Olson says.
"It's an ambitious goal involving our computational arm, synthetic chemists and molecular biologists working together to test out the ideas and models that are provided in the FightAIDS@Home project."
Scripps now uses the World Community Grid, which is an IBM project involving several hundred servers. The grid consists of three parts. There's a website that provides statistics and forums for users.
In California, there are IBM servers used by researchers to submit and process research projects. IBM also hosts a multitude of servers (it won't reveal how many) in Canada to do the back-end processing.
"World Community Grid runs at an average of 235 teraflops, and our members contribute well over 200 years of computer time each day," said Robin Willner, the vice-president for global community initiatives at IBM.
"We've had some research projects that took just a few months and others that have been running for years. The requirements are simply that the research should help humanity or the world in some way, and that the programming code used will work in a grid environment."
And we thought the requirements for the latest 3D games were tough.
9. Livermore's Nuclear Computer
Lawrence Livermore's Sequoia supercomputer will assist in simulating nuclear tests
It's an in-joke among high-performance computing labs: no-one spends millions on a supercomputer just to win the Top500 race, but it's a pleasant surprise if they do (we didn't say it was a good joke).
This year, Lawrence Livermore National Laboratory in California will install the Sequoia supercomputer, which will run at 20 petaflops and easily take the number one spot.
The exceptional speed will assist in simulating nuclear tests. In an interesting twist, the simulations - which predict how atoms will react in a nuclear explosion - have uncovered unique phenomena, demonstrating that computers have a powerful ability to show virtual data in extremely high detail.
One of the holy grails of technology is finally figuring out 3D displays - at the right price for consumers, without causing eye-strain, and without having to re-program existing content.
Alcatel-Lucent, which operates the famous Bell Labs in New Jersey, is tackling at least one of those variables. The idea: to deliver 3D content over an IPTV network.
One of the goals for the project is to develop 3D footage that can be viewed from multiple angles, with several 3D streams arriving in the home at once.
"Delivery of 3D necessitates higher transport bandwidths and much more flexibility, especially if interactive viewing or selection of multiple viewing angles is a requirement," said
Erwin Six, team leader for video technologies at Alcatel-Lucent
"As such, IPTV is the most suitable technology for enabling this transformation, and this is the case both in the production networks as well as in the access and core networks that deliver this content to the home."
Stanford professor Michael Genesereth has developed a semantic mail project called SEAmail, which aims to deliver email to the right inbox more reliably. Instead of addressing a message using an email address, you describe the attributes of that person, such as their job title. This way, the message reaches the person regardless of whether they change their email address.
One interesting side-effect is that the technique could reduce spam by using "semantic" messages. Junk messages are sent to a library of known email addresses: they're simple lists, not tagged with rich data such as "professor in the engineering department".
SEAmail works in the normal manner, but understands the sender metadata as well: it can recognise that a message is sent by a student at Stanford, rather than a spammer.
Professor Genesereth said the project is still to address some of the big challenges. "Getting good data for SEAmail becomes a much harder problem on the broader internet than it is within an organisation," he said.
"Although there are semantic standards that can allow systems to extract information about people from web pages, incomplete and/or inconsistent data could degrade the quality of the system."
MIT's robotic clam project mirrors the activity of a real razor clam
Researchers at MIT are learning from the animal kingdom. The robotic clam project mirrors the activity of a real razor clam, which can dig through underwater sand and implant itself more securely than an anchor for an ocean liner.
The razor clam turns the surrounding sand into a more liquid form to help with digging. Amos Winter, an MIT student who developed the project, created the robot clam with similar attributes, although it's about half the size of a real razor clam.
"The RoboClam can push with about 50 times the force of a real clam, and can move about twice as fast and open up twice as wide," said Winter. " A razor clam embeds itself much
more efficiently than any existing anchor. This is attractive in applications
where energy is at a premium, such as underwater robots, remote ocean
sensors and space applications.
There's also interest in using this technology where weight is an issue, such as seaplanes. Furthermore, the oil industry is interested in ultra-deepwater applications, where human interaction with the ocean bottom is difficult."
It isn't only creatures buried beneath the seabed that are attracting the interest of the MIT scientists. Winter says that the robotics field may start examining animals for other projects that mirror unique abilities - not just for digging anchors, but mimicking the speed, motion and force of living creatures, such as a whale or tiger. A robotic tiger? Now that we want to see.
HP is testing the use of photonics: sending data by light instead of over copper wiring
Data at the speed of light: that's the promise of photonics. At HP Labs, senior fellow Stan Williams, whose team built the first working memristor (a circuit element that remembers the resistance it last held), has now developed a new way to move data more quickly between components.
Photonics uses light on a circuit board instead of copper wiring, which is slower and more prone to errors. Already, fibre-optics is used for networking inside office buildings and datacentres, and to connect one server rack to another. Now, HP hopes to push the technology even further.
"We are just about at the point where we can put photonics in systems," Williams told us. "In modern datacentres, copper cables are already being replaced by optical systems.
As you get inside the rack, however, the amount of data that needs to be moved goes up and you need to keep down costs. We're starting to connect racks together. In a year or two, we'll be testing blades together in a rack - CPU assemblies - using a photonic bus."
Williams calls this "bringing back the bus", and envisions a day when wires will not be used to connect, say, a motherboard to a hard disk. Instead, laser lights would be used (and switched off when not needed to save power) to transmit the data.
This vision of computing seems a bit like the Terminator movies, where data moves over thin lasers and could easily change what we think of as a computer. There would no longer be a simple ATX-sized box: a computer could be housed in a much smaller form factor, or not in a casing at all.
The one major challenge for the project is cost: a single photonics testing device costs $1 million. Nevertheless, that's still a small drop in the vast ocean that is HP's research lab.
The company employs 700 researchers on 22 major research projects at its headquarters in Palo Alto, ranging from social networking to RFID chips that are smaller than a fingernail.
An Intel demonstration showing wireless power
The idea of charging a mobile device using a wireless signal has finally resulted in retail products. Originally, the concept was an in-joke among techie elites - ThinkGeek.com even had a mock product for charging devices wirelessly.
The real versions mimic the fake one, removing the need to connect a device to a charger: instead, you simply set your phone or MP3 player on a pad.
Convenient Power has developed a Dual Charger pad that uses a vertical flux to transmit a "fountain of energy" to a receiver. A coil inside the transmitter sends a power signal to a coil inside the receiver, which is either attached to the phone or embedded into the device.
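As a rough illustration of why coil placement matters so much in designs like this, the textbook upper bound on an inductive link's efficiency depends only on the coupling coefficient k between the two coils and their quality factors. The numbers below are illustrative, not Convenient Power's specifications:

```python
import math

def max_link_efficiency(k, q1, q2):
    """Standard figure of merit for an inductive power link:
    eta_max = x / (1 + sqrt(1 + x))**2, where x = k^2 * Q1 * Q2."""
    x = (k ** 2) * q1 * q2
    return x / (1 + math.sqrt(1 + x)) ** 2

# A phone sitting squarely on the pad (tight coupling, k = 0.6)
# versus one perched at the edge (loose coupling, k = 0.1),
# assuming plausible coil quality factors of 100 each.
tight = max_link_efficiency(0.6, 100, 100)
loose = max_link_efficiency(0.1, 100, 100)
```

The steep fall-off as k drops is one reason heat is the big engineering problem: whatever power isn't transferred to the receiver is dissipated in the coils.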
Camille Tang, CFO of Convenient Power, claims that wireless power will become commonplace once the standards are approved. "People will be able to charge any of their devices on a common charging platform - that is, devices of different brands, models, and power ratings," said Tang.
One of the big challenges for wireless power is controlling heat, especially if used for lighting in a room, stereos, game consoles and other gear. This is truly hot technology.
Songsmith is an unusual project for Microsoft - it's partly an entertainment tool that you can use to make real recordings and partly a proof-of-concept. The concept is that the computer uses signal-processing techniques to analyse your voice and adjust background music to the right key automatically.
There are several musical styles - such as big band and rock - and you can edit the styles and tempo after you do the voice recording. You start a new song, select the style and tempo, and start singing using a PC microphone.
Songsmith is an excellent example of machine learning: the tool can adjust to your voice in the same way that another person hears a pitch and can then adjust their performance.
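A toy version of the pitch-tracking step can be sketched as follows, assuming the sung frequencies have already been extracted (the real signal processing in a tool like Songsmith works on raw audio, which is far harder): map each frequency to its nearest equal-tempered semitone and see which pitch class dominates.

```python
import math

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F",
              "F#", "G", "G#", "A", "A#", "B"]

def pitch_class(freq_hz):
    """Nearest equal-tempered semitone, as a pitch-class name.
    MIDI note number = 69 + 12 * log2(f / 440 Hz)."""
    midi = round(69 + 12 * math.log2(freq_hz / 440.0))
    return NOTE_NAMES[midi % 12]

# An invented sung phrase hovering around C major.
sung = [261.6, 293.7, 329.6, 392.0, 261.6]
classes = [pitch_class(f) for f in sung]
tonic = max(set(classes), key=classes.count)  # crude key guess
```

A real system would weigh whole pitch-class profiles against key templates rather than just counting the most frequent note, but the principle - infer the key from the voice, then fit the accompaniment to it - is the same.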
In the future, these kinds of tools could be much more pervasive: a speech-recognition system that knows you're speaking English, but can also interpret French automatically.
Or, when you're surfing the web, the computer learns which sites are valuable to you based on previous clicks, and makes suggestions.
Google's translation engine works by using a vast library of language pairs
One important characteristic of Google is that the company doesn't let projects linger in obscurity. Some - such as the odd Dodgeball service for meeting friends - it simply cancels. Others slowly germinate over time until they're much more powerful and compelling. Google Translate is one such project.
The company recently added several more languages and has plans to include just about every language still spoken. Most importantly, the Google algorithms for translation keep improving such that the engine can now translate whole passages of text in only a second or two.
The translation engine works by using a vast library of language pairs: statistical word matching that takes into account the subtle variations between words.
For example, a word may have a single form in English but several in another language (cousin in English is one word; in French it depends on whether the person is male or female), or - as with Chinese - there might not be spaces between words, which adds to the complexity of the engine.
Jeff Chin, the product manager on the project, says that translation poses one of the most interesting computer science challenges, because of all of the subtle variations in word meanings and the processing power required to provide fast results. "To solve machine translation problems, we're using the Minimum Bayes-Risk (MBR) criterion," he explained.
"Essentially, we look at a sample of the best candidate translations - the so called n-best list - and choose the safest one, the one most likely to provide the best translation quality. You might want to view this as choosing a translation that's a lot like the other good translations instead of choosing that strange one that had the good model score.
We build a lattice of translations during the search and then we do our MBR search over the lattice. Instead of a hundred or thousand best translations that we'd use for the n-best approach, lattices give us access to a number that rivals the number of particles in the visible universe."
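Chin's description can be sketched over a plain n-best list (the lattice version applies the same idea to vastly more candidates). Here, simple token overlap stands in for the real similarity measure, which the quote doesn't specify, and the candidate sentences are invented:

```python
def overlap(a, b):
    """Crude similarity: fraction of shared tokens. A stand-in for a
    proper measure such as sentence-level BLEU."""
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / max(len(ta | tb), 1)

def mbr_pick(nbest):
    """Minimum Bayes-Risk over an n-best list: choose the candidate
    most similar on average to all the others - the 'safe' one -
    rather than the outlier that happened to score well."""
    return max(nbest,
               key=lambda c: sum(overlap(c, o) for o in nbest if o is not c))

nbest = [
    "the cat sat on the mat",
    "the cat sat on a mat",
    "a cat is sitting on the mat",
    "feline repose atop woven floor covering",  # high-scoring outlier
]
choice = mbr_pick(nbest)
```

The outlier agrees with nothing else on the list, so it loses to a candidate that most of the other good translations resemble - which is the "choose the safest one" behaviour Chin describes.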
Each year, the quality of translation improves. Next up in the field: a speech-recognition system that uses the same language pair database, but allows you to speak the word you want to translate and hear the result, likely with a mobile phone.
You might not think of Yahoo as an exceptionally innovative company. It was embarrassingly out-smarted in search by Google and tends to acquire companies - such as Flickr - rather than develop new technologies. Yet, with 13,000 employees on a sprawling campus in Silicon Valley, it's engaged in some impressive research.
In each building, there are usability testing labs where web users' reactions to unreleased products are monitored and evaluated. One recent research project, Fire Eagle, has emerged as a new portal (http://fireeagle.yahoo.net).
The project has great ambitions: today, it lets you share your geolocation automatically from GPS devices. You can also let others know your whereabouts from a GPS-enabled phone. In the future, location awareness could become a prime social indicator, a kind of automated Twitter that feeds your location status into multiple sites.
"We're taking all of this ambient awareness data and moving it into the social-networking space," said the Oxford-educated Tyler Bell, who heads the Yahoo Geo program. "It incarnates location data about you on to the network."
He described the Yahoo geolocation research as "game-changing", a way to move GPS information from mobiles to the web. There's also a personalisation angle - your geostatus can be configured appropriately for acquaintances (who just know the city location) or good friends (who might know which table you're sitting at in a restaurant). In the future, geostatus could confirm your identity for a purchase, or even make the purchase for you from a car at the drive-through.
Mozilla is planning similar location features for the next full release of Firefox, and Google has also mirrored the concept with its new Latitude service.