
Breaking the Speed Barrier

Times Science Writer

When a flash flood of Mars Pathfinder enthusiasts threatened to overwhelm the Internet on the Fourth of July, a network manager at NASA’s Ames Research Center in Silicon Valley quickly channeled the traffic off the regular Internet and onto a special, high-speed research network.

There the river of digital bits instantly swelled to 20 megabits per second--more than triple the capacity of the normal network connection to the Mars Web site at the Jet Propulsion Laboratory in Pasadena, though still only a fraction of what the experimental network could carry. In the days that followed, JPL’s Web site attracted more than 400 million “hits,” and 1 million people downloaded images, audio and video.

Few people had any reason to know their timely connections were made possible by advanced Internet technology normally reserved for a select group of university researchers and supercomputer users. But NASA’s experimental Research and Education Network, or NREN, is just one of a series of projects supporting continued innovation in Internet technology--and assuring that the scientific community has access to the high-speed networks it needs.


The Internet, of course, was originally built for scientists and engineers. But the explosion in the commercial use of the network over the last several years has created congestion so severe that the Internet today is useless for much advanced research. And the private sector, which took over operation of the Internet “backbone” from the National Science Foundation in 1994, has not done a good job of pushing network technology forward, federal officials say.

As a consequence, scientists are now flocking to private high-speed networks--including NASA’s NREN, the National Science Foundation’s vBNS network and the Energy Department’s ESnet--which are up to 10 times faster than the Internet’s fastest central connections. These private networks are designed for the world’s true power users: scientists who are trying to simulate the collision of galaxies, model the behavior of the world’s weather in real time, create three-dimensional digital simulations of the oceans, or even investigate the inherently fluid complexity of the Internet itself.

“We want to make sure the networks aren’t a choke point” for scientists, said John Dundas, manager of Caltech’s CITnet 2000 network project.

These advanced networks foster scientific collaborations among far-flung laboratories--the purpose of the original Internet--and spur the development of systems that will eventually serve networks of all kinds. They also offer the possibility of new kinds of computing.

“People have realized there are entirely new applications that would transform how we do things, if only the Internet worked better,” said Mark Luker, program director of NSFnet at the National Science Foundation.

For example, researchers at the Pittsburgh Supercomputer Center and at Stuttgart University in Germany this summer used a high-speed transatlantic telecommunications network to link two Cray supercomputers. The resulting “meta-computer” combined the power of 1,024 processors with a theoretical peak performance of 675 billion calculations per second.


They plan this fall to expand their meta-computer to encompass a third supercomputer at the Sandia National Laboratory in New Mexico. To make the three think as one, they will need links that would allow the equivalent of the 30-volume Encyclopaedia Britannica to be transmitted halfway around the world every few seconds.
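The encyclopedia comparison implies a concrete data rate. A back-of-envelope check in Python, under two assumptions not stated in the article--that the 30-volume set amounts to roughly 1 gigabyte of text, and that “every few seconds” means every 3 seconds--lands in the multi-gigabit range, consistent with the 2.4-gigabit links described elsewhere in this story:

```python
# Back-of-envelope estimate of the link speed the meta-computer would need.
# Assumptions (not from the article): the 30-volume encyclopedia is roughly
# 1 gigabyte of text, and "every few seconds" means every 3 seconds.
ENCYCLOPEDIA_BYTES = 1e9     # assumed size of the full text, ~1 GB
TRANSFER_INTERVAL_S = 3      # assumed meaning of "every few seconds"

required_bits_per_second = ENCYCLOPEDIA_BYTES * 8 / TRANSFER_INTERVAL_S
print(f"~{required_bits_per_second / 1e9:.1f} gigabits per second")
```

Under those assumptions the requirement works out to roughly 2.7 gigabits per second per link.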

But even these Internet fast lanes are not capacious enough.

“We have outgrown a lot of the original Internet technology,” said Michael Robert, project director for an academic computing consortium called the Internet 2 Project. “There is a combination of frustration at the underlying technology, plus a conviction that it is time to assemble the next generation.”

The search for new Internet technology is accelerating:

* The National Science Foundation is spending $10 million a year on a new optical cable network, the very-high-speed Backbone Network Service (vBNS), operated by MCI Communications. This spring it upgraded the network to 622 megabits per second--more than 10 times the current Internet maximum capacity and about 22,000 times faster than conventional modems. By the end of the year, it will link 100 research universities and five major supercomputer centers. The hope is to achieve speeds of 2,400 megabits per second within three years.

* The Internet 2 Project, which consists of 108 research universities, including Caltech, USC and the UC system, is spending $50 million a year to develop a national network 100 times faster than today’s Internet. As a first step, an experimental network operating at 2,400 megabits per second recently started service among Duke University, North Carolina State University and the University of North Carolina at Chapel Hill.

* The Clinton administration has pledged $100 million annually over the next five years for a Next Generation Internet initiative, or NGI. If Congress funds the initiative this fall, NASA, the Defense Department, the Energy Department and several other federal agencies will meld their high-speed research networks into the National Science Foundation’s vBNS network. Within five years, federal officials want to connect research groups at speeds 1,000 times faster than today’s Internet.

* A group of California universities, working with a $3.8-million NSF grant, is building a statewide research network that will operate initially at speeds 100 times faster than today’s Internet. Within three years, it expects to have one high-performance network in Los Angeles, operating at speeds of 2.4 gigabits per second, linked to a similar high-speed network in San Francisco through the science foundation’s vBNS service.


* In Europe, a consortium of 16 national research networks led by Dante, a nonprofit company in Cambridge, England, is spending about $45 million a year on the Ten-34 Project to forge private high-speed links of up to 622 megabits per second across national borders.
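The vBNS speed comparisons above can be sanity-checked with simple arithmetic. Assuming a “conventional modem” means 28.8 kilobits per second and the Internet’s fastest backbone links are 45-megabit T-3 circuits (neither figure is stated in the article), the ratios match the article’s claims:

```python
# Sanity check of the article's speed comparisons for the upgraded vBNS.
# Assumptions (not stated in the article): a "conventional modem" runs at
# 28.8 kbps, and the fastest Internet backbone links are 45-Mbps T-3 circuits.
VBNS_BPS = 622e6        # vBNS capacity after the spring upgrade
MODEM_BPS = 28.8e3      # assumed conventional modem speed
T3_BPS = 45e6           # assumed T-3 backbone capacity

modem_ratio = VBNS_BPS / MODEM_BPS      # "about 22,000 times faster"
backbone_ratio = VBNS_BPS / T3_BPS      # "more than 10 times"
print(f"{modem_ratio:,.0f}x a modem, {backbone_ratio:.1f}x a T-3")
```

The modem ratio comes out near 21,600, which the article rounds to “about 22,000,” and the backbone ratio near 14, consistent with “more than 10 times.”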

“Things are heating up,” said Patrick Kleinhammer, who manages JPL’s network of 13,000 computers in Pasadena.

*

The new push for advanced networking research and development reflects a surprising change of heart about the relationship between the federal government and the Internet. When the Internet backbone was first privatized, it was thought that commercial providers could handle the technological challenges by themselves, under the spur of market competition.

Now, though, any thought that industry on its own will develop the next wave of Internet-networking technology has been abandoned, federal officials say.

Moreover, they acknowledge that privatization of the Internet caused a slip in the development of new technology.

“There were research applications that were going begging for the lack of high-performance networks,” said George Strawn, the National Science Foundation’s division director for networking and communications research and infrastructure. “So we find ourselves developing high-performance networking and a new set of technologies for which it is not clear yet there is a business case.”


The most fundamental network improvements involve optimizing the use of fiber-optic cable, which in principle can transmit 2.4 billion bits of data per second, about 50 times faster than the “T-3” cables used by many Internet service providers.

Advanced-network researchers, however, want to ensure that high-speed connections are directly available to each Internet user in what they call desktop-to-desktop service, and that entails a more fundamental leap in networking technology.

The evolution of the Internet also demands more advanced switching systems and routers--as the way stations and transfer points along the network are known--and dramatic revisions in the electronic protocols that today govern the commercial Internet.

An experimental Resource Reservation Protocol under development at the USC Information Sciences Institute and at Xerox’s Palo Alto Research Center allows users to make a request for high-quality service that would be honored across all the thousands of networks that make up the Internet. That way, some users can take priority over other network traffic--and pay for it.
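The core idea behind such a reservation protocol can be sketched in a few lines: a bandwidth request travels the chain of routers between two hosts, and the reservation holds only if every hop can honor it. This is a toy illustration with hypothetical names, not the actual protocol under development:

```python
# Toy sketch of the reservation idea: a request succeeds only if every
# router along the path has spare capacity. All names are hypothetical;
# this is an illustration, not the real Resource Reservation Protocol.
class Router:
    def __init__(self, capacity_mbps):
        self.capacity_mbps = capacity_mbps
        self.reserved_mbps = 0.0

    def try_reserve(self, mbps):
        """Accept the reservation at this hop if headroom remains."""
        if self.reserved_mbps + mbps <= self.capacity_mbps:
            self.reserved_mbps += mbps
            return True
        return False

def reserve_path(routers, mbps):
    """Reserve bandwidth on every hop, rolling back if any hop refuses."""
    accepted = []
    for router in routers:
        if not router.try_reserve(mbps):
            for r in accepted:          # undo partial reservations
                r.reserved_mbps -= mbps
            return False
        accepted.append(router)
    return True

# A three-hop path whose middle router is the 45-Mbps bottleneck.
path = [Router(100), Router(45), Router(100)]
print(reserve_path(path, 40))   # fits on every hop
print(reserve_path(path, 10))   # would push the middle hop past 45 Mbps
```

The rollback step matters: without it, a refused request would leave stale reservations on the hops that had already accepted it.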

*

But part of the problem in designing tomorrow’s Internet is that no one really understands what is going on in the one that exists today. And as the speed of network traffic increases, the engineering mystery only deepens.

At the San Diego Supercomputer Center, researchers are trying to invent a new generation of measurement tools, such as system monitors and flow meters, that can help answer the questions posed by the accelerating complexity of Internet traffic patterns.


NSF officials expect the foundation’s network to become an open-ended research and development project--always staying at least one technological generation ahead of what is available commercially.

“The vBNS is essentially an engineering incubator that allows us to develop, test and try out these new tools,” said Tracie Monk, program coordinator for the National Laboratory for Applied Network Research in San Diego.

“And they are very quickly being adopted by [commercial] Internet service providers,” she said.

In some instances, the political hurdles today are almost as daunting as the technical problems.

The Internet was developed in relative obscurity, but its commercial success means the effort to develop new network technologies is subject to intense scrutiny in Congress and by commercial interests eager to exploit any emerging technology.

This summer, Sen. Conrad R. Burns (R-Mont.) threatened to block federal funding for any advanced Internet research out of concern that proposed national high-performance networks would bypass rural states, leaving them to wither in an information-driven economy, rather like small towns bypassed a century ago by the railroad.


The funding proposal earned congressional committee approval only after NSF Director Neal Lane promised new $200,000 grants for researchers in 18 rural states to defray the higher costs of linking to the NSF’s high-speed network. The science foundation already pays up to $350,000 of the cost of a new connection.

“Now that we are working in the spotlight of our previous successes, our politics are much more complicated. Everybody wants to make sure they get their claim staked out quickly,” said the NSF’s Strawn. “Everybody is watching very closely.”

(BEGIN TEXT OF INFOBOX / INFOGRAPHIC)

Step on It

The congestion on the Internet has made the network useless for conducting much of today’s advanced research. Scientists are developing higher-speed networks to compensate. A sampling:

Network: vBNS

How much faster than Internet: 10 times

Annual cost (millions): $10

Operator: National Science Foundation

Status: Operational

*

Network: Ten-34 Project

How much faster than Internet: 10 times

Annual cost (millions): 45

Operator: 16 European research networks

Status: In development

*

Network: Internet 2

How much faster than Internet: 100 times

Annual cost (millions): 50

Operator: University consortium

Status: In development

*

Network: Next Generation Internet

How much faster than Internet: 1,000 times

Annual cost (millions): 100

Operator: Federal government

Status: Awaiting congressional approval
