
THE CUTTING EDGE: New Way to Ease Web Traffic Jams Is Winning Big-Name Fans: Internet: AOL, Yahoo and others back system that uses hundreds of computers to send data. Others say it’s a passing fancy.

TIMES STAFF WRITER

For all the high-tech trappings of the Internet, moving billions of bits of images and text across the Web still works a bit like an old-fashioned bucket brigade.

Web pages are broken down into tiny packets of information and then handed from one computer to the next until they finally reach their destination. It is a robust system of moving data, but one that can be sluggish at times, in part because of the slight delays at each handoff.
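A rough sketch in Python makes the arithmetic of those handoffs concrete. The per-hop delay below is an assumed figure chosen for illustration, not a measured one:

# Illustrative sketch, not real routing code: the total delay a packet
# accumulates grows with the number of hand-offs ("hops") it makes.
HOP_DELAY_MS = 5  # assumed average processing and queuing delay per hand-off

def delivery_time_ms(hops: int, per_hop_ms: float = HOP_DELAY_MS) -> float:
    """Rough one-way latency for a packet crossing the given number of hops."""
    return hops * per_hop_ms

for hops in (3, 12, 25):
    print(f"{hops:2d} hops -> roughly {delivery_time_ms(hops):.0f} ms per packet")

Even at a few milliseconds per handoff, a path with dozens of hops adds noticeable delay before a single packet of a page arrives.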

To ease the congestion, engineers and researchers have over the last few years developed a subtly different way of moving information, loosely known as distributed content delivery.


Instead of storing Web pages in a few centralized locations on the Internet, the new system distributes the information to hundreds and ultimately thousands of computers placed as close as possible to computer users, thus bypassing some of the congestion of the Internet.

In essence, it is an effort to bring data to your curb.

The idea of distributed delivery has won the support of some of the biggest names on the Internet, including America Online, CNN, Yahoo, Cisco Systems and ESPN.com, all of which have either invested in or begun using the method to deliver information.

Borrowing from the world of supercomputers, the pieces of distributed networks work with one another to find the closest storage point to a user, monitor congestion and balance one another’s workload.

In the grand scheme of the Internet, it is a small adjustment--one that some competitors who run large, centralized data farms say is a passing fancy, a technology that will be overtaken by decreasing prices for transmitting information and the high cost of maintaining all the computers strewn across the world.

“Why go through all this grief when the one cost you’re trying to save is dropping like a rock?” said Mark Cuban, the president and co-founder of Broadcast.com, the biggest provider of video on the Internet and a potential competitor to the distributed delivery systems.

But at least for the time being, the idea of distributed delivery has sparked a hot field now led by two companies--Akamai Technologies Inc. of Cambridge, Mass., and Sandpiper Networks of Thousand Oaks.


Akamai, whose technology was developed over a three-year period by a professor at MIT’s Laboratory for Computer Science, now claims among its customers Yahoo, CNN Interactive, ESPN.com, Go Network, CBS Sportsline, About.com, the New York Times, Paramount Digital Entertainment and Infoseek.

Akamai, which has been backed with investments from companies such as Apple Computer Inc. and Cisco Systems Inc., announced last week its plan to go public with an offering that could raise $86.25 million.

Sandpiper Networks counts such companies as the Los Angeles Times, Intuit, WebRadio.com, E! Online and NBC among its customers. Sandpiper has been backed with investments from America Online, Inktomi, NBC and Times Mirror Co.

A third company, SightPath Inc., which is headed by another MIT professor, M. Frans Kaashoek, also uses a similar concept of distributed delivery, but it has focused mainly on delivering video within large corporate networks.

Albert Lill, research director for Internet and interactive media at the Gartner Group, a Stamford, Conn.-based research and consulting firm, said that while the idea of distributing content to bypass the bottlenecks may seem obvious, getting the systems to work has only become possible in the past few months through the use of complex algorithms to control the web-like networks.

“It’s a tremendously elegant solution to a difficult problem,” he said.

The problem that has slowed the development of distributed content delivery is one that has dogged the Internet since its explosive growth in the early 1990s.


Computers that store Web pages can be overloaded by so many requests that they can’t respond fast enough to all of them, resulting in congestion and delays. Passing information across the Internet from computer to computer also adds another bit of delay and increases the chances of packets of information being lost.

Today, the most popular sites, such as Yahoo, are stored on vast server farms in a few key locations around the world.

For some users, it may take only a few hops to send a piece of information; for others, it could take dozens of jumps to negotiate the web of networks that make up the Internet. The delay can be measured in milliseconds or, at times of bad congestion, seconds.

Frank Thomson Leighton, Akamai’s chief scientist, said the distributed solution eliminates as many of the hops as possible. Akamai’s network is made up of about 900 computers in 50 locations.

When a computer user clicks on a Web link, a signal is sent to the main computer storing the Web page. But instead of sending the pictures and words directly, it hands the request off to another computer that stores the same information and is located closest to the user.

Instead of passing information by bucket brigade, the information is sent directly to the user.
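In Python terms, the redirection step might look like the minimal sketch below. The server names, map coordinates and straight-line distance metric are all invented for illustration; real systems measure network distance, such as hops and latency, rather than geography:

from math import hypot

# Hypothetical edge servers keyed by rough (latitude, longitude) positions.
EDGE_SERVERS = {
    "edge-losangeles": (34.0, -118.2),
    "edge-newyork":    (40.7, -74.0),
    "edge-london":     (51.5, -0.1),
}

def nearest_edge(user_location: tuple[float, float]) -> str:
    """Pick the replica with the smallest straight-line distance to the user."""
    return min(
        EDGE_SERVERS,
        key=lambda name: hypot(
            EDGE_SERVERS[name][0] - user_location[0],
            EDGE_SERVERS[name][1] - user_location[1],
        ),
    )

print(nearest_edge((33.9, -117.9)))  # a user near Los Angeles -> "edge-losangeles"

The point is only the shape of the decision: the origin site never ships the bytes itself when a closer replica can.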


To make the concept work, each computer in a distributed system must be coordinated with the rest of the network.

“There are hundreds of thousands of variables,” Leighton said. “It’s a very large problem that is rapidly changing. You have to make a decision with imperfect information, and you don’t want a central computer making that decision because the problem is just too large. What you need is a distributed intelligence where every piece is part of a larger brain.”
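A toy example suggests the kind of trade-off each piece of that “larger brain” weighs for every request. The scoring formula, weights and numbers below are assumptions for illustration, not Akamai’s actual algorithm:

# Steer each request using imperfect, constantly changing estimates of
# both nearness and load. All names, numbers and weights are assumptions.
servers = {
    # name: (estimated round-trip time to the user in ms, current load 0.0-1.0)
    "edge-a": (12.0, 0.90),  # close to the user, but nearly saturated
    "edge-b": (35.0, 0.20),  # farther away, but mostly idle
}

LOAD_WEIGHT_MS = 100.0  # arbitrary knob: how many ms of distance one full
                        # unit of load is "worth" when ranking servers

def score(rtt_ms: float, load: float) -> float:
    """Lower is better: distance plus a penalty for how busy the server is."""
    return rtt_ms + LOAD_WEIGHT_MS * load

best = min(servers, key=lambda name: score(*servers[name]))
print(best)  # -> "edge-b": the idle, farther server beats the swamped, nearby one

Weighting load against distance this way is what lets a distributed system route around a swamped server even when it is the nearest one.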

Monty Mullig, vice president for Internet technology at CNN, which is now an Akamai customer, said the speed gains can be up to 50%.

But he added that one of the main reasons CNN has begun using the service is that moving large amounts of information through a distributed network has turned out to be cheaper.

“Even if it was speed neutral, we’d still use it,” Mullig said. “It’s cheaper, so why not do it?”

Cuban, of Broadcast.com, said that while systems like Akamai’s and Sandpiper’s may speed delivery and save money in some cases now, the logic will become more blurred in the future.


He said maintaining distributed servers is costly and complex. Just finding the space and the people to handle all the thousands of computers that Akamai and Sandpiper project is a major expense.

In his view, the future belongs to opening larger trunk lines--in essence building bigger freeways fed by larger highways to avoid congestion. And unlike the cost of labor and space, the price of moving information is dropping dramatically.

Kaashoek, chief scientist and co-founder of SightPath, countered that just building ever-bigger data farms fed with bigger trunk lines has its limits as well. Those computers must also be coordinated, and the networks can become fragile as the systems grow larger.

Plus, there are still the potential problems caused by hops and congestion on the Internet, which will only increase.

Leo Spiegel, president of Sandpiper, said that in many cases in modern technology, from supercomputers to the Internet, the trend has been to distribute intelligence instead of centralizing it because distributed systems are more nimble, scalable and, in some ways, more robust.

“Just look back in time,” Spiegel said. “Everything always moves toward decentralization. The history of computing is all about distributing intelligence.”
