The Next Great Telecom Revolution (Part 6)

…provided by broadband Internet connections. “Overall, by 2007, the U.S. IP telephony market is forecast to grow to over 5 million active subscribers,” says In-Stat/MDR’s Schoolar. “While this shows a fivefold increase in subscribers over 2002, it still lags U.S. plain old telephone service (POTS) with over 100 million households.”

As interest in VoIP grows, the private line business suffers. Once the cash cow of data transport, private line services now face an uncertain future as the world slowly migrates to IP. While the bottom hasn’t yet fallen out of private line expenditures (U.S. businesses spent roughly $23 billion on private line services in 2003, up about 4 percent), In-Stat/MDR expects that growth will stagnate in the near term, with this market facing an eventual and pronounced decline in the long term. “The reality is that the public network is migrating to IP, meaning traditional circuit-switched private lines will need to migrate as well,” says Kneko Burney, In-Stat/MDR’s chief market strategist. The most common migration path today is to switch to a high-end version of DSL, such as HDSL, and this is likely to escalate over time as DSL reach and capabilities broaden. “However, this migration will be gradual,” says Burney, “meaning that the T1 businesses will experience a long, slow, and possibly painful exit as replacement escalates—similar to that experienced by long distance and now, local phone service.”

Burney believes that traditional T1 providers may be able to manage the erosion through innovation—meaning stepping up plans to offer integrated T1 lines—and by focusing on specific segments of the market, like midsized businesses (those with 100 to 999 employees). “According to In-Stat/MDR’s research, respondents from midsized businesses were somewhat less likely than their peers from both smaller and larger firms to indicate that they were planning or considering switching from T1 to integrated T1, cable, or S, H, or VDSL alternatives,” says Colin Nelson, an In-Stat/MDR research analyst.

5.2 THE NEXT INTERNET

Today’s Internet is simply amazing, particularly when combined with broadband access. Yet speeds are set to rise dramatically. Organizations such as the academic-sponsored Internet2 and the U.S. government’s Next Generation Internet are already working on developing a global network that can move information much faster and more efficiently than today’s Internet. In 2003, Internet2 researchers sent data at a speed of 401 megabits per second (Mbps) across a distance of over 7,600 miles, effectively transmitting the contents of an entire CD in less than two minutes and providing a taste of what the future Internet may be like.

By 2025, we’ll likely be using Internet version 3, 4, or 5, or perhaps an entirely new type of network technology that hasn’t yet been devised. How fast will it run? Nobody really knows right now, but backbone speeds of well over 1 billion bps appear likely, providing ample support for all types of multimedia content. Access speeds, the rate at which homes and offices connect to the Internet, should also soar—probably to well over 100 Mbps. That’s more than enough bandwidth to support text, audio, video, and any other type of content that users will want to send and receive.

The next-generation Internet will even revolutionize traditional paper-based publishing.
Digital paper—thin plastic sheets that display high-resolution text and graphic images—offers the prime attributes of paper, including portability, physical flexibility, and high contrast, while also being reusable. With a wireless connection to the Internet, a single sheet of digital paper would give users access to an entire library of books and newspapers.

Ultimately, however, an ultrabroadband Internet will allow the creation of technologies that can’t even be imagined today. Twenty years ago, nobody thought that the Internet would eventually become an everyday consumer technology. In the years ahead, the Internet itself may spin off revolutionary, life-changing “disruptive” technologies that are currently unimaginable. “It’s very hard to predict what’s going to be next,” says Krisztina Holly, executive director of the Deshpande Center for Technological Innovation at the Massachusetts Institute of Technology. “Certainly the biggest changes will be disruptive technologies.”

5.2.1 Riding the LambdaRail

An experimental new high-speed computing network—the National LambdaRail (NLR)—will allow researchers nationwide to collaborate in advanced research on topics ranging from cancer to the physical forces driving hurricanes. The NLR consortium of universities and corporations, formed over the past several months, is developing a network that will eventually include 11,000 miles of high-speed connections linking major population areas. The LambdaRail name combines lambda, the Greek symbol for light waves, with “rail,” which echoes an earlier form of network that united the country.

NLR is perhaps the most ambitious research and education networking initiative since ARPANET and NSFnet, both of which eventually led to the commercialization of the Internet. Like those earlier projects, NLR is designed to stimulate and support innovative network research that goes above and beyond the current Internet’s incremental evolution. The new infrastructure offers a wide range of facilities, capabilities, and services in support of both application-level and network-level experiments. NLR will serve a diverse set of communities, including computational scientists, distributed systems researchers, and networking researchers. NLR’s goal is to bring these communities closer together to solve complex architectural and end-to-end network scaling challenges.

Researchers have used the recently created Internet2 as their newest superhighway for high-speed networking. That system’s very success has given rise to the NLR project. “Hundreds of colleges, universities, and other research institutions have come to depend on Internet2 for reliable high-speed transmission of research data, video conferencing, and coursework,” says Tracy Futhey, chair of NLR’s board of directors and vice president of information technology and chief information officer of Duke University.

“While Internet2’s Abilene network supports research, NLR will offer more options to researchers. Its optical fiber and light waves will be configured to allow essentially private research networks between two locations. The traffic and protocols transmitted over NLR’s point-to-point infrastructure provide a high degree of security and privacy.
“In other words, the one NLR network, with its ‘dark fiber’ and other technical features, gives us 40 essentially private networks, making it the ideal place for the sorts of early experimentation that network researchers need to develop new applications and systems for sharing information,” says Futhey.

NLR is deploying a switched Ethernet network and a routed IP network over an optical dense wavelength-division multiplexing (DWDM) network. Combined, these networks enable the allocation of independent, dedicated, deterministic, ultra-high-performance network services to applications, groups, networked scientific apparatus and instruments, and research projects. The optical waves make it possible to build networking research testbeds at the switching and routing layers, with the ability to redirect real user traffic over them for testing purposes. For optical-layer research testbeds, additional dark fiber pairs are available on the national footprint. NLR’s optical and IP infrastructure, combined with robust technical support services, will allow multiple, concurrent large-scale networking research and application experiments to coexist. This capability will enable network researchers to deploy and control their own dedicated testbeds with full visibility and access to the underlying switching and transmission fabric.

NLR’s members and associates include Duke, the Corporation for Education Network Initiatives in California, the Pacific Northwest Gigapop, the Mid-Atlantic Terascale Partnership and the Virginia Tech Foundation, the Pittsburgh Supercomputing Center, Cisco Systems, Internet2, the Georgia Institute of Technology, Florida LambdaRail, and a consortium of the Big Ten universities and the University of Chicago.

Big science requires big computers that generate vast amounts of data that must be shared efficiently, so the Department of Energy’s Office of Science has awarded Oak Ridge National Laboratory (ORNL) $4.5 million to design a network up to the task. “Advanced computation and high-performance networks play a critical role in the science of the 21st century because they bring the most sophisticated scientific facilities and the power of high-performance computers literally to the researcher’s desktop,” says Raymond L. Orbach, director of the Department of Energy’s science office. “Both supercomputing and high-performance networks are critical elements in the department’s 20-year facilities plan that Secretary of Energy Spencer Abraham announced November 10th.”

The prototype dedicated high-speed network, called the Science UltraNet, will enable the development of networks that support high-performance computing and other large facilities at DOE and universities. The Science UltraNet will fulfill a critical need because the collaborative large-scale projects typical today make it essential for scientists to transfer large amounts of data quickly. With today’s networks, that is impossible because they do not have adequate capacity, are shared by many users who compete for limited bandwidth, and are based on software and protocols that were not designed for petascale data. “For example, with today’s networks, data generated by the terascale supernova initiative in two days would take two years to transfer to collaborators at Florida Atlantic University,” says Nageswara Rao of Oak Ridge National Laboratory’s Computer Science and Mathematics Division.
Obviously, Rao says, this is not acceptable; thus, he, Bill Wing, and Tom Dunigan of ORNL’s Computer Science and Mathematics Division are heading the three-year project that could revolutionize the business of transferring large amounts of data. Equally important, the new UltraNet will allow for remote computational steering, distributed collaborative visualization, and remote instrument control. Remote computational steering allows scientists to control and guide computations being run on supercomputers from their offices. “These requirements place different types of demands on the network and make this task far more challenging than if we were designing a system solely for the purpose of transferring data,” Rao says. “Thus, the data transmittal requirement plus the control requirements will demand quantum leaps in the functionality of current network infrastructure as well as networking technologies.”

A number of disciplines, including high-energy physics, climate modeling, nanotechnology, fusion energy, astrophysics, and genomics, will benefit from the UltraNet. ORNL’s task is to take advantage of current optical networking technologies to build a prototype network infrastructure that enables development and testing of the scheduling and signaling technologies needed to process requests from users and to optimize the system.

The UltraNet will operate at 10 to 40 Gbps, roughly 180,000 to 715,000 times faster than the fastest dialup connection of 56,000 bps. The network will support the research and development of ultra-high-speed network technologies and high-performance components optimized for very large-scale scientific undertakings. Researchers will develop, test, and optimize networking components and eventually make them part of the Science UltraNet. “We’re not trying to develop a new Internet,” Rao says. “We’re developing a high-speed network that uses routers and switches somewhat akin to phone companies to provide dedicated connections to accelerate scientific discoveries. In this case, however, the people using the network will be scientists who generate or use data or guide calculations remotely.”

The plan is to set up a testbed network from ORNL to Atlanta, Chicago, and Sunnyvale, California. “Eventually, UltraNet could become a special-purpose network that connects DOE laboratories and collaborating universities and institutions around the country,” Rao says. “And this will provide them with dedicated on-demand access to data. This has been the subject of DOE workshops and the dream of researchers for many years.”
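To put such link speeds in perspective, here is a small back-of-the-envelope sketch. The 1-terabyte dataset size and the ADSL rate are assumptions chosen purely for illustration; they are not figures from the text.

```python
# Back-of-the-envelope transfer times for a bulk scientific data set.
# The 1-TB size and the ADSL rate are illustrative assumptions only.

RATES_BPS = {
    "56-kbps dialup": 56_000,
    "1.5-Mbps ADSL": 1_500_000,
    "10-Gbps UltraNet (low end)": 10_000_000_000,
    "40-Gbps UltraNet (high end)": 40_000_000_000,
}

DATASET_BITS = 1e12 * 8  # 1 terabyte expressed in bits

for name, rate in RATES_BPS.items():
    seconds = DATASET_BITS / rate
    days = seconds / 86_400
    print(f"{name:>28}: {seconds:>12,.0f} s  (~{days:,.2f} days)")
```

Under these assumed numbers, a transfer that takes years over dialup and weeks over consumer broadband finishes in minutes on a dedicated UltraNet-class link, which is the gap the project is meant to close.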
5.2.2 Faster Protocol

As the Internet becomes an ever more vital communications medium for both businesses and consumers, speed becomes an increasingly critical factor. Speed is important not only for rapid data access but also for sharing information between Internet resources. Soon, Internet-linked systems may be able to transfer data at rates much speedier than is currently possible. That’s because a Penn State researcher has developed a faster method for more efficient sharing of widely distributed Internet resources, such as Web services, databases, and high-performance computers. This development has important long-range implications for virtually any business that markets products or services over the Internet.

Jonghun Park, the protocol’s developer and an assistant professor in Penn State’s School of Information Sciences and Technology, says his new technology speeds the allocation of Internet resources by up to 10 times. “In the near future, the demand for collaborative Internet applications will grow,” says Park. “Better coordination will be required to meet that demand, and this protocol provides that.”

Park’s algorithm enables better coordination of Internet applications in support of large-scale computing. The protocol uses parallel rather than serial methods to process requests. This ability helps provide more efficient resource allocation and also solves the problems of deadlock and livelock (an endless loop in program execution), both of which are caused by multiple concurrent Internet applications competing for Internet resources. The new protocol also allows Internet applications to choose from among available resources; existing technology can’t support making such choices, which limits its utilization. The protocol’s other key advantage is that it is decentralized, enabling it to function with its own information. This allows for collaboration across multiple, independent organizations within the Internet’s open environment. Existing protocols require communication with other applications, which is not presently feasible in the open environment of today’s Internet.

Internet computing—the integration of widely distributed computational and informational resources into a cohesive network—allows for a broader exchange of information among more users than is possible today. (Users include the military, government, and businesses.) One example of Internet collaboration is grid computing. Like electricity grids, grid computing harnesses available Internet resources in support of large-scale scientific computing. Right now, the deployment of such virtual organizations is limited because they require a highly sophisticated method to coordinate resource allocation. Park’s decentralized protocol could provide that capability.
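The text does not describe Park’s protocol in detail, so the sketch below is only a generic illustration of the parallel, decentralized idea it alludes to: an application asks several candidate resources for a grant at the same time and keeps whichever answers first, rather than holding one resource while waiting on another (the classic recipe for deadlock). The resource names, timeout, and simulated delays are all invented for the example.

```python
import asyncio
import random

# Illustrative only: a generic parallel resource-request pattern, not
# Park's actual protocol (which the text does not specify).

async def request_grant(resource: str) -> str:
    """Simulate asking one remote resource for an allocation."""
    await asyncio.sleep(random.uniform(0.05, 0.5))  # simulated network delay
    return resource

async def allocate(candidates: list[str], timeout: float = 1.0) -> str | None:
    """Ask all candidate resources in parallel and keep the first grant.

    A serial approach would hold a request open on one resource while
    waiting on the next, which is how competing applications deadlock.
    """
    tasks = [asyncio.create_task(request_grant(r)) for r in candidates]
    done, pending = await asyncio.wait(
        tasks, timeout=timeout, return_when=asyncio.FIRST_COMPLETED
    )
    for task in pending:          # release the requests we did not use
        task.cancel()
    return next(iter(done)).result() if done else None

if __name__ == "__main__":
    chosen = asyncio.run(allocate(["compute-a", "compute-b", "database-c"]))
    print("granted:", chosen)
```

Because no request blocks while holding a grant on another resource, competing applications cannot wedge each other the way strictly serial allocation allows.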
Caltech computer scientists have developed a new data transfer protocol for the Internet that is fast enough to download a full-length DVD movie in less than five seconds. The protocol is called FAST, standing for Fast Active queue management Scalable Transmission Control Protocol (TCP). The researchers have achieved a speed of 8,609 Mbps by using 10 simultaneous flows of data over routed paths, the largest aggregate throughput ever accomplished in such a configuration. More importantly, the FAST protocol sustained this speed using standard packet sizes, stably over an extended period on shared networks in the presence of background traffic, making it adaptable for deployment on the world’s high-speed production networks.

The experiment was performed in November 2002 by a team from Caltech and the Stanford Linear Accelerator Center (SLAC), working in partnership with the European Organization for Nuclear Research (CERN) and the organizations DataTAG, StarLight, TeraGrid, Cisco, and Level(3). The FAST protocol was developed in Caltech’s Networking Lab, led by Steven Low, associate professor of computer science and electrical engineering. It is based on theoretical work done in collaboration with John Doyle, a professor of control and dynamical systems, electrical engineering, and bioengineering at Caltech, and Fernando Paganini, associate professor of electrical engineering at UCLA. It builds on work from a growing community of theoreticians interested in building a theoretical foundation for the Internet, an effort led by Caltech.

Harvey Newman, a professor of physics at Caltech, says the FAST protocol “represents a milestone for science, for grid systems, and for the Internet.”

“Rapid and reliable data transport, at speeds of 1 to 10 Gbps and 100 Gbps in the future, is a key enabler of the global collaborations in physics and other fields,” Newman says. “The ability to extract, transport, analyze, and share many terabyte-scale data collections is at the heart of the process of search and discovery for new scientific knowledge. The FAST results show that the high degree of transparency and performance of networks, assumed implicitly by Grid systems, can be achieved in practice. In a broader context, the fact that 10-Gbps wavelengths can be used efficiently to transport data at maximum speed end to end will transform the future concepts of the Internet.”

Les Cottrell of SLAC added that progress in speeding up data transfers over long distances is critical to progress in various scientific endeavors. “These include sciences such as high-energy physics and nuclear physics, astronomy, global weather predictions, biology, seismology, and fusion; and industries such as aerospace, medicine, and media distribution. Today, these activities often are forced to share their data using literally truck or plane loads of data,” Cottrell says. “Utilizing the network can dramatically reduce the delays and automate today’s labor-intensive procedures.”

The ability to demonstrate efficient high-performance throughput using commercial off-the-shelf hardware and applications is an important achievement. With Internet speeds doubling roughly annually, we can expect the performance demonstrated by this collaboration to become commonly available in the next few years; the demonstration is important for setting expectations, for planning, and for indicating how to utilize such speeds.

The testbed used in the Caltech/SLAC experiment was the culmination of a multiyear effort, led by Caltech physicist Harvey Newman’s group on behalf of the international high energy and nuclear physics (HENP) community, together with CERN, SLAC, the Caltech Center for Advanced Computing Research (CACR), and other organizations. It illustrates the difficulty, ingenuity, and importance of organizing and implementing leading-edge global experiments. HENP is one of the principal drivers and codevelopers of global research networks. One unique aspect of the HENP testbed is the close coupling between research and development (R&D) and production, where the protocols and methods implemented in each R&D cycle are targeted, after a relatively short time delay, for widespread deployment across production networks to meet the demanding needs of data-intensive science.

The congestion control algorithm of the present Internet was designed in 1988, when the Internet could barely carry a single uncompressed voice call. Today, this algorithm cannot scale to anticipated future needs, when networks will be expected to carry millions of uncompressed voice calls on a single path or to support major science experiments that require the on-demand rapid transport of gigabyte- to terabyte-scale data sets drawn from multi-petabyte data stores.
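The 1988 algorithm grows its sending window until packets are lost and then cuts it sharply; FAST instead paces its window from measured queueing delay. The sketch below is a hedged illustration only: the update rule follows the general form of the published FAST TCP window update rather than anything specified in this text, and the alpha and gamma values are assumed tuning parameters.

```python
def fast_window_update(w: float, base_rtt: float, rtt: float,
                       alpha: float = 200.0, gamma: float = 0.5) -> float:
    """One delay-based window update in the spirit of FAST TCP.

    w        -- current congestion window (packets)
    base_rtt -- smallest round-trip time observed (propagation delay)
    rtt      -- current measured round-trip time (includes queueing delay)
    alpha    -- target number of packets kept queued in the network (assumed)
    gamma    -- smoothing factor between the old and new window (assumed)

    Published form (used here as an assumption, not quoted from the text):
        w <- min(2w, (1 - gamma) * w + gamma * (base_rtt / rtt * w + alpha))
    """
    return min(2 * w, (1 - gamma) * w + gamma * (base_rtt / rtt * w + alpha))


# Toy trace: as queueing delay builds, window growth slows and levels off
# instead of overshooting until packets are dropped.
w = 10.0
for rtt_ms in (100, 100, 120, 150, 180, 200, 200):
    w = fast_window_update(w, base_rtt=100.0, rtt=float(rtt_ms))
    print(f"rtt={rtt_ms:3d} ms -> window={w:8.1f} packets")
```

Because the window stabilizes before the queue overflows, a single flow can hold a multi-gigabit rate without the loss-induced sawtooth of the standard algorithm.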
This protocol problem has prompted several interim remedies, such as the use of nonstandard packet sizes or aggressive algorithms that can monopolize network resources to the detriment of other users. Despite years of effort, these measures have been ineffective or difficult to deploy. These efforts, however, are necessary steps in the evolution toward ultrascale networks.

Sustaining high performance on a global network is extremely challenging and requires concerted advances in both hardware and protocols. Experiments that achieve high throughput either in isolated environments or with interim remedies that bypass protocol instability, idealized or fragile as they may be, push the state of the art in hardware. The development of robust and practical protocols means that the most advanced hardware can be used effectively to achieve ideal performance in realistic environments. The FAST team is addressing protocol issues head-on to develop a variant of TCP that can scale to a multi-gigabit-per-second regime under practical network conditions. This integrated approach combining theory, implementation, and experiment is what makes the FAST team’s research unique and what makes fundamental progress possible.

With the standard packet size supported throughout today’s networks, TCP presently achieves an average throughput of 266 Mbps, averaged over an hour, with a single TCP/IP flow between Sunnyvale, near SLAC, and CERN in Geneva, over a distance of 10,037 km. This represents an efficiency of just 27 percent. FAST TCP sustained an average throughput of 925 Mbps and an efficiency of 95 percent, a 3.5-times improvement, under the same experimental conditions. With 10 concurrent TCP/IP flows, FAST achieved an unprecedented speed of 8,609 Mbps, at 88 percent efficiency, which is 153,000 times that of today’s modem and close to 6,000 times that of the common standard for ADSL (asymmetric digital subscriber line) connections.

The 10-flow experiment set another first in addition to the highest aggregate speed over routed paths. High capacity and large distances together cause performance problems, so different TCP algorithms can be compared using the product of achieved throughput and the distance of transfer, measured in bit-meters per second (bmps). The world record for the current TCP is 10 peta (1 followed by 16 zeros) bmps, set using a nonstandard packet size. The Caltech/SLAC experiment, however, transferred 21 terabytes over six hours between Baltimore and Sunnyvale using the standard packet size, achieving 34 peta bmps. Moreover, the data were transferred over shared research networks in the presence of background traffic, suggesting that FAST can be backward compatible with the current protocol. The FAST team has started to work with various groups around the world to explore testing and deployment of FAST TCP in communities that urgently need multi-Gbps networking.

The demonstrations used a 10-Gbps link donated by Level(3) between StarLight (Chicago) and Sunnyvale, as well as the DataTAG 2.5-Gbps link between StarLight and CERN, the Abilene backbone of Internet2, and the TeraGrid facility. The network routers and switches at StarLight and CERN were used together with a GSR 12406 router loaned by Cisco at Sunnyvale, additional Cisco modules loaned at StarLight, and sets of dual Pentium 4 servers, each with dual Gigabit Ethernet connections, at StarLight, Sunnyvale, CERN, and the SC2002 show floor, provided by Caltech, SLAC, and CERN.
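The efficiency and bit-meter figures quoted above can be recovered with simple arithmetic. In the sketch below, the per-flow ceiling of roughly 975 Mbps of usable Gigabit Ethernet capacity and the roughly 4,400-km Baltimore-to-Sunnyvale network path are assumptions chosen to be consistent with the quoted percentages; neither number appears in the text.

```python
# Recomputing the efficiency and bit-meter-per-second figures quoted above.
# Assumptions (not stated in the text): each flow is limited by a Gigabit
# Ethernet interface (~975 Mbps usable), and the Baltimore-Sunnyvale network
# path is roughly 4,400 km long.

GIGE_USABLE_MBPS = 975.0   # assumed per-flow ceiling

def efficiency(throughput_mbps: float, flows: int = 1) -> float:
    """Achieved throughput as a fraction of the per-flow Gigabit ceiling."""
    return throughput_mbps / (flows * GIGE_USABLE_MBPS)

print(f"standard TCP, 1 flow : {efficiency(266):.0%}")            # ~27 percent
print(f"FAST TCP, 1 flow     : {efficiency(925):.0%}")            # ~95 percent
print(f"FAST TCP, 10 flows   : {efficiency(8609, flows=10):.0%}") # ~88 percent

# Throughput x distance metric for the 21-TB, six-hour transfer.
bits_moved = 21e12 * 8                     # 21 terabytes in bits
throughput_bps = bits_moved / (6 * 3600)   # about 7.8 Gbps sustained
path_m = 4_400_000                         # assumed ~4,400-km path
print(f"bit-meter product    : {throughput_bps * path_m:.1e} bmps")  # ~3.4e16, i.e. ~34 peta
```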
The project is funded by the National Science Foundation, the Department of Energy, the European Commission, and the Caltech Lee Center for Advanced Networking. One of the drivers of these developments has been the HENP community, whose explorations at the high-energy frontier are breaking new ground in our understanding of the fundamental interactions, structures, and symmetries that govern the nature of matter and space-time in our universe. The largest HENP projects each encompass 2,000 physicists from 150 universities and laboratories in more than 30 countries.

Rapid and reliable data transport, at speeds of 1 to 10 Gbps and 100 Gbps in the future, is key to enabling global collaborations in physics and other fields. The ability to analyze and share many terabyte-scale data collections, accessed and transported in minutes, on the fly, rather than over hours or days as is the present practice, is at the heart of the process of search and discovery for new scientific knowledge. Caltech’s FAST protocol shows that the high degree of transparency and performance of networks, assumed implicitly by Grid systems, can be achieved in practice. This will drive scientific discovery and utilize the world’s growing bandwidth capacity much more efficiently than has been possible until now.

5.3 GRID COMPUTING

Grid computing enables the virtualization of distributed computing and data resources such as processing, network bandwidth, and storage capacity to create a single system image, giving users and applications seamless access to vast IT capabilities. Just as an Internet user views a unified instance of content via the Web, a grid user essentially sees a single, large virtual computer. At its core, grid computing is based on an open set of standards and protocols—such as the Open Grid Services Architecture (OGSA)—that enable communication across heterogeneous, geographically dispersed environments. With grid computing, organizations can optimize computing and data resources, pool them for large-capacity workloads, share them across networks, and enable collaboration.

In fact, grid can be seen as the latest and most complete evolution of more familiar developments, such as distributed computing, the Web, peer-to-peer computing, and virtualization technologies. Like the Web, grid computing keeps complexity hidden: multiple users enjoy a single, unified experience. Unlike the Web, which mainly enables communication, grid computing enables full collaboration toward common business goals. Like peer-to-peer, grid computing allows users to share files. Unlike peer-to-peer, grid computing allows many-to-many sharing—not only of files but of other resources as well. Like clusters and distributed computing, grids bring computing resources together. Unlike clusters and distributed computing, which need physical proximity and operating homogeneity, grids can be geographically distributed and heterogeneous. Like virtualization technologies, grid computing enables the virtualization of IT resources. Unlike virtualization technologies, which virtualize a single system, grid computing enables the virtualization of vast and disparate IT resources.
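As a loose illustration of the “single system image” idea described above, here is a minimal sketch in which an application submits work to one facade and never sees which machine in the pool runs it. The node names, capacities, and least-loaded scheduling rule are invented for the example and are not drawn from OGSA or any particular grid middleware.

```python
from dataclasses import dataclass

# A toy "single system image": callers submit work to one Grid object and
# never see which machine runs it. Node names and capacities are invented.

@dataclass
class Node:
    name: str
    free_cpus: int

class Grid:
    """Facade that hides a pool of heterogeneous, dispersed nodes."""

    def __init__(self, nodes):
        self.nodes = nodes

    def submit(self, job_name: str, cpus_needed: int) -> str:
        # Pick the least-loaded node that can host the job.
        candidates = [n for n in self.nodes if n.free_cpus >= cpus_needed]
        if not candidates:
            raise RuntimeError(f"no capacity for {job_name}")
        node = max(candidates, key=lambda n: n.free_cpus)
        node.free_cpus -= cpus_needed
        return f"{job_name} scheduled on {node.name}"

grid = Grid([Node("campus-cluster", 64), Node("lab-workstation", 8),
             Node("partner-site", 128)])
print(grid.submit("protein-fold", cpus_needed=32))   # user never picks a node
print(grid.submit("climate-run", cpus_needed=96))
```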
5.4 INFOSTRUCTURE

The National Science Foundation (NSF) has awarded $13.5 million over five years to a consortium led by the University of California, San Diego (UCSD) and the University of Illinois at Chicago (UIC). The funds will support design and development of a powerful distributed cyber “infostructure” to support data-intensive scientific research and collaboration. Initial application efforts will be in bioscience and earth sciences research, including environmental, seismic, and remote sensing. It is one of the largest information technology research (ITR) grants awarded since the NSF established the program in 2000.

Dubbed the “OptIPuter”—for optical networking, Internet protocol, and computer storage and processing—the envisioned infostructure will tightly couple computational, storage, and visualization resources over parallel optical networks, using IP as the communication mechanism. “The opportunity to build and experiment with an OptIPuter has arisen because of major technology changes in the last five years,” says principal investigator Larry Smarr, director of the California Institute for Telecommunications and Information Technology [Cal-(IT)2] and Harry E. Gruber Professor of Computer Science and Engineering at UCSD’s Jacobs School of Engineering. “Optical bandwidth and storage capacity are growing much faster than processing power, turning the old computing paradigm on its head: we are going from a processor-centric world to one centered on optical bandwidth, where the networks will be faster than the computational resources they connect.”

The OptIPuter project will enable scientists who are generating massive amounts of data to interactively visualize, analyze, and correlate their data from multiple storage sites connected to optical networks. Designing and deploying the OptIPuter for grid-intensive computing will require fundamental inventions, including software and middleware abstractions to deliver unique capabilities in a lambda-rich world. (A “lambda,” in networking parlance, is a fully dedicated wavelength of light in an optical network, each already capable of carrying 1 to 10 Gbps.) The researchers in southern California and Chicago will focus on new network-control and traffic-engineering techniques to optimize data transmission, new middleware to bandwidth-match distributed resources, and new collaboration and visualization tools to enable real-time interaction with high-definition imagery.

UCSD and UIC will lead the research team, in partnership with researchers at Northwestern University, San Diego State University, the University of Southern California, and the University of California, Irvine [a partner of UCSD in Cal-(IT)2]. Co-principal investigators on the project are UCSD’s Mark Ellisman and Philip Papadopoulos of the San Diego Supercomputer Center (SDSC) at UCSD, who will provide expertise and oversight on application drivers, grid and cluster computing, and data management; and UIC’s Thomas A. DeFanti and Jason Leigh, who will provide expertise and oversight on networking, visualization, and collaboration technologies.

“Think of the OptIPuter as a giant graphics card, connected to a giant disk system, via a system bus that happens to be an extremely high-speed optical network,” says DeFanti, a distinguished professor of computer science at UIC and codirector of the university’s Electronic Visualization Laboratory. “One of our major design goals is to provide scientists with advanced interactive querying and visualization tools, to enable them to explore massive amounts of previously uncorrelated data in near real time.”

The OptIPuter project manager will be UIC’s Maxine Brown.
SDSC will provide facilities and services, including access to the NSF-funded TeraGrid and its 13.6 teraflops of cluster computing power distributed across four sites.
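Smarr’s point that the network may outrun the machines it connects can be made concrete with a rough sketch. All of the figures below (the number of provisioned lambdas, the local disk write rate, and the analysis rate) are assumptions for illustration only, not values from the text.

```python
# Rough illustration of "networks faster than the computational resources
# they connect." All figures below are assumptions for the example only.

lambdas = 4                      # dedicated wavelengths provisioned in parallel
per_lambda_gbps = 10             # upper end of the 1-10 Gbps range quoted above

network_gbytes_per_s = lambdas * per_lambda_gbps / 8   # optical delivery rate
disk_gbytes_per_s = 0.4          # assumed local storage write rate
analysis_gbytes_per_s = 1.0      # assumed rate at which a cluster digests data

print(f"parallel lambdas deliver : {network_gbytes_per_s:.1f} GB/s")
print(f"local disks can absorb   : {disk_gbytes_per_s:.1f} GB/s")
print(f"analysis can consume     : {analysis_gbytes_per_s:.1f} GB/s")
# Under these assumed numbers the bottleneck shifts from the wide-area
# network to the end systems, the inversion the OptIPuter is built around.
```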