
*GENI PLANNING GROUP DOCUMENTS ARE NO LONGER CURRENT. See GpoDoc and the GENI project pages for up-to-date documents and specifications.*

Scientific Questions

Although we focus on the design of a Future Internet that delivers increasing value to society, there are many scientific challenges involved in realizing such a goal. Here, we describe some of these challenges, as well as identify the unique opportunities this effort creates.

Theoretical Underpinnings

Communications systems such as the Internet and the telephone system (which is morphing into the Internet) are perhaps the largest and most complex distributed systems ever built. The degree of interconnection and interaction, the fine-grain timing of these interactions, the decentralized control, and the lack of trust among the parts raise fundamental questions about stability and predictability of behavior. Relevant theories of highly distributed complex systems are beginning to emerge, some with roots in control theory and some drawing on analogies with biological systems. We should take advantage of this work in the redesign, to improve our chances of coming as close as possible to the best achievable levels of availability and resilience. There may be other important contributions from the theory community, for example, the use of game theory to explore issues of incentives in the design of protocols for interconnection among competing Internet Service Providers. This is a chance for CISE to engage members of the theory community in this program.
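
As a purely illustrative aside, the sketch below sets up a toy two-ISP "peer or refuse" game and checks for pure-strategy Nash equilibria, the kind of incentive analysis game theory brings to interconnection; the strategies and payoff numbers are assumptions made up for this sketch, not results from any actual study.

```python
# A toy two-ISP "peer or refuse" interconnection game, in the spirit of using
# game theory to study interconnection incentives.
# Hypothetical example: the payoff numbers are invented; peering saves transit
# cost for both ISPs, but each is tempted to free-ride by refusing to peer.
from itertools import product

STRATEGIES = ("peer", "refuse")

# payoff[(isp1_strategy, isp2_strategy)] = (isp1_payoff, isp2_payoff)
payoff = {
    ("peer", "peer"): (3, 3),
    ("peer", "refuse"): (0, 4),
    ("refuse", "peer"): (4, 0),
    ("refuse", "refuse"): (1, 1),
}

def pure_nash_equilibria():
    """Strategy profiles where neither ISP gains by unilaterally switching."""
    equilibria = []
    for s1, s2 in product(STRATEGIES, repeat=2):
        u1, u2 = payoff[(s1, s2)]
        best1 = all(u1 >= payoff[(alt, s2)][0] for alt in STRATEGIES)
        best2 = all(u2 >= payoff[(s1, alt)][1] for alt in STRATEGIES)
        if best1 and best2:
            equilibria.append((s1, s2))
    return equilibria

if __name__ == "__main__":
    # With these payoffs the only equilibrium is (refuse, refuse), even though
    # (peer, peer) is better for both -- exactly the kind of incentive problem
    # a redesigned interconnection protocol would try to address.
    print(pure_nash_equilibria())
```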

Architectural Limits

A fundamental question at the core of this effort is to understand the architectural limits of the current Internet, and to test whether alternative designs better position the Internet to address the many challenges it faces. At the heart of this question is whether we can continue to patch the Internet indefinitely, or whether there are limits to the current design that will keep a Future Internet from realizing its potential.

While there is no way to be certain that the incremental path we are currently following will ultimately fail to address the challenges facing the Internet, it is clear that many of the assumptions underlying the Internet's design no longer hold:

  • The Internet originally viewed network traffic as fundamentally friendly, but today it is more appropriate to view it as adversarial. An alternative design would minimize trust assumptions.
  • The Internet was originally developed independent of any commercial considerations, but today the network architecture must take competition and economic incentives into account. An alternative design would enable more user choice.
  • The Internet originally assumed host computers were connected to the edges of the network, but host-centric assumptions are not appropriate in a world with an increasing number of sensors and mobile devices. An alternative design would allow for much more edge diversity.
  • The Internet originally did not expose information about its internal configuration, but there is value to both users and network administrators in making the network more transparent. An alternative design would provide more network transparency.
  • The Internet originally provided only a best-effort packet delivery service, but there is value in enhancing (adding functionality to) the network to meet application requirements. An alternative design would provide more explicit support for widely distributed applications.
  • The Internet originally drew a sharp line between the network and the underlying transport facilities, but emerging optical integration technology makes it possible to embed network functionality in the optical transport. An alternative design would make configurable aspects of the underlying transport a first-class element in the architecture.

While these assumptions may eventually lead to an architectural dead-end, there are other issues that come into play. First, while it may be possible to apply incremental point-solutions to the Internet, doing so comes at the cost of increased complexity, which makes it hard to reason about the network as a whole. This increased complexity makes the Internet harder to manage, more brittle in the face of new requirements, and more vulnerable to emerging threats. Understanding the tradeoffs between complexity and architectural purity will be important.

Second, it is possible to overlay new network architectures and services on top of the current Internet without changing the Internet architecture, per se. This assumes the new architecture or service has many points-of-presence, which is a capability that GENI will provide. Understanding the limits of overlay-based solutions, along with identifying what changes to the core network (if any) are necessary to better support overlays, will be a central question addressed by this effort.
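
As a minimal sketch of what overlays make possible, the following Python fragment performs one-hop "detour" routing: it relays traffic through another overlay node when the direct path performs poorly. The node names and latency figures are invented for illustration; a real overlay would measure these paths from its own points of presence, such as those GENI is intended to provide.

```python
# One-hop "detour" routing over an overlay: relay traffic through another
# overlay node when the direct Internet path performs poorly.
# Hypothetical example: node names and latencies are invented; a real
# deployment would measure them from its own points of presence.

INF = float("inf")
latency_ms = {
    ("A", "B"): 80, ("B", "A"): 80,   # congested direct path
    ("A", "C"): 20, ("C", "A"): 20,
    ("B", "C"): 25, ("C", "B"): 25,
}

def best_overlay_path(src, dst, nodes):
    """Return (latency, relay) for the best of the direct and one-hop paths."""
    best = (latency_ms.get((src, dst), INF), None)
    for relay in nodes:
        if relay in (src, dst):
            continue
        via = latency_ms.get((src, relay), INF) + latency_ms.get((relay, dst), INF)
        if via < best[0]:
            best = (via, relay)
    return best

if __name__ == "__main__":
    lat, relay = best_overlay_path("A", "B", ["A", "B", "C"])
    # Prints "A->B: 45 ms via relay C": the overlay routes around the poor
    # underlying path without any change to the Internet architecture itself.
    print(f"A->B: {lat} ms via {'direct path' if relay is None else 'relay ' + relay}")
```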

Analysis and Modeling

Mathematical models and analysis of measurement data have provided key insights into the fundamental limits of today's Internet. We believe they will continue to play a crucial role in the research on a Future Internet, and in fact, the design of new network architectures should be amenable to modeling and measurement in ways that today's Internet is not.

There are many examples where measurements and analytical models have shed light on the limitations of today's architecture, including the following.

  • Analysis of Internet traffic measurements has shown that IP traffic is self-similar. The burstiness of the traffic on multiple time scales makes traditional queuing models a poor predictor of network performance. Moreover, transport protocols such as TCP morph the traffic in ways that further complicate analytical modeling. Although statistical-analysis techniques have shed light on the key properties of the traffic, and the origins of these properties, analytical models of Internet performance remain elusive. Work on a Future Internet could consider protocols and mechanisms amenable to analytical modeling, making it easier to provide predictable performance and, as a consequence, to provision network resources. (A minimal sketch of such a self-similarity check appears after this list.)
  • Numerous measurement studies have unveiled key properties of Internet traffic, performance, and topologies. However, many of these studies rely on inference from edge measurements. With the increasing size and commercialization of the Internet, these studies have become ever more difficult to conduct, and the generality and accuracy of the results more suspect. A Future Internet should include support for measurement as a first-class citizen because of the importance of measurement in understanding and operating the network.
  • End users and network operators have great difficulty detecting, diagnosing, and fixing performance and reachability problems. The networking research community has created tools for anomaly detection and root-cause analysis, but these solutions are forced to work with extremely limited data collected from remote vantage points in competing domains. Today's protocols were not designed with diagnosis in mind. Future theoretical work can quantify the fundamental limits on diagnosing problems in today's network and identify key features for a future architecture to support diagnosis.
  • Measurement studies and analytical models have demonstrated significant benefits that competing domains could achieve by cooperating in computing paths for network traffic. However, today's routing protocols do not provide sufficient means for neighboring domains to negotiate over the exchange of traffic. New research in game theory and inter-domain negotiation offer promising solutions that are difficult to realize in today's architecture. Insights from these studies can drive the creation of new architectures for evaluation.
  • Existing protocols and mechanisms were designed without the network operator's goals in mind, leaving the operator with (at best) indirect control over the traffic flowing through a domain. Recent theoretical work has shown that selecting the best configuration of the intra-domain routing protocols is a computationally intractable optimization problem, even for the simplest of network objectives. In addition, robustness is difficult to achieve because small changes in parameter settings can lead to large changes in the flow of traffic. Other mechanisms, such as queue-management schemes, do not lend themselves to analytical frameworks that guide operators in setting the tunable parameters. A Future Internet architecture could have manageability in mind from the beginning, by having protocols and mechanisms that either adapt on their own to network conditions or present tractable optimization problems to network operators. (A toy sketch of this weight-setting search appears after this list.)
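
As a minimal sketch of the self-similarity analysis mentioned in the first bullet above, the following Python fragment estimates the Hurst parameter of a packet-count series with the aggregated-variance (variance-time) method; the synthetic Poisson trace is an assumption standing in for real measurements, which would simply replace it.

```python
# Aggregated-variance (variance-time) estimate of the Hurst parameter, a common
# first check for self-similarity in a packet-count time series.
# Hypothetical example: the synthetic Poisson trace stands in for real measurements.
import numpy as np

def hurst_aggregated_variance(counts, block_sizes=(1, 2, 4, 8, 16, 32, 64, 128)):
    """Estimate H from the slope of log(var of block means) vs. log(block size).

    For a self-similar process, Var(X^(m)) ~ m^(2H - 2), so a log-log fit with
    slope beta gives H = 1 + beta / 2 (about 0.5 for short-range-dependent traffic).
    """
    counts = np.asarray(counts, dtype=float)
    sizes, variances = [], []
    for m in block_sizes:
        n_blocks = len(counts) // m
        if n_blocks < 2:
            continue
        block_means = counts[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        sizes.append(m)
        variances.append(block_means.var())
    slope = np.polyfit(np.log(sizes), np.log(variances), 1)[0]
    return 1.0 + slope / 2.0

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Poisson counts have no long-range dependence, so H should come out near 0.5;
    # measured Internet traffic has typically yielded H in roughly the 0.7-0.9 range.
    poisson_counts = rng.poisson(lam=100, size=2**14)
    print("Estimated H:", round(hurst_aggregated_variance(poisson_counts), 2))
```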
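
And as a toy illustration of the weight-setting problem in the last bullet, the following sketch hill-climbs over intra-domain link weights to reduce the maximum link utilization; the four-node topology, capacities, and traffic demands are invented for this example, and production traffic-engineering heuristics are considerably more sophisticated.

```python
# A deliberately tiny hill-climbing search over intra-domain link weights.
# Hypothetical example: the topology, capacities, and demands are invented.
import heapq
import random

def shortest_path(weights, src, dst):
    """Dijkstra over directed links; returns the list of links on the path."""
    dist, prev = {src: 0}, {}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for (a, b), w in weights.items():
            if a == u and d + w < dist.get(b, float("inf")):
                dist[b], prev[b] = d + w, a
                heapq.heappush(heap, (d + w, b))
    path, node = [], dst
    while node != src:
        path.append((prev[node], node))
        node = prev[node]
    return path

def max_utilization(weights, capacity, demands):
    """Route each demand on its shortest path and report the worst link load."""
    load = {link: 0.0 for link in capacity}
    for (src, dst), volume in demands.items():
        for link in shortest_path(weights, src, dst):
            load[link] += volume
    return max(load[link] / capacity[link] for link in capacity)

if __name__ == "__main__":
    random.seed(1)
    # Square topology A-B-D-C-A with 10 units of capacity per directed link.
    capacity = {("A", "B"): 10, ("B", "A"): 10, ("B", "D"): 10, ("D", "B"): 10,
                ("A", "C"): 10, ("C", "A"): 10, ("C", "D"): 10, ("D", "C"): 10}
    demands = {("A", "D"): 6, ("B", "D"): 6}
    weights = {link: 1 for link in capacity}
    best = max_utilization(weights, capacity, demands)  # 1.2: both demands share B-D
    # Hill climbing: keep any random single-weight change that lowers the
    # maximum utilization; even this toy instance needs search, hinting at why
    # the general weight-setting problem is intractable.
    for _ in range(200):
        trial = dict(weights)
        trial[random.choice(list(trial))] = random.randint(1, 5)
        util = max_utilization(trial, capacity, demands)
        if util < best:
            weights, best = trial, util
    print("Best maximum link utilization found:", round(best, 2))
```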

Measurement and models have already provided significant insight into the behavior of today's protocols and mechanisms, and their fundamental limitations. The design of a Future Internet offers a rich landscape of research problems, as well as a unique opportunity to create new architectures with measurement and modeling in mind from the beginning.

Opportunities at Community Boundaries

Many of the opportunities for innovation and discovery will happen at the boundaries of traditionally separate research communities. A Future Internet will cut across the networking community (which traditionally considers issues inside the network), the distributed systems community (which traditionally innovates on the design of robust services and applications on top of the network), the mobile and wireless community (which traditionally considers problems at the edge of the network), and the optical communications community (which traditionally develops device technology upon which networks are built).

Wireless is perhaps the most transforming of the current network technologies, with its promise of being "always connected", its potential to provide connectivity without the high cost of fixed wireline infrastructure, and its capability to connect new classes of inexpensive computing devices such as sensors and actuators. But these capabilities challenge the Future Internet to deal with issues of mobility, new forms of routing (in which links are not pre-defined circuits but can be reconfigured in real time), and the problems of links with highly variable capacity.

Distributed systems and applications have traditionally been designed to run "on top of" the Internet, and to take the architecture of the Internet as given. This redesign presents the opportunity to better understand and assess higher-level system requirements, and to use them as drivers of the lower-layer architecture. In this process, mechanisms that are implemented today as part of applications may conceivably migrate into the network itself, and the relevant research communities themselves may blend together and share or exchange research ideas and architectural proposals.

Optical technology has proved itself as the workhorse of high-speed, low-cost circuits that efficiently transmit data over long distances. However, there is an opportunity for optical technology to be used for more than simple point-to-point circuits: circuits through ring and mesh networks can be configured using optical switch hardware managed by the same software as the electronic portion of the network. Even more exciting, new technologies just around the corner will allow the optical fiber bandwidth to be dynamically accessed by edge nodes in a way that is as revolutionary to networking in the core as wireless has been at the edge. However, to realize this potential, the network architecture will have to be redesigned to take the emerging optical capabilities into account. Optical systems will be able to provide highly reconfigurable connections, which implies, for example, changes in the way a Future Internet will do routing. Promising directions in optical system design must be a driver for a Future Internet, and mechanisms to integrate and manage this new technology must be provided in a new Internet architecture.

Broader Interdisciplinary Implications

Beyond looking across boundaries that separate technical sub-communities, this effort will benefit greatly from looking for help from disciplines much farther afield, disciplines as diverse as economics, sociology, and law. For example, a fundamental question facing the design of a Future Internet is how to balance privacy against accountability: to what extent should users be able to remain anonymous as they use the network, and what rights does society have to hold users responsible for their actions? Several engineering design points are possible, but how this tension is resolved is a legal and societal question. Similarly, there are countless economic issues involved in who extracts value from the network, how cost recovery is managed, and how the network provides incentives for desired behavior.