A Study on the Standardization of Future Internet Architecture

Release Date: 2010-06-09    Authors: He Baohong, Zhu Gang

 

This work was funded by National High-tech R&D Program of China (863 Program) under Grant No. 2008AA01A301.

 

1 Problems with Existing Internet Architecture
To facilitate the concept of “participation by everyone”, the Internet adopts a transparent end-to-end architecture[1]. In such an architecture, the intelligence of terminals is used to produce diverse information, while the network simply does its best to transmit this information without any change or control. This design principle, commonly known as “intelligent terminal plus dumb network”, aims to simplify network functions and hand complex information processing and control over to the terminal nodes (including servers and users). In this way, users are afforded greater autonomy and more room for innovation, and everybody is able to participate in the development of the Internet.


    In the progression from laboratories and research institutes to commercialization, the application scenarios of the Internet have undergone an enormous change, and architecture-related problems have begun to emerge. These problems challenge its core end-to-end transparent design. The basic assumption that Internet users are self-disciplined and can trust each other is no longer justified. Instead, the Internet is often a source of attacks, virus spreading, and malicious information propagation. The Internet’s role has also changed, from that of a non-commercial trial network used primarily for research, to an important part of the national information infrastructure, and it continues to penetrate every sector of the national economy. The large-scale application of the Internet has brought about security, business model, and Quality of Service (QoS) problems that are bottlenecks to its sustainable and healthy development. Finally, there are no effective benefit allocation and coordination mechanisms for all participants in the industry chain, and this hinders the further development of Internet services.


    As application scenarios of the Internet change, the problem of poor controllability in existing Internet architecture becomes increasingly acute. This is reflected in many aspects, including network security, QoS, scalability, and business model. In terms of network security, Internet architecture enables the separation of end-to-end services from bearer networks. The network provides a unified IP interface for upper-layer applications, leaving all control capabilities and security responsibilities to users at the edge of the network. The network neither senses nor limits upper-layer applications, and it has no reward and penalty mechanism, a condition that leads to uncontrollable user behavior and high tracing costs. As for QoS, the Internet lacks the necessary resource control and management mechanisms, and can only do its best to deliver services. Improvement in QoS depends largely on an increase in network resources and the self-discipline of users. In fact, the Internet makes no QoS commitments to upper-layer services and applications. Consequently, applications with strict real-time requirements cannot be widely promoted over the Internet. Dynamic routing in the Internet also makes QoS improvement more difficult.


2 Concept and Evolution of the Future Internet
The future Internet may change in many ways, but the fundamental concept of participation by everyone should be maintained and respected; otherwise, the Internet will lose its driving force and orientation and become something else. Sticking to this core principle means not treating as absolute dogma the “spirits” that derive from it, such as freedom, equality, openness, and innovation[2].


    The future Internet should adopt end-to-end transparent architecture within certain constraints; that is, conditional end-to-end transparency. On the premise that everybody can participate in development and innovation, management and control mechanisms that are transparent to users and that restrain unruly user behavior should be embedded into the Internet. These mechanisms should also balance the duties and interests of all parties in the industry chain. In light of the Internet’s large scale and its infinite, interactive, public, and autonomous attributes, the future Internet must provide users with a trustworthy, high-QoS, ubiquitous, and harmonious virtual experience.


    Today, the evolution from the existing Internet to the future Internet may follow one of three routes: reformative, integrated, or revolutionary.


    The “reformative” route takes advantage of the massive amount of information already carried on the Internet and employs new technologies to mend what already exists. Such technologies include address translation, resource control, and security monitoring, and they are used to solve existing Internet architecture problems in order to meet the increasing demands of social applications. IPv6, which retains the original transparent end-to-end Internet architecture, can be regarded as a representative technology of this route.


    Researchers who advocate the revolutionary route believe the existing Internet is unsuitable as a future information infrastructure, and that patchy or systematic mends will only increase the burden on it. Therefore, a new Internet has to be developed for the long term. New Internet architecture has gradually come to the forefront of worldwide research. Various new ideas and technologies have emerged—for instance, the Forwarding Directive, Association, and Rendezvous Architecture (FARA) model proposed by the Massachusetts Institute of Technology; the I3 project of the University of California; and the Global Environment for Network Innovations (GENI) project supported by the US National Science Foundation[3]. Yet which technologies will be used, and whether new technologies will need further integration, remains unclear.


    Researchers favoring the integrated route seek a compromise between the reformative and revolutionary routes. They tend to think that patchy mends are insufficient for solving existing problems, and that a revolutionary approach will take an excessively long time to implement. The integrated route therefore mends the Internet in a systematic, large-scale, and overall way. By nature, the Internet is a kind of overlay network. Based on existing Internet architecture, as well as the idea of an overlay network, an intermediate layer can be added between the carrier and application layers. This added layer implements the individual functions of the carrier layer as well as the common functions of the application layer. Hence, the Internet can develop in a healthy and sustainable way on the basis of the existing infrastructure.
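The overlay idea above can be made concrete with a toy sketch: an intermediate layer maps each logical (overlay) link onto a path through the underlying carrier network. The topologies and node names below are illustrative examples, not part of any actual proposal.

```python
from collections import deque

# Physical carrier-layer adjacency (made up for illustration)
underlay = {
    "A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"],
}

# One logical hop in the overlay: A talks "directly" to D
overlay_links = {("A", "D")}

def underlay_path(src, dst):
    """Breadth-first search for the carrier-layer path that realizes an overlay link."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in underlay[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# The single overlay hop A-D is realized by a multi-hop underlay path
print(underlay_path("A", "D"))  # ['A', 'B', 'C', 'D']
```

The point of the sketch is the separation of concerns: the overlay layer sees only logical links, while the mapping onto physical paths is hidden inside the intermediate layer, which is exactly where the integrated route proposes to add common functions.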


3 Research and Standardization of Future Networks

 

3.1 Typical Research Projects
Future network architecture faces several developmental choices; for instance, virtualization networks, automatic networks, hierarchically switched networks, high-performance networks, trustable networks, long-distance low-consumption networks, and high-bandwidth long-delay networks. Many countries are actively engaged in policy making and investment in these networks, and are trying to take the lead in next-generation Internet research. Typical research projects include PlanetLab[4] in the United States, 4WARD under the EU’s Seventh Framework Programme (FP7), AKARI[5] in Japan, and the Chinese Public Telecom Data Network (PTDN) project.


    PlanetLab started in 2002 and is an open testbed for developing, deploying, and accessing global services. As of January 2009, the platform had 474 sites and 950 nodes distributed across more than 40 countries. On February 10, 2009, its slice-based architecture was released, which defines interactive interfaces and data types.

    
    Launched in January 2008, the 4WARD project is a representative EU network research project. It employs a strategy of “walking on two legs”: on the one hand, it tries to innovate its way around the shortcomings of current communication network architecture; on the other, it seeks an overall framework that allows several network architectures to interoperate, avoiding pitfalls such as the current Internet’s “patch on patch” approach.


    The Japanese AKARI Architecture Design Project was launched in 2006. By 2015, it intends to have developed a new network architecture and created a network design based on that architecture. The project team has studied many technical solutions over the past four years, and the design of the architecture diagram is scheduled for completion in 2010.


    In 2003, the Chinese Academy of Telecommunication Research proposed the Public Telecom Data Network (PTDN). Research and development of core routing devices, edge routing devices, an address mapping system, and an address translation system have been undertaken, and interoperation and interconnection with several manufacturers’ devices have been achieved. The PTDN standard has also been submitted to ITU-T Study Group 13 (SG13) and has been accepted as a candidate solution for the Future Packet Based Network (FPBN) in Next Generation Network (NGN) architecture. Moreover, three ITU recommendations have been approved: ITU-T Y.2601, Y.2611, and Y.2612.

 

3.2 Standardization Efforts of the International Standardization Organization
In the evolution to the future Internet, the ITU has tended to pursue the revolutionary route. SG13 is the primary group responsible for future network research. Recently, it established a future network team focused on the vision, demands, new technologies, timetable, and standardization of future networks. So far, SG13 has convened two meetings, attended by research institutes from Europe, China, Japan, South Korea, and other countries. Future networks such as virtualization networks, automatic networks, and energy-saving networks have been discussed. The main work of SG13, however, still rests on collecting the design principles, concepts, demands, and technical features of future networks, which are still far from being standardized.


    In contrast, the Internet Engineering Task Force (IETF) has made great inroads into the research and standardization of reformative and integrated strategies. It has introduced the next-generation Internet protocol, IPv6, to solve the address scalability problem, and has studied next-generation routing and addressing frameworks to solve the routing scalability problem (where there are two main approaches: ID/Locator split[6-7] and Map/Encaps[8-9]). It has also researched a next-generation Domain Name System (DNS) based on Peer-to-Peer (P2P) distributed domain name services (in order to solve DNS overload and security problems), and has developed Multipath TCP to achieve high network throughput.
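The Map/Encaps approach[8] can be illustrated with a minimal sketch of the idea behind LISP: endpoint identifiers (EIDs) are looked up in a mapping system that returns routing locators (RLOCs), and packets are encapsulated so the core routes only on locators. The addresses, dictionary-based mapping system, and packet structure below are simplifications for illustration, not the real protocol.

```python
# Hypothetical mapping database: EID -> RLOC of the site's border router.
# Only locators appear in the core routing tables, which is what keeps
# them from growing with the number of edge identifiers.
MAPPING_SYSTEM = {
    "198.51.100.7": "203.0.113.1",
    "198.51.100.9": "203.0.113.2",
}

def encapsulate(packet: dict) -> dict:
    """Ingress border router: wrap an EID-addressed packet in a
    locator-addressed outer header after a mapping lookup."""
    rloc = MAPPING_SYSTEM[packet["dst_eid"]]
    return {"outer_dst": rloc, "inner": packet}

def decapsulate(outer: dict) -> dict:
    """Egress border router: strip the outer header and deliver
    the original packet on the EID."""
    return outer["inner"]

pkt = {"dst_eid": "198.51.100.7", "payload": "hello"}
wrapped = encapsulate(pkt)
print(wrapped["outer_dst"])        # 203.0.113.1
print(decapsulate(wrapped) == pkt) # True
```

The ID/Locator split[6-7] shares the same separation of "who" from "where" but moves it into the host stack rather than into border-router encapsulation.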


    The World Wide Web Consortium (W3C) currently plays a leading role in developing the principles and protocols of the semantic web. The semantic web stack comprises seven layers, from bottom to top: identifiers and character set, Extensible Markup Language (XML) syntax, Resource Description Framework (RDF), ontology, unifying logic, proof, and trust. The standards and specifications for the first four layers have been released. At present, W3C is focused on the research of new RDF-based tools and languages as well as the development of new applications.
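The RDF layer of the stack above rests on a simple data model: knowledge is expressed as (subject, predicate, object) triples, each element identified by a URI (or, for objects, a literal value). A minimal sketch, using plain Python tuples and made-up example URIs rather than any real vocabulary:

```python
# A tiny RDF-style graph as a set of (subject, predicate, object) triples.
# The example.org URIs are illustrative placeholders.
triples = {
    ("http://example.org/alice", "http://example.org/knows", "http://example.org/bob"),
    ("http://example.org/alice", "http://example.org/name",  "Alice"),
    ("http://example.org/bob",   "http://example.org/name",  "Bob"),
}

def objects(subject: str, predicate: str) -> set:
    """Basic graph query: all objects for a given subject/predicate pair."""
    return {o for s, p, o in triples if s == subject and p == predicate}

print(objects("http://example.org/alice", "http://example.org/knows"))
# {'http://example.org/bob'}
```

Because every statement has this uniform shape, graphs from independent sources can be merged by simple set union, which is what makes RDF suitable as a common foundation for the higher ontology, logic, and proof layers.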


    In addition to the ITU, IETF, and W3C, other international standardization organizations like ISO have also undertaken research into the standardization of future network architecture.


4 Conclusions
Although several international organizations have studied the standardization of future network architecture, the standardization of next generation Internet architecture still faces many challenges. It may take 10 to 20 years to complete the standards of the next generation Internet and put them into practice. 

 
    Standardization organizations have tended to go their own way rather than coordinate with each other. The ITU-T favors the revolutionary route and the development of a new architecture standard. In contrast, the IETF is relatively “realistic” and focused on the short term. It prefers improving existing Internet architecture to planning long-term solutions, and attempts to work out new standards for existing problems.
    Each organization also has its own understanding of network architecture. W3C regards the semantic web as the next-generation Internet, so it approaches network architecture more from the perspective of the application layer; ITU-T and other organizations understand the architecture from the point of view of the bearer network.


    Finally, organizations adopt different attitudes toward the inheritance of existing technologies. The IETF has acknowledged that a fatal flaw of IPv6 is its failure to be backward compatible with existing IPv4. The ITU-T, however, argues that it may be unnecessary to take compatibility with existing networks into account in the innovative design of future network architecture. So far, on the critical question of whether (or to what extent) the future Internet will be compatible with existing IPv4 and IPv6, and what lessons should be learned from IPv6, little study has been conducted and no consensus has been reached.

 

References
[1] 何宝宏. 互联网“端到端透明”面临挑战 [N]. 人民邮电, 2006-02-13.
HE Baohong. Challenges Facing “End-to-End Transparence” of the Internet [N]. People’s Posts and Telecommunications News, 2006-02-13.
[2] 信息产业部电信研究院. 互联网技术发展白皮书 第一卷: 发展脉络与体系架构 [J]. 世界电信, 2007, 20(7): 8-13.
China Academy of Telecommunication Research (CATR) of MII. Internet Technology Development White Paper Volume 1: History of Development and System Framework [J]. World Telecommunications, 2007, 20(7): 8-13.
[3] Plain Text-GENI: geni-Trac [EB/OL]. [2009-02-13]. http://svn.planet-lab.org/attachment/wiki/GeniWrapper/sfa.pdf.
[4] PlanetLab [EB/OL]. [2009-05-20]. http://www.planet-lab.org/.
[5] AKARI [EB/OL]. [2009-09-30]. http://akari-project.nict.go.jp/.
[6] MOSKOWITZ R, NIKANDER P. Host Identity Protocol (HIP) Architecture [R]. IETF RFC 4423. 2006.
[7] XU Xiaohu, GUO Dayong. Hierarchical Routing Architecture (HRA) [C]//Proceedings of Next Generation Internet Networks (NGI’08), Apr 28-30, 2008, Krakow, Poland. Piscataway, NJ, USA: IEEE, 2008: 92-99.
[8] FARINACCI D, ORAN D, FULLER V, et al. Locator/ID Separation Protocol (LISP) [R]. IETF Network Working Group. Internet Draft draft-farinacci-lisp-10, 2008.
[9] FRANCIS P, XU X, BALLANI H. FIB Suppression with Virtual Aggregation and Default Routes [R]. IETF Network Working Group. Internet Draft draft-ietf-grow-va-00, September 2008.

[Abstract] Due to great changes in the application environment of the Internet, current Internet architecture, with "end-to-end transparency" as its principle, is facing challenges such as security, scalability, and Quality of Service (QoS). This paper introduces the design principles and concepts, evolutionary strategies and research status of future Internet architecture. It also analyzes problems and challenges in the process of its standardization, and discusses three evolutionary routes: reformative, integrated, and revolutionary.