Forwarding plane

From Citizendium
{{subpages}}
In [[routing]], the '''forwarding plane''' defines the part of the [[router]] (or [[bridge (computer network)|bridge]]) architecture that decides what to do with frames arriving on an inbound interface. Most commonly, it refers to a table in which the device looks up the destination address in the incoming packet header (the packet being contained in a frame) and retrieves information telling it the outgoing interface(s) to which the receiving element should send it through the internal '''forwarding fabric''' of the forwarding device.  The [[IP Multimedia Subsystem]] architecture uses the term '''transport plane''' to describe a function roughly equivalent to the routing control plane.  Bridges do not look at the packet they encapsulate, but at destination headers in the frames.


For low to medium throughput requirements, there is one forwarding fabric, orders of magnitude faster than any interface connected to it. The fabric is fast enough to be nonblocking: with all input interfaces running at maximum speed, the fabric will have adequate capacity to transfer all packets at line rate. In extremely high performance applications, such as major Internet service provider routers, a crossbar fabric is used, which provides multiple concurrent paths.
==Forwarding plane in bridging==
Bridges have two levels of bridging table. First, each interface learns the source addresses that are local to the medium connected to it and should ''not'' be forwarded beyond the local network.  Second, a bridge may copy the contents of those local tables, so that if interface A knows that destination 1 is local to interface 2, it sends the frame directly to interface 2 rather than following the default behavior of flooding it out all other interfaces.
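The two-level behavior above can be sketched in a few lines. This is a hypothetical illustration, not code from any real bridge; the class and method names are invented.

```python
# Toy sketch of a learning bridge's forwarding decision (names invented).
class Bridge:
    def __init__(self, interfaces):
        self.interfaces = interfaces
        self.table = {}  # MAC address -> interface it was learned on

    def learn(self, src_mac, iface):
        # First level: note which interface each source address lives on.
        self.table[src_mac] = iface

    def forward(self, dst_mac, in_iface):
        out = self.table.get(dst_mac)
        if out == in_iface:
            return []        # destination is local to the arrival interface
        if out is not None:
            return [out]     # known destination: send on one interface only
        # Unknown destination: flood out all interfaces except the arrival one.
        return [i for i in self.interfaces if i != in_iface]

bridge = Bridge(["A", "B", "C"])
bridge.learn("00:11", "B")               # destination 1 is local to interface B
print(bridge.forward("00:11", "A"))      # -> ['B'] (direct, no flooding)
print(bridge.forward("00:22", "A"))      # -> ['B', 'C'] (flooded)
```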
==Forwarding plane in routing==
In routing, the table also might specify that the packet is discarded, for security or for capacity management. In some cases, the router will return an Internet Control Message Protocol (ICMP) "destination unreachable" or other appropriate code. Some security policies, however, dictate that the router should be programmed to drop the packet silently.  By dropping filtered packets silently, a potential attacker does not become aware of a target that is being protected.


The incoming forwarding element will also decrement the time-to-live (TTL) field of the packet, and, if the new value is zero, discard the packet. While the [[Internet Protocol|IP]] specification indicates that an [[Internet Control Message Protocol|ICMP]] TTL exceeded message should be sent to the originator of the packet (i.e., the node with the source address in the packet), routers may be programmed to drop the packet silently.
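The TTL handling just described can be sketched as follows. The flag and message strings are illustrative only; real routers signal this through their ICMP implementation.

```python
# Minimal sketch of TTL handling at an incoming forwarding element.
# SILENT_DROP models the configurable policy described in the text.
SILENT_DROP = False

def process_ttl(packet):
    packet["ttl"] -= 1                    # decrement on arrival
    if packet["ttl"] <= 0:
        if SILENT_DROP:
            return "drop"                 # policy: discard without notice
        # Default: notify the originator (the source address in the packet).
        return "send ICMP time-exceeded to " + packet["src"]
    return "forward"

print(process_ttl({"ttl": 5, "src": "10.0.0.1"}))   # forwarded, TTL now 4
print(process_ttl({"ttl": 1, "src": "10.0.0.1"}))   # ICMP back to source
```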


Depending on the specific router implementation, the table in which the destination address is looked up could be the [[routing table]] (also known as the routing information base), or a separate [[forwarding information base]] that is populated (i.e., loaded) by the [[control plane]], but used by the forwarding plane to look up packets, at very high speed, and decide how to handle them. Before or after examining the destination, other tables may be consulted to make decisions to drop the packet based on other characteristics, such as the source address, the IP protocol identifier field, or [[Transmission Control Protocol|TCP]] or [[User Datagram Protocol|UDP]] port number.
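A minimal sketch of such a lookup, combining a destination FIB with a filter table keyed on other header fields, might look like the following. All table contents and names here are invented for illustration.

```python
# Sketch: consult a filter table, then do longest-prefix match in a FIB.
import ipaddress

FIB = {  # prefix -> outgoing interface (invented entries)
    ipaddress.ip_network("10.0.0.0/8"): "eth1",
    ipaddress.ip_network("10.1.0.0/16"): "eth2",
}

FILTERS = [  # (protocol, destination port) combinations to drop
    ("tcp", 23),   # e.g. a policy blocking Telnet
]

def decide(dst, proto, port):
    if (proto, port) in FILTERS:
        return "drop"                    # dropped before destination lookup
    addr = ipaddress.ip_address(dst)
    matches = [n for n in FIB if addr in n]
    if not matches:
        return "drop"                    # no route
    # Longest-prefix match: the most specific matching network wins.
    return FIB[max(matches, key=lambda n: n.prefixlen)]

print(decide("10.1.2.3", "udp", 53))   # -> eth2 (the /16 beats the /8)
print(decide("10.1.2.3", "tcp", 23))   # -> drop (filtered)
```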


Forwarding plane functions run in the forwarding element.<ref>[ftp://ftp.rfc-editor.org/in-notes/rfc3746.txt Forwarding and Control Element Separation (ForCES) Framework], RFC 3746, April 2004</ref>  High-performance routers often have multiple distributed forwarding elements, so that the router increases performance with parallel processing.


The outgoing interface will encapsulate the packet in the appropriate data link protocol.  Depending on the router software and its configuration, functions, usually implemented at the outgoing interface, may set various packet fields, such as the DSCP field used by [[differentiated services]].


In general, the passage from the input interface directly to an output interface, through the fabric with minimum modification at the output interface, is called the '''fast path''' of the router. If the packet needs significant processing, such as segmentation or encryption, it may go onto a slower path, which is sometimes called the '''services plane''' of the router. Service planes can make forwarding or processing decisions based on higher-layer information, such as a Web URL contained in the packet payload.


== Issues in router forwarding performance ==


Vendors design router products for specific markets. Design of routers intended for home use, perhaps supporting several PCs and VoIP telephony, is driven by keeping the cost as low as possible. In such a router, there is no separate forwarding fabric, and there is only one active forwarding path: into the main processor and out of the main processor.
Routers for more demanding applications accept greater cost and complexity to get higher throughput in their forwarding planes.


Several design factors affect router forwarding performance:
 
* [[Forwarding information base]] design
* Data link layer processing and extracting the packet
* Decoding the packet header
* Looking up the destination address in the packet header
* Analyzing other fields in the packet
* Sending the packet through the "fabric" interconnecting the ingress and egress interfaces
* Processing and data link encapsulation at the egress interface
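The design factors above can be read as stages of a per-packet pipeline. The toy sketch below is self-contained and invented; every name and data format is illustrative, not from any router implementation.

```python
# Toy per-packet pipeline mirroring the stages listed above.
def forward(frame, fib):
    packet = frame["payload"]       # data link processing: extract the packet
    dst = packet["dst"]             # decode the packet header
    out_iface = fib.get(dst)        # look up the destination address
    if packet.get("ttl", 64) <= 0:  # analyze other fields (here, just TTL)
        return None
    if out_iface is None:
        return None                 # no route: discard
    # "Fabric" transfer: hand the packet to the egress interface, which
    # re-encapsulates it in a fresh data link frame.
    return {"iface": out_iface, "payload": packet}

fib = {"192.0.2.1": "eth1"}
result = forward({"payload": {"dst": "192.0.2.1", "ttl": 10}}, fib)
print(result)  # routed out eth1
```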


Routers may have one or more processors. In a uniprocessor design, these performance parameters are affected not just by the processor speed, but by competition for the processor. Higher-performance routers invariably have multiple processing elements, which may be general-purpose processor chips or specialized [[application-specific integrated circuit]]s (ASIC).


Very high performance products have multiple processing elements on each interface card.  In such designs, the main processor does not participate in forwarding, but only in control plane and management processing.
=== Benchmarking performance ===


In the [[Internet Engineering Task Force]], two working groups in the Operations & Maintenance Area deal with aspects of performance. The Interprovider Performance Measurement (IPPM) group focuses, as its name would suggest, on operational measurement of services.  Performance measurements on single routers, or narrowly defined systems of routers, are the province of the Benchmarking Working Group (BMWG).


RFC 2544 is the key BMWG document.<ref>[http://www.ietf.org/rfc/rfc2544.txt Benchmarking Methodology for Network Interconnect Devices], RFC 2544, [[Scott Bradner|S. Bradner]] & J. McQuaid, March 1999</ref>  A classic RFC 2544 benchmark uses half the ports of the router (i.e., the device under test (DUT)) for input of a defined load, and measures the time at which the outputs appear at the output ports.
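RFC 2544 defines throughput as the highest offered rate the DUT forwards without loss. A binary search over offered rates is a common way to run that trial, though the RFC does not mandate it; the device model below is a stand-in, not a real measurement harness.

```python
# Sketch of an RFC 2544-style throughput search (device model invented).
def device_forwards_without_loss(rate_mpps):
    # Stand-in for offering traffic at rate_mpps and counting received
    # packets; pretend this DUT tops out at 1.48 Mpps.
    return rate_mpps <= 1.48

def throughput(lo=0.0, hi=10.0, steps=30):
    # Binary search for the highest zero-loss offered rate.
    for _ in range(steps):
        mid = (lo + hi) / 2
        if device_forwards_without_loss(mid):
            lo = mid   # no loss: try a higher rate
        else:
            hi = mid   # loss seen: back off
    return lo

print(round(throughput(), 2))  # -> 1.48
```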


== Forwarding information base design ==


Originally, all destinations were looked up in the RIB. Perhaps the first step in speeding routers was to have a separate RIB and FIB in main memory, with the FIB, typically with fewer entries than the RIB, being organized for fast destination lookup. In contrast, the RIB was optimized for efficient updating by routing protocols.
 
Early uniprocessing routers usually organized the FIB as a [[hash table]], while the RIB might be a [[linked list]]. Depending on the implementation, the FIB might have fewer entries than the RIB, or the same number.
 
When routers started to have separate forwarding processors, these processors usually had far less memory than the main processor, such that the forwarding processor could hold only the most frequently used routes. On the early Cisco AGS+ and 7000, for example, the forwarding processor cache could hold approximately 1000 route entries. In an enterprise, this would often work quite well, because there were fewer than 1000 server or other popular destination subnets. Such a cache, however, was far too small for general Internet routing. Different router designs behaved in different ways when a destination was not in the cache.
 
=== Cache miss issues ===
 
A '''cache miss''' condition might result in the packet being sent back to the main processor, to be looked up in a '''slow path''' that had access to the full routing table. Depending on the router design, a cache miss might cause an update to the fast hardware cache or the fast cache in main memory. In some designs, it was most efficient to invalidate the fast cache for a cache miss, send the packet that caused the cache miss through the main processor, and then repopulate the cache with a new table that included the destination that caused the miss. This approach is similar to an operating system with virtual memory, which keeps the most recently used information in hardware memory.
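The fast-cache-over-full-table behavior just described can be sketched as follows; the cache size, table contents, and names are invented for illustration.

```python
# Sketch of a small fast route cache backed by a full table (slow path).
from collections import OrderedDict

FULL_TABLE = {"10.0.0.0": "eth1", "10.1.0.0": "eth2", "10.2.0.0": "eth3"}
CACHE_SIZE = 2
cache = OrderedDict()   # most recently used route entries

def lookup(dst):
    if dst in cache:
        cache.move_to_end(dst)          # fast path: cache hit
        return cache[dst], "fast"
    out = FULL_TABLE.get(dst)           # slow path: full routing table
    if out is not None:
        cache[dst] = out                # repopulate the cache
        if len(cache) > CACHE_SIZE:
            cache.popitem(last=False)   # evict the least recently used entry
    return out, "slow"

print(lookup("10.0.0.0"))  # ('eth1', 'slow') - miss, resolved on slow path
print(lookup("10.0.0.0"))  # ('eth1', 'fast') - now cached
```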
 
As memory costs went down and performance needs went up, FIBs emerged that had the same number of route entries as in the RIB, but arranged for fast lookup rather than fast update. Whenever a RIB entry changed, the router changed the corresponding FIB entry.
 
=== FIB design alternatives ===
 
High-performance FIBs achieve their speed with implementation-specific combinations of specialized algorithms and hardware.
 
==== Software ====
 
Various [[search algorithm|search algorithms]] have been used for FIB lookup.  While well-known general-purpose data structures such as [[hash tables]] were used first, specialized algorithms optimized for IP addresses emerged. They include:
 
* Binary [[trie]]
* [[Radix tree]]
* Four-way trie
* [[Patricia tree]] <ref>[http://portal.acm.org/citation.cfm?id=227248.227256 Routing on Longest Matching Prefixes], W. Doeringer ''et al.'', IEEE/ACM Transactions on Networking, February 1996</ref>
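The simplest of these, the binary trie, consumes one address bit per level; the entry at the deepest matching node wins. A minimal sketch (dictionary-based, purely illustrative):

```python
# Minimal binary trie for longest-prefix match over bit strings.
def insert(trie, prefix_bits, next_hop):
    node = trie
    for b in prefix_bits:
        node = node.setdefault(b, {})   # one bit per trie level
    node["nh"] = next_hop

def longest_match(trie, addr_bits):
    node, best = trie, trie.get("nh")
    for b in addr_bits:
        node = node.get(b)
        if node is None:
            break                        # no deeper branch: stop descending
        if "nh" in node:
            best = node["nh"]            # deeper (longer) match: remember it
    return best

fib = {}
insert(fib, "0000", "eth1")     # a short prefix
insert(fib, "000011", "eth2")   # a more specific prefix under it
print(longest_match(fib, "00001100"))  # -> eth2 (deepest match wins)
print(longest_match(fib, "00000000"))  # -> eth1
```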
 
==== Hardware ====
 
Various forms of fast RAM and, eventually, basic [[content addressable memory]] (CAM) were used to speed lookup. CAM, while useful in layer 2 switches that needed to look up a relatively small number of fixed-length [[MAC]] addresses, had limited utility with IP addresses having variable-length routing prefixes (see [[Classless Inter-Domain Routing]]).  Ternary CAM (TCAM), while expensive, lends itself to variable-length prefix lookups <ref>[http://www.hoti.org/archive/hoti10/program/liu_tcam.pdf Efficient Mapping of Range Classifier into Ternary-CAM], H. Liu, IEEE Symposium on High-Speed Interconnects, August 2002</ref>.
 
One of the challenges of forwarder lookup design is to minimize the amount of specialized memory needed and, increasingly, to minimize the power consumed by memory <ref>[http://www.hoti.org/archive/hoti10/program/Sharma_power.pdf Reducing TCAM Power Consumption and Increasing Throughput], R. Panigrahy & S. Sharma, IEEE Symposium on High-Speed Interconnects, August 2002</ref>.
 
== Distributed forwarding ==


A next step in speeding routers was to have a specialized forwarding processor separate from the main processor. There was still a single path, but forwarding no longer had to compete with control in a single processor.  The fast routing processor typically had a small FIB, with hardware memory (e.g., [[static random access memory]] (SRAM)) faster and more expensive than the FIB in main memory. Main memory was generally [[dynamic random access memory]] (DRAM).
In bridging, a subset of distributed forwarding came very early: the interface-level table of source MAC addresses that are local to that interface and should ''not'' be forwarded.


=== Early distributed forwarding ===


Next, routers began to have multiple forwarding elements that communicated through a high-speed '''shared bus'''<ref>[http://www.isi.edu/touch/pubs/isi-lanman98abs.pdf High Performance IP Forwarding Using Host Interface Peering], J. Touch ''et al.'', Proc. 9th IEEE Workshop on Local and Metropolitan Area Networks (LANMAN), May 1998</ref> or through '''shared memory'''<ref>[http://www.cs.ucr.edu/~bhuyan/papers/tpds.ps Shared Memory Multiprocessor Architectures for Software IP Routers], Y. Luo ''et al.'', IEEE Transactions on Parallel and Distributed Systems, 2003</ref>. Cisco used shared busses until they saturated, while Juniper preferred shared memory.<ref>[http://www.informit.com/articles/article.aspx?p=30631 Juniper Networks Router Architecture], ''Juniper Networks Reference Guide: JUNOS Routing, Configuration, and Architecture'', T. Thomas, Addison-Wesley Professional, 2003</ref>


Each forwarding element had its own FIB. See, for example, the Versatile Interface Processor on the Cisco 7500.<ref>[http://safari.ciscopress.com/1578701813/ch06 Hardware Architecture of the Cisco 7500 Router], ''Inside Cisco IOS Software Architecture (CCIE Professional Development)'', V. Bollapragada ''et al.'', Cisco Press, 2000</ref>


Eventually, the shared resource became a bottleneck, with the limit of shared bus speed being roughly 2 million packets per second (Mpps).  Crossbar fabrics broke through this bottleneck.
=== Shared paths become bottlenecks ===
As forwarding bandwidth increased, even with the elimination of cache miss overhead, the shared paths limited throughput. While a router might have 16 forwarding engines, if there was a single bus, only one packet transfer at a time was possible. There were some special cases where a forwarding engine might find that the output interface was one of the logical or physical interfaces present on the forwarder card, such that the packet flow was totally inside the forwarder. It was often easier, however, even in this special case, to send the packet out the bus and receive it from the bus.


While some designs experimented with multiple shared busses, the eventual approach was to adapt the [[crossbar switch]] model from telephone switches, in which every forwarding engine had a hardware path to every other forwarding engine. With a small number of forwarding engines, crossbar forwarding fabrics are practical and efficient for high-performance routing.  There are multistage designs for crossbar systems, such as [[Clos networks]].


== References ==


{{reflist|2}}

[[Category:Suggestion Bot Tag]]

Latest revision as of 06:00, 18 August 2024