Multilayer Design: The hierarchical approach we are going to use is called the "multilayer design." The multilayer design is modular, and capacity scales as building blocks are added. Intelligent Layer 3 services reduce the scope of many typical problems caused by misconfigured or malfunctioning equipment, and intelligent Layer 3 routing protocols such as Open Shortest Path First (OSPF) and Enhanced Interior Gateway Routing Protocol (EIGRP) handle load balancing and fast convergence. The multilayer model makes migration easier because it preserves the existing addressing plan of campus networks based on routers and hubs. Redundancy and fast convergence to the wiring closet are provided by the Hot Standby Router Protocol (HSRP). Bandwidth scales from Fast Ethernet to Fast EtherChannel and from Gigabit Ethernet to Gigabit EtherChannel.

The performance advantages of Layer 2 switching led to network designs that emphasized it heavily. These designs are characterized as "flat" because they avoid the logical, hierarchical structure and summarization provided by routers. Campus-wide virtual LANs (VLANs) are also based on the flat design model. Layer 3 switching provides the same advantages as routing in campus network design, with the added performance boost of packet forwarding handled by specialized hardware. Placing Layer 3 switching in the distribution layer and backbone of the campus segments the campus into smaller, more manageable pieces, while important multilayer services such as broadcast suppression and protocol filtering are applied at the Layer 2 switches in the access layer. The multilayer approach thus combines Layer 2 switching with Layer 3 switching to achieve robust, highly available campus networks.
It is helpful to analyze campus network designs in the following ways:
Failure Domain A group of Layer 2 switches connected together is called a Layer 2 switched domain. A Layer 2 switched domain can be considered a failure domain, because a misconfigured or malfunctioning workstation can introduce errors that impact or disable the entire domain. A jabbering network interface card (NIC) may flood the entire domain with broadcasts. A workstation with the wrong IP address can become a black hole for packets. Problems of this nature are difficult to localize.
The scope of the failure domain is reduced by restricting it to a single Layer 2 switch in one wiring closet. To do this, the deployment of VLANs and VLAN trunking is restricted so that one VLAN (one IP subnet) maps to one wiring-closet switch. The gigabit uplinks from each wiring-closet switch connect directly to routed interfaces on Layer 3 switches.
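As an illustrative sketch (interface numbers, VLAN numbers, and addresses are hypothetical), the distribution-side end of such an uplink terminates on a routed interface rather than a VLAN trunk:

```
! Distribution Layer 3 switch: one routed interface per wiring-closet uplink
interface GigabitEthernet1/1
 description Uplink from wiring-closet switch (VLAN 10, subnet 10.1.10.0/24)
 no switchport                       ! routed interface, not a VLAN trunk
 ip address 10.1.10.1 255.255.255.0
```

Because the uplink is routed, a fault in the wiring-closet switch's Layer 2 domain cannot propagate past this interface.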
Broadcast Domain Media Access Control (MAC)-layer broadcasts flood throughout the Layer 2 switched domain. Layer 3 switching is purposely used in the structured design to reduce the scope of broadcast domains. In addition, intelligent, protocol-aware features of Layer 3 switches further contain broadcasts such as Dynamic Host Configuration Protocol (DHCP) requests by converting them into directed unicasts.
Spanning-Tree Domain Layer 2 switches run the spanning-tree protocol to break loops in the Layer 2 topology. If loops are included in the Layer 2 design, the redundant links are put in blocking mode and do not forward traffic. We have therefore avoided Layer 2 loops by design and instead let the Layer 3 protocols handle load balancing and redundancy, so that all links carry traffic. The spanning-tree domain is kept simple and loops are avoided. With loops in the Layer 2 topology, the spanning-tree protocol takes between 30 and 50 seconds to converge, so avoiding loops is especially important in the mission-critical parts of the network, such as the campus backbone. To prevent spanning-tree convergence events in the campus backbone, we have ensured that all links connecting backbone switches are routed links, not VLAN trunks. Layer 3 switching is used in the structured design to reduce the scope of spanning-tree domains, and a Layer 3 routing protocol, such as Enhanced IGRP or OSPF, then handles load balancing, redundancy, and recovery in the backbone.
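A hedged sketch (hypothetical addressing and process numbers) of a backbone link configured as a routed link rather than a VLAN trunk, with OSPF providing the redundancy and load balancing:

```
! Backbone Layer 3 switch: routed point-to-point link to a peer backbone switch
interface GigabitEthernet2/1
 no switchport                        ! no VLAN trunking, so no spanning tree here
 ip address 10.0.0.1 255.255.255.252
!
router ospf 100
 network 10.0.0.0 0.0.255.255 area 0  ! OSPF handles redundancy and load balancing
```

With both backbone links routed this way, recovery after a link failure is governed by OSPF convergence rather than spanning-tree timers.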
Virtual LAN A VLAN is an extended Layer 2 switched domain, and it has the same characteristics of a failure domain, broadcast domain, and spanning-tree domain described above. So although VLANs can be used to segment the campus network logically, deploying pervasive VLANs throughout the campus adds complexity. Avoiding loops and restricting one VLAN to a single Layer 2 switch in one wiring closet minimizes this complexity. One of the motivations in the development of VLAN technology was to take advantage of high-speed Layer 2 switching; with the advent of high-performance Layer 3 switching in hardware, the use of VLANs is no longer related to performance. A VLAN can be used to logically associate a workgroup with a common access policy as defined by access control lists (ACLs). Similarly, VLANs can be used within a server farm to associate a group of servers with a common access policy as defined by ACLs.
Policy Domain Access policy is usually defined on the routers or Layer 3 switches in the campus intranet. A convenient way to define policy is with ACLs that apply to an IP subnet. Thus, a group of servers with similar access policies can be conveniently grouped together in the same IP subnet and the same VLAN. Other services, such as DHCP, are defined on an IP subnet basis. A useful new feature of the Catalyst® 6000 family of products is the VLAN access control list (VACL). A Catalyst 6000 or Catalyst 6500 can use conventional ACLs as well as VACLs. A VACL provides granular policy control applied between stations within a VLAN.
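For example, a per-subnet access policy for a server-farm VLAN might be expressed with a conventional ACL on the Layer 3 switch (addresses, VLAN number, and the permitted service are illustrative):

```
! Permit only web traffic into the server-farm subnet 10.2.20.0/24
access-list 110 permit tcp any 10.2.20.0 0.0.0.255 eq www
access-list 110 deny   ip  any 10.2.20.0 0.0.0.255
!
interface Vlan20
 description Server-farm policy domain (one VLAN = one subnet = one policy)
 ip address 10.2.20.1 255.255.255.0
 ip access-group 110 out             ! applied to traffic routed toward the servers
```

A VACL would additionally let the switch filter traffic between two servers inside VLAN 20, which a routed ACL like this one never sees.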
Why the Multilayer Model (Hierarchical Model)? The multilayer campus design consists of a number of building blocks connected across a campus backbone. Note the three characteristic layers: access, distribution, and core. In the most general model, Layer 2 switching is used in the access layer and Layer 3 switching in both the distribution layer and the core. One advantage of the multilayer campus design is scalability: new buildings and server farms can be easily added without changing the design. The redundancy of the building block is extended with redundancy in the backbone. If a separate backbone layer is configured, it should always consist of at least two separate switches, and ideally these switches should be located in different buildings to maximize the redundancy benefits. The multilayer campus design takes maximum advantage of many Layer 3 services, including segmentation, load balancing, and failure recovery. IP multicast traffic is handled by Protocol Independent Multicast (PIM) routing in all the Layer 3 switches. Access lists are applied at the distribution layer for granular policy control. Broadcasts are kept off the campus backbone, and protocol-aware features such as DHCP forwarding convert broadcasts to unicasts before packets leave the building block.
Hierarchical models enable you to design internetworks in layers. To understand the importance of layering, consider the Open System Interconnection (OSI) reference model, which is a layered model for implementing computer communications. By using layers, the OSI model simplifies the tasks required for two computers to communicate. Hierarchical models for internetwork design likewise use layers to simplify the tasks required for internetworking. Each layer can be focused on specific functions, allowing you to choose the right systems and features for each layer. Hierarchical models apply to both LAN and WAN design. The many benefits of using hierarchical models for your network design include the following:
- Cost savings
- Ease of understanding
- Easy network growth
- Improved fault isolation
The modular nature of the model enables appropriate use of bandwidth within each layer of the hierarchy, reducing wasted capacity.
Keeping each design element simple and small facilitates ease of understanding, which helps control training and staff costs.
In a network design, modularity allows you to create design elements that can be replicated as the network grows, facilitating easy network growth. As each element in the network design requires change, the cost and complexity of making the upgrade are contained to a small subset of the overall network. In large, flat, or meshed network architectures, changes tend to impact a large number of systems.
Improved fault isolation is facilitated by structuring the network into small, easy-to-understand elements. Network managers can easily understand the transition points in the network, which helps identify failure points.
As the figure below illustrates, a hierarchical network design has three layers:
- The core layer provides optimal transport between sites.
- The distribution layer provides policy-based connectivity.
- The access layer provides workgroup/user access to the network.
Core Layer:
The core layer is the high-speed switching backbone of the network, which is crucial to enable corporate communications. The core layer should have the following characteristics:
- Offer high reliability
- Provide redundancy
- Provide fault tolerance
- Adapt to changes quickly
- Offer low latency and good manageability
- Avoid slow packet manipulation caused by filters or other processes
- Have a limited and consistent diameter
Distribution Layer:
The distribution layer of the network is the demarcation point between the access and core layers of the network. The distribution layer can have many roles, including implementing the following functions:
- Policy (for example, to ensure that traffic sent from a particular network should be forwarded out one interface, while all other traffic should be forwarded out another interface)
- Security
- Address or area aggregation or summarization
- Departmental or workgroup access
- Broadcast/multicast domain definition
- Routing between virtual LANs (VLANs)
- Media translations (for example, between Ethernet and Token Ring)
- Redistribution between routing domains (for example, between two different routing protocols)
- Demarcation between static and dynamic routing protocols
Several Cisco IOS software features can be used to implement policy at the distribution layer, including the following:
- Filtering by source or destination address
- Filtering on input or output ports
- Hiding internal network numbers by route filtering
- Static routing
- Quality of service mechanisms (for example, to ensure that all devices along a path can accommodate the requested parameters)
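As a hedged sketch of two of these mechanisms, a distribution-layer switch might filter by source address and hide an internal network number from routing updates as follows (addresses, interface numbers, and the EIGRP process number are illustrative):

```
! Filter by source address: keep a lab subnet inside the building
access-list 101 deny   ip 10.3.99.0 0.0.0.255 any
access-list 101 permit ip any any
!
interface GigabitEthernet0/2
 ip access-group 101 in
!
! Hide an internal network number from routing updates sent upstream
access-list 10 deny   10.3.99.0 0.0.0.255
access-list 10 permit any
!
router eigrp 100
 distribute-list 10 out GigabitEthernet0/3
```

The route filter means upstream routers never learn the hidden subnet, complementing the packet filter applied at the interface.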
Access Layer:
The access layer provides user access to local segments on the network. The access layer is characterized by switched and shared bandwidth LANs in a campus environment. Micro-segmentation, using LAN switches, provides high bandwidth to workgroups by dividing collision domains on Ethernet segments and reducing the number of stations capturing the token on Token Ring LANs.
Campus Backbone Design:
We have considered five different backbone designs with different performance and scalability characteristics, and we have chosen the one we considered most suitable after briefly analyzing the other four. In our project work, the term 'backbone' is used to represent the switches and links in the core of the network through which all other traffic passes on its way from client to server. A building design (module) is applied to each campus of the University, and these are interconnected using a campus backbone to form the campus design for the entire University. Such a design achieves scalability, thereby allowing Middlesex University to cater for its future expansion needs; it also permits new buildings or server farms to be added without changing the design.
The first three approaches have no switches in the core and are suitable only for small designs.
The first one we considered is the collapsed backbone (used in small campus designs); its problem is that the Layer 3 switches in the backbone must maintain Address Resolution Protocol (ARP) entries for every active networked device in the campus, and excessive ARP activity is CPU-intensive and can affect overall backbone performance. The second one is the partial mesh (another consideration for small campus designs); its drawback is that traffic between client modules requires three logical hops through the backbone. In effect, the Layer 3 switches at the server-farm side become a collapsed backbone for any client-to-client traffic.
The third one is the full-mesh backbone (for small campus designs); this design is ideal for connecting two or three modules together. However, as more modules are added, the number of links required to maintain a full mesh rises roughly as the square of the number of modules (n modules require n(n-1)/2 links). As the number of links increases, the number of subnets and routing peers also grows, and the complexity rises. The fourth one is the Layer 2 switched backbone, which is appropriate for a larger campus with three or more buildings to be connected. Adding switches in the backbone reduces the number of connections and makes it easier to add additional modules. It is easy to avoid spanning-tree protocol loops with just two Layer 2 switches in the backbone; however, this restriction limits the ultimate scalability of the Layer 2 backbone design. Another limitation is that all broadcasts and multicasts flood the backbone.
The last one we considered is the Layer 3 switched backbone, in which the backbone makes use of Layer 3 switches only. The backbone switches are connected to each other using routed Gigabit Ethernet links.
With respect to Middlesex University, this approach seems suitable as it provides:
- Scalability: expansion needs are taken into consideration with this design; modules or building blocks can be added with ease.
- Flexibility of the topology: no spanning-tree loops in the core, since Layer 3 switches are being used.
- Control of multicast and broadcast traffic in the backbone.
- Reduced latency, as there is reduced router peering.
As an alternative to the above design, which uses a single path from each distribution switch to the core, dual paths from the distribution switches to the backbone switches can be used. This provides fast recovery from link failures, because equal-cost paths to every destination network are maintained. It also doubles the trunking capacity into the backbone.
Server Farm Design:
In the current design, as mentioned earlier, servers are accessed in a distributed fashion as there are servers located on each of the different campuses.
In the design proposed for the server farm, servers will be accessed in a centralised fashion. The server farm will be a module (building block) attached to the campus backbone. The links connecting the server farm to the core can be Gigabit Ethernet links, though Gigabit EtherChannel would be advisable, as server farms are concentration points for traffic coming from the different campuses.
Since in the case of Middlesex University the server farm is a critical building block, it is proposed that servers be dual-homed to the Layer 2 switches. One interface on each server will be active while the other is on hot standby, thereby providing resilience.
Transmission Media: An Enterprise Campus can use various physical media to interconnect devices. Selecting the type of cable is an important consideration when deploying a new network or upgrading an existing one. Cabling infrastructure represents a long-term investment. It is usually installed to last for ten years or more. In addition, even the best network equipment does not operate as expected with poorly chosen cabling. Twisted-pair cables (copper) and optical cables (fiber) are the most common physical transmission media used in modern networks.
Unshielded Twisted-Pair (UTP) Cables
UTP consists of four pairs of insulated wires that are wrapped together in a plastic cable. No additional foil or wire is wrapped around the core wires (thus, they are unshielded). This makes these cables less expensive, but also less immune to external electromagnetic influences than shielded cables.
Optical Cables Typical requirements that lead to the selection of optical cable as a transmission medium include distances longer than 100 meters and immunity to electromagnetic interference. There are different types of optical cable; the two main types are multimode (MM) and single-mode (SM). Both MM and SM optical cable have lower signal losses than twisted-pair cable; therefore, optical cables enable longer distances between devices. However, fiber cable has precise production and installation requirements, resulting in a higher cost than twisted-pair cable. Multimode fiber is optical fiber that carries multiple light waves, or modes, concurrently, each at a slightly different reflection angle within the optical fiber core. Because modes tend to disperse over longer lengths (modal dispersion), MM fiber transmission is used for relatively short distances. The typical core diameter of an MM fiber is 50 or 62.5 micrometers. Single-mode (also known as monomode) fiber is optical fiber that carries a single wave (or laser) of light. The typical diameter of an SM fiber core is between 2 and 10 micrometers.
Copper Versus Fiber Media
Network Geography The location of campus nodes and the distances between them determine the network's geography. When designing the campus network, the network designer's first step is to identify this geography. The designer must determine the following: the location of nodes (end users, workstations, or servers), which within an organization can be in the same room, building, or geographical area; and the distances between the nodes, from which the designer decides which technology should be used, the maximum speeds, and so on. (Media specifications typically include a maximum distance, how often regenerators can be used, and so on.) The following geographical structures can be identified with respect to the network geography:

Intra-Building Structure An intra-building campus network structure provides connectivity for end nodes that are all located in the same building and gives them access to the network resources. (The access and distribution layers are located in the same building.) User workstations are attached to the floor wiring closet with UTP cables. To allow the most flexibility in the use of technologies, the UTP cables are Category 5 (CAT 5) or better. Wiring closets connect to the building's central switch (distribution switch) over optical fiber, which offers better transmission performance and is less sensitive to environmental disturbances.

Inter-Building Structure An inter-building network structure provides connectivity between the individual campus buildings' central switches (in the distribution and core layers). These buildings are usually in close proximity, typically only a few hundred meters to a few kilometres apart. Because the nodes in all campus buildings share common devices such as servers, the demand for high-speed connectivity between the buildings is high.
To provide high throughput without excessive interference from environmental conditions, optical fiber is the medium of choice between the buildings.

Distant Remote Building Structure When connecting distances that exceed a few kilometres (usually within a metropolitan area), the most important factor for the network designer to consider is the physical media. The speed and cost of the network infrastructure depend heavily on the media selection. Usually, the bandwidth requirements are higher than the physical connectivity options can support. In such cases, we must identify the organization's critical applications and then select equipment that supports intelligent network services, such as quality of service (QoS) and filtering capabilities, that allow optimal use of the bandwidth. Some companies might own their media, such as fiber or copper lines. However, if an organization such as ours does not own physical transmission media to certain remote locations, the Enterprise Campus network must connect through the Enterprise Edge wide-area network (WAN) module, using connectivity options from public service providers (such as a metropolitan-area network [MAN]).
Network Geography Considerations
WAN Technology To provide information services among the different campuses, the campuses need to be connected with one another. There are a number of WAN technologies through which inter-campus connection could be achieved, namely ISDN, ATM, X.25, Frame Relay, and SONET, as well as leased lines such as T1 and T3.
These services are provided by carriers such as BT. Since the entire design emphasizes Gigabit technology, the WAN technology would need to run at Gigabit speeds. The research performed on BT's website indicated a suitable service, "MetroWave".
MetroWave is fibre based and provides a secure connectivity service. The MetroWave links can be divided into a maximum of 32 separate channels, each operating at 1.25 Gbps. Gigabit Ethernet is supported and each MetroWave system is currently delivered with four active channels. The major advantage of using MetroWave is that it is suitable when high bandwidth demands are required as well as when multiple protocols need to be supported. Also, resilience can be achieved with optional dual fibre.
Short-haul data services are another offering from British Telecom. They are designed to extend local area networks and storage area networks between business sites up to 25 km apart. This service is delivered over dedicated optical fibres, and the bandwidth ranges from 4 Mbps to 40 Gbps.
Quality of Service (QoS)
Scalable Bandwidth Upgrading to EtherChannel provides increased bandwidth and redundancy. EtherChannel is convenient because it scales the bandwidth without adding to the complexity of the design. Spanning-Tree Protocol treats the EtherChannel bundle as a single link, so no spanning-tree loops are introduced. Routing protocols also treat the EtherChannel bundle as a single, routed interface with a common IP address, so no additional IP subnets are required, and no additional router peering relationships are created. The load balancing is handled by the interface hardware. We have also used EtherChannel to link backbone switches, to connect the backbone to the distribution layer, and to join the distribution layer to the wiring closet.
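A minimal EtherChannel sketch (port numbers and addressing are hypothetical); the two physical ports are bundled into a single logical interface that both spanning tree and the routing protocols see as one link:

```
! Bundle two Gigabit Ethernet ports into one routed EtherChannel
interface range GigabitEthernet1/1 - 2
 no switchport
 channel-group 1 mode desirable      ! negotiate the bundle with PAgP
!
interface Port-channel1
 description Two-gigabit routed bundle to the backbone
 no switchport
 ip address 10.0.1.1 255.255.255.252
```

Because only the Port-channel interface carries an IP address, adding a third member port later raises the bandwidth without creating new subnets or router peerings.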
We have also explained load balancing, redundancy (which also contributes to resilience), and the multilayer design, all of which can be considered part of QoS.
High Availability Considerations Determinism is an important design goal. For the network to be deterministic, the design is kept as simple and as highly structured as possible. Recovery mechanisms are considered as part of the design process. Recovery timing is determined in part by protocol messages such as hellos and keepalives, and these may need to be tuned to achieve recovery goals.
The following is an example of a multilayer design configured with all relevant mechanisms enabled for a high availability solution.
The figure shows a two-way data flow between a client and a server. Different recovery mechanisms apply depending on the type of failure. The nonredundant parts are the client workstation, the NIC in the workstation, the Ethernet cable from the workstation, and the dedicated port on the access switch; it is not generally considered cost-effective or practical to provide redundancy for these points in the design.
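Several of the recovery mechanisms described next rely on HSRP with tuned timers and interface tracking. A configuration sketch for one distribution switch (addresses, group numbers, priorities, and timer values are illustrative):

```
! Distribution switch A: HSRP default gateway for the wiring-closet VLAN
interface Vlan10
 ip address 10.1.10.2 255.255.255.0
 standby 10 ip 10.1.10.1                  ! shared virtual gateway address
 standby 10 priority 110
 standby 10 preempt
 standby 10 timers msec 250 msec 750      ! fast hellos for sub-second recovery
 standby 10 track GigabitEthernet1/1 20   ! step priority down if the core uplink fails
```

The `track` statement lowers this switch's priority when its uplink fails, so the peer distribution switch (configured with a priority between 90 and 110) takes over the gateway role.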
If uplink number 3 fails, recovery is provided by HSRP, which can be tuned to recover in less than one second if required. If distribution switch number 4 fails, the recovery is also by HSRP. If one backbone link (number 5) fails, equal-cost routing automatically uses the remaining link for all traffic to the core. If both uplinks to the core are lost, a feature called HSRP track moves the default gateway services over to the neighboring distribution switch. If backbone switch number 6 fails, the recovery is by the routing protocol, OSPF. For most failures in the backbone, traffic will not be affected for more than one second as long as two equal-cost paths exist to any destination subnet. If one backbone link (number 7) to the server farm is lost, equal-cost routing recovers immediately. If both backbone links are lost, HSRP track recovers the gateway router function to the backup distribution-layer switch; again, HSRP can be tuned to recover in less than one second. If distribution switch number 8 fails, the routing protocol converges in the backbone for any subnets in the server farm; recovery should complete within one second as long as an alternative equal-cost path exists. If uplink number 9 fails, the Layer 3 recovery is by HSRP, and the Layer 2 recovery takes about three seconds with UplinkFast configured at the wiring-closet switch. If server switch number 10 fails, the dual-PHY NIC in the server fails over in about one second. If the cable to the server (number 11) fails, the dual-PHY NIC also fails over in about one second.

Voice and Video Consideration Quality of service (QoS) for voice over IP (VoIP) consists of providing low enough packet loss and low enough delay that voice quality is not affected by conditions in the network. A solution is to apply congestion management and congestion avoidance at oversubscribed points in the network.
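As a hedged sketch, such congestion management for voice can be expressed with the Cisco IOS modular QoS CLI (class names, marking values, rates, and the interface are illustrative):

```
! Classify VoIP at the uplink and give it a strict-priority queue
class-map match-any VOICE
 match ip precedence 5                ! packets already marked as low-delay voice
!
policy-map CAMPUS-UPLINK
 class VOICE
  priority 1000                       ! strict-priority queue with 1 Mbps guaranteed
 class class-default
  fair-queue
  random-detect                       ! WRED on the bursty data queue
!
interface GigabitEthernet1/1
 service-policy output CAMPUS-UPLINK
```

Real-time packets leave through the priority queue regardless of how congested the data queue becomes, while WRED keeps the data queue from filling and tail-dropping bursts.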
To achieve guaranteed low delay for voice at campus speeds, it is sufficient to provide a separate outbound queue for real-time traffic. Bursty data traffic, such as file transfers, is placed in a different queue from the real-time traffic. Because of the relatively high speed of switched Ethernet trunks in the campus, it does not matter much whether the queue allocation scheme is based on weighted round robin, weighted fair queuing, or strict priority. Weighted random early detection (WRED) is used to achieve low packet loss and high throughput in any queue that experiences bursty data traffic flows. QoS maps very well to the multilayer campus design explained earlier. Packet classification is a multilayer service that applies at the wiring-closet switch, which is the ingress point to the network. VoIP traffic flows are recognized by a characteristic port number, and the VoIP packets are classified with an IP type of service (ToS) value indicating "low-delay voice." Wherever the VoIP packets encounter congestion in the network, the local switch or router applies the appropriate congestion management and congestion avoidance based on the ToS value.

Multicast Routing and Control
The multilayer campus design is ideal for control and distribution of IP multicast traffic. Layer 3 multicast control is provided by the PIM routing protocol, while multicast control at the wiring closet is provided by Internet Group Management Protocol (IGMP) snooping or the Cisco Group Management Protocol (CGMP). Multicast control is extremely important because of the large amount of traffic involved when several high-bandwidth multicast streams are provided. The Layer 3 campus backbone is ideal for multicast because PIM runs on the Layer 3 switches in the backbone and routes multicasts efficiently to their destinations along a shortest-path tree; in a Layer 2 switched backbone, on the other hand, all multicast traffic is flooded. At the wiring closet, IGMP snooping and CGMP are multilayer services that prune the multicast traffic back to the specific client ports that join a multicast group. Otherwise, all multicast traffic floods all ports, interrupting every client workstation. One decision made in designing IP multicast routing was to use sparse mode rather than dense mode. Sparse mode is more efficient because dense mode periodically floods multicasts throughout the network and then prunes back based on client joins. The characteristic feature of sparse mode is a router acting as a rendezvous point to connect multicast servers to multicast clients, which in turn establish a shortest-path tree. Routers use the rendezvous point to find sources of multicast traffic, and the rendezvous point instructs the last-hop designated router how to build multicast trees toward those sources. A rendezvous point and a backup rendezvous point should be chosen; with the Cisco auto-RP feature, all other routers automatically discover the rendezvous point. PIM should be configured pervasively on all IP routers in the campus. The rendezvous point and backup rendezvous point are in the shortest path.
There is thus no potential for suboptimal routing of multicast traffic. We have placed the rendezvous point and the backup rendezvous point on the Layer 3 distribution switches in the server farm, close to the multicast sources. This allows all state information to be prepopulated in the backup rendezvous point. If a redundant rendezvous point is instead configured in the interior of the network, recovery is much slower and more CPU-intensive. So we have configured loopback interfaces on each server distribution switch for the multicast rendezvous point functionality. A logical rendezvous point is a pair of Layer 3 switches or routers configured to act as a single redundant rendezvous point: a loopback interface is defined on both switches with the same IP address. Both switches see all multicast packets for all sources and create state for them. If the primary rendezvous point fails for any reason, the backup rendezvous point only needs to put the right interfaces into forwarding to resume operation of the multicast network. For this to work, a well-defined Layer 3 topology is required: each router in the network must create just one entry in its routing table for this redundant loopback address. With the right topology and addressing, recovery takes under 10 seconds for most failures, and fallback is even less. The logical rendezvous point address for the group is advertised, and the unicast routing protocol takes care of the rest.
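A sketch of this redundant rendezvous point arrangement on the two server-farm distribution switches (addresses and the auto-RP scope are illustrative); both switches advertise the same loopback address:

```
! Applied identically on both server-farm distribution switches
ip multicast-routing
!
interface Loopback0
 ip address 10.0.254.1 255.255.255.255   ! same loopback address on both switches
 ip pim sparse-mode
!
! Advertise this loopback as the rendezvous point via Cisco auto-RP
ip pim send-rp-announce Loopback0 scope 16
ip pim send-rp-discovery Loopback0 scope 16
```

Because the unicast routing protocol advertises the shared loopback address from both switches, failover simply follows the surviving route to 10.0.254.1.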
Network Security
We have considered two different implementations that are applied in modern networks.
The first one is Cisco Encryption Technology (CET). If you require a standards-based solution that provides multivendor interoperability or remote client connections, you cannot use CET, because it provides only Cisco router-to-router encryption.
Instead, we are using the IPSec network security technology. IPSec is a framework of open standards developed by the Internet Engineering Task Force (IETF).
IPSec provides security for the transmission of sensitive information over unprotected networks such as the Internet. IPSec acts at the network layer, protecting and authenticating IP packets between participating IPSec devices (peers), such as Cisco routers.
IPSec provides the following security services:
Data Confidentiality
The IPSec sender can encrypt packets before transmitting them across a network.
Data Integrity
The IPSec receiver can authenticate packets sent by the IPSec sender to ensure that the data has not been altered during transmission.
Data Origin Authentication
The IPSec receiver can authenticate the source of the IPSec packets sent. This service is dependent upon the data integrity service.
Anti-Replay
The IPSec receiver can detect and reject replayed packets.
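A minimal IPSec sketch between two peers (peer addresses, the pre-shared key, ACL numbers, and the outside interface are all placeholders):

```
! Phase 1: IKE policy and pre-shared key
crypto isakmp policy 10
 encryption 3des
 hash sha
 authentication pre-share
 group 2
crypto isakmp key MYSECRETKEY address 192.0.2.2
!
! Phase 2: transform set, interesting traffic, and crypto map
crypto ipsec transform-set STRONG esp-3des esp-sha-hmac
access-list 120 permit ip 10.1.0.0 0.0.255.255 10.2.0.0 0.0.255.255
!
crypto map CAMPUS-VPN 10 ipsec-isakmp
 set peer 192.0.2.2
 set transform-set STRONG
 match address 120
!
interface Serial0/0
 crypto map CAMPUS-VPN
```

The ESP transform set provides the confidentiality, integrity, origin authentication, and anti-replay services listed above for the traffic matched by access list 120.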
In addition, within the server farm, multiple VLANs are used to create multiple policy domains as required. If one particular server has a unique access policy, you may wish to create a unique VLAN and subnet for that server. If a group of servers has a common access policy, you may wish to place the whole group in a common VLAN and subnet. ACLs are applied on the interfaces of the Layer 3 switches.
Finally, some security issues are also associated with electromagnetic interference: it is easy to eavesdrop on the traffic carried across UTP, because these cables emit electromagnetic radiation.
RESILIENCE While ensuring that the network quality of service delivers a low-latency, high-throughput, and deterministic network in practice, it is also important to examine how resilience and redundancy can be added to the network. The main strategy in designing failure-resilient networks is to detect and correct the failure as quickly as possible, minimizing network downtime and remaining as transparent to the network users as possible.

Link aggregation, the joining of two or more links together to create a faster link, does not protect against a switch failure, but it is helpful in that it provides increased bandwidth and allows zero-delay failover ('failover' is the automatic and transparent switch from a failed element to a redundant element) when one of the aggregation members is lost. When compared with other approaches to resilience, link aggregation is very efficient because it does not leave areas of the network redundant: each switch actively load-balances data across the aggregated ports. The zero-delay failover ensures that the network is always available, so any traffic lost at the point of failure can be re-sent immediately by the traffic protocol. Of course, when an aggregation member is lost, the available segment bandwidth is reduced too, so the potential bottleneck problems previously described should be guarded against.

For ultimate transparent resilience in the network, intelligent switches implement the spanning-tree algorithm. The Spanning Tree Protocol, supported on most bridges and switches, is the tested and proven method for providing path redundancy while eliminating loops. The spanning-tree algorithm actually corresponds to an evolution of algorithms, each an improvement over, and backward compatible with, what went before. The latest IEEE standard for spanning tree, Multiple Spanning Tree (MST), keeps the speed advantages of its predecessor, Rapid Spanning Tree (RST), but allows for per-VLAN forwarding topologies.
Using a combination of MST and RST, it is possible to provide a very resilient network with fast failover times, together with traffic-class-specific forwarding and resilience. Unlike the original spanning-tree algorithm, which took minutes to reconfigure the network after a failure, MST and RST take a matter of seconds to completely re-route traffic around a failed switch or link segment. In addition, MST, as well as providing all of the benefits of RST, attempts to optimize latency by minimizing the switch hops for each VLAN traffic class.
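A sketch of enabling MST and mapping VLAN groups to instances (the region name, revision, VLAN ranges, and priority value are illustrative):

```
spanning-tree mode mst
!
spanning-tree mst configuration
 name CAMPUS-REGION
 revision 1
 instance 1 vlan 10-19               ! one group of VLANs follows this topology
 instance 2 vlan 20-29               ! another group can take a different path
!
spanning-tree mst 1 priority 24576   ! prefer this switch as root for instance 1
```

By electing a different root per instance, both redundant links carry traffic, each for its own group of VLANs, instead of one link sitting idle in blocking mode.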