Network report for Middlesex University


Introduction:

LANs provide data transfer rates that are typically much faster than wide-area networks (WANs). While most companies own their own LAN infrastructure, wide-area connections between LANs are usually leased on a monthly basis from an outside carrier. With the recent developments in Gigabit Ethernet technologies, LAN designs are now capable of 1000 Mbps speeds. High-speed Gigabit links can connect servers to LAN switches. At these speeds, the capacity is there to meet the performance requirements of current high-bandwidth applications.

The most significant design rule for Ethernet is that the round-trip propagation delay in one collision domain must not exceed 512 bit times, a requirement for collision detection to work correctly. This rule means that the maximum round-trip delay for a 10 Mbps Ethernet network is 51.2 microseconds. The maximum round-trip delay for a 100 Mbps Ethernet network is only 5.12 microseconds, because the bit time on a 100 Mbps Ethernet network is 0.01 microseconds as opposed to 0.1 microseconds on a 10 Mbps network. Consequently, the distance limitations for 100 Mbps Ethernet are much more severe than those for 10 Mbps Ethernet. The general rule is that 100 Mbps Ethernet has a maximum diameter of 205 meters when unshielded twisted-pair (UTP) cabling is used, whereas 10 Mbps Ethernet has a maximum diameter of 500 meters with 10BaseT and 2500 meters with 10Base5.
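These timing figures follow directly from the bit time at each rate; a short sketch (reproducing the numbers quoted above) makes the arithmetic explicit:

```python
# Ethernet collision-detection budget: the round-trip propagation delay
# in one collision domain must not exceed 512 bit times.

def bit_time_us(mbps):
    """Duration of one bit in microseconds at a given data rate in Mbps."""
    return 1.0 / mbps

for rate in (10, 100):
    budget = 512 * bit_time_us(rate)
    print(f"{rate} Mbps: bit time = {bit_time_us(rate):.2f} us, "
          f"round-trip budget = {budget:.2f} us")
# 10 Mbps: bit time = 0.10 us, round-trip budget = 51.20 us
# 100 Mbps: bit time = 0.01 us, round-trip budget = 5.12 us
```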

The most recent development in the Ethernet arena is Gigabit Ethernet. Gigabit Ethernet is specified by two standards: IEEE 802.3z and 802.3ab. The 802.3z standard specifies the operation of Gigabit Ethernet over fiber and short-haul shielded copper cabling and introduces the Gigabit Media Independent Interface (GMII). The 802.3z standard was approved in June 1998.

The 802.3ab standard specifies the operation of Gigabit Ethernet over Category 5 UTP. Gigabit Ethernet retains the Ethernet frame formats and frame sizes, and it still uses CSMA/CD. As with Ethernet and Fast Ethernet, full-duplex operation is possible. Differences can be found in the encoding: Gigabit Ethernet uses 8B/10B coding with simple non-return-to-zero (NRZ) signalling. Because of the 20 percent coding overhead, the line runs at 1250 Mbaud to achieve 1000 Mbps of data throughput.
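The 8B/10B figures quoted above can be verified with a line of arithmetic: every 8 data bits become 10 line bits, so the line rate is 10/8 times the data rate.

```python
# 8B/10B coding maps every 8 data bits onto 10 line bits, so the line
# rate must be 10/8 times the data rate.
data_rate_mbps = 1000
line_rate_mbaud = data_rate_mbps * 10 / 8   # 1250.0

# Share of line bits that are coding overhead (2 of every 10 bits):
overhead = (line_rate_mbaud - data_rate_mbps) / line_rate_mbaud  # 0.2

print(line_rate_mbaud, overhead)
```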

The availability of multigigabit campus switches from Cisco presents customers the opportunity to build extremely high-performance networks with high reliability. Gigabit Ethernet and Gigabit EtherChannel provide the high-capacity trunks needed to connect these gigabit switches.

Analysis of current network infrastructure:

This section of the report gives a brief overview of how the computer network is set up.

Middlesex University is made up of a number of campuses, major and smaller ones, as well as Halls of residence.  A computer network, set up across these campuses and halls, allows information sharing services for both members of staff and students.  The facilities can be accessed in a centralized fashion on a given campus or in a distributed fashion across the campuses.

A wide variety of equipment is supported by the network, including Personal Computers (PCs), Macintosh computers, UNIX and DEC workstations.  The use of centralised servers is made to provide access to the Finance System, Administrative Cluster and the Corporate Student System for staff.  These centralised servers are also used to provide services to students.  File servers and print servers are provided on all campuses.

The current network design is a star topology with one of the campuses as the central point. Each campus is itself designed as a star network and constitutes a Local Area Network (LAN).  Communication among campuses takes place through a Metropolitan Area Network or Wide Area Network as required. A dial-up network also allows access to and from the public telephone network.

As with any other network, the Middlesex University computer network is made up of a number of hardware components, namely converters (transceivers), repeaters, Ethernet hubs, bridges, switches, multi-protocol routers and ATM switches.

Inter-campus communication is provided by switches using ATM technology and operating at 34 Mbps.  These are installed at many of Middlesex University's campuses. Each ATM switch has a 155 Mbps fibre-optic interface to each campus router.

Within the campus LANs, specific segments for staff and students are provided, and a number of protocols are handled, namely IPX, IP (TCP/IP), AppleTalk, DECnet and LAT.

The University’s network must address the scaling challenges faced by today’s networks.  These include any-to-any traffic, increasing network size, increasing throughput demands, requirements for network services, resilience and simple migration steps.

Considerations:

There are a number of models which can be used when designing campus networks.  Choosing a given model implies taking into consideration a number of factors, one of the most important being traffic patterns.  Taking traffic patterns into account limits the waste of switching and link bandwidth.  Other factors which must be considered are briefly discussed below:

  • Deterministic Traffic Patterns – Troubleshooting is made easy during network failure and recovery situations if traffic flows are predictable.  Network performance can also be improved if traffic patterns are known.
  • Ease of Configuration – Initial configuration of the network should be achieved without too much difficulty.
  • Ease of Troubleshooting – Troubleshooting, which is an integral part of maintenance, should not pose many problems; this implies that use must be made of modules or building blocks.
  • Load Balancing – Load balancing allows full use of the available bandwidth in the presence of redundant paths and, in effect, doubles the bandwidth.
  • Cost – This is an important factor when considering the design of any network as budgetary constraints are always present.
  • Ease of maintenance – Maintenance should be minimized as much as possible and tasks should be well defined.
  • Redundancy – To increase the reliability of the network, redundancy must be introduced by making use of additional hardware.  This usually leads to an increase of ten to twenty percent in the overall cost.
  • Consistent number of hops – To provide determinism, there should be a consistent number of hops throughout the network and this is best achieved by using a modular approach.
  • Security – A certain degree of security should be present in order to prevent unauthorised intrusion into the network.
  • Scalability – provision for expansion of the network should be made.

Campus and Building Design:

The University has many campuses, and each campus has many buildings. The building design is appropriate for a building-sized network with up to several thousand networked devices. The campus design is appropriate for a large campus consisting of many buildings. Both are based on a simple building block or module, which is the foundation of the modular design. The building design is described first because it is also used for each building within the campus design. To scale from the building model to the campus model, a campus backbone is added; each building block or module connects to the campus backbone.

Building Design (Modular Design):

The multilayer design is based on a redundant building block, also called a module. Gigabit Ethernet trunks connect Layer 2 switches in each wiring closet to a redundant pair of Layer 3 switches in the distribution layer. The modular concept can also be applied to server farms and WAN connectivity. Each IP subnet is restricted to one wiring-closet switch. This design features no spanning-tree loops and no VLAN trunking to the wiring closet. Each gigabit uplink is a native routed interface on the Layer 3 switches in the distribution layer.
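A minimal IOS-style sketch of one such routed uplink is shown below; the interface name and addressing are hypothetical, chosen only to illustrate the "native routed interface, no VLAN trunking" point made above.

```
! Distribution-layer Layer 3 switch: the gigabit uplink from the
! wiring closet terminates on a routed (non-trunked) interface.
interface GigabitEthernet1/1
 no switchport                      ! native routed port, no VLAN trunking
 ip address 10.1.10.2 255.255.255.0 ! one IP subnet per wiring-closet switch
```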

               

Load Balancing:

One approach to load balancing is to designate the distribution-layer switch on the left as the HSRP primary gateway for one subnet and the distribution-layer switch on the right as the HSRP primary gateway for the other subnet. A simple convention to follow is that the left-hand distribution-layer switch is always HSRP primary for even-numbered subnets (VLANs) and the right-hand one is always HSRP primary for odd-numbered subnets (VLANs).

Another way to achieve load balancing is to use Multigroup HSRP (MHSRP). With MHSRP, a single IP subnet is configured on a wiring-closet switch, but two different gateway router addresses are used. The Layer 3 switch on the left acts as the gateway router for half of the hosts in the subnet, and the Layer 3 switch on the right acts as the gateway router for the other half.
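The even/odd HSRP convention might be configured along these lines; the VLAN numbers, addresses and priority values here are hypothetical illustrations, not a definitive configuration.

```
! Left distribution switch: HSRP primary for even-numbered VLANs.
interface Vlan10
 ip address 10.1.10.2 255.255.255.0
 standby 10 ip 10.1.10.1   ! virtual gateway address used by hosts
 standby 10 priority 110   ! higher than the default 100 -> active for VLAN 10
 standby 10 preempt        ! reclaim the active role after recovery
! The right distribution switch mirrors this, raising its priority on the
! odd-numbered VLANs and leaving the default priority on the even ones.
```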

With MHSRP, packets from a particular host will always leave the building block via the active HSRP gateway, while either Layer 3 switch may forward the returning packets. For symmetric routing, we can configure a lower routing metric on the wiring-closet VLAN interface of the HSRP gateway router. This metric is advertised to the Layer 3 switches in the backbone as part of a routing update, making this the lowest-cost path for returning packets. Routing protocol metrics are thus tuned to ensure that packets leaving a building block follow the same path as packets returning to it. Packets from a station on VLAN A flow through its default gateway, Layer 3 switch Z; on switch Z, the routing metric on interface VLAN A is adjusted to make this path more favourable than the alternate return path through switch Y. With OSPF as the routing protocol, the interface cost metric is adjusted.

Another important consideration in the building design is to turn off routing protocol exchanges through the wiring-closet subnets. To achieve this, we use the passive interface command on the distribution-layer switches. In this configuration the distribution switches only exchange routes with the core switches and not with each other across the wiring-closet VLANs. Turning off routing protocol exchanges reduces CPU overhead on the distribution-layer switches. Other protocol exchanges, such as Cisco Discovery Protocol (CDP) and HSRP, are not affected.
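The two OSPF adjustments described above, a lower interface cost for symmetric return traffic and a passive wiring-closet interface, might look as follows on switch Z (process ID, VLAN numbers and cost values are hypothetical):

```
! Distribution-layer switch Z
router ospf 1
 passive-interface Vlan10   ! no routing exchanges into the wiring closet
 network 10.1.0.0 0.0.255.255 area 0
!
interface Vlan10
 ip ospf cost 10            ! lower cost than switch Y's VLAN A interface,
                            ! so returning packets prefer the path via Z
```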

Redundancy:

Redundancy and fast failure recovery are achieved with HSRP configured on the two Layer 3 switches in the distribution layer. HSRP recovery takes 10 seconds by default, but the timers can be tuned down as required. In the generic campus model each module has two equal-cost paths to every other module; in this model, each distribution-layer switch has two equal-cost paths into the backbone. This provides fast failure recovery, because each distribution switch maintains two equal-cost paths in the routing table to every destination network. When one connection to the backbone fails, all routes switch over to the remaining path within about one second of the link failure being detected.
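Tuning the HSRP timers down, as suggested above, is a one-line change per VLAN interface; the values shown here are a hypothetical illustration.

```
interface Vlan10
 standby 10 ip 10.1.10.1
 standby 10 timers 1 3   ! hello 1 s, hold 3 s (defaults are 3 s / 10 s),
                         ! giving failover well under the 10-second default
```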


Multilayer Design:

The hierarchical approach we are going to use is called the "multilayer design". The multilayer design is modular, and capacity scales as building blocks are added. Intelligent Layer 3 services reduce the scope of many typical ...
