Networking: switching
Introduction
Basic Concepts

Circuit Switching Systems

A circuit-switched system is one in which a dedicated connection must be set up between two nodes before they may communicate. For the duration of the communication that connection may be used only by those two nodes, and once the communication has ceased the connection must be explicitly released. A good example is the early telephone exchange, where a caller would ask the operator to connect them to a receiver and the end result was a physical electrical connection between the subscribers' telephones for the duration of the call. The primary characteristics of a circuit-switched network are fixed bandwidth and low transmission delay once a connection has been established. It can also be quite expensive: when traffic on a circuit is low, the unused transmission capacity is simply wasted. During an international or long-distance call, for example, the charges keep adding up until the call is ended, even while neither party is speaking.
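As a rough illustration of these ideas, the short Python sketch below models a circuit-switched exchange: a dedicated circuit must be set up before either party can send anything, it is held exclusively by the same two nodes for the whole call, and it must be explicitly torn down afterwards. The class and method names are my own and purely illustrative.

# Minimal sketch of a circuit-switched exchange (hypothetical names).
# A circuit must be set up before use, is dedicated to one pair of
# nodes for its whole lifetime, and must be explicitly torn down.

class CircuitSwitchedExchange:
    def __init__(self):
        self.circuits = {}   # node -> the peer it is currently connected to

    def setup(self, caller, callee):
        """Reserve a dedicated circuit between caller and callee."""
        if caller in self.circuits or callee in self.circuits:
            raise RuntimeError("one of the parties is already on a call")
        self.circuits[caller] = callee
        self.circuits[callee] = caller

    def send(self, sender, data):
        """Data may only travel over an established circuit."""
        if sender not in self.circuits:
            raise RuntimeError("no circuit set up for this node")
        receiver = self.circuits[sender]
        print(f"{sender} -> {receiver}: {data}")

    def teardown(self, node):
        """Explicitly release the circuit so the capacity can be reused."""
        peer = self.circuits.pop(node, None)
        if peer is not None:
            self.circuits.pop(peer, None)


exchange = CircuitSwitchedExchange()
exchange.setup("Alice", "Bob")     # the operator connects the two subscribers
exchange.send("Alice", "Hello")    # the circuit is held even between messages
exchange.teardown("Alice")         # capacity is wasted until this happens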
Middle
ATM (Asynchronous Transfer Mode) is an international standard for a high-speed, connection-oriented, cell-switching technology. Cell switching closely resembles packet switching in that it breaks a data stream into discrete units which are then placed on lines shared by several streams. The major difference is that cells have a fixed size of 53 bytes each, whereas packets can vary in size (a short sketch of this segmentation appears at the end of this section).

Reference: telecom.tbi.net/switching.htm

General Routing and Congestion Control Algorithms

Routing is the act of moving information across an internetwork from a source to a destination; along the way, at least one intermediate node is typically encountered. Routing involves two basic activities: determining optimal routing paths and transporting information groups through an internetwork. Routing protocols use metrics to evaluate which path will be best for a packet to travel. A metric is a standard of measurement, such as path length, reliability or bandwidth, that routing algorithms use to determine the optimal path to a destination. To aid the process of path determination, routing algorithms initialize and maintain routing tables, which contain route information.
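To illustrate the path-determination step just described, here is a small sketch in which a routing table is built by comparing candidate routes on a single additive metric, using Dijkstra's shortest-path algorithm as one common way of doing this. The topology and link costs are invented for illustration.

# Sketch: choosing optimal paths by a metric and filling a routing table.
# The metric here is additive link cost (it could be delay, hop count, etc.).

import heapq

# per node: neighbour -> cost of the directly connected link
topology = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 5},
    "C": {"A": 4, "B": 2, "D": 1},
    "D": {"B": 5, "C": 1},
}

def build_routing_table(source: str) -> dict[str, tuple[str, int]]:
    """Return destination -> (next hop, total cost) using Dijkstra's algorithm."""
    best = {source: (None, 0)}          # node -> (first hop from source, cost)
    queue = [(0, source, None)]         # (cost so far, node, first hop used)
    while queue:
        cost, node, first_hop = heapq.heappop(queue)
        if cost > best.get(node, (None, float("inf")))[1]:
            continue                    # stale queue entry
        for neighbour, link_cost in topology[node].items():
            new_cost = cost + link_cost
            hop = neighbour if node == source else first_hop
            if new_cost < best.get(neighbour, (None, float("inf")))[1]:
                best[neighbour] = (hop, new_cost)
                heapq.heappush(queue, (new_cost, neighbour, hop))
    best.pop(source)
    return best

for destination, (next_hop, cost) in build_routing_table("A").items():
    print(f"to {destination}: forward via {next_hop} (metric {cost})")

In practice, routing protocols differ mainly in which metric, or combination of metrics, they feed into this kind of calculation.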
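Returning to the cell-switching description at the start of this section, the following sketch shows the segmentation idea: a variable-length data stream is cut into fixed 53-byte cells, with the last payload padded. The 5-byte header plus 48-byte payload split is the usual ATM layout and is assumed here; the header format itself is simplified.

# Sketch: segmenting a data stream into fixed-size ATM-style cells.
# Each cell is 53 bytes: a 5-byte header plus a 48-byte payload,
# unlike variable-length packets.

CELL_SIZE = 53
HEADER_SIZE = 5
PAYLOAD_SIZE = CELL_SIZE - HEADER_SIZE   # 48 bytes

def to_cells(stream: bytes, vci: int) -> list[bytes]:
    """Split a byte stream into 53-byte cells tagged with circuit id `vci`."""
    cells = []
    for offset in range(0, len(stream), PAYLOAD_SIZE):
        payload = stream[offset:offset + PAYLOAD_SIZE]
        payload = payload.ljust(PAYLOAD_SIZE, b"\x00")     # pad the final cell
        header = vci.to_bytes(HEADER_SIZE, "big")          # simplified header
        cells.append(header + payload)
    return cells

cells = to_cells(b"some application data" * 10, vci=42)
print(len(cells), "cells of", len(cells[0]), "bytes each")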
Conclusion
Although bandwidth is a rating of the maximum attainable throughput on a link, routes through higher-bandwidth links are not necessarily better than routes through slower ones. Load refers to the degree to which a network resource, such as a router, is busy; it can be calculated in a variety of ways, including CPU utilization and packets processed per second, although monitoring these parameters on a continual basis can itself be resource-intensive. Communication cost is another important metric, especially because some companies care less about performance than about operating expenditure: even though line delay may be longer, they will send packets over their own lines rather than over public lines that charge for usage time.

Congestion control is a technique for monitoring network utilization and manipulating transmission or forwarding rates for data frames so that traffic levels do not overwhelm the network medium; it gets its name because it avoids "network traffic jams". Congestion control algorithms prevent the network from entering congestive collapse, a situation in which the network links are heavily utilized yet very little useful data actually gets delivered.
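As one illustration of how transmission or forwarding rates can be manipulated to avoid collapse, the sketch below uses an additive-increase/multiplicative-decrease rule, which is not mentioned in the text above but is a widely used congestion-control strategy: the sending rate creeps up while the network accepts traffic and is cut sharply whenever congestion is signalled.

# Sketch: additive-increase / multiplicative-decrease (AIMD), one common
# congestion-control rule.  The rate grows slowly while the network copes
# and is halved whenever congestion (e.g. loss) is detected.

def aimd(congestion_signals, start_rate=1.0, increase=1.0, decrease=0.5):
    """Yield the sending rate after each step, given a congestion flag per step."""
    rate = start_rate
    for congested in congestion_signals:
        if congested:
            rate = max(1.0, rate * decrease)   # multiplicative decrease: back off sharply
        else:
            rate += increase                   # additive increase: probe for spare capacity
        yield rate

signals = [step % 6 == 5 for step in range(18)]   # invented pattern: congestion every 6th step
for step, rate in enumerate(aimd(signals)):
    note = "  <- congestion signalled, back off" if signals[step] else ""
    print(f"step {step:2d}: sending at {rate:4.1f} units{note}")

A real implementation would of course react to actual feedback from the network, such as lost or marked packets, rather than to a pre-computed list of signals.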