The following sections outline the main contributors to multimedia system performance.
Networking Devices
One of the main causes of poor network performance is the equipment in use. Unreliable devices lead to poor service and slow data transfer rates.
Coding and Decoding
For media streaming we use a program called a codec, which works much like a device driver: each type of media file has an assigned codec that handles the encoding and decoding of its data. Data compression algorithms are applied to all multimedia graphics, audio, and video. One way to see how this affects performance is to measure the time taken to encode the media stream.
Coding and decoding can reduce the bandwidth requirements over a link, leading to less traffic. This is called data compression, or source coding. Coding and decoding can also be used to combat critical transmission errors: redundancy is added to cope efficiently with errors, which is called channel coding. The most common compression standards used for video are MPEG-2 and MPEG-4.
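As noted above, one way to gauge the cost of source coding is to time the encoder and check how much the data shrinks. A minimal sketch using Python's standard zlib module (the repetitive payload and compression level are illustrative stand-ins, not a real media codec):

```python
import time
import zlib

# Hypothetical media payload: highly redundant bytes stand in for a raw frame.
frame = bytes(range(256)) * 4096  # ~1 MiB of repetitive data

start = time.perf_counter()
compressed = zlib.compress(frame, level=6)  # the source-coding step
elapsed = time.perf_counter() - start

ratio = len(frame) / len(compressed)
print(f"encode time: {elapsed * 1000:.1f} ms, compression ratio: {ratio:.1f}x")
```

The trade-off sketched here is the one described in the text: spending CPU time on coding in exchange for fewer bits on the link.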
Video Compression
A current direction in video compression research is wavelet coding, which decomposes the picture into simple basis functions.
Spyware, Adware, and Viruses
Malicious code is among the leading causes of multimedia networking performance issues. Once a virus has been executed it can spread rapidly, infecting other programs and files.
- A boot sector virus infects hard disks, flash drives, and floppy disks. Once the computer is rebooted from an infected disk, the virus can spread to the computer's hard drive, and from there infect any USB drives and floppy disks, spreading to other computers.
- A macro virus becomes active when the infected document is opened in the target program.
- An e-mail virus is activated when the victim opens the infected attachment.
- A worm replicates itself, so hundreds of copies may end up being transmitted over the network. Worms are similar to viruses but not the same: a worm spreads on its own, without needing a host file.
- A Trojan horse is malware used to gain control of a computer system without the user's knowledge. It is normally disguised as harmless-looking software, music, images, and so on, but can be catastrophic when executed. A Trojan consists of two parts, a client and a server: the server runs on the unsuspecting victim's machine, and the client is used to control that server remotely.
Network Reliability and Fault Tolerance
Multimedia networks depend on hardware and software functioning correctly, and a handful of hardware or software failures can render the entire network inoperable. Historically, most network failures stemmed from hardware faults; more recent causes of network downtime include fibre-optic cable damage, natural disasters, and computer hackers.
Bandwidth
The speed of a network is measured in bits per second, represented by the data rate or available bandwidth. Bandwidth is "in analogue communication, the difference between the highest and lowest frequencies that a transmission channel can carry" [1]. Bandwidth contributes to overall network performance: the number of bits sent represents the flow of data travelling around the network.
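Since bandwidth is a rate in bits per second, it directly bounds how long a transfer takes. A back-of-the-envelope sketch (the file size and link speed are illustrative figures):

```python
def transfer_time_seconds(size_bytes: int, bandwidth_bps: float) -> float:
    """Ideal time to move size_bytes over a link of bandwidth_bps (bits/s),
    ignoring protocol overhead and contention."""
    return size_bytes * 8 / bandwidth_bps

# A 700 MB video over a 10 Mbit/s link:
t = transfer_time_seconds(700 * 10**6, 10 * 10**6)
print(f"{t:.0f} s")  # prints: 560 s
```

In practice protocol overhead, congestion, and shared links make real transfers slower than this ideal figure.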
Cache Memory
Multimedia systems are CPU-intensive, using the majority of the computer's processing power. The central processing unit uses cache memory to store data temporarily, placing it where the CPU can access it rapidly.
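The same principle, keeping recently used data where it can be fetched quickly, can be sketched in software with Python's functools.lru_cache (the decode_frame function and its simulated cost are hypothetical stand-ins, not real CPU cache behaviour):

```python
import time
from functools import lru_cache

@lru_cache(maxsize=128)  # software analogue of a cache: keep recent results close
def decode_frame(frame_id: int) -> bytes:
    time.sleep(0.01)               # stand-in for expensive decoding work
    return bytes([frame_id % 256]) * 64

decode_frame(1)                    # slow: computed, then stored in the cache
decode_frame(1)                    # fast: served straight from the cache
print(decode_frame.cache_info())   # shows one hit, one miss
```

A hardware cache works on the same hit/miss logic, only in silicon and at nanosecond timescales.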
Congestion control
Congestion occurs when the number of packets being transmitted through the network approaches the packet-handling capacity of the network. Most networks start dropping packets when they are overloaded, and this phenomenon is called congestion. Networks provide a congestion-control mechanism to prevent this problem: congestion control aims to keep the number of packets below the level at which performance falls off dramatically.
Factors that cause congestion
Several factors can cause congestion. The packet arrival rate may exceed the outgoing link capacity, in which case the network will start dropping packets. There may be insufficient memory to store arriving packets, i.e. the receiving node's buffer cannot hold the incoming data. Traffic arriving in bursts can also lead to congestion.
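These failure modes can be sketched with a toy drop-tail queue: packets that arrive when the finite buffer is full are dropped, and the link serves only a fixed number of packets per time step (the service rate, buffer size, and arrival pattern below are all illustrative):

```python
from collections import deque

def simulate_drop_tail(arrivals, service_rate, buffer_size):
    """Per time step: enqueue arriving packets into a finite buffer
    (dropping any overflow), then serve up to service_rate packets."""
    queue = deque()
    delivered = dropped = 0
    for n in arrivals:
        for _ in range(n):
            if len(queue) < buffer_size:
                queue.append(1)
            else:
                dropped += 1          # buffer full: congestion loss
        for _ in range(min(service_rate, len(queue))):
            queue.popleft()
            delivered += 1
    return delivered, dropped

# Bursty arrivals overwhelm a link that serves 2 packets per step
# with only a 4-packet buffer.
delivered, dropped = simulate_drop_tail([8, 0, 8, 0, 8, 0],
                                        service_rate=2, buffer_size=4)
print(delivered, dropped)  # prints: 12 12
```

Even though the average arrival rate (4 packets per step) is only double the service rate, the burstiness plus the small buffer means half of all packets are lost.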
Network Congestion
Network congestion arises in network systems that include multiple links (see the figure below). When two or more nodes simultaneously try to transmit packets to one node, there is a high probability that the number of packets will exceed the packet-handling capacity of the network, leading to congestion.
Network performance is also affected by the Hurst parameter: an increase in the Hurst parameter can reduce network performance. The extent to which heavy-tailedness degrades performance is determined by how well congestion control can shape source traffic into an on-average constant output stream while conserving information.
In today's network environment, with multimedia and other QoS-sensitive streams comprising a growing fraction of network traffic, second-order performance measures in the form of "jitter", such as delay variation and packet-loss variation, are important to provisioning user-specified QoS. Self-similar burstiness is expected to exert a negative influence on second-order performance measures.
Packet-switched services such as the Internet are best-effort services, so degraded performance, although undesirable, can be tolerated. ATM networks, by contrast, contract each connection and must keep delays and jitter within the negotiated limits.
Self-similar traffic exhibits persistent clustering, which has a negative impact on network performance.
With Poisson traffic (found in conventional telephony networks), clustering occurs in the short term but smooths out over the long term.
With long-tail traffic, the bursty behaviour may itself be bursty, which exacerbates the clustering phenomenon and degrades network performance.
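This smoothing difference can be illustrated numerically: aggregating Poisson-like per-slot counts over larger windows keeps the variance-to-mean ratio near 1, while a crude heavy-tailed on/off source stays bursty at every scale (both traffic models and all parameters below are illustrative sketches, not fitted models):

```python
import random

random.seed(42)  # reproducible illustration

def dispersion(xs):
    """Index of dispersion: variance-to-mean ratio of a count sequence."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs) / m

def aggregate(counts, window):
    """Sum per-slot counts over non-overlapping windows."""
    return [sum(counts[i:i + window]) for i in range(0, len(counts), window)]

T = 50_000

# Poisson-like traffic: independent per-slot arrivals, mean ~1 packet/slot.
poisson = [sum(random.random() < 0.1 for _ in range(10)) for _ in range(T)]

# Crude on/off traffic with heavy-tailed (Pareto-ish) burst lengths,
# also averaging 1 packet/slot: bursts of 2/slot followed by equal silence.
bursty = []
while len(bursty) < T:
    burst = int(1 / random.random() ** (1 / 1.5)) + 1  # heavy-tailed length
    bursty.extend([2] * burst)   # 'on' period
    bursty.extend([0] * burst)   # 'off' period
bursty = bursty[:T]

# At a 100-slot aggregation scale, the Poisson stream has smoothed out
# while the heavy-tailed stream remains strongly over-dispersed.
print(dispersion(aggregate(poisson, 100)), dispersion(aggregate(bursty, 100)))
```

The persistently high dispersion of the on/off source is the "bursts of bursts" behaviour described above, and it is what keeps queues loaded for long stretches.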
Many aspects of network quality of service depend on coping with traffic peaks that might cause network failures, such as:
- Cell/packet loss and queue overflow
- Violation of delay bounds, e.g. in video
- Worst cases in statistical multiplexing
Poisson processes are well-behaved because they are stateless and peak loading is not sustained, so queues do not fill. With long-range dependence, peaks last longer and have greater impact: the equilibrium shifts for a while.
Bibliography
[1] Dick Pountain, The Penguin Concise Dictionary of Computing, p. 30.