

Sheikh Dawood Ellahi        

Abstract

Wireless LANs (WLANs) are now being deployed at a great pace, both on company premises and in public hot spots. While WLANs have greatly improved the mobility of professional users, this advantage has also created new threats to security. Accordingly, new needs have evolved for authentication, authorization and possibly accounting and charging ("AAA") of users. In this context, the IEEE 802.1x standard provides a framework for such AAA tasks. Originally designed for wire-based LANs, it is now extensively used for wireless LANs such as the IEEE 802.11 family. 802.1x uses the IETF Extensible Authentication Protocol (EAP), which supports multiple authentication methods. This paper describes a proposal to use 802.1x-based AAA functionality to realise public WLAN usage scenarios such as "closed community membership" and "pre-paid, pay-per-use".

Every computer user is affected by the rapid growth in the size of software. According to one survey, the time is not far off when computer memory will be sold in terabytes. But the question that arises here is that we now need not only large memories but also security and flexibility. Data sharing is one of the major issues in distributed computing. A distributed system is, in essence, a collection of computers connected by wired or wireless links. In a distributed environment, data can be distributed in such a way that a query issued at any data access point may be answered by requests sent to more than one computer. Mobile computing is another hot topic in computer science: it has not only freed us from the spaghetti of wires but has also helped turn our world into a global village.
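
As a small illustration of that idea of querying from any access point (a hypothetical sketch only: the node names and the query_node helper are made up, not a real API), a data access point can fan the same query out to several computers and merge the partial answers:

# Hypothetical scatter-gather sketch: one access point sends the same query to
# several nodes of a distributed system and merges the partial results.
# query_node() and the node list are illustrative placeholders, not a real API.

from concurrent.futures import ThreadPoolExecutor

NODES = ["node-a.example.org", "node-b.example.org", "node-c.example.org"]

def query_node(node: str, query: str) -> list:
    # Placeholder: a real system would issue an RPC or SQL request here.
    return [f"{node}: rows matching {query!r}"]

def distributed_query(query: str) -> list:
    results = []
    with ThreadPoolExecutor(max_workers=len(NODES)) as pool:
        # The query is sent to more than one computer from a single access point.
        for partial in pool.map(lambda n: query_node(n, query), NODES):
            results.extend(partial)
    return results

print(distributed_query("SELECT * FROM accounts"))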

Security remains a central concern when building any distributed system. The threat of information leakage increases whenever a computer sends or receives requests or data from another computer. Several techniques for securing such an environment are discussed later in this paper.

The availability of wireless network connections for laptop computers and PDAs has created interest in the issues surrounding mobile computing. However, enabling users to be genuinely mobile in their work requires more than a wireless connection. Distributed system services are needed to support the locating of people, equipment and software objects, and, especially for mobile multimedia applications, network transport protocols which can adapt to a wide range of networking conditions must be developed.

The evolution of IEEE 802.11-based Wireless Local Area Networks (WLANs) and the development of mobile devices such as laptops have changed the usage of Ethernet networks. Previously, computers were connected to an organization's network statically. Now, WLANs and mobile devices, providing for ubiquitous computing, have greatly improved the mobility of professional users. However, this mobility advantage has obviously also created new threats to security. The corresponding problem is how to decide which of the possibly many users in an open radio cell is allowed to access the network, what resources he or she may use, and how the use of those resources may be accounted for and charged. The IETF calls this important task "AAA" (Authentication, Authorization, Accounting), or "Triple A" for short. In the given context, one framework for such AAA tasks is the IEEE 802.1x standard. Initially targeting all of the many IEEE 802 LANs and MANs, 802.1x is now extensively used for AAA in WLANs.

Building on the proposal given in "Authorisation and charging in public WLANs using FreeBSD and 802.1x" [Pekka Nikander, 2002], this paper describes the use of 802.1x to set up interesting public WLAN usage scenarios such as "closed community membership" and "pre-paid, pay-per-use" accounts. Secondly, the paper briefly describes the concepts behind IEEE 802.11, IEEE 802.1x, IETF EAP and IETF RADIUS. It also covers 802.11-based WLANs and different authentication methods, and shows how these technologies and authentication standards can support different public WLAN usage scenarios, including pre-paid pay-per-use accounts and community-membership-based accounts.
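
To make the two scenarios concrete, the following is a rough, hypothetical sketch in Python (with made-up account fields and no real RADIUS or EAP machinery) of the authorisation decision an AAA back end could take for "closed community membership" versus "pre-paid, pay-per-use" accounts:

# Hypothetical sketch: the authorisation decision an AAA back end might make
# for the two public-WLAN scenarios discussed here. The Account fields and the
# authorise() helper are illustrative only; a real deployment would implement
# this inside a RADIUS server's authorisation step.

from dataclasses import dataclass

@dataclass
class Account:
    user: str
    kind: str            # "community" or "prepaid"
    member_of: set       # community names the user belongs to
    credit_seconds: int  # remaining pre-paid connection time

def authorise(account: Account, hotspot_community: str) -> bool:
    """Return True if the user may be admitted to the WLAN."""
    if account.kind == "community":
        # Closed-community scenario: admit only members of this hot spot's community.
        return hotspot_community in account.member_of
    if account.kind == "prepaid":
        # Pay-per-use scenario: admit only while pre-paid credit remains;
        # accounting records would later decrement the balance.
        return account.credit_seconds > 0
    return False

if __name__ == "__main__":
    alice = Account("alice", "community", {"campus-net"}, 0)
    bob = Account("bob", "prepaid", set(), 1800)
    print(authorise(alice, "campus-net"))  # True: member of the community
    print(authorise(bob, "campus-net"))    # True: 1800 s of credit left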

The idea of performing client-server computing by transmitting executable programs between clients and servers has been popularised in recent years. Different vendors have introduced many technologies to compete in the race to distribute software systems, such as CORBA, EJB, RMI, ODP and DCOM. This paper discusses the role of Microsoft's Distributed Component Object Model, its advantages and disadvantages, and its comparison with other distributed technologies. The Distributed Component Object Model is an object-oriented model designed to promote interoperability of software objects in a distributed, heterogeneous environment. In this paper I try to avoid distinctions between COM and DCOM, but since DCOM is an enhanced form of COM, references to COM are necessary in some places. Wherever distribution is concerned, Microsoft compares DCOM with CORBA; CORBA provides the basic mechanism for using objects on remote machines and is based on an object request broker architecture.

        

It is not wrong to say that technology dominates the current era. Computer science is advancing rapidly, but it leaves us with a debate: is this innovation free from security threats, and is the system flexible and scalable enough for future use? It is very important to discuss these issues for every computer system, and especially for distributed environments. Today there are many examples of media over which data is transferred from one system to another. The GSM digital cellular telephone network is one example. Bluetooth is another; as Frank Stajano (2002) described, "Bluetooth is an embedded radio system for short-range communication between small devices". Bluetooth provides some security services, namely authentication and encryption. This chapter examines the problems distributed systems face with respect to security, flexibility and scalability, and how they can be resolved. Before going into these issues in depth, a little is said about how computers work in a distributed network environment.

Let's start with wireless security. When you go wireless, you free your computer from the rat's nest of wiring that is your LAN. But by sending your network traffic through the air, instead of via CAT-5 cables, you are broadcasting data, including passwords, to anyone who is listening within range. The simple solution is to use encryption for applications that send information in clear text. FTP and Telnet are two weak examples from a security point of view; SSH and SFTP are their safe equivalents, and SSH can even be used to tunnel other protocols, such as POP3, which also sends passwords in the clear.
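
As a minimal sketch of that point using only Python's standard library (the host name and credentials are placeholders), compare a plain POP3 login, which exposes the password to anyone sniffing the wireless link, with the same login over TLS:

# Plain POP3 versus POP3 over TLS, using Python's standard poplib module.
# "mail.example.com" and the credentials are placeholders.

import poplib

def fetch_mail_insecure(user: str, password: str):
    # Plain POP3 (port 110): USER/PASS travel unencrypted and can be sniffed
    # by anyone within radio range of the WLAN.
    conn = poplib.POP3("mail.example.com", 110)
    conn.user(user)
    conn.pass_(password)
    return conn.stat()

def fetch_mail_secure(user: str, password: str):
    # POP3 over TLS (port 995): the whole session, including the password,
    # is encrypted end to end, independent of the wireless link.
    conn = poplib.POP3_SSL("mail.example.com", 995)
    conn.user(user)
    conn.pass_(password)
    return conn.stat()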

First, we can enable WEP on the wireless access point. This encrypts all of the data that traverses the wireless section of the network. The fact that WEP has been cracked matters less in practice, because few people will actually be listening in and even fewer will be able to gather enough traffic to crack your key. If security is of the utmost importance, place a firewall between the access point and the rest of the LAN. This segments the relatively insecure wireless systems away from the main network. Next, enable a VPN on this firewall and have it accept only traffic encrypted by the VPN client software. This encrypts the wireless traffic as far as the firewall, which then passes the decrypted traffic on to the LAN over a cabled connection.

  1. Security in networks

Today, security is the biggest barrier in the way of mobile computing's progress. It is not a new matter of discussion; it has been a threat since the beginning of computer systems. Ever since humans started communicating with each other, there has been a need to keep secrets, and the same techniques of keeping secrets are required when computers interact with one another. Technology is used in both positive and negative senses. Here is an example of the negative sense: whenever we hear the word hacker or cracker we get a negative image in our minds, and every day in newspapers, magazines, movies and on TV we hear that someone got hacked and had to suffer heavy losses. In general, a hacker is considered to be a person who uses his skill with computers to try to gain unauthorised access to computer files or networks. Chris Brenton (2001) wrote in Active Defence that "a hacker is a person who has deep understanding of computers and networking", and further that a hacker is someone who feels the need to go beyond the obvious. In other words, hackers are not dumb; it is a different matter how they use their capabilities. The following sections briefly explain some security techniques used in distributed network systems.

  1.1 SSL (Secure Sockets Layer)

SSL is a security protocol layer which runs above TCP/IP and below higher-level protocols such as HTTP or IMAP. Using TCP/IP on behalf of the higher-level protocols, it allows an SSL-enabled client and an SSL-enabled server to authenticate each other and then establish an encrypted connection between them. SSL helps to improve the safety of communications not only on the Internet but also on intranets, which is why it has become a standard for encrypted client/server communication between network devices. Its ability to maintain secure data traffic over the network is of great importance in a distributed environment. SSL generates a session key for each encrypted session, commonly in two strengths, 40-bit and 128-bit; the strength refers to the length of the generated key, and the longer the key, the more difficult the encryption is to break.
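
A small sketch with Python's standard ssl module shows how this layering looks in practice: the TCP socket is wrapped in an encrypted, authenticated channel, and the higher-level protocol (here HTTP) is then spoken over that channel unchanged. The host name is just an example.

# TLS as a layer between TCP and HTTP, using Python's standard library only.

import socket
import ssl

def https_head(host: str = "www.example.com") -> str:
    context = ssl.create_default_context()          # verifies the server certificate
    with socket.create_connection((host, 443)) as tcp_sock:
        with context.wrap_socket(tcp_sock, server_hostname=host) as tls_sock:
            # HTTP runs unchanged on top of the authenticated, encrypted channel.
            tls_sock.sendall(b"HEAD / HTTP/1.1\r\nHost: " + host.encode()
                             + b"\r\nConnection: close\r\n\r\n")
            return tls_sock.recv(1024).decode(errors="replace")

if __name__ == "__main__":
    print(https_head())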

  1.2 Encryption

Data encryption is generally considered the best technique for securing data storage and transmission. According to a TechWeb (1999) statement, "Encryption is the transformation of data using an algorithm, from one form to another utilizing one or more encryption keys during the transformation process". It is the art of storing information in a form that allows it to be revealed to those you wish to see it, yet hides it from all others. The original information to be hidden is called "plain text", and the hidden information is called "cipher text". Encryption is any procedure that converts plain text into cipher text; decryption is any procedure that converts cipher text back into plain text. Public-key and private-key (symmetric) schemes are the two main types of encryption. Encryption and decryption are thus two-way conversions of data from one form to another: during encryption the data is transformed away from its original form according to the algorithm used, and if we run the corresponding algorithm in reverse, the data returns to its original form.
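
A minimal illustration of these terms, using the third-party Python "cryptography" package (an assumption; any symmetric cipher would serve), shows the same key driving both directions, encryption forwards and decryption in reverse:

# Plain text / cipher text with a symmetric key (Fernet, from the third-party
# "cryptography" package). The same key is needed for both directions.

from cryptography.fernet import Fernet

key = Fernet.generate_key()        # secret key shared by sender and receiver
cipher = Fernet(key)

plain_text = b"meet me at noon"
cipher_text = cipher.encrypt(plain_text)    # unreadable without the key
recovered = cipher.decrypt(cipher_text)     # reverses the transformation

assert recovered == plain_text
print(cipher_text)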

Suppose someone wants to send credit card information from one computer to another over the Internet: the sender will use the recipient's public key to encrypt it and the recipient will use the matching private key to decrypt it. Public-key algorithms are based on very complex mathematical problems, some involving prime numbers and factoring, elliptic curves, discrete logarithms and so on. In a public-key cryptosystem, the private key can only be determined from the public key by solving the underlying problem. Therefore the most important thing is that the problem should be so complex and time-consuming that an attacker would think twice before even proceeding.
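
The flow described above can be sketched with the same "cryptography" package: anyone may encrypt with the recipient's public key, but only the holder of the matching private key can decrypt. The card number shown is purely illustrative.

# Public-key encryption sketch with RSA-OAEP (third-party "cryptography" package).

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()      # this part may be published freely

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

card_number = b"4111 1111 1111 1111"       # illustrative data only
cipher_text = public_key.encrypt(card_number, oaep)    # done by the sender
recovered = private_key.decrypt(cipher_text, oaep)     # only the recipient can do this

assert recovered == card_number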

  1.3 Digital Signatures

A digital signature is another way to make a system secure. "Digital Signature is an electronic code that is attached to a message or file that gives it a unique identity and allows you to certify that a message or file that is sent by you actually came from you" (Felix Weber, 2001). In order to truly understand the implications of digital signatures, one must understand what a digital signature is and how it works. Simply put, a digital signature is a way for you to identify yourself electronically: just as a hand-written signature identifies a particular person, a little piece of computer data does the same for a computer. But how do digital signatures work, and what do they protect or sign? Suppose you want to send someone an email and you want the recipient to know that the email did in fact originate from you. The way you would do that is by using a digital signature. But for the digital signature to ensure authenticity, there must be some way to verify that the signature came from you. This is accomplished by using key pairs: when you get a digital signature of your own, you are assigned a public key and a private key. The private key is used to sign the document, which is then sent to the recipient. The recipient then uses your public key to verify the signature.

Here is an example of how a digital signature works in a distributed environment. Let's say computer A wants to send a signed document to computer B. A uses its private key to generate a digital signature, or fingerprint: the private key creates a code, based on a complex mathematical algorithm, that is embedded in the message A sends. When B gets the electronic document, B uses A's public key, obtained from a public key server, to verify the digital signature and ensure that the message or document really did come from A. In this way the whole communication between the two is secured. Different Internet security companies also issue PINs that are unique to a particular computer.
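
The A-to-B exchange can be sketched in Python with the "cryptography" package (an assumption; any signature scheme would do): A signs with the private key, B verifies with A's public key, and verification fails if the document was altered or signed by someone else.

# Sign-and-verify sketch with RSA-PSS (third-party "cryptography" package).
# verify() raises InvalidSignature if the document was altered or the key
# does not match.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.exceptions import InvalidSignature

# A's key pair; in practice the public half would be fetched from a key server.
private_key_a = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key_a = private_key_a.public_key()

document = b"Contract: A agrees to pay B 100 euros."
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

signature = private_key_a.sign(document, pss, hashes.SHA256())      # done by A

try:
    public_key_a.verify(signature, document, pss, hashes.SHA256())  # done by B
    print("signature valid: the document really came from A")
except InvalidSignature:
    print("signature invalid: reject the document")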

  2. How can we make a system secure?

While developing a security policy, an organisation must identify those entities that are considered valuable enough to warrant security measures; within these entities, certain resources are more valuable than others and should receive more focus. For example, in an electronic funds transfer system the exposure of the financial transactions is likely to have a more severe impact than the exposure of a customer's personnel record. Following the principles below helps to avoid a lot of common security problems. It is true that this set of principles will not cover every possible flaw that could show up, but it will minimise the chances of any kind of hacking or cracking and make the system more reliable. Here I am going to discuss a few important ones.

  2.1 Securing the weakest link

One of the most common analogies in the security community is that security is a chain of links: a system is only as secure as its weakest link. Hackers will always look for and attack the weakest parts of a system. It is probably no surprise that hackers tend to go after the low-hanging fruit: if they target your system, for whatever reason, they are going to take the path of least resistance. That means they will attack the parts of the system that look weakest, not the parts that look strong. A similar kind of logic is widely applicable in the physical world. There is always more money in a bank than in a general store, but which one is more likely to be robbed? The general store, of course. Why? Because banks tend to have much stronger security precautions; general stores are much easier targets. This principle is a hundred percent applicable in the software world, but most people pay no attention to it. In particular, as Kirk Job-Sluder (2002) observed, cryptography is rarely the weakest part of a software system. Even if you use Secure Sockets Layer with 512-bit RSA keys and 40-bit RC4 keys, which are considered incredibly weak cryptography, an attacker can probably find much easier ways to break into a system than attacking the cryptography itself. Sure, such keys are breakable, but doing so still requires a large computational effort. If the attacker wants access to the data that travels over the network, they will probably target one of the end points instead, try to find a flaw such as a buffer overflow or a memory leak, and then grab the data at the stage where it gets encrypted or decrypted. All the cryptography in the world cannot secure data if there is a buffer overflow.

  2.2 Social engineering: A common weak link

Sometimes it is not the software that is the weakest link in your system; it can be the surrounding environment. Consider social engineering, where an attacker uses social manipulation to break into a system. Typically, a service centre will get a call from a sincere-sounding user, who will talk the service professional into revealing a password that should not be given away. This sort of attack is generally quite easy to launch, because customer service representatives do not like to deal with stress: if they are dealing with someone who is really mad about not being able to get into his account, most of them will give away the password just to calm the situation down.

One good strategy is to limit the capabilities of technical support as much as possible. For example, customer service representatives as a whole should not be allowed to view or change users' passwords; only a selected few people should be able to do so, and only after a lot of questioning and enquiries. I remember when someone hacked my Hotmail account, I mailed Hotmail's technical staff for help and they asked me a few but very specific questions which could only be answered by the real account owner: name the last three passwords you used, on what date was the account last accessed by me, on approximately what date did I create the account, and so on.

  2.3 Practice defence in depth

The concept behind practising defence in depth is to manage risk with many layered defensive techniques, so that if one layer of defence fails, another layer of defence will still contain the breach. Let us return to our example of providing security for a bank. Why is the typical bank more secure than the typical shopping store? Because there are many security measures protecting the bank: security cameras, security guards and so on. If we look at computer systems, multiple defences should likewise always be installed: if SSL fails, the firewalls should still hold, and if the firewalls fail, encryption of the stored data should.

  2.4 Security failure

Software systems do have failure modes, and some are pretty much unavoidable; what are avoidable are the security problems related to those failures. The problem is that when many systems fail in any way, they revert to insecure behaviour. With such systems, attackers either simply wait for the right kind of failure to happen by itself, or they try to provoke it. Remote Method Invocation (RMI) has a similar problem. When a client and server want to communicate over RMI and the server uses SSL or some other encryption protocol, but the client does not support the protocol the server uses, the client downloads the proper socket implementation from the server at run time. This is a big security flaw, because the server has not been authenticated at the time the encryption interface is downloaded. An attacker could pretend to be the server and install his own socket implementation on each client, even when SSL is already installed on the client side. The problem is that if the client fails to establish a secure connection using its default libraries, it will make a connection using whatever protocol an untrusted entity gives it, thereby extending trust. Development teams should solve these kinds of problems properly, ensuring that the system never runs in such an abnormal state.
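
A small Python sketch of the "fail securely" idea (not of the RMI mechanism itself; the function is illustrative) is to have the client refuse to continue when the encrypted connection cannot be established, instead of silently falling back to an unauthenticated, unencrypted channel:

# Failing securely: if TLS cannot be established, stop rather than downgrade.

import socket
import ssl

def connect_securely(host: str, port: int = 443) -> ssl.SSLSocket:
    context = ssl.create_default_context()   # requires a valid, verified certificate
    tcp_sock = socket.create_connection((host, port), timeout=10)
    try:
        return context.wrap_socket(tcp_sock, server_hostname=host)
    except ssl.SSLError as err:
        tcp_sock.close()
        # Fail securely: report the problem instead of falling back to whatever
        # protocol an untrusted peer offers.
        raise ConnectionError(f"secure connection to {host} failed: {err}") from err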

  2.5 Be Reluctant to Trust

Programmers often hide secrets in client code, just to save server resources, assuming their secret will be safe. But clever hackers always try to exploit these tricks, and most of the time they are successful. I know that developers use JavaScript code to validate credit card numbers on the client side, but a hacker who is clever enough can surely generate a new credit card number out of the very algorithm that verifies it. Trust is also easily extended in the area of social engineering, which is the reason social engineering attacks are so easy to launch.
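
The credit card example illustrates the point well: the Luhn checksum that client-side JavaScript typically runs is public knowledge, so the server must repeat the validation itself rather than trust anything the client claims. A short Python sketch follows (handle_payment is a made-up placeholder):

# The server repeats the (public) Luhn check instead of trusting the client.

def luhn_valid(card_number: str) -> bool:
    """Standard Luhn checksum; catches typos, proves nothing about trust."""
    digits = [int(d) for d in card_number if d.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def handle_payment(card_number: str):
    # Server-side validation: never assume the client already did this.
    if not luhn_valid(card_number):
        raise ValueError("rejecting card number that fails the Luhn check")
    # ... further server-side checks (issuer authorisation, fraud screening) ...

print(luhn_valid("4111111111111111"))  # True: a well-known test number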

  3. How can a distributed system be flexible?

Distributed systems now run on a mixture of mainframes, servers and midrange platforms. Running different versions of UNIX, Novell NetWare and Windows NT or 2000, with workstations running different flavours of Microsoft Windows, is not a big deal any more, because systems have become so complex and fast that a company may be running COBOL, PowerBuilder or Java clients talking to Sybase, Oracle, DB2 or IMS databases at the same time. The days when developing new software and purchasing or replacing equipment was prohibitively critical and costly are past. Because if ...
