Implementation Of Intelligent Computer Lab Access Control  

Using Smart Card By Secure And Reliable Cryptography                                                              

Chapter 1: Client-Server Technology

1.1 Client-Server Concept and Architecture

The term "client/server" implies that clients and servers are separate logical entities that work together, usually over a network, to accomplish a task. Client/server is more than a client and a server communicating across a network. Client/server uses asynchronous and synchronous messaging techniques with the assistance of middle-ware to communicate across a network.

Client/server uses this approach of a client (UI) and a server (database I/O) to provide its robust distributed capabilities. The company Sigma has used this technique for over 15 years to allow its products to be ported to multiple platforms, databases, and transaction processors while retaining each product's marketability and enhanced functionality from decade to decade.

Sigma's client/server product uses an asynchronous approach: it sends a message to request an action and receives a message containing the information requested. This approach allows the product to send CPU-intensive processing requests to the server, which performs the work and returns the results to the client when finished.

Sigma's architecture is based on re-usability and portability. Sigma currently uses a standard I/O routine that is kept separate from the user interface. Sigma's current architecture supports character-based screens and a variety of databases, where the user interface is independent of the database access. This architecture corresponds directly to the architecture used in a GUI client/server environment.

A traditional client/server application is that of the File Server where clients request files from the File Server. This results in the entire file being sent to the client but necessitates many message exchanges across the network. Another traditional client/server application is that of the Database Server where clients pass SQL requests to the server. The Database Server executes each SQL statement and passes the results back to the client.

Open Database Connectivity (ODBC) is often used by a client to send SQL requests to the server to process. ODBC provides a standard SQL interface for sending requests to the server. The Remote Procedure Call (RPC) is an extension of the traditional client/server model suited to transaction processing environments. It allows for the creation of a Transaction Server. Clients call a remote procedure and pass parameters to it. A single message allows the Transaction Server to execute stored (compiled) database statements and return the results to the client. This distribution of processing reduces network traffic and improves performance. Site autonomy can also be increased by limiting database modifications to locally executing applications.

In general terms, the Remote Procedure Call (RPC) is a mechanism that allows separate programs to invoke procedures in one another as if they were local.
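The request/reply pattern behind the RPC and Transaction Server ideas above can be sketched as a toy dispatcher. This is purely illustrative: the class and method names are invented, and the "network" is an in-process map of registered procedures, standing in for a single request message and a single reply message.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Toy model of a Transaction Server: the server registers named (pre-compiled)
// procedures; a client's single "call" message names a procedure and carries
// its parameters, and the server returns the result in one reply.
public class TransactionServer {
    private final Map<String, Function<int[], Integer>> procedures = new HashMap<>();

    // Server side: register a stored procedure under a name.
    public void register(String name, Function<int[], Integer> proc) {
        procedures.put(name, proc);
    }

    // Client side: one request carries the procedure name and parameters;
    // one reply carries the result, minimizing message exchanges.
    public int call(String name, int... params) {
        Function<int[], Integer> proc = procedures.get(name);
        if (proc == null) {
            throw new IllegalArgumentException("unknown procedure: " + name);
        }
        return proc.apply(params);
    }
}
```

A real RPC system adds marshalling and network transport, but the contract is the same: the caller sees an ordinary procedure call while the work executes on the server.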

1.2 IP protocol

TCP sends each datagram to IP. Of course, it has to tell IP the Internet address of the computer at the other end. This is all IP is concerned with. It doesn't care about what is in the datagram, or even what is in the TCP header. IP's job is simply to find a route for the datagram and get it to the other end. In order to allow gateways or other intermediate systems to forward the datagram, it adds its own header. The main things in this header are the source and destination Internet addresses (32-bit addresses, like 128.6.4.194), the protocol number, and another checksum. The source Internet address is simply the address of the source machine; the destination Internet address is the address of the other machine.

The protocol number tells IP at the other end to send the datagram to TCP. Although most IP traffic uses TCP, there are other protocols that can use IP, so IP has to be told which protocol to send the datagram to. Finally, the checksum allows IP at the other end to verify that the header wasn't damaged in transit. TCP and IP have separate checksums. IP has to be able to verify that the header didn't get damaged in transit, or it could send a message to the wrong place. It is both more efficient and safer to have TCP compute a separate checksum for the TCP header and data.
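The header checksum described above is the standard Internet checksum (RFC 1071): the header is summed as 16-bit words in one's-complement arithmetic and the result is inverted. The sketch below assumes only that algorithm; the class name is illustrative.

```java
// One's-complement Internet checksum, as used by the IP header (and, over
// their own data, by TCP and UDP, which is why the checksums are separate).
public class InternetChecksum {
    public static int checksum(byte[] data) {
        long sum = 0;
        // Sum the data as big-endian 16-bit words; pad a trailing odd byte.
        for (int i = 0; i < data.length; i += 2) {
            int hi = data[i] & 0xFF;
            int lo = (i + 1 < data.length) ? data[i + 1] & 0xFF : 0;
            sum += (hi << 8) | lo;
        }
        // Fold any carries back into the low 16 bits (one's-complement add).
        while ((sum >> 16) != 0) {
            sum = (sum & 0xFFFF) + (sum >> 16);
        }
        // The checksum field holds the one's complement of the sum.
        return (int) (~sum & 0xFFFF);
    }
}
```

A receiver verifies a header by checksumming it with the checksum field included; an undamaged header yields 0.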

IP addresses are used to deliver packets of data across a network and have what is termed end-to-end significance. This means that the source and destination IP address remains constant as the packet traverses a network. Each time a packet travels through a router, the router will reference its routing table to see if it can match the network number of the destination IP address with an entry in its routing table. If a match is found, the packet is forwarded to the next hop router for the destination network in question. If a match is not found, then the packet may be forwarded to the router defined as the default gateway, or the router may drop the packet.

Packets are forwarded to a default router in the belief that the default router has more network information in its routing table and will therefore be able to route the packet correctly on to its final destination. This is typically used when connecting a LAN with PCs on it to the Internet. Each PC will have the router that connects the LAN to the Internet defined as its default gateway. A default gateway is seen in a routing table of a host as follows: the default route 0.0.0.0 will be listed as the destination network, and the IP address of the default gateway will be listed as the next hop router.

If the source and destination IP addresses remain constant as the packet works its way through the network, how is the next hop router addressed? In a LAN environment this is handled by the MAC (Media Access Control) address. The key point is that the MAC addresses will change every time a packet travels through a router; the IP addresses, however, will remain constant. Subnet masks are essential tools in network design, but can make things more difficult to understand. Subnet masks are used to split a network into a collection of smaller subnetworks. This may be done to reduce network traffic on each subnetwork, or to make the internetwork more manageable as a whole.
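The routing decision described above can be sketched with plain bitwise arithmetic: the router ANDs the destination address with each entry's subnet mask and compares the result with the entry's network number, with 0.0.0.0/0 acting as the default route. The table layout and names here are illustrative, not from any real router.

```java
// Minimal sketch of a routing-table lookup with a default-gateway fallback.
// Addresses are packed into 32-bit ints for the mask-and-compare step.
public class RouteLookup {
    public static int toInt(String dotted) {
        String[] p = dotted.split("\\.");
        return (Integer.parseInt(p[0]) << 24) | (Integer.parseInt(p[1]) << 16)
             | (Integer.parseInt(p[2]) << 8)  |  Integer.parseInt(p[3]);
    }

    // Each table entry is {network, mask, nextHop}. Returns the next hop for
    // the first matching entry, the default gateway if one is configured,
    // or null (the packet would be dropped).
    public static String nextHop(String dest, String[][] table) {
        int d = toInt(dest);
        String defaultGw = null;
        for (String[] entry : table) {
            int net = toInt(entry[0]), mask = toInt(entry[1]);
            if (net == 0 && mask == 0) { defaultGw = entry[2]; continue; }
            if ((d & mask) == net) return entry[2];   // network numbers match
        }
        return defaultGw;
    }
}
```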

1.3 Network Protocol in LAN

Most LANs connect workstations and personal computers. Each node (individual computer) in a LAN has its own CPU with which it executes programs, but it is also able to access data and devices anywhere on the LAN. This means that many users can share expensive devices, such as laser printers, as well as data. Users can also use the LAN to communicate with each other, by sending e-mail or engaging in chat sessions.

The following characteristics differentiate one LAN from another:

  • Topology: The geometric arrangement of devices on the network. For example, devices can be arranged in a ring or in a straight line.
  • Protocols: The rules and encoding specifications for sending data. The protocols also determine whether the network uses peer-to-peer or client/server architecture.
  • Media: Devices can be connected by twisted-pair wire, coaxial cables, or fiber optic cables. Some networks do without connecting media altogether, communicating instead via radio waves.

1.3.1. File Transfer Protocol

The File Transfer Protocol (FTP) provides the basic elements of file sharing between hosts. FTP uses TCP to create a virtual connection for control information and then creates a separate TCP connection for data transfers. The control connection uses an image of the TELNET protocol to exchange commands and messages between hosts.

1.3.2. User Datagram Protocol

The User Datagram Protocol (UDP) provides a simple, but unreliable message service for transaction-oriented services. Each UDP header carries both a source port identifier and destination port identifier, allowing high-level protocols to target specific applications and services among hosts.
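UDP's connectionless, port-addressed service can be demonstrated over the loopback interface with the standard `java.net.DatagramSocket` API: one socket sends a single datagram to another with no handshake, acknowledgement, or retransmission. The class name is illustrative.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

// Loopback sketch of UDP: the destination port number in the datagram is
// what lets the receiving host hand the payload to the right application.
public class UdpLoopback {
    public static String roundTrip(String message) {
        try (DatagramSocket receiver = new DatagramSocket(0);   // OS picks a free port
             DatagramSocket sender = new DatagramSocket()) {
            byte[] payload = message.getBytes(StandardCharsets.UTF_8);
            sender.send(new DatagramPacket(payload, payload.length,
                    InetAddress.getLoopbackAddress(), receiver.getLocalPort()));

            byte[] buf = new byte[1024];
            DatagramPacket in = new DatagramPacket(buf, buf.length);
            receiver.setSoTimeout(2000);    // UDP makes no delivery guarantee
            receiver.receive(in);
            return new String(in.getData(), 0, in.getLength(), StandardCharsets.UTF_8);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

Note that nothing here resends a lost datagram; any reliability must be built by the application, which is exactly the trade-off the section describes.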

1.3.3. Transmission Control Protocol

TCP provides a reliable stream delivery and virtual connection service to applications through the use of sequenced acknowledgement with retransmission of packets when necessary. TCP uses a 32-bit sequence number that counts bytes in the data stream. Each TCP packet contains the starting sequence number of the data in that packet, and the sequence number (called the acknowledgment number) of the last byte received from the remote peer. With this information, a sliding-window protocol is implemented. Forward and reverse sequence numbers are completely independent, and each TCP peer must track both its own sequence numbering and the numbering being used by the remote peer.
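The sliding-window bookkeeping above can be sketched as a toy receiver: an in-order segment advances the cumulative acknowledgement number by its length in bytes (sequence numbers count bytes, not packets), while a gap or duplicate simply re-acknowledges the old value. This is a simplification of real TCP (no buffering of out-of-order data), with invented names.

```java
// Toy cumulative-ACK tracker for one direction of a TCP connection.
public class TcpReceiver {
    private long expectedSeq;

    public TcpReceiver(long initialSeq) { this.expectedSeq = initialSeq; }

    // Process a segment starting at 'seq' carrying 'length' bytes; return the
    // acknowledgement number (the next byte expected from the remote peer).
    public long receive(long seq, int length) {
        if (seq == expectedSeq) {
            expectedSeq += length;   // in order: slide the window forward
        }
        // Otherwise the old ACK is repeated, which is how the sender learns
        // that a segment was lost and must be retransmitted.
        return expectedSeq;
    }
}
```

Because the forward and reverse streams are independent, a full implementation runs one such tracker per direction.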

1.4 Winsock 2.0 Architecture

Windows Sockets version 2.0 (WinSock 2) formalizes the API for a number of other protocol suites--ATM, IPX/SPX, and DECnet--and allows them to coexist simultaneously. It still retains full backward compatibility with the existing WinSock 1.1 specification--some of which is clarified further--so all existing WinSock applications can continue to run without modification (the only exceptions are WinSock 1.1 applications that use blocking hooks, which need to be rewritten to work without them).

WinSock 2 goes beyond simply allowing the coexistence of multiple protocol stacks; in theory it even allows the creation of applications that are network-protocol independent. A WinSock 2 application can transparently select a protocol based on its service needs. The application can adapt to differences in network names and addresses using the mechanisms WinSock 2 provides.

1.4.1. WinSock 2 Architecture

WinSock 2 has an all-new architecture that provides much more flexibility. The new WinSock 2 architecture allows for simultaneous support of multiple protocol stacks, interfaces, and service providers. There is still one DLL on top, but there is another layer below, and a standard service provider interface, both of which add flexibility.

WinSock 2 adopts the Windows Open Systems Architecture (WOSA) model, which separates the API from the protocol service provider. In this model the WinSock DLL provides the standard API, and each vendor installs its own service provider layer underneath. The API layer "talks" to a service provider via a standardized Service Provider Interface (SPI), and it is capable of multiplexing between multiple service providers simultaneously.

1.5 Transmission Control Protocol (TCP)

Initially, TCP was designed to recover from node or line failures where the network propagates routing table changes to all router nodes. Since the update takes some time, TCP is slow to initiate recovery. The TCP algorithms are not tuned to optimally handle packet loss due to traffic congestion. Instead, the traditional Internet response to traffic problems has been to increase the speed of lines and equipment in order to stay ahead of growth in demand.

TCP treats the data as a stream of bytes. It logically assigns a sequence number to each byte. The TCP packet has a header that says, in effect, "This packet starts with byte 379642 and contains 200 bytes of data." The receiver can detect missing or incorrectly sequenced packets. TCP acknowledges data that has been received and retransmits data that has been lost. The TCP design means that error recovery is done end-to-end between the client and server machines. There is no formal standard for tracking problems in the middle of the network, though each network has adopted some ad hoc tools. To ensure that all types of systems from all vendors can communicate, TCP/IP is absolutely standardized on the LAN. However, larger networks based on long distances and phone lines are more volatile. New technologies arise and become obsolete within a few years. With cable TV and phone companies competing to build the National Information Superhighway, no single standard can govern citywide, nationwide, or worldwide communications.

The original design of TCP/IP as a Network of Networks fits nicely within the current technological uncertainty. TCP/IP data can be sent across a LAN, or it can be carried within an internal corporate SNA network, or it can piggyback on the cable TV service. Furthermore, machines connected to any of these networks can communicate to any other network through gateways supplied by the network vendor.

1.6.  Data packet transmission

Data packet transmission consists of a series of handshaking sequences in which one side of the point-to-point connection between an end node and a repeater's local port makes a request and the other side acknowledges the request. A data packet transmission is requested by an end node and controlled by the repeater. When a data packet transmission is about to occur:

If an end node has a data packet ready to send, it transmits either a Request_Normal or Request_High control signal. Otherwise, the end node transmits the Idle_Up control signal.

  1. The repeater polls all local ports to determine which end nodes are requesting to send a data packet and at what priority level each request is (normal or high).
  2. The repeater selects the next end node with a high priority request pending. Ports are selected in port order. If no high priority requests are pending, then the next normal priority port is selected (in port order). This selection causes the selected port to receive the Grant signal. Packet transmission begins when the end node detects the Grant signal.
  3. The repeater then sends the Incoming signal to all other end nodes, alerting them to the possibility of an incoming packet. The repeater decodes the destination address from the frame being transmitted as it is being received.
  4. When an end node receives the Incoming control signal, it prepares to receive a packet by stopping the transmission of requests and listening on the media for the data packet.
  5. Once the repeater has decoded the destination address, the packet is delivered to the addressed end node or end nodes and to any promiscuous nodes. Those nodes not receiving the data packet receive the Idle_Down signal from the repeater.
  6. When the end node(s) receive the data packet, they return to their state prior to the reception of the data packet, either sending an Idle_Up signal or making a request to send a data packet.
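The selection rule in steps 1 and 2 above can be sketched as a small function. This is a simplification: it always scans from port 0 rather than continuing round-robin from the last granted port, and the signal names mirror the hypothetical control signals in the text.

```java
// Sketch of the repeater's grant decision: any pending high-priority request
// wins, in port order; otherwise the first normal-priority request is granted.
public class Repeater {
    public static final int IDLE = 0, REQUEST_NORMAL = 1, REQUEST_HIGH = 2;

    // Given each port's current request signal, return the port number that
    // receives the Grant signal, or -1 if every port is idle.
    public static int grant(int[] requests) {
        for (int port = 0; port < requests.length; port++) {
            if (requests[port] == REQUEST_HIGH) return port;
        }
        for (int port = 0; port < requests.length; port++) {
            if (requests[port] == REQUEST_NORMAL) return port;
        }
        return -1;
    }
}
```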

1.7.  Conclusion

WinSock 2 has an all-new architecture that provides much more flexibility. The new WinSock 2 architecture allows for simultaneous support of multiple protocol stacks, interfaces, and service providers. It targets the Win32 platform, but it is designed to be backward compatible, which means that even Windows 95 can use it without conflict. The 32-bit wsock32.dll ships with Windows NT and Windows 95 and runs over the Microsoft TCP/IP stack. These 32-bit environments also have a winsock.dll file that acts as a "thunk-layer" to allow 16-bit WinSock applications to run over the 32-bit wsock32.dll. Conversely, Microsoft's Win32s installs a 32-bit wsock32.dll thunk layer in 16-bit Windows environments (Windows version 3.1 and Windows for Workgroups 3.11) over any vendor's WinSock DLL currently in use.

LANs are capable of transmitting data at very fast rates, much faster than data can be transmitted over a telephone line; but the distances are limited, and there is also a limit on the number of computers that can be attached to a single LAN. The File Transfer Protocol (FTP), User Datagram Protocol (UDP) and Transmission Control Protocol (TCP) are commonly applied in LAN transactions. Data transmitted using UDP travels faster than data transmitted using TCP, but TCP provides better reliability and data integrity than UDP.

The client-server architecture basically consists of client machines and server machines. There are two traditional forms of client-server architecture. The first is the File Server, where clients request files from the File Server. This results in the entire file being sent to the client but necessitates many message exchanges across the network. The second is the Database Server, where clients pass SQL requests to the server. The Database Server executes each SQL statement and passes the results back to the client.

The source and destination IP address remains constant as the packet traverses a network. The packet may be forwarded to the router defined as the default gateway, or the router may drop the packet. Packets are forwarded to a default router in the belief that the default router has more network information in its routing table and will therefore be able to route the packet correctly on to its final destination.


Chapter 2: Data Encryption and Cryptography Technology

2.1  Introduction To Encryption And Cryptography Technology

Encryption is the conversion of a piece of data, or plaintext, into a form called ciphertext, which cannot be easily understood by unauthorized people. Decryption is the process of converting encrypted data, or ciphertext, back into its original form so that it can be understood. The conversion of plaintext into ciphertext, or vice versa, is performed using a cryptographic algorithm. Most encryption algorithms are based on mathematical operations that are easy to compute in one direction but very hard to reverse; public-key algorithms, for example, generally rely on the difficulty of factoring very large numbers (the keys), which are the products of large prime numbers. Many encryption programs use one key for both encrypting and decrypting messages, which is known as symmetric cryptography. This is a fast and simple method of encrypting messages and folders and is best for protecting local messages, files and folders.

Cryptography is the science of information security. Examples of cryptographic techniques include microdots and merging words with images. However, cryptography is most often associated with scrambling plaintext into ciphertext, then back again. Individuals who practice this field are known as cryptographers. Cryptography mainly pursues four objectives:

  • Confidentiality – The information cannot be understood by anyone for whom it was not intended.
  • Integrity – The information cannot be altered in storage or in transit between sender and receiver without the alteration being detected.
  • Non-repudiation – The creator or sender of the information cannot later deny having created or transmitted it.
  • Authentication – The sender and receiver can confirm each other's identity and the origin or destination of the information.

2.2 DES (Data Encryption Standard) and Implementation

DES is the U.S. Government's Data Encryption Standard, a product cipher that operates on 64-bit blocks of data, using a 56-bit key. Triple DES is a product cipher which, like DES, operates on 64-bit data blocks. There are several forms, each of which uses the DES cipher 3 times. Some forms use two 56-bit keys and some use three. The DES "modes of operation" may also be used with triple-DES.

Some people refer to E(K1,D(K2,E(K1,x))) as triple-DES. This method is intended for use in encrypting DES keys and IVs for "Automated Key Distribution". Its formal name is "Encryption and Decryption of a Single Key by a Key Pair". Others use the term "triple-DES" for E(K1,D(K2,E(K3,x))) or E(K1,E(K2,E(K3,x))).
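The two-key form E(K1,D(K2,E(K1,x))) can be composed from three single-DES operations using the standard JCE `Cipher` API. This sketch works on one 8-byte block with DES/ECB/NoPadding so each intermediate result stays exactly one block; the key bytes in the usage below are arbitrary illustrative values, and the class name is invented.

```java
import javax.crypto.Cipher;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.DESKeySpec;
import java.security.Key;

// Two-key triple DES built from single-DES primitives, mirroring the
// E(K1, D(K2, E(K1, x))) construction described in the text.
public class TripleDes {
    private static byte[] des(int mode, byte[] keyBytes, byte[] block) {
        try {
            Key key = SecretKeyFactory.getInstance("DES")
                    .generateSecret(new DESKeySpec(keyBytes));
            Cipher c = Cipher.getInstance("DES/ECB/NoPadding");
            c.init(mode, key);
            return c.doFinal(block);
        } catch (Exception e) { throw new RuntimeException(e); }
    }

    // E(K1, D(K2, E(K1, x))) on a single 8-byte block.
    public static byte[] encrypt(byte[] k1, byte[] k2, byte[] block) {
        return des(Cipher.ENCRYPT_MODE, k1,
               des(Cipher.DECRYPT_MODE, k2,
               des(Cipher.ENCRYPT_MODE, k1, block)));
    }

    // The inverse: D(K1, E(K2, D(K1, y))).
    public static byte[] decrypt(byte[] k1, byte[] k2, byte[] block) {
        return des(Cipher.DECRYPT_MODE, k1,
               des(Cipher.ENCRYPT_MODE, k2,
               des(Cipher.DECRYPT_MODE, k1, block)));
    }
}
```

A useful property of the EDE arrangement is that setting K1 = K2 reduces the whole construction to single DES, which eased migration from existing DES hardware.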

Key encrypting keys may be a single DEA key or a DEA key pair. Key pairs should be used where additional security is needed (e.g., the data protected by the key(s) has a long security life). A key pair shall not be encrypted or decrypted using a single key.

Privacy protection using symmetric algorithm DES (the government-sponsored Data Encryption Standard) is relatively easy in small networks, requiring the exchange of secret encryption keys among each party. As a network proliferates, the secure exchange of secret keys becomes increasingly expensive and unwieldy. Consequently, this solution alone is impractical for even moderately large networks.

DES has an additional drawback: it requires the sharing of a secret key. Each person must trust the other to guard the pair's secret key and reveal it to no one. Since users must have a different key for every person they communicate with, they must trust each and every one of those people with one of their secret keys. This means that in practical implementations, secure communication can only take place between people with some kind of prior relationship, be it personal or professional.

Fundamental issues that are not addressed by DES are authentication and non-repudiation. Shared secret keys prevent either party from proving what the other may have done. Either can surreptitiously modify data and be assured that a third party would be unable to identify the culprit. The same key that makes it possible to communicate securely could be used to create forgeries in the other user's name.

2.3  RSA-based Cryptographic Schemes

The RSA algorithm was invented by Ronald L. Rivest, Adi Shamir, and Leonard Adleman in 1977. There are a variety of different cryptographic schemes and protocols based on the RSA algorithm in products all over the world. The RSAES-OAEP encryption scheme and the RSASSA-PSS signature scheme with appendix are recommended for new applications.

2.3.1 RSAES-OAEP (RSA Encryption Scheme - Optimal Asymmetric Encryption Padding) 

It is a public-key encryption scheme combining the RSA algorithm with the OAEP method. The inventors of OAEP are Mihir Bellare and Phillip Rogaway, with enhancements by Don B. Johnson and Stephen M. Matyas.


2.3.2 RSASSA-PSS (RSA Signature Scheme with Appendix - Probabilistic Signature Scheme)

It is an asymmetric signature scheme with appendix combining the RSA algorithm with the PSS encoding method. The inventors of the PSS encoding method are Mihir Bellare and Phillip Rogaway. During efforts to adopt RSASSA-PSS into the P1363a standards effort, certain adaptations to the original version of RSA-PSS were made by Bellare and Rogaway and also by Burt Kaliski (the editor of IEEE P1363a) to facilitate implementation and integration into existing protocols.

Here is a small example of RSA. Plaintexts are positive integers up to 2^{512}. Keys are quadruples (p,q,e,d), with p a 256-bit prime number, q a 258-bit prime number, and d and e large numbers with (de - 1) divisible by (p-1)(q-1). We define E_K(P) = P^e mod pq and D_K(C) = C^d mod pq. All quantities are readily computed by classic and modern number-theoretic algorithms: Euclid's algorithm for computing the greatest common divisor yields d from e, and tests for large "probable" primes, such as the Fermat test, yield p and q.

Now E_K is easily computed from the pair (pq,e)---but, as far as anyone knows, there is no easy way to compute D_K from the pair (pq,e). So whoever generates K can publish (pq,e). Anyone can send a secret message to him; he is the only one who can read the messages.
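The arithmetic above can be made concrete with `java.math.BigInteger` and deliberately tiny primes (real keys use primes hundreds of bits long, as the text notes). The helper names are invented; `modInverse` implements the extended Euclidean algorithm that yields d from e and (p-1)(q-1).

```java
import java.math.BigInteger;

// Toy RSA with small primes: E_K(P) = P^e mod pq, D_K(C) = C^d mod pq.
public class ToyRsa {
    // Returns {n, e, d} for the given small primes and public exponent e.
    public static BigInteger[] keyGen(int p, int q, int e) {
        BigInteger P = BigInteger.valueOf(p), Q = BigInteger.valueOf(q);
        BigInteger n = P.multiply(Q);                              // the public modulus pq
        BigInteger phi = P.subtract(BigInteger.ONE)
                          .multiply(Q.subtract(BigInteger.ONE));   // (p-1)(q-1)
        BigInteger E = BigInteger.valueOf(e);
        return new BigInteger[]{n, E, E.modInverse(phi)};          // d: (de - 1) divisible by phi
    }

    public static long encrypt(long m, BigInteger[] key) {         // E_K(P) = P^e mod pq
        return BigInteger.valueOf(m).modPow(key[1], key[0]).longValueExact();
    }

    public static long decrypt(long c, BigInteger[] key) {         // D_K(C) = C^d mod pq
        return BigInteger.valueOf(c).modPow(key[2], key[0]).longValueExact();
    }
}
```

Publishing (pq, e) lets anyone encrypt, while only the holder of d can decrypt, exactly as described above.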

The primary advantage of RSA public-key cryptography is increased security and convenience: private keys never need to be transmitted or revealed to anyone. In a secret-key system, by contrast, the secret keys must be transmitted (either manually or through a communication channel), and there may be a chance that an enemy can discover the secret keys during their transmission.

2.4  Java™ Cryptography Architecture

The Java Security API is a new Java core API, built around the java.security package (and its subpackages). This API is designed to allow developers to incorporate both low-level and high-level security functionality into their Java applications. The first release of Java Security in JDK 1.1 contains a subset of this functionality, including APIs for digital signatures and message digests. In addition, there are abstract interfaces for key management, certificate management and access control. Specific APIs to support X.509 v3 certificates and other certificate formats, and richer functionality in the area of access control, will follow in subsequent JDK releases.

The Java Cryptography Extension (JCE) extends the JCA API to include encryption and key exchange. Together, it and the JCA provide a complete, platform-independent cryptography API. The JCE will be provided in a separate release because it is not currently exportable outside the United States.

The Java Cryptography Architecture (JCA) was designed around these principles:

  • Implementation independence and interoperability
  • Algorithm independence and extensibility

Implementation independence and algorithm independence are complementary: their aim is to let users of the API utilize cryptographic concepts, such as digital signatures and message digests, without concern for the implementations or even the algorithms being used to implement these concepts. When complete algorithm-independence is not possible, the JCA provides developers with standardized algorithm-specific APIs. When implementation-independence is not desirable, the JCA lets developers indicate the specific implementations they require.
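The algorithm-independent style described above can be illustrated with the JCA engine classes `MessageDigest` and `Signature`: the caller names an algorithm (here "SHA-256" and "SHA256withDSA", which are standard JCA algorithm names) and the provider framework supplies whichever installed implementation matches. The class name is invented for this sketch.

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.MessageDigest;
import java.security.Signature;

// Message digests and digital signatures through the provider-based JCA API.
public class JcaDemo {
    public static byte[] digest(byte[] data) {
        try {
            return MessageDigest.getInstance("SHA-256").digest(data);
        } catch (Exception e) { throw new RuntimeException(e); }
    }

    public static boolean signAndVerify(byte[] data) {
        try {
            // Generate a DSA key pair, sign with the private key...
            KeyPairGenerator kpg = KeyPairGenerator.getInstance("DSA");
            kpg.initialize(2048);
            KeyPair kp = kpg.generateKeyPair();

            Signature signer = Signature.getInstance("SHA256withDSA");
            signer.initSign(kp.getPrivate());
            signer.update(data);
            byte[] sig = signer.sign();

            // ...then verify with the matching public key.
            Signature verifier = Signature.getInstance("SHA256withDSA");
            verifier.initVerify(kp.getPublic());
            verifier.update(data);
            return verifier.verify(sig);
        } catch (Exception e) { throw new RuntimeException(e); }
    }
}
```

Nothing in the calling code depends on which vendor's provider actually implements SHA-256 or DSA, which is the point of the architecture.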

2.5  Java™ Cryptography Extension (JCE) 1.2.1

The Java™ Cryptography Extension (JCE) 1.2.1 is a package that provides a framework and implementations for encryption, key generation and key agreement, and Message Authentication Code (MAC) algorithms. Support for encryption includes symmetric, asymmetric, block, and stream ciphers. The software also supports secure streams and sealed objects.
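The MAC support mentioned above lives in `javax.crypto.Mac`: a shared secret key plus a message yields a keyed checksum that detects tampering. This sketch uses HmacSHA256 (the original JCE 1.2.1 era shipped HmacMD5/HmacSHA1); the class name is illustrative.

```java
import javax.crypto.KeyGenerator;
import javax.crypto.Mac;
import javax.crypto.SecretKey;

// Message Authentication Codes through the JCE Mac engine class.
public class MacDemo {
    public static SecretKey newKey() {
        try {
            return KeyGenerator.getInstance("HmacSHA256").generateKey();
        } catch (Exception e) { throw new RuntimeException(e); }
    }

    // Compute the MAC tag for a message under the shared secret key.
    public static byte[] mac(SecretKey key, byte[] message) {
        try {
            Mac m = Mac.getInstance("HmacSHA256");
            m.init(key);
            return m.doFinal(message);
        } catch (Exception e) { throw new RuntimeException(e); }
    }
}
```

The receiver recomputes the tag with the same key and compares; without the key, an attacker cannot produce a valid tag for a modified message.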

JCE 1.2.1 is designed so that other qualified cryptography libraries can be plugged in as service providers, and new algorithms can be added seamlessly. (Qualified providers include those approved for export and those certified for domestic use only. Qualified providers are signed by a trusted entity.) JCE 1.2.1 supplements the Java™ 2 platform, which already includes interfaces and implementations of digital signatures and message digests.

This release of JCE is a non-commercial reference implementation that demonstrates a working example of the JCE 1.2.1 APIs. A reference implementation is similar to a proof-of-concept implementation of an API specification. It is used to demonstrate that the specification is implementable and that various compatibility tests can be written against it.

A non-commercial implementation typically lacks the overall completeness of a commercial-grade product. While the implementation meets the API specification, it will be lacking things such as a fully-featured toolkit, sophisticated debugging tools, commercial-grade documentation and regular maintenance updates.

The Java 2 platform already has implementations and interfaces for digital signatures and message digests. JCE 1.2 was created to extend the Java Cryptography Architecture (JCA) APIs available in the Java 2 platform to include APIs and implementations for cryptographic services that were subjected to U.S. export control regulations. JCE 1.2 was released separately as an extension to the Java 2 platform, in accordance with U.S. export control regulations.

Important Features of JCE 1.2.1

  • Pure Java implementation.
  • Pluggable framework architecture that enables only qualified providers to be plugged in.
  • Exportable (in binary form only).
  • Single distribution of the JCE 1.2.1 software from Sun Microsystems for both domestic and global users, with jurisdiction policy files that specify that there are no restrictions on cryptographic strengths.

2.6 Conclusions

DES is available in software as a content-encryption algorithm. Several people have made DES code available via ftp: Stig Ostholm [FTPSO]; BSD [FTPBK]; Eric Young [FTPEY]; Dennis Furguson [FTPDF]; Mark Riordan [FTPMR]; Phil Karn [FTPPK]. A Pascal listing of DES is also given in Patterson [PAT87]. Antti Louko <[email protected]> has written a version of DES with BigNum packages in [FTPAL]. Therefore, we are able to obtain the DES algorithm and use it to encrypt our administrator password, user passwords and the server's database.

The RSA algorithm is a very popular encryption algorithm. There are collections of links to RSA-related documents on the Internet, and a variety of different cryptographic schemes and protocols based on the RSA algorithm in products all over the world. The most recommended RSA schemes are the RSAES-OAEP encryption scheme and the RSASSA-PSS signature scheme. We will look into their encoding methods and find a suitable choice for our server's database encryption.

The "Java Cryptography Architecture" (JCA) refers to the framework for accessing and developing cryptographic functionality for the Java Platform. It encompasses the parts of the JDK 1.1 Java Security API related to cryptography (currently, nearly the entire API), as well as a set of conventions and specifications provided in this document. It introduces a "provider" architecture that allows for multiple and interoperable cryptography implementations.

The Java™ Cryptography Extension (JCE) 1.2.1 is used to provide better encryption for the system. This package requires the Java™ 2 SDK v1.2.1 or later, or the Java™ 2 Runtime Environment v1.2.1 or later, to be installed. It is an extension of the Java Cryptography Architecture (JCA) APIs available in the Java 2 platform. Our project, access control of lab computers in a client-server environment, needs encryption for the administrator database and the smart card as well. Therefore, we study JCE 1.2.1 and make use of its cryptographic strengths.


Chapter 3:  Smart Card Technology

3.1. About The Java Card Technology

The Java Card specifications enable Java™ technology to run on smart cards and other devices with limited memory. The Java Card API also allows applications written for one smart card platform enabled with Java Card technology to run on any other such platform. With these two capabilities, Java smart card technology becomes more efficient to use.

The Java Card Application Environment (JCAE) is licensed on an OEM-basis to smart card manufacturers, representing more than 90 percent of the worldwide smart card manufacturing capacity.

There are several unique benefits of the Java Card technology, such as:

  • Platform Independent - Java Card technology applets that comply with the Java Card API specification will run on cards developed using the JCAE - allowing developers to use the same Java Card technology applet to run on different vendors' cards.
  • Multi-Application Capable - Multiple applications can run on a single card. In the Java programming language, the inherent design around small, downloadable code elements makes it easy to securely run multiple applications on a single card.
  • Post-Issuance of Applications - The installation of applications, after the card has been issued, provides card issuers with the ability to dynamically respond to their customer's changing needs. For example, if a customer decides to change the frequent flyer program associated with the card, the card issuer can make this change, without having to issue a new card.
  • Flexible - The Object-Oriented methodology of the Java Card technology provides flexibility in programming smart cards.
  • Compatible with Existing Smart Card Standards - The Java Card API is compatible with formal international standards, such as ISO 7816, and industry-specific standards, such as Europay/MasterCard/Visa (EMV).

3.2. What are Smart Cards and Their Benefits?

A smart card is a card similar in size to today's plastic cards that has a chip embedded in it. Adding the chip turns the card into a smart card with the power to serve many different uses. As an access control device, a smart card makes personal and business data available only to the appropriate users. Another application provides users with the ability to make a purchase or exchange value. Smart cards provide data portability, security and convenience. There are two kinds of smart cards: "intelligent" smart cards contain a central processing unit (CPU) that can store and secure information and "make decisions" as required by the card issuer's specific application needs. Because "intelligent" cards offer read/write capability, new information can be added and processed.

The second type is the memory card. Memory cards are primarily information-storage cards that contain a stored value, which the user can spend in payphone, retail, vending or related transactions. The intelligence of the integrated circuit chip in both types of cards allows them to protect the information being stored from damage or theft. For these reasons, smart cards are much more secure than magnetic stripe cards, which carry their information on the outside of the card and can be easily copied. There are also contactless smart cards. A contactless smart card does not need to be inserted into a reader; instead it is recognized by a contactless smart card terminal that must be nearby.


The technology is already in place to allow the consumers to combine services, such as their credit cards, long distance services, and ATM cards into one card. Smart cards can also perform other functions including providing security for Internet users and allowing travellers to check into hotels. Smart cards provide data portability, security and convenience. Smart cards help businesses evolve and expand their products and services in a changing global market place. Banks, telecommunications companies, software and hardware companies and airlines all have the opportunity to tailor their card products and services to better differentiate their offerings and brands.
