CLIENT/SERVER TECHNOLOGY: MAINTENANCE AND ADMINISTRATION OF C/S SYSTEMS

6. MAINTENANCE AND ADMINISTRATION OF C/S SYSTEMS

Although system maintenance and administration is a complicated task even for a single centralized system, the complexity increases significantly in a C/S system because of scalability, heterogeneity, security, distribution, naming, and related concerns. Efficient network and system-management tools are critical for reliable operation of the distributed computing environments that underlie C/S systems. In recent years, open and multivendor technologies have been adopted in the construction of C/S systems. To administer and maintain such systems as a whole, standardized system management is needed for equipment from different vendors.

Architecture of System Management
OSI Management Framework

The OSI management framework defines five functional areas of management activity involved in distributed system management:

1. Configuration management: This involves collecting information on the system configuration and managing changes to the system configuration. Inventory control, installation, and version control of hardware and software are also included in this area.

2. Fault management: This involves identifying system faults as they occur, isolating the cause of the faults, and correcting them by contingency fallback, disaster recovery, and so on.

3. Security management: This involves identifying locations of sensitive data and securing the system access points as appropriate to limit the potential for unauthorized intrusions. Encryption, password requirements, physical device security, and security policy are also included in this area.

4. Performance management: This involves gathering data on the usage of system resources, analyzing these data, and acting on performance predictions to maintain optimal system performance. Real-time and historical statistical information about traffic volume, resource usage, and congestion are also included in this area.

5. Accounting management: This involves gathering data on resource utilization, setting usage shares, and generating charging and usage reports.

System Management Architecture

The items that are managed by system management can be classified into three layers: physical layer, operating system and network protocol layer, and application layer, as shown in Figure 17.

1. The physical layer includes client devices, server devices, and network elements such as LANs and WANs, as well as computing platforms and system and application software.

2. The operating system and network protocol layer includes the items for managing communication protocols to ensure interoperability between units. In recent years, adoption of the standard TCP/IP protocol suite, on which the Internet is based, has been increasing. In a system based on the TCP/IP protocol stack, SNMP can be used for system management.

3. The application layer includes the items for managing system resources, such as CPU capacity and memory.

The system management architecture consists of the following components. These components may be either physical or logical, depending on the context in which they are used:

• A network management station (NMS) is a centralized workstation or computer that collects data from agents over a network, analyzes the data, and displays information in graphical form.

• A managed object is a logical representation of an element of hardware or software that the management system accesses for the purpose of monitoring and control.

• An agent is a piece of software within or associated with a managed object that collects and stores information, responds to requests from the network management station, and generates unsolicited event messages.

• A manager is software contained within a computer or workstation that controls the managed objects. It interacts with agents according to rules specified within the management protocol.

• A management information base (MIB) is a database containing information of use to network management, including information that reflects the configuration and behavior of nodes and parameters that can be used to control their operation.

Network Management Protocol

An essential function in achieving the goal of network management is acquiring information about the network. A standardized set of network management protocols has been developed to help extract the necessary information from all network elements. There are two typical standardized protocols for network management: simple network management protocol (SNMP), developed under Internet sponsorship, and common management information protocol (CMIP), from ISO (International Organization for Standardization) and ITU-T (International Telecommunication Union-Telecommunication Standardization Sector).

SNMP is designed to work with the TCP/IP protocol stack and establishes standards for collecting management data and for performing the security, performance, fault, accounting, and configuration functions associated with network management. SNMP runs over UDP, a very simple, unacknowledged, connectionless transport protocol. CMIP is designed to support a richer set of network-management functions and to work with all systems conforming to OSI standards. Both SNMP and CMIP use an object-oriented technique to describe the information to be managed, where the software describing actions is encapsulated with the rest of the agent code within the managed object. CMIP requires considerably more overhead to implement than SNMP.

Because SNMP is the most widely implemented protocol for network management today, an SNMP management system is described below. It consists of the following components, as shown in Figure 18:

1. An SNMP agent is a software entity that resides on a managed system or a target node, maintains the node information, and reports on its status to managers.

2. An SNMP manager is a software entity that performs management tasks by issuing management requests to agents.

3. An MIB is a database containing the node information. It is maintained by an agent.

SNMP is an asynchronous request/response protocol that supports the following operations (version 2); a toy sketch of their semantics follows the list:

• Get: a request issued by a manager to read the value of a managed object

• GetNext: a request made by a manager to traverse an MIB tree

• GetBulk: a command issued by a manager, by which an agent can return as many successor variables in the MIB tree as will fit in a message

• Set: a request issued by a manager to modify the value of a managed object

• Trap: a notification issued from an agent in the managed system to a manager that some unusual event has occurred

• Inform: a command sent by a manager to other managers, by which managers can exchange management information
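The following sketch mimics the semantics of the Get, GetNext, and Set operations against a dictionary-based MIB held by an agent. It is plain Python, not a real SNMP implementation: the OIDs and values are invented for illustration, and lexicographic string ordering stands in for proper OID ordering.

```python
# Toy illustration of SNMP Get / GetNext / Set semantics.
# The OIDs and values below are invented; a real agent would use a
# standards-defined MIB and the SNMP wire protocol over UDP.

class ToyAgent:
    def __init__(self, mib):
        # The MIB as a mapping from OID string to value.
        self.mib = dict(sorted(mib.items()))

    def get(self, oid):
        """Get: return the value of a single managed object."""
        return self.mib.get(oid, "noSuchObject")

    def get_next(self, oid):
        """GetNext: return the first OID/value pair that follows `oid`
        (string order stands in for true OID order in this toy)."""
        for candidate in sorted(self.mib):
            if candidate > oid:
                return candidate, self.mib[candidate]
        return None, "endOfMibView"

    def set(self, oid, value):
        """Set: modify the value of a managed object."""
        self.mib[oid] = value
        return value


agent = ToyAgent({
    "1.3.6.1.2.1.1.1.0": "Toy router, version 1.0",   # sysDescr-like entry
    "1.3.6.1.2.1.1.3.0": 123456,                      # sysUpTime-like entry
    "1.3.6.1.2.1.1.5.0": "router-01",                 # sysName-like entry
})

print(agent.get("1.3.6.1.2.1.1.1.0"))        # manager issues Get
print(agent.get_next("1.3.6.1.2.1.1.1.0"))   # manager walks the MIB tree
agent.set("1.3.6.1.2.1.1.5.0", "router-02")  # manager issues Set
```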

In this case, the managed system is a node such as a workstation, personal computer, or router. HP’s OpenView and Sun Microsystems’ SunNet Manager are well-known commercial SNMP managers.

System management functions are easily decomposed into many separate functions or objects that can be distributed over the network. It is therefore natural to connect those objects using a CORBA ORB for interprocess communication. CORBA provides a modern and natural protocol for representing managed objects, defining their services, and invoking their methods via an ORB. Tivoli Management Environment (TME) is a CORBA-based system-management framework that is rapidly being adopted across the distributed UNIX market.

Security Management

C/S systems introduce new security threats beyond those in traditional host-centric systems. In a C/S system, it is more difficult to define the perimeter, the boundary between what you are protecting and the outside world. From the viewpoint of distributed systems, the problems are compounded by the need to protect information during communication and by the need for the individual components to work together. The network between clients and servers is vulnerable to eavesdropping crackers, who can sniff the network to obtain user IDs and passwords, read confidential data, or modify information. In addition, getting all of the individual components (including human beings) of the system to work as a single unit requires some degree of trust.

To manage security in a C/S system, it is necessary to understand what threats or attacks the system is subject to. A threat is any circumstance or event with the potential to cause harm to a system. A system’s security policy identifies the threats that are deemed to be important and dictates the measures to be taken to protect the system.

Threats

Threats can be categorized into four different types:

1. Disclosure or information leakage: Information is disclosed or revealed to an unauthorized person or process. This involves direct attacks such as eavesdropping or wiretapping, or more subtle attacks such as traffic analysis.

2. Integrity violation: The consistency of data is compromised through any unauthorized change to information stored on a computer system or in transit between computer systems.

3. Denial of service: Legitimate access to information or computer resources is intentionally blocked as a result of malicious action taken by another user.

4. Illegal use: A resource is used by an unauthorized person or process or in an unauthorized way.

Security Services

In the computer communications context, the main security measures are known as security services. There are some generic security services that would apply to a C/S system:

Authentication: This involves determining that a request originates with a particular person or process and that it is an authentic, nonmodified request.

Access control: This is the ability to limit and control the access to information and network resources by or for the target system.

Confidentiality: This ensures that the information in a computer system and transmitted information are accessible for reading only by authorized persons or processes.

Data integrity: This ensures that only authorized persons or processes are able to modify data in a computer system and transmitted information.

Nonrepudiation: This ensures that neither the sender nor the receiver of a message is able to deny that the data exchange occurred.

Security Technologies

Several security technologies are fundamental to the implementation of these security services.

Cryptography Cryptographic systems or cryptosystems can be classified into two distinct types: symmetric (or secret-key) and public-key (or asymmetric) cryptosystems. In a symmetric cryptosystem, a single key and the same algorithm are used for both encryption and decryption. The most widely used symmetric cryptosystem is the Data Encryption Standard (DES), which is the U.S. standard for commercial use. In a public-key cryptosystem, instead of the one key of a symmetric cryptosystem, two keys are employed, one to control encryption and the other to control decryption. One of these keys can be made public and the other is kept secret. The best-known public-key cryptosystem is RSA, developed by Rivest, Shamir, and Adleman at MIT (1978).

The major problem in using cryptography is that it is necessary to disseminate the encryption/decryption keys to all parties that need them and to ensure that the key distribution mechanism is not easily compromised. In a public-key cryptosystem, the public key does not need to be protected, which alleviates the problem of key distribution. However, a public key still needs to be distributed in an authenticated way to protect it from forgery. Public-key cryptosystems thus have advantages in key distribution, but their implementations yield very slow processing rates. For example, encryption by RSA is about 1000 times slower than DES in hardware and about 100 times slower than DES in software. For these reasons, public-key cryptosystems are usually limited to use in key distribution and digital signatures, and symmetric cryptosystems are used to protect the actual data or plaintexts.
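The following sketch illustrates this division of labor: a public-key cryptosystem is used only to distribute a symmetric session key, and the symmetric cryptosystem protects the actual data. It assumes the third-party Python cryptography package; RSA-OAEP and Fernet (an AES-based construction) are illustrative algorithm choices, not the only possible ones.

```python
# Hybrid encryption sketch: a public-key cryptosystem distributes the
# symmetric key, and the symmetric key protects the bulk data.
# Assumes the third-party `cryptography` package (pip install cryptography).
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Receiver publishes an RSA public key and keeps the private key secret.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Sender: encrypt the plaintext with a fresh symmetric session key...
session_key = Fernet.generate_key()
ciphertext = Fernet(session_key).encrypt(b"the actual data or plaintext")

# ...and encrypt only the small symmetric key with the receiver's public key.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(session_key, oaep)

# Receiver: unwrap the session key with the private key, then decrypt the data.
recovered_key = private_key.decrypt(wrapped_key, oaep)
print(Fernet(recovered_key).decrypt(ciphertext))
```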

Data integrity and data origin authentication for a message can be provided by hash or message digest functions. Instead of using keys, cryptographic hash functions map a potentially large message onto a small, fixed-length number, the digest. Hash functions are used in sealing or digital signature processes, so they must be truly one-way; that is, it must be computationally infeasible to construct an input message that hashes to a given digest or to construct two messages that hash to the same digest. The most widely used hash function is message digest version 5 (MD5).
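A minimal sketch of a message digest in Python, using the standard hashlib module. MD5 is shown because the text names it; in current practice MD5 is no longer considered collision-resistant, so SHA-256 is shown as well.

```python
# Message digest sketch using Python's standard hashlib module.
import hashlib

message = b"transfer $100 to account 42"
digest = hashlib.md5(message).hexdigest()
print(digest)  # small fixed-length number derived from the whole message

# Any change to the message yields a completely different digest,
# which is what makes digests useful for detecting modification.
tampered = b"transfer $900 to account 42"
print(hashlib.md5(tampered).hexdigest())
print(hashlib.sha256(message).hexdigest())  # a modern alternative to MD5
```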

Authentication Protocol In the context of a C/S system, authentication is the most important of the security services because the other security services depend on it in some way. When a client wishes to establish a secure channel with a server, the two parties must first identify each other through authentication. There are three common protocols for implementing authentication: three-way handshake authentication, trusted-third-party authentication, and public-key authentication. One of the trusted-third-party protocols is Kerberos, a TCP/IP-based network authentication protocol developed as part of Project Athena at MIT. Kerberos permits a client and a server to authenticate each other without any message going over the network in the clear. It also arranges for the secure exchange of session encryption keys between the client and the server. The trusted third party is sometimes called an authentication server.

A simplified version of the third-party authentication in Kerberos is shown in Figure 19, and a toy simulation of the exchange follows the numbered steps below. The Kerberos protocol assumes that the client and the server each share a secret key, respectively Kc and Ks, with the authentication server. In Figure 19, [M]K denotes the encryption of message M with key K.

1. The client first sends a message to the authentication server that identifies both itself and the server.

2. The authentication server then generates a timestamp T, a lifetime L, and a new session key K and replies to the client with a two-part message. The first part, [T, L, K, IDs]Kc, encrypts the three values T, L, and K, along with the server’s identifier IDs, using the key Kc. The second part, [T, L, K, IDc]Ks, encrypts the three values T, L, and K, along with the client’s identifier IDc, using the key Ks.

3. The client receives this message and decrypts only the first part. The client then transfers the second part to the server along with the encryption [IDc, T ]K of IDc and T using the session key K, which is decrypted from the first part.

4. On receipt of this message, the server decrypts the first part, [T, L, K, IDc]Ks, originally encrypted by the authentication server using Ks, and in so doing recovers T, K, and IDc. Then the server confirms that IDc and T are consistent in the two halves of the message. If they are consistent, the server replies with a message [T + 1]K that encrypts T + 1 using the session key K.

5. Now the client and the server can communicate with each other using the shared session key K.
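The following toy simulation walks through the five steps above. It uses Fernet, from the third-party Python cryptography package, as a stand-in for the symmetric cipher shared with the authentication server; the identifiers, timestamp handling, and message formats are simplified and are not those of real Kerberos.

```python
# Toy simulation of the simplified Kerberos exchange described above.
import json, time
from cryptography.fernet import Fernet

Kc = Fernet.generate_key()   # secret key shared by client and auth server
Ks = Fernet.generate_key()   # secret key shared by server and auth server

# Step 1: the client tells the auth server who it is and which server it wants.
request = {"client": "IDc", "server": "IDs"}

# Step 2: the auth server generates T, L, and a session key K, and builds two
# encrypted parts: one readable only by the client, one only by the server.
T, L, K = time.time(), 300, Fernet.generate_key().decode()
part_for_client = Fernet(Kc).encrypt(json.dumps(
    {"T": T, "L": L, "K": K, "IDs": request["server"]}).encode())
ticket_for_server = Fernet(Ks).encrypt(json.dumps(
    {"T": T, "L": L, "K": K, "IDc": request["client"]}).encode())

# Step 3: the client decrypts its part, recovers K, and sends the server the
# ticket plus an authenticator [IDc, T] encrypted under the session key K.
client_view = json.loads(Fernet(Kc).decrypt(part_for_client))
K_client = client_view["K"].encode()
authenticator = Fernet(K_client).encrypt(json.dumps(
    {"IDc": "IDc", "T": client_view["T"]}).encode())

# Step 4: the server decrypts the ticket with Ks, recovers K, and checks that
# IDc and T are consistent between the ticket and the authenticator.
ticket = json.loads(Fernet(Ks).decrypt(ticket_for_server))
K_server = ticket["K"].encode()
auth = json.loads(Fernet(K_server).decrypt(authenticator))
assert auth["IDc"] == ticket["IDc"] and auth["T"] == ticket["T"]

# The server replies with [T + 1]K to prove that it, too, knows K,
# and both sides now share the session key K (step 5).
reply = Fernet(K_server).encrypt(
    json.dumps({"T_plus_1": ticket["T"] + 1}).encode())
print(json.loads(Fernet(K_client).decrypt(reply)))
```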

Message Integrity Protocols There are two typical ways to ensure the integrity of a message. One uses a public-key cryptosystem such as RSA to produce a digital signature, and the other uses both a message digest such as MD5 and a public-key cryptosystem to produce a digital signature. In the latter type, a hash function is used to generate a message digest from the message content requiring protection. The sender encrypts the message digest using the public-key cryptosystem in the authentication mode; the encryption key is the private key of the sender. The encrypted message digest is sent as an appendix along with the plaintext message. The receiver decrypts the appendix using the corresponding decryption key (the public key of the sender) and compares it with the message digest that is computed from the received message by the same hash function. If the two are the same, then the receiver is assured that the sender knew the encryption key and that the message contents were not changed en route.
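A minimal sketch of the latter scheme, assuming the third-party Python cryptography package. RSA-PSS with SHA-256 is an illustrative choice; the library hashes the message and applies the sender's private key in a single sign call, and the receiver's verify call recomputes the digest and checks it against the appendix.

```python
# Digital signature sketch: message digest + public-key cryptosystem.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.exceptions import InvalidSignature

sender_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
sender_public = sender_private.public_key()

message = b"pay 100 units to Alice"
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

# Sender: sign the digest of the message with the private key; the signature
# is sent as an appendix along with the plaintext message.
signature = sender_private.sign(message, pss, hashes.SHA256())

# Receiver: recompute the digest from the received message and check it
# against the appendix using the sender's public key.
try:
    sender_public.verify(signature, message, pss, hashes.SHA256())
    print("signature valid: message is authentic and unmodified")
except InvalidSignature:
    print("signature check failed: altered message or wrong sender")
```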

Access Control Access control contributes to achieving the security goals of confidentiality, integrity, and legitimate use. The general model for access control assumes a set of active entities, called subjects, that attempt to access members of a set of resources, called objects. The access-control model is based on the access control matrix, in which rows correspond to subjects (users) and columns correspond to objects (targets). Each matrix entry states the access actions (e.g., read, write, and execute) that the subject may perform on the object. The access control matrix is implemented in either of two ways, sketched in the example after this list:

Capability list: a row-wise implementation, effectively a ticket that authorizes the holder (subject) to access specified objects with specified actions

Access control list (ACL): a column-wise implementation, also an attribute of an object stating which subjects can invoke which actions on it
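A toy sketch of the access control matrix and its two implementations. The subjects, objects, and access actions are invented for illustration.

```python
# Toy access control matrix with both implementations described above:
# a column-wise access control list (ACL) per object and a row-wise
# capability list per subject.
ACCESS_MATRIX = {
    # subject -> object -> set of permitted actions
    "alice": {"payroll.db": {"read", "write"}, "report.doc": {"read"}},
    "bob":   {"report.doc": {"read", "write"}},
}

def acl_for(obj):
    """Column of the matrix: which subjects may do what to this object."""
    return {subj: rights[obj] for subj, rights in ACCESS_MATRIX.items()
            if obj in rights}

def capabilities_for(subj):
    """Row of the matrix: a ticket listing what this subject may access."""
    return ACCESS_MATRIX.get(subj, {})

def check_access(subj, obj, action):
    """Reference-monitor style check against the matrix."""
    return action in ACCESS_MATRIX.get(subj, {}).get(obj, set())

print(acl_for("report.doc"))                        # ACL for one object
print(capabilities_for("alice"))                    # alice's capability list
print(check_access("bob", "payroll.db", "read"))    # False: access denied
```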

Web Security Protocols: SSL and S-HTTP As the Web became popular and commercial enterprises began to use the Internet, it became obvious that security services such as integrity and authentication are necessary for transactions on the Web. There are two widely used protocols that address this problem: secure socket layer (SSL) and secure HTTP (S-HTTP). SSL is a general-purpose protocol that sits between the application layer and the transport layer. The security services offered by SSL are authentication of the server and the client and message confidentiality and integrity. The biggest advantage of SSL is that it operates independently of application-layer protocols. HTTP can also operate on top of SSL, and it is then often denoted HTTPS. Transport Layer Security (TLS) is an Internet standard version of SSL and is now in the midst of the IETF standardization process. Secure HTTP is an application-layer protocol entirely compatible with HTTP that contains security extensions providing client authentication, message confidentiality and integrity, and nonrepudiation of origin.
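A minimal sketch of SSL/TLS at work beneath an unchanged application protocol, using Python's standard ssl and socket modules; the host name is a placeholder, and any HTTPS server would do.

```python
# SSL/TLS sits between the application layer and the transport layer:
# wrap an ordinary TCP socket, then speak plain HTTP over the secure channel.
import socket
import ssl

context = ssl.create_default_context()   # verifies the server's certificate

with socket.create_connection(("www.example.com", 443)) as raw_sock:
    # The application protocol (HTTP) above is unchanged, which is why
    # HTTP carried over SSL/TLS is denoted HTTPS.
    with context.wrap_socket(raw_sock,
                             server_hostname="www.example.com") as tls:
        print("negotiated protocol:", tls.version())
        tls.sendall(b"GET / HTTP/1.1\r\nHost: www.example.com\r\n\r\n")
        print(tls.recv(200).decode(errors="replace"))
```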

Firewall Because the Internet is so open, security is a critical factor in the establishment and acceptance of commercial applications on the Web. For example, customers using an Internet banking service want to be assured that their communications with the bank are confidential and not tampered with, and both they and the bank must be able to verify each other’s identity and to keep authentic records of their transactions. In particular, corporate networks connected to the Internet are liable to attack from crackers on external networks. The prime technique used commercially to protect the corporate network from external attacks is the firewall.

A firewall is a collection of filters and gateways that shields the internal trusted network, within a locally managed security perimeter, from external, untrustworthy networks (e.g., the Internet). A firewall is placed at the edge of an internal network and permits only a restricted set of packets or types of communication to pass through. Typically, there are two types of firewalls: packet filters and proxy gateways (also called application proxies).

• A packet filter functions by examining the header of each packet as it arrives for forwarding to another network. It then applies a series of rules to the header information to determine whether the packet should be blocked or forwarded in its intended direction, as sketched in the example after this list.

• A proxy gateway is a process that is placed between a client process and a server process. All incoming packets from the client are funneled to the appropriate proxy gateway for mail, FTP, HTTP, and so on. The proxy then passes the incoming packets to the internal network only if the client’s access rights are verified.
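A toy packet-filter sketch: an ordered rule list is applied to each packet header, and the first matching rule decides whether the packet is forwarded or blocked. The rules, addresses, and Packet class are invented for illustration; a real firewall operates on actual IP and TCP headers.

```python
# Toy packet filter: apply an ordered list of rules to each packet header
# and decide whether to forward or block it.
from dataclasses import dataclass
from ipaddress import ip_address, ip_network

@dataclass
class Packet:
    src: str      # source address
    dst: str      # destination address
    dport: int    # destination port
    proto: str    # "tcp" or "udp"

# Rules are checked in order; the first match decides. A final catch-all
# rule implements a default-deny policy.
RULES = [
    {"action": "block",   "src": "203.0.113.0/24"},           # known bad subnet
    {"action": "forward", "dport": 443, "proto": "tcp"},       # allow HTTPS in
    {"action": "forward", "dport": 25,  "proto": "tcp"},       # allow SMTP in
    {"action": "block"},                                        # default deny
]

def matches(rule, pkt):
    if "src" in rule and ip_address(pkt.src) not in ip_network(rule["src"]):
        return False
    if "dport" in rule and pkt.dport != rule["dport"]:
        return False
    if "proto" in rule and pkt.proto != rule["proto"]:
        return False
    return True

def filter_packet(pkt):
    for rule in RULES:
        if matches(rule, pkt):
            return rule["action"]

print(filter_packet(Packet("198.51.100.7", "10.0.0.5", 443, "tcp")))  # forward
print(filter_packet(Packet("203.0.113.9", "10.0.0.5", 443, "tcp")))   # block
print(filter_packet(Packet("198.51.100.7", "10.0.0.5", 23, "tcp")))   # block
```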
