THE ROLE OF COMPUTER NETWORKING
Computer networks (see Figure 1) are frequently considered analogous to the nervous system in the human body. Indeed, like the nerves, which connect the more or less autonomous components in the body, the links of a computer network provide connections between the otherwise stand-alone units in a computer system. And just like the nerves, the links of the network primarily serve as communication channels between the computers. This is also the basis of the global network of networks, the Internet.
The main task in both cases is information transfer. Connected and thus communicating elements of the system can solve their tasks more efficiently. Therefore, overall system performance is not just the sum of the performance of the components but much more, thanks to the information exchange between the components.
Computer networks consist of computers (the basic components of the system under consideration), links connecting the computers, and additional devices that make the information transfer as fast, intelligent, reliable, and cheap as possible.
• Speed depends on the capacity of the transmission lines and the processing speed of the additional devices, such as modems, multiplexing tools, switches, and routers. (A short description of these devices is given in Section 4, together with more details about transmission line capacity.)
• Intelligence of the network depends on the processing capabilities and stored knowledge of its active devices. (It is worth mentioning that network intelligence is distinct from the intelligence of the interconnected computers.)
• Reliability stems primarily from the decreased risk of losing, misinterpreting, or omitting necessary information, as well as from the well-preprocessed character of the information available within the system.
• Cost is associated with a number of factors. The most important direct costs are purchase, installation, and operational costs. Indirect costs cover a wide range, from those stemming from faulty system operation to those associated with lost business opportunities. In planning the system, the cost/performance ratio must always be taken into consideration.
It is important not to forget that adequate system operation requires adequate operation of all components.
Obviously, adequate operation of the components in a system requires intensive interaction among them. Direct information transfer between these components makes the interaction much faster and more reliable than traditional means of interaction, greatly improving the adequacy of the behavior of each component and thus of the system itself.
Another important capability of a system consisting of connected and communicating components is that the intelligence of the individual components is also connected: each component can rely not only on its own intelligence but also on the knowledge and problem-solving capability of the other components in the system. This means that the connections between the computers in the network result in a qualitatively new level of complexity in the system and its components. The network connections allow each system component to:
• Provide information to its partners
• Access the information available from the other system components
• Communicate while operating (thus, to continuously test, correct, and adapt its own operation)
• Utilize highly sophisticated distant collaboration among the system components
• Exploit the ability to handle virtual environments*
These features are described briefly below. As will be shown, they pave the way for:
• The systematic construction of network infrastructures
• The continuous evolution of network services
• The rapid proliferation of network-based applications
• The exponentially increasing amount of information content accumulated in the network of interconnected computers
Information Access
Information access is one of the key benefits that networking provides for the users of a computer connected to other computers. The adequate operation of a computer in a system of interconnected computers naturally depends on the availability of the information determining what, when, and how it should perform. Traditionally, the necessary information was collected through "manual" methods: conversation and traditional information carriers, first paper and then, in the computer era, magnetic storage devices.
These traditional methods were very inefficient, slow, and unreliable. Computer networking has now totally changed the picture. Using the data communication links connecting the computers, together with the additional devices that make information transfer possible, all the information preprocessed at partner sites can be downloaded easily, efficiently, immediately, and reliably from the partner computers to the local machine.
The model above works well as long as individual links serve only neighboring computers accessing information at each other's sites. But this was the case only at the very beginning of the evolution of networking. Nowadays, millions of computers can potentially access information stored in any of the others. Establishing individual links between every pair of these computers would be technically and economically impossible.
The solution to the problem is a hierarchical model (see Figure 2). Very high-speed backbones (core network segments) interconnect major nodes located in different geographical regions, and somewhat lower-speed access lines carry the information between these nodes and the servers within each region, which in turn forward and collect the information traffic in their local area. Local area networks (LANs) arise from this way of solving the information traffic problem. The hierarchy can be further subdivided, and thus metropolitan area networks (MANs) and wide area networks (WANs) enter the picture. Although this model basically supposes tree-like network configurations (topologies), for the sake of reliability and routing efficiency, cross-connections (redundant routes) are also established within this infrastructure.

* Handling virtual environments means that the individual components do not communicate continuously with the outside world but systematically use a mirrored synthetic picture of their surroundings. This is done by utilizing the information accumulated by their neighbors and thus constructing and using a virtual reality model of the outside world. Of course, direct access to the outside world may provide additional information with which such a virtual reality model can be completed, so that the result is a so-called augmented reality model.
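Returning to the hierarchical model: a minimal Python sketch can make it concrete. The topology, node names, and routing rule below are invented purely for illustration (real networks also use the redundant cross-connections just mentioned); the idea is that traffic climbs toward the backbone until the source and destination branches meet, then descends.

```python
# Minimal sketch of the hierarchical model: LAN servers hang off regional
# nodes, which hang off a backbone. A message climbs toward the backbone
# until the branches meet, then descends to its destination.
# The topology and node names are invented for illustration.

parent = {
    "lan-a1": "region-a", "lan-a2": "region-a",
    "lan-b1": "region-b",
    "region-a": "backbone", "region-b": "backbone",
    "backbone": None,
}

def path_to_root(node):
    """All nodes from `node` up to the backbone root."""
    path = []
    while node is not None:
        path.append(node)
        node = parent[node]
    return path

def route(src, dst):
    """Go up from src until we reach dst's branch, then down to dst."""
    up, down = path_to_root(src), path_to_root(dst)
    meet = next(n for n in up if n in down)        # lowest common node
    return up[:up.index(meet) + 1] + down[:down.index(meet)][::-1]

print(route("lan-a1", "lan-b1"))
# ['lan-a1', 'region-a', 'backbone', 'region-b', 'lan-b1']
```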
If a network user wishes to access information at a networked computer somewhere else, his or her request is sent through a chain of special networking devices (routers and switches; see Section 4) and links. Normally this message is sliced into packets, and the individually transferred packets may join thousands of packets from other messages along the way until the relevant packets, well separated, arrive at their destination, are recombined, and initiate downloading of the requested information from that distant site to the local machine. The requested information then arrives at the initiator by a similar process. If there is no extraordinary delay anywhere along the routes, the full process may take just a few seconds or less. This is the most important strength of accessing information through the network.
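As an illustration of the idea (not of any particular protocol), the following sketch slices a message into fixed-size, numbered packets and reassembles them after they arrive out of order; the packet size and header fields are arbitrary choices for the example.

```python
# Illustrative sketch only: slice a message into numbered packets and
# reassemble them, mimicking the idea of packet-based transfer.
# Packet size and header fields are arbitrary, not a real protocol.

PACKET_SIZE = 8  # payload bytes per packet (tiny, for demonstration)

def packetize(message: bytes):
    """Slice a message into (sequence_number, payload) packets."""
    return [(seq, message[i:i + PACKET_SIZE])
            for seq, i in enumerate(range(0, len(message), PACKET_SIZE))]

def reassemble(packets):
    """Rebuild the message from packets that may arrive in any order."""
    return b"".join(payload for _, payload in sorted(packets))

if __name__ == "__main__":
    import random
    msg = b"Request: please send document 42 from the remote site."
    packets = packetize(msg)
    random.shuffle(packets)          # packets may take different routes
    assert reassemble(packets) == msg
    print(f"{len(packets)} packets delivered and recombined correctly")
```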
Note that the described method of packet-based transmission is a special application of time-division multiplexing (TDM). The TDM technique utilizes a single physical transmission medium (cable, radiated wave, etc.) for transmitting multiple data streams coming from different sources. In this application of TDM, all the data streams are sliced into packets of specified duration, and these packets, together with additional information about their destination, are inserted into time frames of the newly assembled, higher-bit-rate data stream. This new data stream is transmitted through a well-defined segment of the network. At the borders of such network segments, the packets may be extracted from the assembled data stream, and a new combination of packets can be assembled in a similar way. By this technique, packets from a source node may even arrive at their destination node through different chains of network segments. More details about the technique are given in Sections 6.2 and 6.3.
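In the same hedged spirit, the sketch below illustrates the time-division idea: packets from several source streams are interleaved, slot by slot, into one higher-rate stream and separated again at the segment border. The slot and frame layout is invented for the example.

```python
# Illustrative TDM sketch: interleave packets from several sources into
# one higher-rate stream of (source_id, packet) slots, then demultiplex.
# The slot/frame layout is invented for this example.

from itertools import zip_longest

def multiplex(streams):
    """Interleave packets from multiple streams into one slotted stream.
    Each slot carries the source id, standing in for addressing info."""
    slots = []
    for frame in zip_longest(*streams.values()):
        for source_id, packet in zip(streams.keys(), frame):
            if packet is not None:       # a source may run out of packets
                slots.append((source_id, packet))
    return slots

def demultiplex(slots):
    """Separate the slotted stream back into per-source packet lists."""
    streams = {}
    for source_id, packet in slots:
        streams.setdefault(source_id, []).append(packet)
    return streams

streams = {"A": ["a1", "a2", "a3"], "B": ["b1", "b2"], "C": ["c1"]}
line = multiplex(streams)            # single higher-bit-rate stream
assert demultiplex(line) == streams  # recovered intact at the border
```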
The next subsection describes the basic prerequisites at the site providing the information. First, however, a basic property of the data communication links is introduced here.
The capability of the connecting lines in the network to transmit and process high-volume traffic depends on their speed/bandwidth/capacity. Note that these three characteristic properties are equivalent in the sense that high bandwidth means high speed and high speed means high capacity. Although each of the three measures has its own well-defined unit, the only unit applied in practice is the one belonging to the speed of information transfer, bits per second (bps). Because of the practical speed values, multiples of this elementary unit are used, namely Kbps, Mbps, Gbps, and, more recently, Tbps (kilobits, megabits, gigabits, and terabits per second, respectively, meaning thousands, millions, billions, and trillions of bits per second).
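As a quick worked example of what these units mean in practice (the file size and line speeds are arbitrary illustrative values), the sketch below computes ideal download times at several rates:

```python
# Worked example: how long does a 100 MB file take at various line speeds?
# File size and speeds are arbitrary illustrative values.

UNITS = {"Kbps": 1e3, "Mbps": 1e6, "Gbps": 1e9, "Tbps": 1e12}

def transfer_seconds(size_bytes: float, speed: float, unit: str) -> float:
    """Ideal transfer time: size in bits divided by line speed in bps
    (ignoring protocol overhead, congestion, and latency)."""
    return size_bytes * 8 / (speed * UNITS[unit])

size = 100 * 1e6  # a 100-megabyte file
for speed, unit in [(56, "Kbps"), (10, "Mbps"), (1, "Gbps")]:
    print(f"{speed:>4} {unit}: {transfer_seconds(size, speed, unit):10.2f} s")
# 56 Kbps: ~14286 s (about 4 hours); 10 Mbps: 80 s; 1 Gbps: 0.8 s
```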
Information Provision
Access to information is not possible if the information is not available. To make information accessible to other network users, the information must be provided. Provision of information by users (individual, corporate, or public) is one of the key contributors to the exponential development of networking and worldwide network usage. How the information is provided, whether by passive methods (accessible databases or, the most common way today, the World Wide Web on the Internet) or by active distribution (direct e-mail distribution or newsgroups for regular or irregular delivery), is a secondary question, at least if ease of access is not the crucial aspect.
If the information provided is freely accessible by any user on the network, the information is public. If accessibility is restricted, the information is private. Private information is accessible either to a closed group of users or to virtually any user who fulfills the requirements posed by the owner of the information. Most frequently, the only requirement is payment for access. An obvious exception is government security information.
All information provided for access has a certain value, which is, in principle, determined by the users' interest in accessing it. The higher the value, the more important the protection of the related information. Providers of valuable information should take care to protect that information, partly to preserve ownership but also to keep the information undistorted. Faulty, outdated, misleading, or irrelevant information not only lacks value but may also cause problems for those accessing it in the belief that they are getting reliable content.
Thus, information provision carries responsibility. If the provider makes mistakes, whether consciously or unconsciously, trust will be lost. Providing valuable (relevant, reliable, and tested) information and continuously updating it are critical if the provider wants to maintain the trustworthiness of the information.
The global network (the millions of sites connected by the network) offers an enormous amount of information. The problem nowadays is not simply to find information about any topic but rather to find the best (most relevant and most reliable) sources of the required information. Users interested in accessing proper information should either turn to reliable and trusted sources (providers) or take the information they collect through the Internet and test it themselves.
That is why today the provision of valuable information is becoming separated from its ownership. Information brokerage plays an increasingly important role in determining where to access adequate information. Users looking for information on any topic should access either sources they know and trust or the sites of brokers who take over the task of testing the relevance and reliability of the related kinds of information. Information brokerage is therefore an important new kind of network-based service.
However, thoroughly testing the validity, correctness, and relevance of information is difficult or even impossible, especially because the amount of stored and accessible information is growing extremely fast. The final answer to the question of how to control the content (i.e., the information available worldwide) is still not known. One of the possibilities (classifying, rating, and filtering) is dealt with in more detail in a later section.
Besides the difficulty of searching for adequate information, there is a further important aspect associated with the increasing interest worldwide in accessing information. It relates to the volume of information traffic generated by the enormous number of attempts to download large amounts of information.
Traffic volume can be decreased drastically by using the caching technique (see Figure 3). For frequently accessed information sources, an enormous amount of traffic can be eliminated by storing the frequently requested information in dedicated servers that are much closer to the user than the original information provider. The user requesting such information does not even know whether it arrives at the requesting site from a cache server or from the primary source of the information. Hierarchical cache server systems help to avoid traffic congestion on network links carrying intensive traffic.
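The essence of the technique can be sketched in a few lines of Python. This is a toy in-memory cache: the origin fetch is simulated, the URL is a placeholder, and real cache servers also handle expiry, consistency, and storage limits.

```python
# Toy sketch of the caching idea: answer repeated requests locally instead
# of fetching from the distant origin every time. The origin is simulated;
# real cache servers also handle expiry, consistency, and storage limits.

origin_fetches = 0

def fetch_from_origin(url: str) -> str:
    """Simulated slow, expensive retrieval from the primary source."""
    global origin_fetches
    origin_fetches += 1
    return f"<content of {url}>"

cache: dict[str, str] = {}

def get(url: str) -> str:
    """Serve from the nearby cache if possible; otherwise fetch and store."""
    if url not in cache:
        cache[url] = fetch_from_origin(url)   # cache miss: go to the origin
    return cache[url]                         # cache hit: no backbone traffic

for _ in range(1000):
    get("http://example.org/popular-page")
print(f"1000 requests, {origin_fetches} origin fetch(es)")  # -> 1
```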
Electronic Communication
Providing and accessing information, although extremely important, is just one element of information traffic through the network. Electronic communication in general, as far as the basic function of network operation is concerned, involves much more.
The basic function of computer networks is to make it possible to overcome distance, timing, and capability problems.
• Distance is handled by the extremely fast delivery of messages or other kinds of transmitted information to any connected site all over the world. This means that users of the network (again individual, corporate, or public) can do their jobs or perform their activities so that distances virtually disappear. Users or even groups of users can talk to each other just as if they were in direct, traditional contact. This makes remote collaboration a reality, for example by allowing the formation of virtual teams.
• Timing relates to the proper usage of the transmitted information. In the most common case, information arrives at its destination in accordance with the best-effort method of transmission provided by basic Internet operation. This means that information is carried to the target site as soon as possible but without any guarantee of immediate delivery. In most cases this is not a problem because delays are usually only on the order of seconds or minutes. Longer delays usually occur for reasons other than simple transmission characteristics. On the other hand, e-mail messages are most often read by their addressees from the mail servers much later than their arrival. Although this should obviously not be considered the basis of network design, the fact that the recipient does not need to be at his or her machine when the message arrives is important in overcoming timing problems. However, some information may be needed urgently, or real-time delivery may even be necessary, as when there are concurrent and interacting processes at distant sites. In these cases, elevated-quality services should be provided. The highest quality of service (QoS) here means a guarantee of undisturbed real-time communication.
• Capability requirements arise from limited processing speed, memory, or software availability at sites having to solve complex problems. Networked communication helps to overcome these kinds of problems by allowing distributed processing as well as quasi-simultaneous access to distributed databases or knowledge bases (a minimal sketch of the idea follows this list). Distributed processing is a typical application where intensive (high-volume), real-time (high-speed) traffic is assumed, requiring high-speed connections through all the routes involved in the communication.
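As a minimal illustration of the distributed-processing idea, the sketch below splits a computation across several workers and combines the partial results. Local processes stand in for the distant sites; a real system would ship the chunks over network links instead.

```python
# Minimal sketch of distributed processing: split a large task into
# chunks, let several workers handle them concurrently, and combine the
# partial results. Local processes stand in for distant networked sites.

from concurrent.futures import ProcessPoolExecutor

def partial_sum(chunk: range) -> int:
    """Work done at one 'site': sum its assigned part of the problem."""
    return sum(chunk)

def distributed_sum(n: int, workers: int = 4) -> int:
    """Split summing 1..n across several workers and merge the results."""
    step = n // workers + 1
    chunks = [range(start, min(start + step, n + 1))
              for start in range(1, n + 1, step)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    n = 10_000_000
    assert distributed_sum(n) == n * (n + 1) // 2
    print("partial results combined correctly")
```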
Distributed processing already leads us to one of the more recent and most promising applications of the network. Although in some special areas, including highly intensive scientific computations, distributed processing has been used for some time, more revolutionary applications are taking place in networked remote collaboration. This new role of computer networking is briefly reviewed in the following section.
Networked Collaboration
Computer networks open new ways of increasing the efficiency, convenience, and cost-performance of collaboration. The basis of this overall improvement is provided by the properties of computer networks mentioned above: their ability to help overcome distance, timing, and capability problems. The workplace can thus be distributed or even mobile, the number of collaborators can be theoretically unlimited, the cooperating individuals or groups can take part in joint activities even in shifted time periods (thus even geographical time-zone problems can be virtually eliminated), and the collaborating partners can contribute their information without having to move high-volume materials (books and other documents) to a common workplace. These are all important issues in utilizing the opportunities that networks provide for collaboration among geographically dispersed communities.
However, the key attribute of networked collaboration is not simply that it avoids having to gather all the collaborating partners at a fixed venue and time for a fixed period. Much more important is the fact that the network interconnects computers rather than merely human users. Networked computers are not just nodes in a network but intelligent devices. They not only help exchange information between the collaborating individuals or groups but also provide some important additional features:
• The information-processing capability of the computers at the collaborating parties' network nodes can itself be used to solve complex problems.
• The intelligence of each computer connected and participating in the joint activities is shareable among the collaborating partners.
• The intelligence in these distant computers can be joined to solve otherwise unsolvable problems by utilizing enormously elevated computing power.
However, appropriate conditions for efficient and successful collaboration assume the transmission of complex information. This involves not only the data belonging to the joint task but also the multimedia information stemming from the networked character of the collaboration. This may be a simple exchange of messages, such as e-mail conversation among the participants, but may also be a true videoconference connection between the cooperating sites. Thus, in many cases networked collaboration requires extremely high bandwidths. Real-time multimedia transmission, as the basis of these applications, supposes multi-Mbps connectivity throughout. Depending on the quality demands, the acceptable transmission speed varies between one or two Mbps and hundreds of megabits per second; the latter speeds have become a reality with the advent of gigabit networking.
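A back-of-the-envelope calculation shows why such bandwidths are needed and why compression matters. The frame size, color depth, frame rate, and compression ratio below are illustrative values, not requirements of any particular standard.

```python
# Back-of-the-envelope bandwidth estimate for real-time video. The frame
# size, color depth, frame rate, and compression ratio are illustrative
# values only, not figures from any particular standard.

width, height = 640, 480      # pixels per frame
bits_per_pixel = 24           # true-color depth
frames_per_second = 25

raw_bps = width * height * bits_per_pixel * frames_per_second
print(f"uncompressed: {raw_bps / 1e6:.0f} Mbps")   # ~184 Mbps

compression_ratio = 100       # order of magnitude for typical video codecs
print(f"compressed:   {raw_bps / compression_ratio / 1e6:.1f} Mbps")  # ~1.8 Mbps
```

The uncompressed figure lands in the "hundreds of megabits per second" range mentioned above, while typical compression brings the stream down to the one-or-two-Mbps end of the scale.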
If the above conditions are all met, networked collaboration provides multiple benefits:
• Traditional integration of human contributions
• Improvement in work conditions
• Integration of the processing power of a great many machines
• High-quality collaboration attributes through high-speed multimedia transmission
The highest level of networked collaboration is attained by integrated environments allowing truly integrated activities. This concept will be dealt with later in the chapter. The role of the World Wide Web in this integration process, the way intranets are built and used for (among other purposes) networked collaboration, and the role of the different security aspects will also be outlined later.
Virtual Environments
Network-based collaboration, by using highly complex tools (computers and peripherals) and high-speed networking, allows the participating members of a collaborating team to feel and behave just as if they were physically together, even though the collaboration takes place at several distant sites. The more the attributes of the environment at one site of the collaboration are communicated to the other sites, the better this feeling and behavior can be supported. The environment itself cannot be transported, but its attributes help to reconstruct a virtual copy of that environment at a different site. This reconstruction is called a virtual environment.
This process is extremely demanding as far as information processing at the related sites and information transfer between distant sites are concerned. Although the available processing power and transmission speed increase exponentially with time, the environment around us is so complex that full copying remains impossible. Thus, virtual environments are only approximated in practice.
Approximation is also dictated by some related practical reasons:
• Practical feasibility (even if a certain level of copying the target environment is theoretically possible, it may require prohibitively complex practical solutions)
• Cost effectiveness (even where a practical solution is feasible, its cost may be so high that the application cannot justify that level of emulation)
• Intended approximation (although a certain level of true copying would be possible and cost-effective, the application requires a model rather than a true copy, e.g., for experimental applications or trials involving modifications to the environment)
• Inaccessible environments (the environment to be copied is inaccessible, either because it lies outside the connected sites or because of a transient lack of some prerequisites)
In such cases, virtual reality models are substituted for virtual environments. These models provide virtual reality by mimicking the supposed environment rather than copying it. In some applications, however, a mix of virtual copies and virtual models is needed. In augmented reality, transmitted information about the attributes of the real environment under consideration is used together with the parameters of a model environment to build up a virtual construct of the supposed environment.
Virtual constructs are essential tools in a number of applications utilizing the concept of telepresence and/or teleimmersion by users of networked computers. Teleinstruction, telemedicine, and different types of teleworking are examples of how the development of networked communication and collaboration leads to an increasing number of applications using virtual and augmented reality.