DECISION SUPPORT SYSTEMS:MODEL BASE MANAGEMENT SYSTEMS

MODEL BASE MANAGEMENT SYSTEMS

There are four primary objectives of a DBMS:

1. To manage a large quantity of data in physical storage

2. To provide logical data structures that interact with humans and are independent of the structure used for physical data storage

3. To reduce data redundancy and maintenance needs and increase flexibility of use of the data, by provision of independence between the data and the applications programs that use it

4. To provide effective and efficient access to data by users who are not necessarily very sophisticated in the microlevel details of computer science.

Many support facilities will typically be provided with a DBMS to enable achievement of these purposes. These include data dictionaries to aid in internal housekeeping and information query, retrieval, and report generation facilities to support external use needs.

The functions of a model base management system (MBMS), a structure for which is illustrated in Figure 10, are quite analogous to those of a DBMS. The primary functions of a DBMS are separation of system users, in terms of independence of the application, from the physical aspects of database structure and processing. In a similar way, a MBMS is intended to provide independence between the specific models that are used in a DSS and the applications that use them. The purpose of a MBMS is to transform data from the DBMS into information that is useful for decision making. An auxiliary purpose might also include representation of information as data such that it can later be recalled and used.

The term model management system was apparently first used over 15 years ago (Will 1975). Soon thereafter, the MBMS usage was adopted in Sprague and Carlson (1982) and the purposes of an MBMS were defined to include creation, storage, access, and manipulation of models. Objectives for a MBMS include:

1. To provide for effective and efficient creation of new models for use in specific applications

2. To support maintenance of a wide range of models that support the formulation, analysis, and interpretation stages of issue resolution

3. To provide for model access and integration, within models themselves as well as with the DBMS

4. To centralize model base management in a manner analogous to and compatible with database management

5. To ensure integrity, currency, consistency, and security of models.

Just as we have physical data and logical data processing in a DBMS, so also do we have two types of processing efforts in a MBMS: model processing and decision processing (Applegate et al. 1986). A DSS user would interact directly with a decision processing MBMS, whereas the model processing MBMS would be more concerned with provision of consistency, security, currency, and other technical modeling issues. Each of these supports the notion of appropriate formal use of models that support relevant aspects of human judgment and choice.

Several ingredients are necessary for understanding MBMS concepts and methods. The first of these is a study of formal analytical methods of operations research and systems engineering that support the construction of models that are useful in issue formulation, analysis, and interpretation. Because presenting even a small fraction of the analytical methods and associated models in current use would be a mammoth undertaking, we will discuss models in a somewhat general context. Many discussions of decision relevant models can be found elsewhere in this Handbook, most notably in Chapters 83 through 102. There are also many texts in this area, as well as two recent handbooks (Sage and Rouse 1999a; Gass and Harris 2000).

Models and Modeling

In this section, we present a brief description of a number of models and methods that can be used as part of a systems engineering-based approach to problem solution or issue resolution. Systems engineering (Sage 1992, 1995; Sage and Rouse 1999a; Sage and Armstrong 2000) involves the application of a general set of guidelines and methods useful to assist clients in the resolution of issues and problems, often through the definition, development, and deployment of trustworthy systems. These may be product systems or service systems, and users may often deploy systems to support process-related efforts. Three fundamental steps may be distinguished in a formal systems-based approach that is associated with each of the three basic phases of systems engineering or problem solving:

1. Problem or issue formulation

2. Problem or issue analysis

3. Interpretation of analysis results, including evaluation and selection of alternatives, and implementation of the chosen alternatives

These steps are conducted at a number of phases throughout a systems life cycle. As we indicated earlier, this life cycle begins with definition of requirements for a system through a phase where the system is developed to a final phase where deployment, or installation, and ultimate system maintenance and retrofit occur. Practically useful life cycle processes generally involve many more than three phases, although this larger number can generally be aggregated into the three basic phases of definition, development, and deployment. The actual engineering of a DSS follows these three phases of definition of user needs, development of the DSS, and deployment in an operational setting. Within each of these three phases, we exercise the basic steps of formulation, analysis, and interpretation. Figure 11 illustrates these three steps and phases, and much more detail is presented in Sage (1992, 1995), Sage and Rouse (1999a), Sage and Armstrong (2000), and in references contained therein.

Issue, or Problem, Formulation Models

The first part of a systems effort for problem or issue resolution is typically concerned with problem or issue formulation, including identification of problem elements and characteristics. The first step in issue formulation is generally that of definition of the problem or issue to be resolved. Problem definition is generally an outscoping activity because it enlarges the scope of what was originally thought to be the problem. Problem or issue definition will ordinarily be a group activity involving those familiar with or impacted by the issue or the problem. It seeks to determine the needs, constraints, alterables, and social or organizational sectors affecting a particular problem and relationships among these elements.

Of particular importance are the identification and structuring of objectives for the policy or alternative that will ultimately be chosen. This is often referred to as value system design, a term apparently first used by Hall (1969) in one of his seminal works in systems engineering. Generation of options (Keller and Ho 1990) or alternative courses of action is a very important and often neglected portion of a problem solving effort. This option generation, or system or alternative synthesis, step of issue formulation is concerned primarily with the answers to three questions:

1. What are the alternative approaches for attaining objectives?

2. How is each alternative approach described?

3. How do we measure attainment of each alternative approach?

The answers to these three questions lead to a series of alternative activities or policies and a set of activities measures.

Several of the methods that are particularly helpful in the identification of issue formulation elements are based on principles of collective inquiry (McGrath 1984). The term collective inquiry refers to the fact that a group of interested and motivated people is brought together in the hope that they will stimulate each other's creativity in generating elements. We may distinguish two groups of collective inquiry methods here, depending upon whether or not the group is physically gathered at the same location.

Brainstorming, Synectics, and Nominal Group Techniques These approaches typically require a few hours of time, a group of knowledgeable people gathered in one place, and a group leader or facilitator. The nominal group technique is typically better than brainstorming in reducing the influence of dominant individuals. Both methods can be very productive: 50–150 ideas or elements may be generated in less than one hour. Synectics, based on problem analogies, might be very appropriate if there is a need for truly unconventional, innovative ideas. Considerable experience with the method is a requirement, however, particularly for the group leader. The nominal group technique is based on a sequence of idea generation, discussion, and prioritization. It can be very useful when an initial screening of a large number of ideas or elements is needed. Synectics and brainstorming are directly interactive group methods, whereas nominal group efforts are nominally interactive in that the members of the group do not directly communicate.

Questionnaires, Survey, and Delphi Techniques These three methods of collective inquiry do not require the group of participants to gather at one place and time, but they typically take more time to achieve results than the methods above. In most questionnaires and surveys a large number of participants is asked, on an individual basis, to generate ideas or opinions that are then processed to achieve an overall result. Generally no interaction is allowed among participants in this effort. Delphi usually provides for written anonymous interaction among participants in several rounds. Results of previous rounds are fed back to participants, and they are asked to comment, revise their views as desired, and so on. The results of using the Delphi technique can be very enlightening, but it usually takes several weeks or months to complete the effort.

Use of some of the many structuring methods, in addition to leading to greater clarity of the problem formulation elements, will typically lead also to identification of new elements and revision of element definitions. Most structuring methods contain an analytical component and may therefore be more properly labeled as analysis methods. The following element structuring aids are among the many modeling aids available that are particularly suitable for the issue formulation step.

There are many approaches to problem formulation (Volkema 1990). In general, these approaches assume that ‘‘asking’’ will be the predominant approach used to obtain issue formulation elements. Asking is often the simplest approach. Valuable information can often be obtained from observation of an existing and evolving system or from study of plans and other prescriptive documents. When these three approaches fail, it may be necessary to construct a ‘‘trial’’ system and determine issue formulation elements through experimentation and iteration with the trial system. These four methods (asking, study of an existing system, study of a normative system, experimentation with a prototype system) are each very useful for information and system requirements determination (Davis 1982).

Models for Issue Analysis

The analysis portion of a DSS effort typically consists of two steps. First, the options or alternatives defined in issue formulation are analyzed to assess their expected impacts on needs and objectives. This is often called impact assessment or impact analysis. Second, a refinement or optimization effort is often desirable. This is directed towards refinement or fine-tuning a potentially viable alternative through adjustment of the parameters within that alternative so as to obtain maximum performance in terms of needs satisfaction, subject to the given constraints.

Simulation and modeling methods are based on the conceptualization and use of an abstraction, or model, that hopefully behaves in a similar way as the real system. Impacts of alternative courses of action are studied through use of the model, something that often cannot easily be done through experimentation with the real system. Models are, of necessity, dependent on the value system and the purpose behind utilization of a model. We want to be able to determine the correctness of predictions based on usage of a model and thus be able to validate the model. There are three essential steps in constructing a model:

1. Determine those issue formulation elements that are most relevant to a particular problem.

2. Determine the structural relationships among these elements.

3. Determine parametric coefficients within the structure.

We should interpret the word ‘‘model’’ here as an abstract generalization of an object or system. Any set of rules and relationships that describes something is a model of that thing. The MBMS of a DSS will typically contain formal models that have been stored into the model base of the support system. Much has been published in the area of simulation realization of models, including a recent handbook (Banks 1998).
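The three model-construction steps above can be sketched in code. This is a minimal illustration using a hypothetical staffing-cost model; all names and numbers are invented for the example, not drawn from the text.

```python
# Step 1: select the issue-formulation elements judged most relevant.
elements = ["staff_hours", "hourly_rate", "overhead"]

# Step 2: postulate the structural relationship among the elements.
def total_cost(staff_hours, hourly_rate, overhead):
    """Structure: cost = hours * rate + fixed overhead (illustrative)."""
    return staff_hours * hourly_rate + overhead

# Step 3: fix parametric coefficients within that structure.
params = {"hourly_rate": 40.0, "overhead": 1000.0}

cost = total_cost(250, params["hourly_rate"], params["overhead"])
```

Such a model, once stored in the model base, is just a set of rules and relationships that describes the system of interest.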

Gaming methods are basically modeling methods in which the real system is simulated by people who take on the roles of real-world actors. The approach may be very appropriate for studying situations in which the reactions of people to each other's actions are of great importance, such as competition between individuals or groups for limited resources. It is also a very appropriate learning method. Conflict analysis (Fraser and Hipel 1984; Fang et al. 1993) is an interesting and appropriate game theory-based approach that may result in models that are particularly suitable for inclusion into the model base of a MBMS. There is a wealth of literature concerning formal approaches to mathematical games (Dutta 1999).

Trend extrapolation or time series forecasting models, or methods, are particularly useful when sufficient data about past and present developments are available, but there is little theory about underlying mechanisms causing change. The method is based on the identification of a mathematical description or structure that will be capable of reproducing the data. Then this description is used to extend the data series into the future, typically over the short to medium term. The primary concern is with input–output matching of observed input data and results of model use. Often little attention is devoted to assuring process realism, and this may create difficulties affecting model validity. While such models may be functionally valid, they may not be purposefully or structurally valid.
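A minimal sketch of trend extrapolation: fit a linear trend y = a + b·t to past observations by ordinary least squares, then extend it into the future. The data are illustrative (constructed to be exactly linear for clarity), not taken from the text.

```python
def fit_linear_trend(ts, ys):
    """Ordinary least-squares fit of y = a + b*t."""
    n = len(ts)
    t_bar = sum(ts) / n
    y_bar = sum(ys) / n
    b = (sum((t - t_bar) * (y - y_bar) for t, y in zip(ts, ys))
         / sum((t - t_bar) ** 2 for t in ts))
    a = y_bar - b * t_bar
    return a, b

# Past demand observations over five periods (illustrative).
ts = [1, 2, 3, 4, 5]
ys = [10.0, 12.0, 14.0, 16.0, 18.0]

a, b = fit_linear_trend(ts, ys)
forecast_t7 = a + b * 7   # short-term extrapolation to period 7
```

Note that the fit only matches observed input-output behavior; as the text cautions, nothing in it guarantees that the underlying mechanism of change is captured.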

Continuous-time dynamic-simulation models, or methods, are generally based on postulation and qualification of a causal structure underlying change over time. A computer is used to explore long-range behavior as it follows from the postulated causal structure. The method can be very useful as a learning and qualitative forecasting device. Often it is expensive and time consuming to create realistic dynamic simulation models. Continuous-time dynamic models are quite common in the physical sciences and in much of engineering (Sage and Armstrong 2000).
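A sketch of the idea: postulate a causal structure (here, first-order exponential adjustment of a level toward a goal) and explore its behavior over time by Euler integration. All parameter values are illustrative.

```python
def simulate(level0, goal, rate, dt, steps):
    """Integrate dx/dt = rate * (goal - x): the gap drives the change."""
    x = level0
    history = [x]
    for _ in range(steps):
        x += dt * rate * (goal - x)   # explicit Euler step
        history.append(x)
    return history

# Explore long-range behavior implied by the postulated structure.
history = simulate(level0=0.0, goal=100.0, rate=0.5, dt=0.1, steps=200)
```

The trajectory approaches the goal asymptotically, which is the qualitative insight such a simulation is meant to deliver; a realistic model would of course have many coupled state variables.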

Input-output analysis models are especially designed for study of equilibrium situations and requirements in economic systems in which many industries are interdependent. Many economic data formats are directly suited for the method. It is relatively simple conceptually, and can cope with many details. Input–output models are often very large.
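A two-sector Leontief-style sketch of the method, with illustrative coefficients: entry A[i][j] is the amount of sector i's output needed per unit of sector j's output, and the gross output x required to meet final demand d solves (I - A)x = d.

```python
A = [[0.2, 0.3],
     [0.1, 0.4]]       # interindustry technical coefficients (illustrative)
d = [100.0, 50.0]      # final demand per sector

# Solve the 2x2 system (I - A) x = d by Cramer's rule.
m11, m12 = 1 - A[0][0], -A[0][1]
m21, m22 = -A[1][0], 1 - A[1][1]
det = m11 * m22 - m12 * m21
x = [(d[0] * m22 - m12 * d[1]) / det,
     (m11 * d[1] - d[0] * m21) / det]
```

Real input-output tables have hundreds of sectors, so the linear system is solved with general-purpose numerical routines rather than by hand, but the equilibrium structure is the same.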

Econometrics or macroeconomic models are primarily applied to economic description and forecasting problems. They are based on both theory and data. Emphasis is placed on specification of structural relations, based upon economic theory, and the identification of unknown parameters, using available data, in the behavioral equations. The method requires expertise in economics, statistics, and computer use. It can be quite expensive and time consuming. Macroeconomic models have been widely used for short- to medium-term economic analysis and forecasting.

Queuing theory and discrete event simulation models are often used to study, analyze, and forecast the behavior of systems in which probabilistic phenomena, such as waiting lines, are of importance. Queuing theory is a mathematical approach, while discrete-event simulation generally refers to computer simulation of queuing theory type models. The two methods are widely used in the analysis and design of systems such as toll booths, communication networks, service facilities, shipping terminals, and scheduling.
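A discrete-event sketch of a single-server queue such as one toll booth: customers arrive with exponential interarrival times and receive exponential service, and we estimate the mean wait in line. The rates are illustrative; for these values queuing theory (M/M/1) predicts a mean wait of ρ/(μ−λ) = 1.0.

```python
import random

def mm1_mean_wait(arrival_rate, service_rate, n_customers, seed=1):
    """Estimate mean time-in-queue for an M/M/1 queue by simulation."""
    rng = random.Random(seed)
    t_arrival = 0.0        # clock time of the current arrival
    server_free_at = 0.0   # clock time when the server next becomes idle
    total_wait = 0.0
    for _ in range(n_customers):
        t_arrival += rng.expovariate(arrival_rate)
        start = max(t_arrival, server_free_at)   # wait if server is busy
        total_wait += start - t_arrival
        server_free_at = start + rng.expovariate(service_rate)
    return total_wait / n_customers

mean_wait = mm1_mean_wait(arrival_rate=0.5, service_rate=1.0,
                          n_customers=20000)
```

The simulated estimate fluctuates around the theoretical value, which illustrates the usual division of labor: the analytic queuing model validates the simulation on simple cases, and the simulation then handles configurations too complex for closed-form analysis.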

Regression analysis models and estimation theory models are very useful for the identification of mathematical relations and parameter values in these relations from sets of data or measurements. Regression and estimation methods are used frequently in conjunction with mathematical modeling, in particular with trend extrapolation and time series forecasting, and with econometrics. These methods are often also used to validate models. Often these approaches are called system identification approaches when the goal is to identify the parameters of a system, within an assumed structure, so as to minimize a function of the error between observed data and the model response.
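A minimal system-identification sketch: within an assumed structure y = k·u, choose the single parameter k to minimize the summed squared error between observed outputs and the model response. The closed-form least-squares answer is k = Σu·y / Σu². The input-output data are illustrative.

```python
u = [1.0, 2.0, 3.0, 4.0]   # observed inputs
y = [2.1, 3.9, 6.2, 7.8]   # observed (noisy) outputs

# Least-squares estimate of the gain within the assumed structure y = k*u.
k_hat = sum(ui * yi for ui, yi in zip(u, y)) / sum(ui * ui for ui in u)

# Residual error of the identified model against the observations.
sse = sum((yi - k_hat * ui) ** 2 for ui, yi in zip(u, y))
```

The small residual indicates the assumed structure fits these data well; a large residual would signal that the structure itself, not just the parameter, should be questioned.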

Mathematical programming models are used extensively in operations research, systems analysis, and management science practice for resource allocation under constraints, planning or scheduling, and similar applications. It is particularly useful when the best equilibrium or one-time setting has to be determined for a given policy or system. Many analysis issues can be cast as mathematical programming problems. A very significant number of mathematical programming models have been developed, including linear programming, nonlinear programming, integer programming, and dynamic programming. Many appropriate reference texts, including Hillier and Lieberman (1990, 1994), discuss this important class of modeling and analysis tools.
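A toy resource-allocation program, with illustrative coefficients: maximize return 3x + 5y over two activities subject to two resource constraints. A real MBMS would invoke a linear- or integer-programming solver; here the small integer feasible set is simply enumerated so the sketch stays self-contained.

```python
from itertools import product

# maximize 3x + 5y  subject to  x + 2y <= 10,  3x + y <= 15,  x, y >= 0
best_value, best_plan = None, None
for x, y in product(range(11), repeat=2):        # brute-force enumeration
    if x + 2 * y <= 10 and 3 * x + y <= 15:      # feasibility check
        value = 3 * x + 5 * y
        if best_value is None or value > best_value:
            best_value, best_plan = value, (x, y)
```

The optimum lands at the intersection of the two constraints, the typical situation in linear programming where optimal solutions occur at vertices of the feasible region.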

Issue Interpretation Models

The third step in a decision support systems use effort starts with evaluation and comparison of alternatives, using the information gained by analysis. Subsequently, one or more alternatives are selected and a plan for their implementation is designed. Thus, an MBMS must provide models for interpretation, including evaluation, of alternatives.

It is important to note that there is a clear and distinct difference between the refinement of individual alternatives, or optimization step of analysis, and the evaluation and interpretation of the sets of refined alternatives that result from the analysis step. In a few cases, refinement or optimization of individual alternative decision policies may not be needed in the analysis step. More than one alternative course of action or decision must be available; if there is but a single policy alternative, then there really is no decision to be taken at all. Evaluation of alternatives is always needed. It is especially important to avoid a large number of cognitive biases in evaluation and decision making. Clearly, the efforts involved in the interpretation step of evaluation and decision making interact most strongly with the efforts in the other steps of the systems process. A number of methods for evaluation and choice making are of importance. A few will be described briefly here.

Decision analysis (Raiffa 1968) is a very general approach to option evaluation and selection. It involves identification of action alternatives and possible consequences, identification of the probabilities of these consequences, identification of the valuation placed by the decision maker upon these consequences, computation of the expected value of the consequences, and aggregation or summarization of these values for all consequences of each action. In doing this we obtain an evaluation of each alternative act, and the one with the highest value is the most preferred action or option.
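The expected-value calculation at the heart of this approach is brief enough to sketch directly. The action names, probabilities, and payoffs below are invented for illustration.

```python
actions = {
    # action: list of (probability, value to the decision maker) pairs
    "expand_plant": [(0.6, 500.0), (0.4, -200.0)],
    "status_quo":   [(1.0, 100.0)],
}

# Aggregate each action's consequences into a single expected value.
expected = {a: sum(p * v for p, v in outcomes)
            for a, outcomes in actions.items()}

# The most preferred action is the one with the highest expected value.
best_action = max(expected, key=expected.get)
```

The hard work in practice lies not in this arithmetic but in eliciting defensible probabilities and valuations from the decision maker.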

Multiple-attribute utility theory (Keeney and Raiffa 1976) has been designed to facilitate comparison and ranking of alternatives with many attributes or characteristics. The relevant attributes are identified and structured and a weight or relative utility is assigned by the decision maker to each basic attribute. The attribute measurements for each alternative are used to compute an overall worth or utility for each alternative. Multiple attribute utility theory allows for various types of worth structures and for the explicit recognition and incorporation of the decision maker's attitude towards risk in the utility computation.
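A sketch of the simplest additive worth structure: each alternative's overall worth is the weighted sum of its single-attribute utilities scaled to [0, 1]. Weights and scores below are illustrative, and a full multiattribute treatment would also model the decision maker's risk attitude, which this linear form omits.

```python
weights = {"cost": 0.5, "quality": 0.3, "delivery": 0.2}

alternatives = {
    "vendor_a": {"cost": 0.8, "quality": 0.6, "delivery": 0.9},
    "vendor_b": {"cost": 0.6, "quality": 0.9, "delivery": 0.7},
}

def worth(scores):
    """Additive worth: weighted sum of single-attribute utilities."""
    return sum(weights[attr] * scores[attr] for attr in weights)

ranking = sorted(alternatives,
                 key=lambda a: worth(alternatives[a]), reverse=True)
```

Changing the weights reorders the ranking, which is why eliciting and validating the weights with the decision maker is the central practical task.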

Policy Capture (or Social Judgment) Theory (Hammond et al. 1980) has also been designed to assist decision makers in making values explicit and known. It is basically a descriptive approach toward identification of values and attribute weights. Knowing these, one can generally make decisions that are consistent with values. In policy capture, the decision maker is asked to rank order a set of alternatives. Then alternative attributes and their attribute measures or scores are determined by elicitation from the decision maker for each alternative. A mathematical procedure involving regression analysis is used to determine the relative importance, or weight, of each attribute that will lead to a ranking as specified by the decision maker. The result is fed back to the decision maker, who typically will express the view that some of his or her values, in terms of the weights associated with the attributes, are different. In an iterative learning process, preference weights and / or overall rankings are modified until the decision maker is satisfied with both the weights and the overall alternative ranking.
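The regression step can be sketched for two attributes: recover the implicit weight of each attribute from a decision maker's holistic ratings by least squares (no intercept, solved via the normal equations). The ratings below are illustrative and constructed to be exactly consistent with weights of 2 and 1, so the procedure recovers them exactly.

```python
alts = [  # (attribute-1 score, attribute-2 score, holistic rating)
    (1.0, 0.0, 2.0),
    (0.0, 1.0, 1.0),
    (1.0, 1.0, 3.0),
    (2.0, 1.0, 5.0),
]

# Normal equations for least squares: (X'X) w = X'r, with X the 4x2
# attribute matrix and r the vector of holistic ratings.
s11 = sum(a1 * a1 for a1, a2, r in alts)
s12 = sum(a1 * a2 for a1, a2, r in alts)
s22 = sum(a2 * a2 for a1, a2, r in alts)
b1 = sum(a1 * r for a1, a2, r in alts)
b2 = sum(a2 * r for a1, a2, r in alts)

det = s11 * s22 - s12 * s12
w1 = (b1 * s22 - s12 * b2) / det   # captured weight of attribute 1
w2 = (s11 * b2 - s12 * b1) / det   # captured weight of attribute 2
```

In the iterative process the text describes, these captured weights would be fed back to the decision maker for reaction and revision.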

Many efforts have been made to translate the theoretical findings in decision analysis to practice. Klein (1998), Matheson and Matheson (1998), and Hammond et al. (1999) each provide different perspectives relative to these efforts.

Model Base Management

As we have noted, an effective model base management system (MBMS) will make the structural and algorithmic aspects of model organization and associated data processing transparent to users of the MBMS. Such tasks as specifying explicit relationships between models, indicating model formats and which model outputs serve as inputs to other models, are not placed on the user of a MBMS but are handled directly by the system. Figure 11 presents a generic illustration of a MBMS. It shows a collection of models or model base, a model base manager, a model dictionary, and connections to the DBMS and the DGMS.

A number of capabilities should be provided by an integrated and shared MBMS of a DSS (Barbosa and Herko 1980; Liang 1985): model construction, model maintenance, model storage, model manipulation, and model access (Applegate et al. 1986). These involve control, flexibility, feedback, interface, redundancy reduction, and increased consistency:

1. Control: The DSS user should be provided with a spectrum of control. The system should support both fully automated and manual selection of models that seem most useful to the user for an intended application. This will enable the user to proceed at the problem-solving pace that is most comfortable given the user's experiential familiarity with the task at hand. It should be possible for the user to introduce subjective information and not have to provide full information. Also, the control mechanism should be such that the DSS user can obtain a recommendation for action with this partial information at essentially any point in the problem-solving process.

2. Flexibility: The DSS user should be able to develop part of the solution to the task at hand using one approach and then be able to switch to another modeling approach if this appears preferable. Any change or modification in the model base will be made available to all DSS users.

3. Feedback: The MBMS of the DSS should provide sufficient feedback to enable the user to be aware of the state of the problem-solving process at any point in time.

4. Interface: The DSS user should feel comfortable with the specific model from the MBMS that is in use at any given time. The user should not have to supply inputs laboriously when he or she does not wish to do this.

5. Redundancy reduction: This should occur through use of shared models and associated elimination of redundant storage that would otherwise be needed.

6. Increased consistency: This should result from the ability of multiple decision makers to use the same model and the associated reduction of inconsistency that would have resulted from use of different data or different versions of a model.

In order to provide these capabilities, it appears that a MBMS design must allow the DSS user to:

1. Access and retrieve existing models

2. Exercise and manipulate existing models, including model instantiation, model selection, and model synthesis, and the provision of appropriate model outputs

3. Store existing models, including model representation, model abstraction, and physical and logical model storage

4. Maintain existing models as appropriate for changing conditions

5. Construct new models with reasonable effort when they are needed, usually by building new models by using existing models as building blocks

A number of auxiliary requirements must be achieved in order to provide these five capabilities. For example, there must be appropriate communication and data changes among models that have been combined. It must also be possible to locate appropriate data from the DBMS and transmit it to the models that will use it.

It must also be possible to analyze and interpret the results obtained from using a model. This can be accomplished in a number of ways. In this section, we will examine two of them: relational MBMS and expert system control of an MBMS. The objective is to provide an appropriate set of models for the model base and appropriate software to manage them; integration of the MBMS with the DBMS; and integration of the MBMS with the DGMS. We can expand further on each of these needs. Many of the technical capabilities needed for a MBMS will be analogous to those needed for a DBMS. These include model generators that will allow rapid building of specific models, model modification tools that will enable a model to be restructured easily on the basis of changes in the task to be accomplished, update capability that will enable changes in data to be input to the model, and report generators that will enable rapid preparation of results from using the system in a form appropriate for human use.
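The bookkeeping side of these capabilities can be sketched as a minimal model dictionary that lets users store, describe, and exercise models by name while keeping each model's internal details hidden. All class, method, and model names here are hypothetical, invented for the illustration.

```python
class ModelBase:
    """A toy model base with a model dictionary (name -> model, doc)."""

    def __init__(self):
        self._models = {}

    def store(self, name, func, description):
        """Store a model along with its dictionary entry."""
        self._models[name] = (func, description)

    def describe(self, name):
        """Look up a model's dictionary description."""
        return self._models[name][1]

    def run(self, name, **inputs):
        """Exercise a stored model without exposing its internals."""
        func, _ = self._models[name]
        return func(**inputs)

mb = ModelBase()
mb.store("break_even",
         lambda fixed, price, unit_cost: fixed / (price - unit_cost),
         "Units needed to cover fixed cost.")
units = mb.run("break_even", fixed=1000.0, price=12.0, unit_cost=7.0)
```

A production MBMS adds the harder parts the text lists, such as versioning, access control, and model-to-model data transfer, but the user-facing pattern of retrieval by name remains.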

Like a relational view of data, a relational view of models is based on a mathematical theory of relations. Thus, a model is viewed as a virtual file or virtual relation: a subset of the Cartesian product of the domain sets that correspond to the model's input and output attributes. This virtual file is created, ideally, through exercising the model with a wide spectrum of inputs. These values of inputs and the associated outputs become records in the virtual file. The input data become key attributes and the model output data become content attributes.

Model base structuring and organization is very important for appropriate relational model management. Records in the virtual file of a model base are not individually updated, however, as they are in a relational database. When a model change is made, all of the records that comprise the virtual file are changed. Nevertheless, processing anomalies are possible in relational model management. Transitive dependencies in a relation, in the form of functional dependencies that affect only the output attributes, do occur and are eliminated by being projected into an appropriate normal form.

Another issue of considerable importance relates to the contemporary need for usable model base query languages and to needs within such languages for relational completeness. The implementation of joins is of concern in relational model base management just as it is in relational database management. A relational model join is simply the result of using the output of one model as the input to another model. Thus, joins will normally be implemented as part of the normal operation of software, and a MBMS user will often not be aware that they are occurring. However, there can be cycles, since the output from a first model may be the input to a second model, and this may become the input to the first model. Cycles such as this do not occur in relational DBMS.
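In code, a model join is simply function composition: the output of one model becomes the input of the next. Both models below are hypothetical, with invented coefficients.

```python
def demand_model(price):
    """Model 1: demand falls linearly with price (illustrative)."""
    return max(0.0, 1000.0 - 50.0 * price)

def revenue_model(price, demand):
    """Model 2: revenue computed from price and demand."""
    return price * demand

def joined(price):
    # The 'join': demand_model's output feeds revenue_model's input.
    return revenue_model(price, demand_model(price))

revenue = joined(price=8.0)
```

As the text notes, the MBMS would perform such joins automatically; a cycle would arise if, say, realized revenue in turn fed back into the demand model, a situation relational DBMS joins never face.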

Expert system applications in MBMS represent another attractive possibility. Four different potential opportunities exist. It might be possible to use expert system technology to considerable advantage in the construction of models (Hwang 1985; Murphy and Stohr 1986), including decisions with respect to whether or not to construct models in terms of the cost and benefits associated with this decision. AI and expert system technology may potentially be used to integrate models. This model integration is needed to join models. AI and expert system technology might be potentially useful in the validation of models. Also, this technology might find potential use in the interpretation of the output of models. This would especially seem to be needed for large-scale models, such as large linear programming models. While MBMS approaches based on a relational theory of models and expert systems technology are new as of this writing, they offer much potential for implementing model management notions in an effective manner. As has been noted, they offer the prospect of data as models (Dolk 1986) that may well prove much more useful than the conventional information systems perspective of ‘‘models as data.’’
