In the first stage, the problem is abstracted on predetermined dimensions, thereby hiding unnecessary detail. For example, application systems for individual users can be specified in terms of generic software modules; similar abstractions are performed for other aspects of the problem, including data and processor requirements. In the second stage, the abstracted problem is solved and expressed as an abstracted solution, in this case the specification of generic processing capacities and data storage requirements at the desktop, local server, and global server levels. In the final stage, the abstracted solution is restored to the necessary or desired level of detail. This includes mapping the processing requirements to a specific platform, including specification of processor type(s), memory, and secondary storage requirements. In addition, the network bandwidth requirements can be used to select the appropriate type and capacity of communication links. Once a suitable generic solution is identified, it may be refined and transformed into a more specific form; a sketch of this three-stage pipeline appears below.

The approach makes it possible to assess the consequences of alternative resource allocation and application design strategies on the architectures needed to support them. The techniques presented here allow an organization to evaluate multiple alternatives for an enterprise computing architecture simultaneously. The evaluation framework in this study employed criteria of hardware cost, operating cost, and redundancy, but the methodology is easily extended to include other criteria; a scoring sketch follows the pipeline example below. Prior approaches have typically produced a single design and offered little if any basis for evaluation or comparison. The procedure also permits the derivation of enterprise computing architectures that reflect specific design objectives, such as the distribution of data stores and application programs within and among sites, the extent of application partitioning, and so on. This offers the potential to manage the architecture design effectively by putting the designer in control of specifying preferences and objectives. While this poses a risk that some alternatives may be excluded when specifying preferences, the benefits of allowing the designer to specify and manipulate preferences clearly outweigh that risk.

This research makes an important methodological contribution by demonstrating that an integrated approach combining multiple techniques is possible, useful, and practical. Simulation modeling, rule-based reasoning, and heuristic classification have each been used to reduce otherwise intractable design problems to manageable proportions; this research combines the strengths of these techniques to address problems of broader scope than before. The use of simulation-based procedures for estimating server capacities is also appealing. While more exact techniques such as queueing theory are available, they tend to become intractable when applied to real-world problems; a simulation-based approach provides more realistic and cost-effective estimation of server capacity requirements (see the simulation sketch below). Heuristic-based approaches have also enjoyed considerable success in the literature, but most often in the context of narrow subsets of larger problem domains. We have suggested, and offered evidence to support, the notion that these approaches are also applicable to these larger domains.
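To make the three-stage procedure concrete, the following minimal sketch walks a toy requirement set through abstraction, abstract-level solution, and restoration of detail. It is an illustration of the pattern only, not the Information Architect's implementation: the dataclass fields, tier shares, and platform catalog are all invented for the example.

```python
from dataclasses import dataclass

# Hypothetical requirement type; fields are illustrative assumptions.
@dataclass
class UserApplication:
    name: str
    transactions_per_hour: int
    data_mb: int

# Stage 1: abstract concrete user applications into generic module demands.
def abstract_problem(apps):
    return {
        "processing_units": sum(a.transactions_per_hour for a in apps) // 1000,
        "storage_mb": sum(a.data_mb for a in apps),
    }

# Stage 2: solve in the abstracted space -- allocate generic capacity
# across the desktop, local-server, and global-server tiers.
def solve_abstracted(demand, desktop_share=0.2, local_share=0.5):
    tiers = {
        "desktop": {k: round(v * desktop_share) for k, v in demand.items()},
        "local_server": {k: round(v * local_share) for k, v in demand.items()},
    }
    tiers["global_server"] = {
        k: demand[k] - tiers["desktop"][k] - tiers["local_server"][k]
        for k in demand
    }
    return tiers

# Stage 3: restore detail -- map each tier's generic capacity onto the
# smallest concrete platform class that satisfies it (invented catalog).
PLATFORMS = [
    ("workstation", 5, 2_000),
    ("midrange_server", 50, 50_000),
    ("enterprise_server", 500, 1_000_000),
]

def restore_detail(tiers):
    mapping = {}
    for tier, need in tiers.items():
        for label, cpu, store in PLATFORMS:
            if cpu >= need["processing_units"] and store >= need["storage_mb"]:
                mapping[tier] = label
                break
        else:
            mapping[tier] = "no single platform; partition further"
    return mapping

apps = [UserApplication("order_entry", 12_000, 8_000),
        UserApplication("reporting", 3_000, 40_000)]
print(restore_detail(solve_abstracted(abstract_problem(apps))))
```

The tier shares stand in for the designer preferences discussed above: changing them changes the allocation, while the abstraction and restoration stages are unaffected.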
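The evaluation step can be read as straightforward multi-criteria scoring. The sketch below assumes simple additive weighting over normalized criteria; the criterion names follow the study (hardware cost, operating cost, redundancy), but the weights and candidate figures are hypothetical.

```python
# Invented candidate architectures and figures, for illustration only.
CANDIDATES = {
    "centralized":       {"hardware_cost": 900, "operating_cost": 300, "redundancy": 0.2},
    "two_tier":          {"hardware_cost": 700, "operating_cost": 450, "redundancy": 0.5},
    "fully_distributed": {"hardware_cost": 650, "operating_cost": 600, "redundancy": 0.9},
}

WEIGHTS = {"hardware_cost": 0.4, "operating_cost": 0.4, "redundancy": 0.2}
HIGHER_IS_BETTER = {"redundancy"}  # costs are minimized, redundancy maximized

def normalize(values, higher_is_better):
    # Scale a criterion to [0, 1] so that 1 is always the preferred end.
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0
    return [(v - lo) / span if higher_is_better else (hi - v) / span
            for v in values]

def rank(candidates, weights):
    names = list(candidates)
    scores = {n: 0.0 for n in names}
    for crit, w in weights.items():
        col = normalize([candidates[n][crit] for n in names],
                        crit in HIGHER_IS_BETTER)
        for n, s in zip(names, col):
            scores[n] += w * s
    return sorted(scores.items(), key=lambda kv: -kv[1])

for name, score in rank(CANDIDATES, WEIGHTS):
    print(f"{name:>18}: {score:.2f}")
```

Because the weights are explicit, a designer's preferences translate into a different ranking rather than a different procedure, and adding a new criterion is a one-line change to the weight table.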
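The case for simulation over closed-form queueing analysis is easy to demonstrate: once service times are non-exponential, standard formulas such as those for M/M/c no longer apply, while a short simulation still yields a capacity estimate. The sketch below, with invented arrival and service parameters, searches for the smallest server count that meets a mean response-time target.

```python
import heapq
import random

# Simulation-based capacity estimation: find the smallest number of
# identical servers keeping mean response time under a target. The
# arrival rate, skewed (lognormal) service times, and target are all
# illustrative assumptions, not figures from the study.

def mean_response_time(servers, arrival_rate, service_mean,
                       n_requests=50_000, seed=1):
    rng = random.Random(seed)
    free_at = [0.0] * servers          # when each server next becomes idle
    heapq.heapify(free_at)
    clock, total_response = 0.0, 0.0
    for _ in range(n_requests):
        clock += rng.expovariate(arrival_rate)               # Poisson arrivals
        service = rng.lognormvariate(0, 0.5) * service_mean  # skewed service
        start = max(clock, heapq.heappop(free_at))           # earliest idle server
        heapq.heappush(free_at, start + service)
        total_response += (start + service) - clock          # wait + service
    return total_response / n_requests

def required_capacity(arrival_rate, service_mean, target, max_servers=64):
    for servers in range(1, max_servers + 1):
        if mean_response_time(servers, arrival_rate, service_mean) <= target:
            return servers
    return None

# e.g. 40 requests/sec, ~0.1 s mean service time, 0.3 s response target
print(required_capacity(arrival_rate=40.0, service_mean=0.1, target=0.3))
```

Swapping in a different service-time distribution or dispatching rule requires editing only a line or two, whereas the corresponding analytical model would have to be rederived, which is the tractability argument made above.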
The methodology is flexible and adaptable enough to permit consideration of a variety of problem scenarios beyond the cases presented. The growth of Internet and intranet applications raises considerable interest in techniques for planning their successful adoption and exploitation, and the techniques developed here can accommodate such applications. The modular design of interchangeable components in the Information Architect also facilitates meaningful comparison of alternatives across multiple sets of technology offerings. Given the nature of the problem addressed, this work offers unique opportunities for collaboration among organizations, researchers, and technology providers.