An integrated approach to enterprise computing architectures

In the first stage, the problem is abstracted on predetermined dimensions, thereby hiding unnecessary detail. For example, it is possible to specify application systems for individual users in terms of generic software modules. Similar abstractions are performed for other aspects of the problem, including data and processor requirements. In the second stage, the abstracted problem is solved and expressed in terms of an abstracted solution: in this case, the specification of generic processing capacities and data storage requirements at the desktop, local server, and global server levels. In the final stage, the abstracted solution is restored to the necessary or desired level of detail. This includes mapping the processing requirements to a specific platform, including specification of processor type(s), memory, and secondary storage requirements. In addition, the network bandwidth requirements can be used to select the appropriate type and capacity of communication links. Once a suitable generic solution is identified, it may be refined and transformed into a more specific form.

This approach makes it possible to evaluate the consequences of alternative resource allocation and application design strategies on the architectures needed to support them. The techniques presented here allow an organization to evaluate multiple alternatives for an enterprise computing architecture simultaneously. The evaluation framework employed in this study used criteria of hardware cost, operating cost, and redundancy, but the methodology is easily extended to include other criteria. Prior approaches have typically produced single designs and offered little, if any, basis for evaluation or comparison. The procedure also permits the derivation of enterprise computing architectures that reflect specific design objectives, such as the distribution of data stores and application programs within and among sites, the extent of application partitioning, and so on. This offers the potential to manage the architecture design effectively by putting the designer in control of specifying preferences and objectives. While this poses a risk that some alternatives may be excluded when preferences are specified, the benefits of allowing the designer to specify and manipulate preferences clearly outweigh that risk.

This research makes an important methodological contribution by demonstrating that an integrated approach combining multiple techniques is possible, useful, and practical. Simulation modeling, rule-based reasoning, and heuristic classification have all been used to reduce otherwise intractable design problems to manageable proportions. This research combines the strengths of each of these techniques to facilitate consideration of problems of broader scope than before. The use of simulation-based procedures for estimating server capacities is also appealing. While more exact techniques such as queueing theory are available, they tend to exhibit problems of tractability when applied to real-world problems; a simulation-based approach provides more realistic and cost-effective estimation of server capacity requirements. Heuristic-based approaches have also enjoyed considerable success in the literature, but most often in the context of narrow subsets of larger problem domains. We have suggested, and offer evidence to support, the notion that these approaches are also applicable to these larger domains.
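To make the three-stage procedure concrete, the following sketch traces a toy version of it in Python. The generic module profiles, the fixed local/global placement split, and the platform catalog are hypothetical placeholders introduced for illustration; they are not values or components from the study, and the desktop level is omitted for brevity.

    # Stage 1: abstract each user's applications into generic software
    # modules with coarse processing and storage profiles.
    GENERIC_MODULES = {
        "data_entry": {"cpu_mips": 5,  "storage_mb": 50},
        "reporting":  {"cpu_mips": 20, "storage_mb": 500},
        "analytics":  {"cpu_mips": 80, "storage_mb": 2_000},
    }

    def abstract_problem(apps):
        """Collapse a user's application list into generic requirements."""
        demand = {"cpu_mips": 0, "storage_mb": 0}
        for app in apps:
            for dim, amount in GENERIC_MODULES[app].items():
                demand[dim] += amount
        return demand

    # Stage 2: solve the abstracted problem by aggregating generic
    # capacities at the local-server and global-server levels (a fixed
    # placement split stands in for the real allocation logic).
    def solve_abstracted(users, local_share=0.6):
        total = {"cpu_mips": 0, "storage_mb": 0}
        for apps in users.values():
            for dim, amount in abstract_problem(apps).items():
                total[dim] += amount
        local = {d: v * local_share for d, v in total.items()}
        global_ = {d: v * (1 - local_share) for d, v in total.items()}
        return local, global_

    # Stage 3: restore detail by mapping generic capacities onto the
    # cheapest concrete platform in a (hypothetical) vendor catalog.
    PLATFORMS = [
        {"name": "S-100", "cpu_mips": 100, "storage_mb": 10_000, "cost": 4_000},
        {"name": "S-500", "cpu_mips": 500, "storage_mb": 50_000, "cost": 15_000},
    ]

    def restore_detail(capacity):
        feasible = [p for p in PLATFORMS
                    if all(p[d] >= capacity[d] for d in capacity)]
        return min(feasible, key=lambda p: p["cost"], default=None)

    users = {"u1": ["data_entry", "reporting"], "u2": ["analytics"]}
    local, global_ = solve_abstracted(users)
    print(restore_detail(local), restore_detail(global_))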
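The multi-criteria evaluation described above can likewise be sketched as a simple weighted scoring of candidate architectures. The weights, the normalized candidate profiles, and the sign convention are illustrative assumptions; the point is only that adding a criterion is a local, one-line change.

    # Criteria and their direction: lower-is-better costs carry sign -1,
    # higher-is-better redundancy carries sign +1. Values are assumed to
    # be normalized to comparable scales.
    CRITERIA = {"hw_cost": -1, "op_cost": -1, "redundancy": +1}

    def score(arch, weights):
        """Weighted score of one candidate architecture."""
        return sum(sign * weights[c] * arch[c] for c, sign in CRITERIA.items())

    candidates = [
        {"name": "centralized", "hw_cost": 1.0, "op_cost": 0.6, "redundancy": 0.2},
        {"name": "distributed", "hw_cost": 1.4, "op_cost": 0.9, "redundancy": 0.8},
    ]
    weights = {"hw_cost": 0.4, "op_cost": 0.3, "redundancy": 0.3}
    best = max(candidates, key=lambda a: score(a, weights))
    print(best["name"])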
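The simulation-based sizing argument can be illustrated with a crude single-queue model that searches for the smallest server capacity meeting a response-time target. The Poisson arrivals, exponential service times, and multiplicative capacity search are simplifying assumptions, not the study's actual estimation procedure.

    import random

    def simulate_response_time(arrival_rate, mean_service_s, capacity,
                               n_requests=50_000, seed=1):
        """Mean response time of a single FIFO server of relative speed
        `capacity` under Poisson arrivals and exponential service."""
        rng = random.Random(seed)
        clock = free_at = total = 0.0
        for _ in range(n_requests):
            clock += rng.expovariate(arrival_rate)            # next arrival
            service = rng.expovariate(1.0 / mean_service_s) / capacity
            start = max(clock, free_at)                       # wait if busy
            free_at = start + service
            total += free_at - clock                          # response time
        return total / n_requests

    def size_server(arrival_rate, mean_service_s, target_s):
        """Grow capacity geometrically until the target is met."""
        capacity = 1.0
        while simulate_response_time(arrival_rate, mean_service_s,
                                     capacity) > target_s:
            capacity *= 1.25
        return capacity

    print(size_server(arrival_rate=10.0, mean_service_s=0.08, target_s=0.2))

Even this toy version shows the appeal noted above: the workload model can be made arbitrarily realistic without the tractability concerns that arise when a closed-form queueing analysis must accommodate the same detail.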
This methodology is flexible and adaptable enough to permit consideration of a variety of problem scenarios beyond the scope of the cases presented. The growth of Internet and intranet applications has raised considerable interest in techniques for planning their successful adoption and exploitation, and the techniques developed and presented here are capable of accommodating such applications. The modular design of interchangeable components in the Information Architect also facilitates meaningful comparison of alternatives across multiple sets of technology offerings, as sketched below. Given the nature of the problem addressed, this work offers unique opportunities for collaboration among organizations, researchers, and technology providers.
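As a minimal illustration of the interchangeable-component idea, the sketch below re-runs the same sizing step against alternative vendor catalogs. The vendors, catalog entries, and requirement figures are hypothetical and stand in for whatever technology sets an organization wishes to compare.

    def cheapest_feasible(capacity, catalog):
        """Pick the cheapest platform in `catalog` meeting `capacity`."""
        feasible = [p for p in catalog
                    if all(p[d] >= capacity[d] for d in capacity)]
        return min(feasible, key=lambda p: p["cost"], default=None)

    def compare_offerings(capacity, catalogs):
        """Evaluate the same requirements against each technology set."""
        return {vendor: cheapest_feasible(capacity, catalog)
                for vendor, catalog in catalogs.items()}

    need = {"cpu_mips": 300, "storage_mb": 40_000}
    catalogs = {
        "vendor_a": [{"name": "A-1", "cpu_mips": 400, "storage_mb": 60_000, "cost": 12_000}],
        "vendor_b": [{"name": "B-1", "cpu_mips": 350, "storage_mb": 30_000, "cost": 9_000}],
    }
    print(compare_offerings(need, catalogs))

Because the catalog is the only input that changes, alternatives across technology offerings can be compared on identical requirements, which is the comparison the modular design is meant to enable.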