Towards New Languages for Systems Modeling
This paper discusses what future modeling environments could look like. To tackle the ever-increasing complexity of process models, a higher level of abstraction needs to be exploited. It is observed that the most natural way to connect low-level models to high-level tools is simulation. Based on such semantic grounding, new description formalisms can perhaps be implemented.

1. NEW CHALLENGES

Thanks to fieldbuses, modern sensor technology, and so on, the availability of data from industrial processes has been enhanced considerably: an explosion of structureless data is facing us. The problem is that there are not enough domain-area experts to analyze the data and revise the process models appropriately. Automatic modeling systems would be invaluable – systems that could not only adapt the model parameters within a predetermined structural framework, but also determine the structures themselves without too much human intervention.

Modeling problems are attacked by utilizing different kinds of description formalisms. One major approach is to define ever more general formalisms (like the Java language) for system description: in such environments anything can be expressed, but large numbers of expressions are then needed to restrict the expressional power to only the essential phenomena. On the other hand, more and more specialized description formalisms are being introduced: in such tailored environments, individual models can be defined with minimum effort. In the field of systems modeling, for example, there exist tailored languages like VHDL 1076.1-1999 and Modelica, both of which let the expert define the system structure in a high-level language. In addition to such text-oriented formalisms, various graphical languages and environments have been developed to provide easy access to model structure manipulation. In the fields of artificial and computational intelligence, ever more sophisticated approaches have been developed. However, a unifying view is missing, and the efforts typically boil down to more or less extensive software projects, forgetting about the genericity objective.

Looking at contemporary modeling tools and methodologies, there seem to be at least three major obstacles preventing the implementation of truly smart modeling environments: crispness of representations, a missing connection to system semantics, and an insufficient level of abstraction. These issues are briefly discussed in what follows.

Crispness of representations. Modeling languages are typically symbolic; however, to adapt models automatically according to measurement data, the model structures should have some kind of continuity properties. Is it possible to avoid crispness? At least in principle, numeric languages of some kind can be defined – but continuity on the low level does not assure continuity on the larger scale (see [3]). Extensions of fuzzy logic, like "computing with words", seem to lack the expressional power of symbolic representations when the conceptual structures are collapsed onto the real axis. Simple pruning of the complex structures does not help.

However, it turns out that – at least when speaking of dynamic systems – useful results can be found when, rather than collapsing, the "problem space" is inflated: single concepts are represented as continuous distributions in a high-dimensional space, so that crisp values can be interpreted as projections of these high-dimensional objects. Structural complexity can thus be transformed, to some extent, into dimensional complexity that can be mastered using multivariate statistics, as sketched below.
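As a minimal sketch of this inflation idea (not an implementation from the paper): two concepts are assumed, each represented as a Gaussian cloud in a high-dimensional space, and ordinary principal component analysis stands in for the multivariate statistics. All names, dimensions, and coefficients are illustrative.

```python
# Sketch: concepts as continuous distributions in a high-dimensional
# space; crisp values as low-dimensional projections (assumed setup).
import numpy as np

rng = np.random.default_rng(0)
dim = 50                     # dimension of the "inflated" space (assumed)
n = 200                      # samples per concept (assumed)

# Two concepts, each a continuous Gaussian cloud around a prototype.
prototypes = [rng.normal(size=dim), rng.normal(size=dim)]
samples = np.vstack([p + 0.3 * rng.normal(size=(n, dim)) for p in prototypes])

# PCA via SVD: the principal axes capture the dominant statistical
# structure of the combined concept clouds.
mean = samples.mean(axis=0)
_, _, vt = np.linalg.svd(samples - mean, full_matrices=False)
axes = vt[:2]                # two leading principal directions

# A crisp value is just a projection of a high-dimensional object.
projected = (samples - mean) @ axes.T
print(projected.shape)       # (400, 2): dimensional, not structural, complexity
```

The point of the sketch is only that the relation between concepts becomes continuous: interpolating between the two clouds is meaningful in a way that interpolating between two symbols is not.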
Missing connection to system semantics. To implement a smart system capable of reacting to measurements appropriately, some kind of understanding is necessary; the system somehow has to capture the meaning, or semantics, of the models and data structures. General-purpose modeling languages are purely syntactic; there is no mechanism for adding "semantic tags" that would connect the model to the system being modeled. Speaking of semantics, one of course faces the eternal challenges of artificial intelligence – but in some special fields, like the modeling of dynamic systems, this semantics truly can be captured. Even though one cannot promise truth, one can reach relevance (see below).

Insufficient level of abstraction. The role of a model is to abstract phenomena, hiding the irrelevant details. Whereas today's modeling tools and the related mathematical machinery – differential equations, etc. – are excellent for the analysis and manipulation of simple systems, the models do not scale as complexity increases: managing them typically becomes exponentially more cumbersome as new constructs are added. Another issue is that there exist so many of those mathematical tools that hybrid models containing mutually incompatible model types become too heterogeneous to be maintained efficiently.

It can be claimed that a higher level of abstraction can be reached only through emergence – old approaches have to be abandoned, and qualitatively different tools are needed. For something relevant to emerge, the domain-area semantics has to be somehow encapsulated in the constructs. How to functionalize system semantics, and how this semantics facilitates the emergence of multivariate statistical structures – these issues are discussed next. First, let us look at the modeling problem from a wider perspective.

2. COMPLEX SYSTEMS AND EMERGENCE

Stephen Wolfram proposes [2] that everything in Nature could be explained in terms of simple "programs" that are just iterated long enough – complexity is an emergent phenomenon. However, despite his intuitively appealing claims, something is missing: relevance. It is extremely hard to imagine how the complexity of an automation system, for example, could be explained in terms of cellular automata (as proposed by Wolfram). Such all-embracing paradigms, being too general, cannot explain individual domain fields sufficiently, and more domain-oriented tools are necessary.
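To make the notion of a "simple program" concrete, here is a minimal sketch of an elementary cellular automaton, the kind of program Wolfram studies. Rule 110 is assumed as a standard example; the text itself names no particular rule.

```python
# Sketch: an elementary (1-D, binary) cellular automaton. The rule
# number encodes the new cell value for each 3-cell neighborhood.

def step(cells, rule=110):
    """One synchronous update with wrap-around boundaries."""
    n = len(cells)
    return [
        (rule >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# A single live cell, iterated: local rules, globally intricate behavior.
cells = [0] * 31
cells[15] = 1
for _ in range(15):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```

Running this shows locally trivial rules producing globally intricate patterns – emergence in Wolfram's sense, but with no obvious relevance to, say, an industrial process.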
The idea of emergence still has a lot of potential. When large numbers of simple elements or operations are combined, the behavior of the total system may become very difficult to manage with the tools that were appropriate for studying the subsystems alone. Qualitatively, the most relevant way of looking at the system may change altogether.

Take an example from physics, the modeling of gases: elementary particles behave stochastically (orbitals, etc.); atoms behave deterministically (the Newtonian ideal-gas model); atom groups behave statistically (statistical mechanics); large volumes behave deterministically (state described in terms of pressures and temperatures); large gas units behave stochastically (turbulence); and perfectly stirred volumes behave deterministically (ideal mixers with low-order ordinary differential equation models). Even though gas flows can in principle be reduced to elementary particles, the most economic (and comprehensible) way to capture the relevant phenomena alternates from level to level – from statistical (stochastic) to deterministic and back – and this happens several times! A reasonable modeling tool takes these transitions into account.

Could it be that, also in the case of complex industrial process models, when abstracting – looking at the complexity in perspective – the next level above the structurally deterministic component level, captured by explicit models and formulas, should be statistical? And how is this higher level to be reached? Closer analysis is needed here.

3. PROCESS AND ITS SEMANTICS

Consider a vessel as shown in Fig. 1. On the physical level, it is natural to think of the incoming flow as the process input and the outgoing flow as the output. However, on the information-processing level, the physical details are no longer of primary interest: it is the flow of information that is essential, meaning that the ways to affect the process must be interpreted as the actual inputs – in this case, the total flow – while the measurements dictate what should be regarded as outputs – in this case, the tank level (see the simulation sketch at the end of this section).

So, the models that are routinely studied in automation systems are already abstractions of the physical system. However, this kind of abstraction is not yet "high" enough: feedback loops, for example, just collapse into more complex processes, with no structural simplification (elimination of parameters, for example) taking place. Yet another small step upward is needed.
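The paper gives no equations for the vessel of Fig. 1, so the following simulation sketch rests on an assumed gravity-drained tank model: the total inflow is the manipulated input, the level is the measured output, and a Torricelli-type law governs the outflow. All coefficients are illustrative.

```python
# Sketch: tank level dynamics, A*dh/dt = q_in - q_out, with an
# assumed outflow law q_out = c*sqrt(h), integrated by Euler steps.
import math

A = 1.0      # tank cross-section area, m^2 (assumed)
c = 0.1      # outflow coefficient (assumed)
dt = 1.0     # integration step, s
h = 0.5      # initial level, m

def q_in(t):
    # Step change in the manipulated total inflow (assumed input signal).
    return 0.05 if t < 300 else 0.08

for k in range(601):
    t = k * dt
    q_out = c * math.sqrt(max(h, 0.0))   # Torricelli-type outflow
    h += dt * (q_in(t) - q_out) / A      # mass balance
    if k % 100 == 0:
        print(f"t = {t:5.0f} s   h = {h:.3f} m")
```

A runnable low-level model like this is the kind of semantic grounding referred to in the abstract: whatever higher-level description formalism is introduced, its constructs can be anchored to simulated dynamics of this kind.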
[1] Stuart Russell and Peter Norvig. Artificial Intelligence: A Modern Approach, 1995.
[2] Stephen Wolfram. A New Kind of Science, 2002.
[3] Heikki Hyötyniemi. From intelligent models to smart ones, 1999.
[4] Herbert A. Simon. The Sciences of the Artificial, 1970.