Using sampling theory to build a more universal forest vegetation database

Greater demands on forest resources require that larger amounts of information be readily available to decision-makers. To provide more information faster, databases must be developed that are more comprehensive and easier to use. Data modeling is a process for building more comprehensive and flexible databases by emphasizing fundamental relationships over existing or traditional business operations. A hierarchical series of models is developed during data modeling, beginning with a conceptual model of the activity of interest. Building on the conceptual model, a logical model is constructed that captures specific details, but does so without regard for the eventual implementation software. Finally, the logical model is transformed into a physical model that, combined with application software, forms the actual database. We show how sampling theory was used in a conceptual data model to provide an integrating framework for identifying fundamental relationships. Because the design is grounded in sampling theory, the final data structure organizes forest vegetation data gathering as a scientific process rather than as a set of specific business functions.
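
To make the contrast between a design organized around sampling concepts and one organized around business functions concrete, the following minimal sketch illustrates how a logical model might express sampling-theory entities. The entity names (SamplingDesign, SamplingUnit, Observation) and their attributes are illustrative assumptions for this example only, not the models developed in the paper.

```python
from dataclasses import dataclass
from datetime import date
from typing import List, Optional

# Hypothetical entities keyed to sampling concepts rather than to any
# one inventory program's workflow; attribute choices are assumptions.

@dataclass
class SamplingDesign:
    """How a population was sampled (design type, intensity, etc.)."""
    design_id: int
    description: str          # e.g., "systematic grid, 1 plot per 10 ha"

@dataclass
class SamplingUnit:
    """A physical unit (plot, transect) on which measurements are made."""
    unit_id: int
    design_id: int            # links the unit to its sampling design
    area_ha: Optional[float]  # unit size, if area-based

@dataclass
class Observation:
    """A single measurement taken on a sampling unit at one point in time."""
    unit_id: int
    measured_on: date
    attribute: str            # e.g., "basal_area", "percent_cover"
    value: float

# The "database" is then collections of these entities; any business
# function (inventory report, monitoring summary) is derived from them
# rather than dictating the structure itself.
designs: List[SamplingDesign] = []
units: List[SamplingUnit] = []
observations: List[Observation] = []
```

Because every observation is tied back to a sampling unit and its design, data collected under different programs can be stored and compared in one structure, which is the flexibility the sampling-theory framework is intended to provide.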