Today's Recommendations

2006 - NeuroImage

An automated labeling system for subdividing the human cerebral cortex on MRI scans into gyral based regions of interest

In this study, we have assessed the validity and reliability of an automated labeling system that we have developed for subdividing the human cerebral cortex on magnetic resonance images into gyral based regions of interest (ROIs). Using a dataset of 40 MRI scans we manually identified 34 cortical ROIs in each of the individual hemispheres. This information was then encoded in the form of an atlas that was utilized to automatically label ROIs. To examine the validity, as well as the intra- and inter-rater reliability of the automated system, we used both intraclass correlation coefficients (ICC), and a new method known as mean distance maps, to assess the degree of mismatch between the manual and the automated sets of ROIs. When compared with the manual ROIs, the automated ROIs were highly accurate, with an average ICC of 0.835 across all of the ROIs, and a mean distance error of less than 1 mm. Intra- and inter-rater comparisons yielded little to no difference between the sets of ROIs. These findings suggest that the automated method we have developed for subdividing the human cerebral cortex into standard gyral-based neuroanatomical regions is both anatomically valid and reliable. This method may be useful for both morphometric and functional studies of the cerebral cortex as well as for clinical investigations aimed at tracking the evolution of disease-induced changes over time, including clinical trials in which MRI-based measures are used to examine response to treatment.

1987

The jackknife, the bootstrap, and other resampling plans

The Jackknife Estimate of Bias. The Jackknife Estimate of Variance. Bias of the Jackknife Variance Estimate. The Bootstrap. The Infinitesimal Jackknife. The Delta Method and the Influence Function. Cross-Validation, Jackknife and Bootstrap. Balanced Repeated Replications (Half-Sampling). Random Subsampling. Nonparametric Confidence Intervals.
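
For readers who want a concrete handle on the first two chapter topics, here is a minimal Python sketch of the leave-one-out jackknife estimates of bias and variance (function and variable names are illustrative, not from the monograph):

```python
import numpy as np

def jackknife(data, statistic):
    """Leave-one-out jackknife estimates of bias and variance.

    `statistic` is any function mapping a 1-D array to a scalar
    (e.g. np.mean); names here are illustrative only.
    """
    data = np.asarray(data)
    n = len(data)
    theta_hat = statistic(data)
    # Leave-one-out replicates theta_(i)
    replicates = np.array([statistic(np.delete(data, i)) for i in range(n)])
    theta_dot = replicates.mean()
    bias = (n - 1) * (theta_dot - theta_hat)                        # jackknife bias estimate
    variance = (n - 1) / n * np.sum((replicates - theta_dot) ** 2)  # jackknife variance estimate
    return bias, variance

# Example: bias and variance of the sample mean (bias should be near 0)
rng = np.random.default_rng(0)
print(jackknife(rng.normal(size=50), np.mean))
```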

2001 - Wiley-Interscience series in systems and optimization

Multi-objective optimization using evolutionary algorithms

From the Publisher: Evolutionary algorithms are relatively new, but very powerful techniques used to find solutions to many real-world search and optimization problems. Many of these problems have multiple objectives, which leads to the need to obtain a set of optimal solutions, known as Pareto-optimal (or efficient) solutions. It has been found that using evolutionary algorithms is a highly effective way of finding multiple Pareto-optimal solutions in a single simulation run.
· Comprehensive coverage of this growing area of research
· Carefully introduces each algorithm with examples and in-depth discussion
· Includes many applications to real-world problems, including engineering design and scheduling
· Includes discussion of advanced topics and future research
· Features exercises and solutions, enabling use as a course text or for self-study
· Accessible to those with limited knowledge of classical multi-objective optimization and evolutionary algorithms
The integrated presentation of theory, algorithms and examples will benefit those working and researching in the areas of optimization, optimal design and evolutionary computing. This text provides an excellent introduction to the use of evolutionary algorithms in multi-objective optimization, allowing use as a graduate course text or for self-study.

2004 - Structural and Multidisciplinary Optimization

Survey of multi-objective optimization methods for engineering

A survey of current continuous nonlinear multi-objective optimization (MOO) concepts and methods is presented. It consolidates and relates seemingly different terminology and methods. The methods are divided into three major categories: methods with a priori articulation of preferences, methods with a posteriori articulation of preferences, and methods with no articulation of preferences. Genetic algorithms are surveyed as well. Commentary is provided on three fronts, concerning the advantages and pitfalls of individual methods, the different classes of methods, and the field of MOO as a whole. The characteristics of the most significant methods are summarized. Conclusions are drawn that reflect often-neglected ideas and applicability to engineering problems. It is found that no single approach is superior. Rather, the selection of a specific method depends on the type of information that is provided in the problem, the user’s preferences, the solution requirements, and the availability of software.
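
As one concrete instance of a method with a priori articulation of preferences, the familiar weighted-sum scalarization (a textbook formulation, not a construction specific to this survey) converts the vector problem into a single objective:

$$\min_{\mathbf{x}\in X}\ \sum_{i=1}^{k} w_i\, f_i(\mathbf{x}),\qquad w_i \ge 0,\quad \sum_{i=1}^{k} w_i = 1,$$

where, under convexity assumptions, each admissible weight vector selects one Pareto-optimal solution.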

2003 - Technometrics

Statistical Models and Methods for Lifetime Data

histories are grouped to give monthly counts of events, the information needed to compute Nelson’s variance estimate for the sample MCF, described in Chapter 4, has been lost. Instead, Nelson describes and illustrates how to compute a simple “naive” variance estimate and confidence intervals that would be correct under the assumption of an underlying nonhomogeneous Poisson process model.

Chapter 6 describes methods for analyzing recurrence data when events can be divided into categories. Examples include different failure modes for a system and the gender of a child born. The basic underlying model is that each category has its own population MCF. Under weak assumptions, the MCFs for individual event types can be added to give the MCF for a specified set of event types. Knowing the MCF for each category would allow one to answer questions about the MCF for all types of events combined, for a particular type of event, and for any chosen subset of the event types. Each of these questions is illustrated with data on failures of traction motors for subway cars. For example, a reliability analyst can estimate the MCF for a system under the assumption that one or more failure modes can be eliminated. As Nelson points out, the MCF estimators presented in this chapter do not require independence of the underlying stochastic processes that generate the different kinds of events. This is in contrast to the life data competing-risk model, where the assumption of independence is critical for making inferences about the effect of removing a failure mode (see, e.g., chapt. 5 of Nelson 1982). For the MCF model, there is, however, the tacit assumption that eliminating a failure mode will not affect the MCF functions of the other failure modes. Although the methods based on the nonparametric recurrence stochastic process model are versatile and require minimal assumptions, in more complicated situations they cannot be applied without careful thought. Consider, for example, a repairable system that has a replaceable unit with two failure modes (A and B), both of which are caused by a common mechanism (e.g., corrosion). Due to the common cause, the times to failure for the two components are highly correlated. When the component fails from either A or B, it is replaced, censoring the other mode. If this censoring of the other mode is naively overlooked, and an engineering change is made to eliminate one of the failure modes (say A), then only the symptom has been fixed and there will be a corresponding increase in the MCF for mode B. Then looking at the past data for the occurrence of mode B to estimate the MCF of mode B alone would be misleading.

Chapter 7 presents methods for comparing sample MCFs to see whether they differ statistically. The methods are based on an estimate of the difference between two MCFs of processes to be compared and confidence intervals computed for this difference. The methods are illustrated by comparing treatments for recurrent bladder tumors and replacements of two different batches of locomotive braking grids. Both pointwise and simultaneous (over time) comparisons are described. There is also a brief discussion of multiple comparison methods that would be needed if more than two MCFs are to be compared.

Chapter 8 provides a very useful survey of other topics closely related to the methods presented in the main part of the book. Topics described include Poisson process and nonhomogeneous Poisson process models, renewal process models, models with covariates, and other models.

As with Wayne Nelson’s other books, this book contains a valuable collection of interesting actual applications and corresponding data that we can expect to be used in future publications by other people doing research in this area. An Excel workbook that contains all of the data in the book is available at http://www.siam.org/books/sa10. In summary, this is an important and interesting book from which most statisticians and many others who analyze data will derive benefit. As the easy-to-use tools described here become more commonly known, I predict that the methods will be much more widely used.
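
For readers unfamiliar with the book, the sample MCF that drives most of these chapters can be computed in a few lines; the sketch below handles only simple right-censored recurrence histories and uses illustrative names, so it is an approximation of the idea rather than Nelson's exact procedure:

```python
from collections import Counter

def sample_mcf(histories, end_times):
    """Nonparametric sample MCF (mean cumulative function).

    histories: list of event-time lists, one per unit.
    end_times: end-of-observation (censoring) time for each unit.
    Returns a list of (t, MCF(t)) points. Illustrative sketch only.
    """
    events = Counter()
    for unit in histories:
        for t in unit:
            events[t] += 1
    mcf, points = 0.0, []
    for t in sorted(events):
        at_risk = sum(1 for end in end_times if end >= t)  # units still under observation at t
        mcf += events[t] / at_risk
        points.append((t, mcf))
    return points

# Three units observed to different ages, with recurrent repair times
print(sample_mcf([[5, 12], [7], [3, 9, 15]], end_times=[20, 10, 16]))
```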

1980 - IEEE Transactions on Pattern Analysis and Machine Intelligence

Digital Image Enhancement and Noise Filtering by Use of Local Statistics

Computational techniques involving contrast enhancement and noise filtering on two-dimensional image arrays are developed based on their local mean and variance. These algorithms are nonrecursive and do not require the use of any kind of transform. They share the same characteristics in that each pixel is processed independently. Consequently, this approach has an obvious advantage when used in real-time digital image processing applications and where a parallel processor can be used. For both the additive and multiplicative cases, the a priori mean and variance of each pixel are derived from its local mean and variance. Then, the minimum mean-square error estimator in its simplest form is applied to obtain the noise filtering algorithms. For multiplicative noise a statistically optimal linear approximation is made. Experimental results show that such an assumption yields a very effective filtering algorithm. Examples on images containing 256 × 256 pixels are given. Results show that in most cases the techniques developed in this paper are readily adaptable to real-time image processing.
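
A minimal sketch of the additive-noise case described above, assuming the noise variance is known and using uniform local windows (names, window size, and noise level are illustrative, not the paper's exact algorithm):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_stats_filter(image, window=7, noise_var=0.01):
    """Local-statistics (Lee-type) filter for additive noise.

    Each pixel is shrunk toward its local mean by a gain that depends on
    the local variance; a sketch, not the paper's exact implementation.
    """
    img = image.astype(float)
    local_mean = uniform_filter(img, window)
    local_sq_mean = uniform_filter(img * img, window)
    local_var = local_sq_mean - local_mean ** 2
    # Gain is near 0 in flat regions (mostly noise) and near 1 near edges.
    gain = np.clip((local_var - noise_var) / np.maximum(local_var, 1e-12), 0.0, 1.0)
    return local_mean + gain * (img - local_mean)
```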

2003 - ACM SIGGRAPH 2003 Papers

Poisson image editing

Using generic interpolation machinery based on solving Poisson equations, a variety of novel tools are introduced for seamless editing of image regions. The first set of tools permits the seamless importation of both opaque and transparent source image regions into a destination region. The second set is based on similar mathematical ideas and allows the user to modify the appearance of the image seamlessly, within a selected region. These changes can be arranged to affect the texture, the illumination, and the color of objects lying in the region, or to make tileable a rectangular selection.
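
Concretely, the "generic interpolation machinery" the abstract refers to is the guided variational problem

$$\min_{f}\ \iint_{\Omega} \bigl\lVert \nabla f - \mathbf{v} \bigr\rVert^{2}\qquad \text{with } f\big|_{\partial\Omega} = f^{*}\big|_{\partial\Omega},$$

whose unique solution satisfies the Poisson equation $\Delta f = \operatorname{div}\mathbf{v}$ over the selected region $\Omega$, where $f^{*}$ is the destination image and $\mathbf{v}$ is the guidance field (for seamless cloning, the gradient of the source image).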

1998 - IEEE Trans. Pattern Anal. Mach. Intell.

Fingerprint Image Enhancement: Algorithm and Performance Evaluation

In order to ensure that the performance of an automatic fingerprint identification/verification system will be robust with respect to the quality of input fingerprint images, it is essential to incorporate a fingerprint enhancement algorithm in the minutiae extraction module. We present a fast fingerprint enhancement algorithm, which can adaptively improve the clarity of ridge and valley structures of input fingerprint images based on the estimated local ridge orientation and frequency. We have evaluated the performance of the image enhancement algorithm using the goodness index of the extracted minutiae and the accuracy of an online fingerprint verification system. Experimental results show that incorporating the enhancement algorithm improves both the goodness index and the verification accuracy.

2003 - Evolutionary Computation

Reducing the Time Complexity of the Derandomized Evolution Strategy with Covariance Matrix Adaptation (CMA-ES)

This paper presents a novel evolutionary optimization strategy based on the derandomized evolution strategy with covariance matrix adaptation (CMA-ES). This new approach is intended to reduce the number of generations required for convergence to the optimum. Reducing the number of generations, i.e., the time complexity of the algorithm, is important if a large population size is desired: (1) to reduce the effect of noise; (2) to improve global search properties; and (3) to implement the algorithm on (highly) parallel machines. Our method results in a highly parallel algorithm which scales favorably with large numbers of processors. This is accomplished by efficiently incorporating the available information from a large population, thus significantly reducing the number of generations needed to adapt the covariance matrix. The original version of the CMA-ES was designed to reliably adapt the covariance matrix in small populations but it cannot exploit large populations efficiently. Our modifications scale up the efficiency to population sizes of up to 10n, where n is the problem dimension. This method has been applied to a large number of test problems, demonstrating that in many cases the CMA-ES can be advanced from quadratic to linear time complexity.
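
In practice the large-population regime discussed here is easiest to try through the `cma` Python package, a later reference implementation maintained by the CMA-ES authors; a rough sketch (objective, dimension, and option values are placeholders):

```python
import cma  # pip install cma; reference implementation by the CMA-ES authors

def sphere(x):
    return sum(xi * xi for xi in x)

n = 10
# A large population (here 10*n, the regime studied in the paper) lets one
# generation be evaluated in parallel and cuts the number of generations needed.
es = cma.CMAEvolutionStrategy(n * [1.0], 0.5, {'popsize': 10 * n})
while not es.stop():
    candidates = es.ask()                                  # sample lambda offspring
    es.tell(candidates, [sphere(x) for x in candidates])   # rank-based covariance update
print(es.result.xbest)
```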

1993

Practical neural network recipes in C

Foundations. Classification. Autoassociation. Time Series Prediction. Function Approximation. Multilayer Feedforward Networks. Eluding Local Minima I: Simulated Annealing. Eluding Local Minima II: Genetic Optimization. Regression and Neural Networks. Designing Feedforward Network Architectures. Interpreting Weights: How Does This Thing Work? Probabilistic Neural Networks. Functional Link Networks. Hybrid Networks. Designing the Training Set. Preparing Input Data. Fuzzy Data and Processing. Unsupervised Training. Evaluating Performance of Neural Networks. Confidence Measures. Optimizing the Decision Threshold. Using the NEURAL Program. Appendix. Bibliography. Index.

1993 - INTERCHI

A mathematical model of the finding of usability problems

For 11 studies, we find that the detection of usability problems as a function of number of users tested or heuristic evaluators employed is well modeled as a Poisson process. The model can be used to plan the amount of evaluation required to achieve desired levels of thoroughness or benefits. Results of early tests can provide estimates of the number of problems left to be found and the number of additional evaluations needed to find a given fraction. With quantitative evaluation costs and detection values, the model can estimate the numbers of evaluations at which optimal cost/benefit ratios are obtained and at which marginal utility vanishes. For a “medium” example, we estimate that 16 evaluations would be worth their cost, with maximum benefit/cost ratio at four.
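
The underlying model is simple enough to state in one line: if a study contains $N$ usability problems and each evaluation detects any given problem independently with probability $\lambda$, the expected number found after $i$ evaluations is

$$\mathrm{Found}(i) \;=\; N\bigl(1-(1-\lambda)^{i}\bigr),$$

which is the detection curve the paper fits to the 11 studies and then uses for the cost/benefit planning described above.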

2002 - Proceedings of the 2002 Congress on Evolutionary Computation. CEC'02 (Cat. No.02TH8600)

Scalable multi-objective optimization test problems

After adequately demonstrating the ability to solve different two-objective optimization problems, multi-objective evolutionary algorithms (MOEAs) must show their efficacy in handling problems having more than two objectives. In this paper, we suggest three different approaches for systematically designing test problems for this purpose. The simplicity of construction, scalability to any number of decision variables and objectives, knowledge of exact shape and location of the resulting Pareto-optimal front, and ability to control difficulties in both converging to the true Pareto-optimal front and maintaining a widely distributed set of solutions are the main features of the suggested test problems. Because of these features, they should be useful in various research activities on MOEAs, such as testing the performance of a new MOEA, comparing different MOEAs, and having a better understanding of the working principles of MOEAs.

2007 - SIAM J. Numer. Anal.

A Stochastic Collocation Method for Elliptic Partial Differential Equations with Random Input Data

In this paper we propose and analyze a stochastic collocation method to solve elliptic partial differential equations with random coefficients and forcing terms (input data of the model). The input data are assumed to depend on a finite number of random variables. The method consists in a Galerkin approximation in space and a collocation in the zeros of suitable tensor product orthogonal polynomials (Gauss points) in the probability space and naturally leads to the solution of uncoupled deterministic problems as in the Monte Carlo approach. It can be seen as a generalization of the stochastic Galerkin method proposed in [I. Babuška, R. Tempone, and G. E. Zouraris, SIAM J. Numer. Anal., 42 (2004), pp. 800-825] and allows one to treat easily a wider range of situations, such as input data that depend nonlinearly on the random variables, diffusivity coefficients with unbounded second moments, and random variables that are correlated or even unbounded. We provide a rigorous convergence analysis and demonstrate exponential convergence of the “probability error” with respect to the number of Gauss points in each direction in the probability space, under some regularity assumptions on the random input data. Numerical examples show the effectiveness of the method.
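
Schematically, the collocation approximation interpolates the parametric solution at tensor-product Gauss points $\{y_k\}$,

$$u(\,\cdot\,,y)\;\approx\;\sum_{k} u_h(\,\cdot\,,y_k)\, L_k(y),$$

so each coefficient $u_h(\,\cdot\,,y_k)$ comes from one ordinary deterministic solve, and statistics of $u$ are then obtained by integrating against the density of $y$ (notation here is a paraphrase of the paper's setting, not a quotation).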

1999 - Evolutionary Computation

Multi-objective Genetic Algorithms: Problem Difficulties and Construction of Test Problems

In this paper, we study the problem features that may cause a multi-objective genetic algorithm (GA) difficulty in converging to the true Pareto-optimal front. Identification of such features helps us develop difficult test problems for multi-objective optimization. Multi-objective test problems are constructed from single-objective optimization problems, thereby allowing known difficult features of single-objective problems (such as multi-modality, isolation, or deception) to be directly transferred to the corresponding multi-objective problem. In addition, test problems having features specific to multi-objective optimization are also constructed. More importantly, these difficult test problems will enable researchers to test their algorithms for specific aspects of multi-objective optimization.
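
The generic two-objective construction behind these test problems combines a tunable function $g$ of one subset of variables with a shape function $h$, roughly

$$\text{minimize } f_1(x_1),\qquad \text{minimize } f_2(\mathbf{x}) = g(x_2,\ldots,x_n)\, h\bigl(f_1(x_1),\, g(x_2,\ldots,x_n)\bigr),$$

so that $g$ controls the difficulty of converging to the front and $h$ controls its shape (convex, non-convex, or discontinuous). This is stated from the well-known form of the construction rather than quoted verbatim from the paper.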

2013 - ACM Trans. Graph.

Screened Poisson surface reconstruction

Poisson surface reconstruction creates watertight surfaces from oriented point sets. In this work we extend the technique to explicitly incorporate the points as interpolation constraints. The extension can be interpreted as a generalization of the underlying mathematical framework to a screened Poisson equation. In contrast to other image and geometry processing techniques, the screening term is defined over a sparse set of points rather than over the full domain. We show that these sparse constraints can nonetheless be integrated efficiently. Because the modified linear system retains the same finite-element discretization, the sparsity structure is unchanged, and the system can still be solved using a multigrid approach. Moreover we present several algorithmic improvements that together reduce the time complexity of the solver to linear in the number of points, thereby enabling faster, higher-quality surface reconstructions.
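
Schematically, the screened formulation augments the original gradient-fitting energy with a data term evaluated only at the sample points,

$$E(\chi)\;=\;\int_{\Omega}\bigl\lVert \nabla\chi(q)-\vec V(q)\bigr\rVert^{2}\,dq\;+\;\alpha\sum_{p\in P}\chi(p)^{2},$$

where $\chi$ is the (shifted) indicator function, $\vec V$ the smoothed normal field, and $\alpha$ the screening weight; this is a simplified form that omits the paper's normalization of the screening term.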

2008 - SIAM J. Numer. Anal.

A Sparse Grid Stochastic Collocation Method for Partial Differential Equations with Random Input Data

This work proposes and analyzes a Smolyak-type sparse grid stochastic collocation method for the approximation of statistical quantities related to the solution of partial differential equations with random coefficients and forcing terms (input data of the model). To compute solution statistics, the sparse grid stochastic collocation method uses approximate solutions, produced here by finite elements, corresponding to a deterministic set of points in the random input space. This naturally requires solving uncoupled deterministic problems as in the Monte Carlo method. If the number of random variables needed to describe the input data is moderately large, full tensor product spaces are computationally expensive to use due to the curse of dimensionality. In this case the sparse grid approach is still expected to be competitive with the classical Monte Carlo method. Therefore, it is of major practical relevance to understand in which situations the sparse grid stochastic collocation method is more efficient than Monte Carlo. This work provides error estimates for the fully discrete solution using $L^q$ norms and analyzes the computational efficiency of the proposed method. In particular, it demonstrates algebraic convergence with respect to the total number of collocation points and quantifies the effect of the dimension of the problem (number of input random variables) in the final estimates. The derived estimates are then used to compare the method with Monte Carlo, indicating for which problems the former is more efficient than the latter. Computational evidence complements the present theory and shows the effectiveness of the sparse grid stochastic collocation method compared to full tensor and Monte Carlo approaches.
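
The sparse grid referred to here is the standard Smolyak construction: instead of the full tensor product of one-dimensional interpolation operators $U^{i}$, one uses the combination

$$\mathcal{A}(q,N)\;=\;\sum_{q-N+1\,\le\,|\mathbf{i}|\,\le\,q} (-1)^{\,q-|\mathbf{i}|}\binom{N-1}{q-|\mathbf{i}|}\,\bigl(U^{i_1}\otimes\cdots\otimes U^{i_N}\bigr),\qquad |\mathbf{i}| = i_1+\cdots+i_N,$$

which retains only lower-order cross terms and thereby delays the curse of dimensionality (standard notation, paraphrased rather than copied from the paper).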

1984 - Communications in Mathematical Physics

Quantum Ito's formula and stochastic evolutions

Using only the Boson canonical commutation relations and the Riemann-Lebesgue integral we construct a simple theory of stochastic integrals and differentials with respect to the basic field operator processes. This leads to a noncommutative Ito product formula, a realisation of the classical Poisson process in Fock space which gives a noncommutative central limit theorem, the construction of solutions of certain noncommutative stochastic differential equations, and finally to the integration of certain irreversible equations of motion governed by semigroups of completely positive maps. The classical Ito product formula for stochastic differentials with respect to Brownian motion and the Poisson process is a special case.
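
The classical special case mentioned in the last sentence is the product rule for Ito differentials,

$$d(X_t Y_t) \;=\; X_t\,dY_t \;+\; Y_t\,dX_t \;+\; dX_t\,dY_t,$$

evaluated with the usual multiplication table $(dB_t)^2 = dt$, $dB_t\,dt = (dt)^2 = 0$ for Brownian motion and $(dN_t)^2 = dN_t$ for the Poisson process; the quantum Ito formula of the paper replaces these by analogous multiplication rules for the basic field operator processes.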

2001 - ICC 2001. IEEE International Conference on Communications. Conference Record (Cat. No.01CH37240)

Power efficient organization of wireless sensor networks

Wireless sensor networks have emerged recently as an effective way of monitoring remote or inhospitable physical environments. One of the major challenges in devising such networks lies in the constrained energy and computational resources available to sensor nodes. These constraints must be taken into account at all levels of the system hierarchy. The deployment of sensor nodes is the first step in establishing a sensor network. Since sensor networks contain a large number of sensor nodes, the nodes must be deployed in clusters, where the location of each particular node cannot be fully guaranteed a priori. Therefore, the number of nodes that must be deployed in order to completely cover the whole monitored area is often higher than if a deterministic procedure were used. In networks with stochastically placed nodes, activating only the necessary number of sensor nodes at any particular moment can save energy. We introduce a heuristic that selects mutually exclusive sets of sensor nodes, where the members of each of those sets together completely cover the monitored area. The intervals of activity are the same for all sets, and only one of the sets is active at any time. The experimental results demonstrate that by using only a subset of sensor nodes at each moment, we achieve a significant energy savings while fully preserving coverage.
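
The core idea of partitioning the deployed nodes into mutually exclusive full covers can be illustrated with a toy greedy routine; this is a simplified sketch under a discrete coverage model, not the heuristic proposed in the paper:

```python
def disjoint_covers(coverage):
    """Greedily partition sensor nodes into mutually exclusive set covers.

    coverage: dict mapping node id -> set of covered grid points.
    Returns a list of node sets, each covering every monitored point
    (toy sketch only).
    """
    universe = set().union(*coverage.values())
    unused = dict(coverage)
    covers = []
    while True:
        remaining, chosen = set(universe), []
        pool = dict(unused)
        while remaining:
            # Pick the unused node covering the most still-uncovered points.
            best = max(pool, key=lambda n: len(pool[n] & remaining), default=None)
            if best is None or not (pool[best] & remaining):
                return covers            # cannot complete another full cover
            chosen.append(best)
            remaining -= pool.pop(best)
        for n in chosen:
            del unused[n]
        covers.append(set(chosen))

# Four nodes, three monitored points; two disjoint full covers exist.
print(disjoint_covers({1: {"a", "b"}, 2: {"c"}, 3: {"a", "c"}, 4: {"b"}}))
```

Only one cover is kept active at a time, so the achievable lifetime extension grows with the number of disjoint covers found.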

1996

A convergent adaptive algorithm for Poisson's equation

We construct a converging adaptive algorithm for linear elements applied to Poisson’s equation in two space dimensions. Starting from a macro triangulation, we describe how to construct an initial triangulation from a priori information. Then we use a posteriori error estimators to get a sequence of refined triangulations and approximate solutions. It is proved that the error, measured in the energy norm, decreases at a constant rate in each step until a prescribed error bound is reached. Extensions to higher-order elements in two space dimensions and numerical results are included.
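
In symbols, the constant-rate claim says that the energy-norm error of successive adaptive iterates satisfies

$$\lVert \nabla(u-u_{k+1})\rVert_{L^{2}(\Omega)} \;\le\; \kappa\,\lVert \nabla(u-u_{k})\rVert_{L^{2}(\Omega)},\qquad 0<\kappa<1,$$

until the prescribed tolerance is met, with $\kappa$ depending only on the data and the marking parameters (a restatement of the abstract's claim, not a quotation of the paper's theorem).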

1987 - Bulletin of the American Meteorological Society

A Comprehensive Ocean-Atmosphere Data Set

Development is described of a Comprehensive Ocean-Atmosphere Data Set (COADS)—the result of a cooperative project to collect global weather observations taken near the ocean's surface since 1854, primarily from merchant ships, into a compact and easily used data set. As background, a historical overview is given of how archiving of these marine data has evolved from 1854, when systematic recording of shipboard meteorological and oceanographic observations was first established as an international activity. Input data sets used for COADS are described, as well as the processing steps used to pack input data into compact binary formats and to apply quality controls for identification of suspect weather elements and duplicate marine reports. Seventy-million unique marine reports for 1854–1979 were output from initial processing. Further processing is described, which created statistical summaries for each month of each year of the period, using 2° latitude × 2° longitude boxes. Monthly summary products are a...
