Issues in Distance Learning
This review of literature and research into the effectiveness of distance education systems deals with a number of factors that affect their success or failure. These include the influence of distance learning theory upon instructional design and delivery; redefining the roles of partners in distance education teams; media selection; technology adoption; change implementation; methods and strategies to increase interactivity, inquiry, and active learning; learner characteristics and modes of learning; teacher mediation and learner support; operational issues; policy and management issues; and cost/benefit tradeoffs. It is intended as a companion piece to Sherry and Morse's (1994) training needs assessment.
VIS: A System for Verification and Synthesis
Abstraction: Manual abstraction can be performed by giving a file containing the names of variables to abstract. For each variable appearing in the file, a new primary input node is created to drive all the nodes that were previously driven by the variable. Abstracting a net effectively allows it to take any value in its range at every clock cycle. Fair CTL model checking and language emptiness check: VIS performs fair CTL model checking under Buchi fairness constraints. In addition, VIS can perform language emptiness checking by model checking the formula EG true. The language of a design is given by sequences over the set of reachable states that do not violate the fairness constraint. The language emptiness check can be used to perform language containment by expressing the set of bad behaviors as another component of the system. If model checking or language emptiness checking fails, VIS reports the failure with a counterexample, i.e., behavior seen in the system that does not satisfy the property (for model checking), or valid behavior seen in the system (for language emptiness). This is called the "debug" trace. Debug traces list a set of states that are on a path to a fair cycle and fail the CTL formula. Equivalence checking: VIS provides the capability to check the combinational equivalence of two designs. An important use of combinational equivalence checking is as a sanity check when re-synthesizing portions of a network. VIS also provides the capability to test the sequential equivalence of two designs. Sequential verification is done by building the product finite state machine and checking whether a state where the values of two corresponding outputs differ can be reached from the set of initial states of the product machine. If such a state is reachable, a debug trace is provided. Both combinational and sequential verification are implemented using BDD-based routines.
Simulation: VIS also provides traditional design verification in the form of a cycle-based simulator that uses BDD techniques. Since VIS performs both formal verification and simulation using the same data structures, consistency between them is ensured. VIS can generate random input patterns or accept user-specified input patterns. Any subtree of the specified hierarchy may be simulated.
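The product-machine check described above can be sketched in a few lines. This is not VIS's implementation (VIS uses BDD-based symbolic routines); it is a hypothetical explicit-state version, with machines given as plain transition tables, that shows the underlying idea: explore the product machine and fail if a reachable state produces differing outputs.

```python
from collections import deque

def sequential_equivalent(trans_a, trans_b, out_a, out_b, init_a, init_b, inputs):
    """Explore the product machine by BFS; return False iff a reachable
    product state produces differing outputs (a debug trace would exist)."""
    seen = {(init_a, init_b)}
    frontier = deque(seen)
    while frontier:
        sa, sb = frontier.popleft()
        if out_a[sa] != out_b[sb]:
            return False  # counterexample state reached
        for i in inputs:
            nxt = (trans_a[(sa, i)], trans_b[(sb, i)])
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return True

# Two identical 2-state machines that toggle on input 1.
ta = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
oa = {0: 0, 1: 1}
print(sequential_equivalent(ta, dict(ta), oa, dict(oa), 0, 0, [0, 1]))  # True
```

A symbolic implementation replaces the explicit `seen` set with a BDD for the reachable-state set, but the reachability question is the same.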
Statistics for near independence in multivariate extreme values
We propose a multivariate extreme value threshold model for joint tail estimation which overcomes the problems encountered with existing techniques when the variables are near independence. We examine inference under the model and develop tests for independence of extremes of the marginal variables, both when the thresholds are fixed, and when they increase with the sample size. Motivated by results obtained from this model, we give a new and widely applicable characterisation of dependence in the joint tail which includes existing models as special cases. A new parameter which governs the form of dependence is of fundamental importance to this characterisation. By estimating this parameter, we develop a diagnostic test which assesses the applicability of bivariate extreme value joint tail models. The methods are demonstrated through simulation and by analysing two previously published data sets.
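The dependence parameter in question is, under the usual reading of this line of work, the coefficient of tail dependence η, with η near 1 indicating asymptotic dependence and η near 1/2 indicating near independence. A hedged sketch of one standard estimation route (not necessarily the inference procedure of the paper): transform margins to unit Fréchet by ranks, take the structure variable T = min(X, Y), and apply a Hill-type estimator to its upper tail.

```python
import random, math

def tail_dependence_eta(x, y, k=100):
    """Sketch of a tail-dependence estimate: rank-transform margins to unit
    Frechet, set T = min(X, Y); a Hill-type estimator on the k largest T
    approximates eta (near 1: dependence, near 1/2: near independence)."""
    n = len(x)
    def frechet(v):
        order = sorted(range(n), key=lambda i: v[i])
        r = [0.0] * n
        for rank, i in enumerate(order, 1):
            r[i] = -1.0 / math.log(rank / (n + 1.0))
        return r
    xf, yf = frechet(x), frechet(y)
    t = sorted(min(a, b) for a, b in zip(xf, yf))
    top = t[-k:]  # k largest; top[0] serves as the threshold
    return sum(math.log(v / top[0]) for v in top) / k

random.seed(0)
n = 20000
x = [random.gauss(0, 1) for _ in range(n)]
y = [random.gauss(0, 1) for _ in range(n)]  # independent margins: eta near 1/2
print(tail_dependence_eta(x, y, 200))
```

For exactly independent variables, P(T > z) decays like z^-2 on the Fréchet scale, so the estimate should sit near 0.5; the choice of k is a bias/variance tradeoff, as with any threshold method.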
Information Theoretic Learning - Renyi's Entropy and Kernel Perspectives
This book presents the first cohesive treatment of Information Theoretic Learning (ITL) algorithms to adapt linear or nonlinear learning machines in both supervised and unsupervised paradigms. ITL is a framework where the conventional concepts of second order statistics (covariance, L2 distances, correlation functions) are substituted by scalars and functions with information theoretic underpinnings: entropy, mutual information, and correntropy, respectively. ITL quantifies the stochastic structure of the data beyond second order statistics for improved performance without using full-blown Bayesian approaches that require a much larger computational cost. This is possible because of a non-parametric estimator of Renyi's quadratic entropy that is only a function of pairwise differences between samples. The book compares the performance of ITL algorithms with their second order counterparts in many engineering and machine learning applications. Students, practitioners and researchers interested in statistical signal processing, computational intelligence, and machine learning will find in this book the theory to understand the basics, the algorithms to implement applications, and exciting but still unexplored leads that will provide fertile ground for future research.
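The pairwise-difference estimator mentioned above is commonly written as H2 = -log((1/N^2) * sum_ij G(x_i - x_j; 2*sigma^2)), where G is a Gaussian kernel and the double sum is the "information potential". A minimal one-dimensional sketch (kernel width sigma is a free parameter, chosen here arbitrarily):

```python
import math, random

def renyi_quadratic_entropy(samples, sigma=1.0):
    """Parzen-window estimator of Renyi's quadratic entropy:
    H2 = -log( (1/N^2) * sum_ij G(x_i - x_j; 2*sigma^2) ).
    Note it depends only on pairwise differences between samples."""
    n = len(samples)
    two_var = 2.0 * sigma * sigma                  # kernel variance 2*sigma^2
    norm = 1.0 / math.sqrt(2.0 * math.pi * two_var)
    info_potential = sum(
        norm * math.exp(-(a - b) ** 2 / (2.0 * two_var))
        for a in samples for b in samples
    ) / (n * n)
    return -math.log(info_potential)

random.seed(1)
narrow = [random.gauss(0, 0.5) for _ in range(300)]
wide = [random.gauss(0, 2.0) for _ in range(300)]
# A more spread-out sample has higher quadratic entropy.
print(renyi_quadratic_entropy(narrow) < renyi_quadratic_entropy(wide))
```

The O(N^2) cost of the double sum is the price paid for avoiding an explicit density estimate followed by integration.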
NUSMV: A New Symbolic Model Verifier
This paper describes NUSMV, a new symbolic model checker developed as a joint project between Carnegie Mellon University (CMU) and Istituto per la Ricerca Scientifica e Tecnolgica (IRST). NUSMV is designed to be a well structured, open, flexible and documented platform for model checking. In order to make NUSMV applicable in technology transfer projects, it was designed to be very robust, close to the standards required by industry, and to allow for expressive specification languages. NUSMV is the result of the reengineering, reimplementation and extension of SMV [6], version 2.4.4 (SMV from now on). With respect to SMV, NUSMV has been extended and upgraded along three dimensions. First, from the point of view of the system functionalities, NUSMV features a textual interaction shell and a graphical interface, extended model partitioning techniques, and allows for LTL model checking. Second, the system architecture of NUSMV has been designed to be highly modular and open. The interdependencies between different modules have been separated, and an external, state of the art BDD package [8] has been integrated in the system kernel. Third, the quality of the implementation has been strongly enhanced. This makes of NUSMV a robust, maintainable and well documented system, with a relatively easy to modify source code. NUSMV is available at http://nusmv.irst.itc.it/.
Bounded model checking
Symbolic model checking with Binary Decision Diagrams (BDDs) has been successfully used in the last decade for formally verifying finite state systems such as sequential circuits and protocols. Since its introduction at the beginning of the 1990s, it has been integrated in the quality assurance process of several major hardware companies. The main bottleneck of this method is that BDDs may grow exponentially, and hence the amount of available memory restricts the size of circuits that can be verified efficiently. In this article we survey a technique called Bounded Model Checking (BMC), which uses a propositional SAT solver rather than BDD manipulation techniques. Since its introduction in 1999, BMC has been well received by the industry. It can find many logical errors in complex systems that cannot be handled by competing techniques, and is therefore widely perceived as a complementary technique to BDD-based model checking. This observation is supported by several independent comparisons that have been published in the last few years.
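BMC searches for a property violation along paths of bounded length k. A toy sketch of the idea, with the caveat that real BMC encodes the k-step unrolling of the transition relation as a propositional formula and hands it to a SAT solver; here the bounded search is done by plain enumeration over input sequences instead.

```python
from itertools import product

def bmc_find_bad_state(init, step, bad, inputs, bound):
    """Bounded search: look for an input sequence of length <= bound that
    drives the system from init to a state violating the property.
    (Real BMC expresses exactly this unrolling as a SAT instance.)"""
    for k in range(bound + 1):
        for seq in product(inputs, repeat=k):
            s = init
            for i in seq:
                s = step(s, i)
            if bad(s):
                return list(seq)  # counterexample input sequence
    return None  # no violation within the bound (says nothing beyond it)

# Toy 3-bit counter; property under check: "the counter never reaches 5".
step = lambda s, i: (s + i) % 8
cex = bmc_find_bad_state(0, step, lambda s: s == 5, inputs=[0, 1], bound=6)
print(cex)  # [1, 1, 1, 1, 1] -- five increments reach the bad state
```

The key limitation shown by the `return None` branch is also BMC's in practice: absence of a counterexample up to the bound proves nothing about longer paths unless a completeness threshold is known.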
Precision measurement of neutrino oscillation parameters with KamLAND.
The KamLAND experiment has determined a precise value for the neutrino oscillation parameter Δm²₂₁ and stringent constraints on θ₁₂. The exposure to nuclear reactor antineutrinos is increased almost fourfold over previous results, to 2.44 × 10³² proton·yr, due to longer livetime and an enlarged fiducial volume. An undistorted reactor ν̄e energy spectrum is now rejected at >5σ. Analysis of the reactor spectrum above the inverse beta decay energy threshold, including geoneutrinos, gives a best fit at Δm²₂₁ = 7.58 +0.14/−0.13 (stat) +0.15/−0.15 (syst) × 10⁻⁵ eV² and tan²θ₁₂ = 0.56 +0.10/−0.07 (stat) +0.10/−0.06 (syst). Local Δχ² minima at higher and lower Δm²₂₁ are disfavored at >4σ. Combining with solar neutrino data, we obtain Δm²₂₁ = 7.59 ± 0.21 × 10⁻⁵ eV² and tan²θ₁₂ = 0.47 +0.06/−0.05.
OP-ELM: Optimally Pruned Extreme Learning Machine
In this brief, the optimally pruned extreme learning machine (OP-ELM) methodology is presented. It is based on the original extreme learning machine (ELM) algorithm with additional steps to make it more robust and generic. The whole methodology is presented in detail and then applied to several regression and classification problems. Results for both computational time and accuracy (mean square error) are compared to the original ELM and to three other widely used methodologies: multilayer perceptron (MLP), support vector machine (SVM), and Gaussian process (GP). As the experiments for both regression and classification illustrate, the proposed OP-ELM methodology performs several orders of magnitude faster than the other algorithms used in this brief, except the original ELM. Despite the simplicity and fast performance, the OP-ELM is still able to maintain an accuracy that is comparable to the performance of the SVM. A toolbox for the OP-ELM is publicly available online.
Natural Questions: A Benchmark for Question Answering Research
We present the Natural Questions corpus, a question answering data set. Questions consist of real anonymized, aggregated queries issued to the Google search engine. An annotator is presented with a question along with a Wikipedia page from the top 5 search results, and annotates a long answer (typically a paragraph) and a short answer (one or more entities) if present on the page, or marks null if no long/short answer is present. The public release consists of 307,373 training examples with single annotations; 7,830 examples with 5-way annotations for development data; and a further 7,842 5-way annotated examples sequestered as test data. We present experiments validating the quality of the data. We also describe analysis of 25-way annotations on 302 examples, giving insights into human variability on the annotation task. We introduce robust metrics for the purposes of evaluating question answering systems; demonstrate high human upper bounds on these metrics; and establish baseline results using competitive methods drawn from related literature.
Genome Expansion and Gene Loss in Powdery Mildew Fungi Reveal Tradeoffs in Extreme Parasitism
From Blight to Powdery Mildew. Pathogenic effects of microbes on plants have widespread consequences. Witness, for example, the cultural upheavals driven by potato blight in the 1800s. A variety of microbial pathogens continue to afflict crop plants today, driving loss of yield and incurring the increased costs of control mechanisms. Now, four reports analyze microbial genomes in order to understand better how plant pathogens function (see the Perspective by Dodds). Raffaele et al. (p. 1540) describe how the genome of the potato blight pathogen accommodates transfer to different hosts. Spanu et al. (p. 1543) analyze what it takes to be an obligate biotroph in barley powdery mildew, and Baxter et al. (p. 1549) ask a similar question for a natural pathogen of Arabidopsis. Schirawski et al. (p. 1546) compared genomes of maize pathogens to identify virulence determinants. Better knowledge of what in a genome makes a pathogen efficient and deadly is likely to be useful for improving agricultural crop management and breeding. A group of papers analyzes pathogen genomes to find the roots of virulence, opportunism, and life-style determinants. Powdery mildews are phytopathogens whose growth and reproduction are entirely dependent on living plant cells. The molecular basis of this life-style, obligate biotrophy, remains unknown. We present the genome analysis of barley powdery mildew, Blumeria graminis f.sp. hordei (Blumeria), as well as a comparison with the analysis of two powdery mildews pathogenic on dicotyledonous plants. These genomes display massive retrotransposon proliferation, genome-size expansion, and gene losses. The missing genes encode enzymes of primary and secondary metabolism, carbohydrate-active enzymes, and transporters, probably reflecting their redundancy in an exclusively biotrophic life-style.
Among the 248 candidate effectors of pathogenesis identified in the Blumeria genome, very few (less than 10) define a core set conserved in all three mildews, suggesting that most effectors represent species-specific adaptations.
Evolutionary extreme learning machine
Extreme learning machine (ELM) [G.-B. Huang, Q.-Y. Zhu, C.-K. Siew, Extreme learning machine: a new learning scheme of feedforward neural networks, in: Proceedings of the International Joint Conference on Neural Networks (IJCNN2004), Budapest, Hungary, 25-29 July 2004], a novel learning algorithm much faster than the traditional gradient-based learning algorithms, was proposed recently for single-hidden-layer feedforward neural networks (SLFNs). However, ELM may need a higher number of hidden neurons due to the random determination of the input weights and hidden biases. In this paper, a hybrid learning algorithm is proposed which uses the differential evolution algorithm to select the input weights and the Moore-Penrose (MP) generalized inverse to analytically determine the output weights. Experimental results show that this approach is able to achieve good generalization performance with much more compact networks.
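The baseline ELM that this paper builds on fits in a few lines: draw input weights and biases at random, leave them fixed, and solve for the output weights with the Moore-Penrose pseudoinverse. A minimal sketch (network sizes and the tanh activation are illustrative choices, not taken from the paper):

```python
import numpy as np

def elm_train(X, y, n_hidden=20, seed=0):
    """Basic ELM: random, fixed input weights/biases; output weights solved
    analytically via the Moore-Penrose generalized inverse."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights
    b = rng.normal(size=n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                       # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ y                 # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0])
W, b, beta = elm_train(X, y, n_hidden=30)
err = np.mean((elm_predict(X, W, b, beta) - y) ** 2)
print(err)  # training MSE on a smooth 1-D target
```

The hybrid algorithm of this paper keeps the pseudoinverse step but replaces the purely random draw of `W` and `b` with a differential-evolution search, trading some training time for a more compact network.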
A supercritical carbon dioxide cycle for next generation nuclear reactors
Thesis (Sc. D.)--Massachusetts Institute of Technology, Dept. of Nuclear Engineering, 2004.
The INQUERY Retrieval System
As larger and more heterogeneous text databases become available, information retrieval research will depend on the development of powerful, efficient and flexible retrieval engines. In this paper, we describe a retrieval system (INQUERY) that is based on a probabilistic retrieval model and provides support for sophisticated indexing and complex query formulation. INQUERY has been used successfully with databases containing nearly 400,000 documents.
Pricing and hedging derivative securities in markets with uncertain volatilities
We present a model for pricing and hedging derivative securities and option portfolios in an environment where the volatility is not known precisely, but is assumed instead to lie between two extreme values σmin and σmax. These bounds could be inferred from extreme values of the implied volatilities of liquid options, or from high-low peaks in historical stock- or option-implied volatilities. They can be viewed as defining a confidence interval for future volatility values. We show that the extremal non-arbitrageable prices for the derivative asset which arise as the volatility paths vary in such a band can be described by a non-linear PDE, which we call the Black-Scholes-Barenblatt equation. In this equation, the 'pricing' volatility is selected dynamically from the two extreme values, σmin, σmax, according to the convexity of the value-function. A simple algorithm for solving the equation by finite-differencing or a trinomial tree is presented. We show that this model captures the importance of diversification in managing derivatives positions. It can be used systematically to construct efficient hedges using other derivatives in conjunction with the underlying asset.
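The "volatility selected by convexity" rule is easy to see in a finite-difference setting: at each grid node, use σmax where the discrete gamma is positive and σmin where it is negative. The sketch below is a hedged, zero-interest-rate illustration of that idea with an explicit scheme (grid sizes chosen for stability), not the authors' algorithm.

```python
def bsb_price(payoff, s_max, sig_min, sig_max, T, ns=100, nt=2000):
    """Explicit finite-difference sketch of the Black-Scholes-Barenblatt idea
    (zero rates): the pricing volatility at each node is chosen from
    {sig_min, sig_max} by the sign of the local convexity (gamma)."""
    ds = s_max / ns
    dt = T / nt
    v = [payoff(i * ds) for i in range(ns + 1)]   # terminal condition
    for _ in range(nt):
        new = v[:]
        for i in range(1, ns):
            gamma = (v[i + 1] - 2 * v[i] + v[i - 1]) / ds**2
            sig = sig_max if gamma > 0 else sig_min  # worst case for the seller
            s = i * ds
            new[i] = v[i] + dt * 0.5 * sig**2 * s**2 * gamma
        v = new
    return v

# Plain call payoff is convex everywhere, so sig_max is used throughout.
vals = bsb_price(lambda s: max(s - 100.0, 0.0), 200.0, 0.1, 0.3, 1.0)
print(vals[50])  # value at S = 100 (grid index 50 with ds = 2)
```

For a portfolio with mixed convexity (e.g. a call spread), the selected volatility genuinely switches across the grid, which is where the nonlinearity of the PDE shows up: the worst-case price of a portfolio is not the sum of the worst-case prices of its legs.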
A first course in order statistics
Basic Distribution Theory; Discrete Order Statistics; Order Statistics from Some Specific Distributions; Moment Relations, Bounds, and Approximations; Characterizations Using Order Statistics; Order Statistics in Statistical Inference; Asymptotic Theory; Record Values; Bibliography; Indexes.
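The book's starting point, the distribution of a single order statistic, is worth stating: for an i.i.d. sample of size n with marginal CDF F, P(X_(k) <= x) = sum over j from k to n of C(n, j) F(x)^j (1 - F(x))^(n - j). A small numeric check of this standard formula:

```python
import math

def order_stat_cdf(n, k, F_x):
    """CDF of the k-th order statistic of an i.i.d. sample of size n,
    evaluated at a point where the marginal CDF equals F_x:
    P(X_(k) <= x) = sum_{j=k}^{n} C(n,j) F_x^j (1-F_x)^(n-j)."""
    return sum(math.comb(n, j) * F_x**j * (1 - F_x) ** (n - j)
               for j in range(k, n + 1))

# Median of 5 uniforms evaluated at its own median: exactly 1/2 by symmetry.
print(order_stat_cdf(5, 3, 0.5))  # 0.5
```

The same sum with k = 1 recovers the familiar minimum distribution 1 - (1 - F(x))^n, and with k = n the maximum distribution F(x)^n.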
Dynamic partial-order reduction for model checking software
We present a new approach to partial-order reduction for model checking software. This approach is based on initially exploring an arbitrary interleaving of the various concurrent processes/threads, and dynamically tracking interactions between these to identify backtracking points where alternative paths in the state space need to be explored. We present examples of multi-threaded programs where our new dynamic partial-order reduction technique significantly reduces the search space, even though traditional partial-order algorithms are helpless.
Evidence of trends in daily climate extremes over southern and west Africa
Received 31 May 2005; revised 10 January 2006; accepted 23 March 2006; published 21 July 2006. There has been a paucity of information on trends in daily climate and climate extremes, especially from developing countries. We report the results of the analysis of daily temperature (maximum and minimum) and precipitation data from 14 south and west African countries over the period 1961–2000. Data were subject to quality control and processing into indices of climate extremes for release to the global community. Temperature extremes show patterns consistent with warming over most of the regions analyzed, with a large proportion of stations showing statistically significant trends for all temperature indices. Over 1961 to 2000, the regionally averaged occurrence of extreme cold (fifth percentile) days and nights has decreased by 3.7 and 6.0 days/decade, respectively. Over the same period, the occurrence of extreme hot (95th percentile) days and nights has increased by 8.2 and 8.6 days/decade, respectively. The average duration of warm (cold) spells has increased (decreased) by 2.4 (0.5) days/decade. Overall, it appears that the hot tails of the distributions of daily maximum temperature have changed more than the cold tails; for minimum temperatures, hot tails show greater changes in the NW of the region, while cold tails have changed more in the SE and east. The diurnal temperature range (DTR) does not exhibit a consistent trend across the region, with many neighboring stations showing opposite trends. However, the DTR shows consistent increases in a zone across Namibia, Botswana, Zambia, and Mozambique, coinciding with more rapid increases in maximum temperature than minimum temperature extremes. Most precipitation indices do not exhibit consistent or statistically significant trends across the region. Regionally averaged total precipitation has decreased, but the trend is not statistically significant.
At the same time, there has been a statistically significant increase in regionally averaged daily rainfall intensity and dry spell duration. While the majority of stations also show increasing trends for these two indices, only a few of these are statistically significant. There are increasing trends in regionally averaged rainfall on extreme precipitation days and in maximum annual 5-day and 1-day rainfall, but only trends for the latter are statistically significant.
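The per-decade trend figures quoted above come from fitting a line to an annual index series and rescaling the slope. A minimal sketch of that calculation on synthetic data (the series and numbers here are illustrative, not taken from the study):

```python
def decadal_trend(annual_counts):
    """Ordinary least-squares slope of an annual index against year,
    rescaled to per-decade units."""
    n = len(annual_counts)
    xs = range(n)
    mx = (n - 1) / 2.0
    my = sum(annual_counts) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, annual_counts))
             / sum((x - mx) ** 2 for x in xs))
    return slope * 10.0  # days/year -> days/decade

# Synthetic hot-day counts rising by 0.8 days/year over a 40-year record.
counts = [10 + 0.8 * yr for yr in range(40)]
print(decadal_trend(counts))  # 8.0 days/decade
```

In practice, such trend estimates are paired with a significance test (the study repeatedly distinguishes significant from non-significant trends), since a nonzero fitted slope alone says little for noisy station data.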
Person re-identification by probabilistic relative distance comparison
Matching people across non-overlapping camera views, known as person re-identification, is challenging due to the lack of spatial and temporal constraints and large visual appearance changes caused by variations in view angle, lighting, background clutter and occlusion. To address these challenges, most previous approaches aim to extract visual features that are both distinctive and stable under appearance changes. However, most visual features and their combinations under realistic conditions are neither stable nor distinctive, and thus should not be used indiscriminately. In this paper, we propose to formulate person re-identification as a distance learning problem, which aims to learn the optimal distance that maximises matching accuracy regardless of the choice of representation. To that end, we introduce a novel Probabilistic Relative Distance Comparison (PRDC) model, which differs from most existing distance learning methods in that, rather than minimising intra-class variation whilst maximising inter-class variation, it aims to maximise the probability that a true match pair has a smaller distance than a wrong match pair. This makes our model more tolerant to appearance changes and less susceptible to model over-fitting. Extensive experiments are carried out to demonstrate that 1) by formulating the person re-identification problem as a distance learning problem, notable improvement on matching accuracy can be obtained against conventional person re-identification techniques, which is particularly significant when the training sample size is small; and 2) our PRDC outperforms not only existing distance learning methods but also alternative learning methods based on boosting and learning to rank.
Content-based image retrieval systems: A survey
In many areas of commerce, government, academia, and hospitals, large collections of digital images are being created. Many of these collections are the product of digitizing existing collections of analogue photographs, diagrams, drawings, paintings, and prints. Usually, the only way of searching these collections was by keyword indexing, or simply by browsing. Digital image databases, however, open the way to content-based searching. In this paper we survey some technical aspects of current content-based image retrieval systems.
Model Checking of Probabilistic and Nondeterministic Systems
The temporal logics pCTL and pCTL* have been proposed as tools for the formal specification and verification of probabilistic systems: as they can express quantitative bounds on the probability of system evolutions, they can be used to specify system properties such as reliability and performance. In this paper, we present model-checking algorithms for extensions of pCTL and pCTL* to systems in which the probabilistic behavior coexists with nondeterminism, and show that these algorithms have polynomial-time complexity in the size of the system. This provides a practical tool for reasoning on the reliability and performance of parallel systems.
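When probabilistic behavior coexists with nondeterminism, quantitative properties are resolved against the best (or worst) scheduler, which for reachability reduces to a Bellman-style fixed point. A hedged sketch of the standard value-iteration approach for maximum reachability probability in a small MDP (this illustrates the polynomial-time core computation, not the paper's full pCTL* algorithm):

```python
def max_reach_probability(mdp, target, n_iter=200):
    """Value iteration for sup-over-schedulers probability of reaching
    `target`. mdp[s] is a list of actions; each action is a dict mapping
    successor states to transition probabilities."""
    states = list(mdp)
    p = {s: (1.0 if s in target else 0.0) for s in states}
    for _ in range(n_iter):
        p = {s: 1.0 if s in target else
                max(sum(pr * p[t] for t, pr in act.items()) for act in mdp[s])
             for s in states}
    return p

# Nondeterministic choice at s0: one action reaches the goal with
# probability 0.5, the other with probability 0.9.
mdp = {
    "s0": [{"goal": 0.5, "sink": 0.5}, {"goal": 0.9, "sink": 0.1}],
    "goal": [{"goal": 1.0}],
    "sink": [{"sink": 1.0}],
}
print(max_reach_probability(mdp, {"goal"})["s0"])  # 0.9
```

Replacing `max` with `min` gives the pessimistic bound; a pCTL formula such as P>=0.8 [F goal] is then checked by comparing the appropriate bound against the threshold.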