Computational intelligence techniques, especially neural networks, have attracted considerable attention from researchers over the past three decades. It is well known that conventional learning methods for neural networks have apparent drawbacks and limitations, including: (1) slow learning speed, (2) tedious manual tuning of parameters, and (3) complicated learning algorithms. The Extreme Learning Machine (ELM) is an emerging learning technique proposed for generalized single-hidden-layer feedforward networks (SLFNs). ELM can overcome the abovementioned drawbacks and limitations of conventional computational intelligence techniques. In contrast to conventional learning theory, the essence of ELM is that the hidden layer of the generalized SLFN need not be tuned. In a typical implementation of ELM, the hidden-layer parameters are randomly generated; the output weights can then be obtained in different ways.

This special issue of Neurocomputing includes 34 original papers selected from those presented at the International Workshop on Extreme Learning Machines (ELM 2012), Singapore, 11–13 December 2012. The contributions cover developments on both theoretical aspects and various domain applications. The following summarizes these works:
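As background for the summaries, the random-hidden-layer scheme described above can be sketched in a few lines. This is a minimal NumPy illustration, not the specific algorithm of any paper in the issue: it assumes a sigmoid hidden layer with uniformly random weights and solves for the output weights with the Moore-Penrose pseudoinverse, one of the possible ways mentioned above. The function names `elm_train` and `elm_predict` are hypothetical.

```python
import numpy as np

def elm_train(X, T, n_hidden=50, seed=None):
    """Basic ELM: random (untuned) hidden layer, least-squares output weights.

    X: (n_samples, n_features) inputs; T: (n_samples, n_outputs) targets.
    """
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    # Hidden-layer parameters are randomly generated and never tuned
    W = rng.uniform(-1.0, 1.0, size=(n_features, n_hidden))
    b = rng.uniform(-1.0, 1.0, size=n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))  # sigmoid hidden-layer output matrix
    # Output weights via the Moore-Penrose pseudoinverse: beta = H^+ T
    beta = np.linalg.pinv(H) @ T
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Forward pass through the fixed hidden layer and learned output weights."""
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Example: approximate sin(x) on [0, pi] with 50 random hidden nodes
X = np.linspace(0.0, np.pi, 200).reshape(-1, 1)
T = np.sin(X)
W, b, beta = elm_train(X, T, n_hidden=50, seed=0)
Y = elm_predict(X, W, b, beta)
```

Because only the linear output weights are solved for, training reduces to a single least-squares problem, which is the source of ELM's fast learning speed noted above.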