This chapter introduces the concept of supervised learning and how it may relate to synaptic plasticity in the nervous system, particularly in the cerebellum. It also shows how to implement single-layer and multilayer neural network architectures that use supervised learning rules to solve particular problems. Perceptrons were the first neural networks to be developed, and they employed a supervised learning rule. Inspired by the neuroscience research of their day, McCulloch and Pitts (1943) suggested that neurons might be able to implement logical operations. Specifically, they proposed a neuron with two binary inputs (0 or 1), a threshold that is either met or not, and a binary output. The chapter also examines multilayer supervised networks. Because perceptrons were vaunted for their ability to implement logical functions, it came as quite a shock when Minsky and Papert (1969) showed that a single-layer perceptron cannot solve a rather elementary logical function: XOR. More generally, this finding reflects the fact that single-layer perceptrons can solve only linearly separable problems, and XOR is not linearly separable.
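To make the McCulloch-Pitts neuron concrete, here is a minimal sketch in Python; the function name, unit weights, and threshold values are illustrative choices rather than anything specified in the chapter. It shows how a single threshold unit with two binary inputs can implement AND or OR simply by changing the threshold.

```python
def mcculloch_pitts(x1, x2, w1, w2, threshold):
    """Two-input McCulloch-Pitts unit: output 1 if the weighted sum
    of the binary inputs meets the threshold, otherwise 0."""
    return 1 if w1 * x1 + w2 * x2 >= threshold else 0

# With unit weights, the threshold alone selects the logical operation:
# a threshold of 2 requires both inputs to be active (AND), while a
# threshold of 1 is satisfied by either input (OR).
for x1 in (0, 1):
    for x2 in (0, 1):
        and_out = mcculloch_pitts(x1, x2, 1, 1, threshold=2)
        or_out = mcculloch_pitts(x1, x2, 1, 1, threshold=1)
        print(f"inputs ({x1}, {x2}): AND={and_out}, OR={or_out}")
```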
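The supervised learning rule employed by perceptrons can likewise be sketched in a few lines. The sketch below assumes the classic perceptron update (adjust each weight by the error times its input) with hypothetical learning-rate and epoch settings; it trains a single unit on AND, which is linearly separable, so the rule finds a correct decision boundary.

```python
import itertools

def perceptron_train(samples, epochs=20, lr=0.1):
    """Perceptron learning rule: nudge each weight toward the target
    whenever the thresholded output is wrong."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w1 * x1 + w2 * x2 + b >= 0 else 0
            err = target - out  # supervised error signal
            w1 += lr * err * x1
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

# AND is linearly separable, so the learned boundary classifies all cases.
and_samples = [((x1, x2), int(x1 and x2))
               for x1, x2 in itertools.product((0, 1), repeat=2)]
w1, w2, b = perceptron_train(and_samples)
for (x1, x2), target in and_samples:
    out = 1 if w1 * x1 + w2 * x2 + b >= 0 else 0
    print(f"AND({x1}, {x2}): target={target}, learned={out}")
```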
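Finally, to illustrate why XOR defeats a single-layer perceptron but not a multilayer one, the following sketch composes XOR from two threshold units whose outputs feed a third; the hidden-layer weights are hand-picked for illustration rather than taken from the chapter. No single linear threshold can separate XOR's positive cases from its negative ones, but one hidden layer suffices.

```python
def threshold_unit(inputs, weights, threshold):
    """Linear threshold unit: 1 if the weighted input sum meets the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

def xor_two_layer(x1, x2):
    # Hidden layer: h1 fires for "x1 AND NOT x2", h2 for "x2 AND NOT x1".
    h1 = threshold_unit((x1, x2), (1, -1), threshold=1)
    h2 = threshold_unit((x1, x2), (-1, 1), threshold=1)
    # Output layer: OR of the two hidden units yields XOR.
    return threshold_unit((h1, h2), (1, 1), threshold=1)

for x1 in (0, 1):
    for x2 in (0, 1):
        print(f"XOR({x1}, {x2}) = {xor_two_layer(x1, x2)}")
```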