The present article is divided into four major sections dealing with the application of acoustical indices to the prediction of speech recognition performance. In the first section, two acoustical indices, the Articulation Index (AI) and the Speech Transmission Index (STI), are described. In the second section, the effectiveness of the AI and the STI in describing the performance of normal-hearing and hearing-impaired subjects listening to spectrally distorted (filtered) and temporally distorted (reverberant) speech is examined retrospectively. In the third section, the results of a prospective investigation that examined the recognition of nonsense syllables under conditions of babble competition, filtering, and reverberation are described. Finally, in the fourth section, the ability of the acoustical indices to describe the performance of 10 hearing-impaired listeners, 5 listening in quiet and 5 in babble, is examined. It is concluded that both the AI and the STI have significant shortcomings. A hybrid index, designated mSTI, which combines the best features of each procedure, is described and demonstrated to be the best alternative presently available.
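At its core, the AI introduced in the first section is a band-importance-weighted sum of per-band audibility. The following is a minimal sketch of that idea only; the function name, band weights, and SNR values are illustrative assumptions, not the article's data or the exact ANSI S3.5 procedure:

```python
def articulation_index(band_snrs_db, band_weights):
    """Simplified AI-style computation: weighted sum of per-band audibility.

    band_snrs_db -- speech-to-noise ratio (dB) in each frequency band
    band_weights -- band-importance weights, assumed to sum to 1.0
    """
    if abs(sum(band_weights) - 1.0) > 1e-6:
        raise ValueError("band-importance weights must sum to 1")
    ai = 0.0
    for snr, w in zip(band_snrs_db, band_weights):
        # Clip each band's SNR to the 30-dB speech dynamic range assumed
        # by the classic AI, then normalize to a 0..1 audibility value.
        audibility = min(max(snr, 0.0), 30.0) / 30.0
        ai += w * audibility
    return ai

# Hypothetical five-band example: weights and SNRs are made up for illustration.
weights = [0.1, 0.2, 0.3, 0.25, 0.15]
snrs_db = [30.0, 15.0, 0.0, 30.0, -10.0]
print(articulation_index(snrs_db, weights))  # -> 0.45
```

The resulting index ranges from 0 (no usable speech audibility in any band) to 1 (the full 30-dB speech range audible in every band), which is what makes it a candidate predictor of recognition scores.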