STATISTICAL TESTS OF THE LOGNORMAL DISTRIBUTION AS A BASIS FOR INTEREST RATE CHANGES

In modeling the periodic change in the interest rate of a given maturity, the lognormal distribution is frequently assumed. This paper identifies several implicit assumptions underlying the use of this distribution, tests those assumptions against historical interest rates, and presents additional findings from those tests.

I. INTRODUCTION

The stochastic generation of future yield curves, economic scenario testing in asset/liability management, and option-pricing techniques are among the applications that use the lognormal distribution to model periodic changes in interest rates over periods of up to 30 years or longer. The validity of the results, and of conclusions drawn from those results, may be called into question if the lognormal distribution fails to conform adequately to experience. There is therefore strong motivation to confirm its applicability.

The lognormal distribution is applied to the modeling of interest rates in the following manner (see [9]). Let $I_t$ represent the interest rate at time $t$. The "lognormal assumption" often used is that the distribution of $\ln(I_{t+1}/I_t)$ is normal with mean zero and constant variance. Thus

$$I_{t+1} = I_t \, e^{\sigma Z_t},$$

where $\sigma$ is the standard deviation (volatility) and $Z_t$ is a standard normal random variable. This equation is a special case of

$$I_{t+1} = I_t \, e^{\mu_t + \sigma_t Z_t},$$

where $\mu_t$ is the mean (drift) of the interest rate for period $t$ and $\sigma_t$ is the standard deviation (volatility) of the rate over the same period. The combination most often used is a mean of zero and a standard deviation that is constant over time.
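As a concrete illustration, the following is a minimal simulation sketch of the special case with zero drift and constant volatility. The starting rate, volatility, horizon, and number of paths are illustrative assumptions, not values taken from this paper.

```python
# Minimal sketch of the lognormal interest rate model described above:
# I_{t+1} = I_t * exp(sigma * Z_t), with zero drift and constant volatility.
# The parameter values below are illustrative assumptions only.

import numpy as np

def simulate_lognormal_rates(i0=0.08, sigma=0.10, years=30, n_paths=1000, seed=42):
    """Simulate annual interest rate paths under the lognormal assumption."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((n_paths, years))     # Z_t ~ N(0, 1)
    # Cumulative sum of sigma * Z_t gives ln(I_t / I_0); exponentiate for rates.
    log_ratios = np.cumsum(sigma * z, axis=1)
    rates = i0 * np.exp(log_ratios)
    # Prepend the time-0 rate so each path has years + 1 points.
    return np.hstack([np.full((n_paths, 1), i0), rates])

if __name__ == "__main__":
    paths = simulate_lognormal_rates()
    # Under this model, ln(I_{t+1}/I_t) is normal with mean 0 and std sigma.
    log_changes = np.log(paths[:, 1:] / paths[:, :-1])
    print("mean of ln(I_{t+1}/I_t):", log_changes.mean())
    print("std  of ln(I_{t+1}/I_t):", log_changes.std())
```

The more general equation above can be written in the form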