give a summary of my “Expert Information” method originally published in 1992.(2) From this summary, and from rereading the original paper, I recognize that in that paper I failed to adequately communicate my idea, probably because the idea at that time was newly formed in my mind. From where I am now, I see that the paper glossed over and left fuzzy a part of the process that should be very sharp and clear. I would like to remedy that defect in the following.

With regard to the difficult question, “How should we combine probability distributions from different experts?”, my suggestion is that we bypass it. We can bypass it by not asking the experts for their probability distributions. Instead, we ask each expert independently what evidence, information, and experience he/she has relevant to the question at hand. We write these items of evidence down very carefully and collect them in a combined list. Then, working with the experts as a group, we go over these items, clarifying, coalescing, refining, and adding new items that come to mind, all the while being careful to distinguish the actual evidence (i.e., “what happened”) from the experts’ interpretation of what happened. At the end of this process we should have a single, agreed-upon “consensus body of evidence.”

Then, together with the group, we process this combined body of evidence, item by item, through Bayes’ theorem to arrive finally at a posterior probability curve that expresses our joint state of knowledge about the parameter of interest.

For example, suppose the parameter $\lambda$ denotes the failure fraction, on demand, of a piece of safety equipment. By its definition, the numerical value of this parameter must lie in the interval $[0, 1]$. Let us suppose now that we have made explicit, and compiled into the consensus list, all the evidence items we have relevant to the numerical value of this parameter. If we have truly included in this list all the relevant evidence, then our probability distribution “prior to” this evidence is a flat curve on the interval $[0, 1]$. Let us denote this distribution by $p_0(\lambda)$. Now take the first item, $E_1$, on the evidence list and process it through Bayes’ theorem. The result is the probability curve $p(\lambda \mid E_1)$, which we could also call $p_1(\lambda)$. We next do the same with the second item, $E_2$, using $p_1(\lambda)$ as the prior in Bayes’ theorem. The resulting posterior is now $p_2(\lambda) = p(\lambda \mid E_1, E_2)$. Continuing in this way, we obtain, at the end of the list, $p_N(\lambda)$, where $N$ is the number of evidence items on the list; the recursion is written out explicitly below.

$p_N(\lambda)$ is now our final probability distribution. It represents our collective “state of knowledge” about the numerical value of $\lambda$, based on all the evidence we have. I like to call it our “credibility distribution.” It tells us, for any number $x$ in the interval $[0, 1]$, how much credibility can logically be assigned to the hypothesis that $x$ is the correct value of $\lambda$.

This process is what I call the “Expert Information” process, as opposed to the “Expert Opinion” process. It attempts to “Let the Evidence Speak,” rather than the personalities, positions, politics, reputations, opinions, wishful thinking, etc.

This expert information process looks really neat, and in principle it is. But now I have to own up to the fact that doing it is not so easy. It is often quite demanding, both philosophically and tactically.
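To make the updating chain concrete, here is the recursion written out. The binomial form of the likelihood is an assumption added purely for illustration: it corresponds to the special case in which each evidence item $E_k$ is direct evidence consisting of $r_k$ failures observed in $n_k$ independent demands (the symbols $r_k$ and $n_k$ are introduced here and are not part of the original notation):

$$
p_k(\lambda) \;=\; \frac{p(E_k \mid \lambda)\, p_{k-1}(\lambda)}{\displaystyle\int_0^1 p(E_k \mid \lambda')\, p_{k-1}(\lambda')\, d\lambda'},
\qquad
p(E_k \mid \lambda) \;=\; \binom{n_k}{r_k}\, \lambda^{r_k} (1-\lambda)^{\,n_k - r_k}.
$$

Starting from the flat prior $p_0(\lambda) = 1$ on $[0, 1]$, each step of this special case simply multiplies in another binomial factor, so the running posterior stays in beta form and the final curve is the beta distribution $\mathrm{Beta}\big(1 + \sum_k r_k,\; 1 + \sum_k (n_k - r_k)\big)$. Indirect evidence items break this tidy closed form, which is part of the difficulty taken up next.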
Defining the problem right in the first place can be difficult, getting the consensus body of evidence can require considerable effort, and using Bayes’ theorem can be quite challenging. Within Bayes’ theorem is the term $p(E \mid \lambda)$, known as the “likelihood function.” It is, in fact, the crucial term in the theorem. Evaluating it is easy or hard depending on the type of evidence item. For certain types of evidence, such as the result of a pure, random sampling experiment, evaluating the likelihood is easy. I call such types of evidence “direct” evidence. Unfortunately, most of the evidence we have to deal with is “indirect” to some degree. Developing the likelihood function for such types of evidence can require considerable insight and ingenuity, together with a bag of tricks. I discuss this point in an article in a previous issue of this journal(3) and give some examples elsewhere.(2,4)
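For purely direct evidence, the whole item-by-item procedure fits in a few lines of code. The sketch below is an illustration of my own, not something taken from the original papers: the evidence counts are hypothetical, all identifiers are arbitrary choices, and a simple grid over $[0, 1]$ stands in for the continuous parameter.

```python
import numpy as np

# Minimal sketch of the item-by-item updating described above.
# All names and numbers here are illustrative choices, not from the paper.
# The failure fraction lambda is discretized on a grid over [0, 1].

grid = np.linspace(0.0, 1.0, 1001)     # candidate values of the failure fraction
dx = grid[1] - grid[0]                 # grid spacing, used for normalization
posterior = np.ones_like(grid)         # p_0: flat prior on [0, 1]
posterior /= posterior.sum() * dx      # normalize to a proper density

def binomial_likelihood(failures, demands):
    """Likelihood p(E | lambda) for a 'direct' evidence item:
    `failures` observed in `demands` independent demands.
    (The constant binomial coefficient cancels on normalization.)"""
    return grid**failures * (1.0 - grid)**(demands - failures)

# Consensus evidence list: three hypothetical direct items.
# An 'indirect' item would need its own, custom-built likelihood curve
# evaluated on the same grid.
evidence_items = [
    binomial_likelihood(0, 150),   # E_1: no failures in 150 demands
    binomial_likelihood(1, 80),    # E_2: one failure in 80 demands
    binomial_likelihood(0, 40),    # E_3: no failures in 40 demands
]

# Bayes' theorem, applied item by item: p_k is proportional to p(E_k | lambda) * p_{k-1}
for likelihood in evidence_items:
    posterior *= likelihood
    posterior /= posterior.sum() * dx  # renormalize after each update

# `posterior` now approximates p_N(lambda), the "credibility distribution".
mean = (grid * posterior).sum() * dx
print(f"posterior mean of the failure fraction: {mean:.4f}")
```

Working on a grid rather than with a conjugate (beta) form is a deliberate choice in this sketch: an arbitrarily shaped likelihood curve for an indirect evidence item, however it was constructed, can be multiplied in at the same point in the loop.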