Computational aspects of the DC analysis of transistor networks

For a general transistor-resistor network, a method is proposed to obtain, in a normed linear vector space R^n, the numerical solution of the dc equation G(U) = C for a given constant vector C = C*. At each step k, k = 0, 1, …, a vector C^k is chosen from a certain set Γ_k and the equation is solved for C = C^k by a Newton iteration, yielding the locally unique solution U^k. Convergence in a finite number of steps is proved for a sequence {C^k} on the straight line through C^0 and C*, provided this line does not contain a point C̄ between C^0 and C* such that, at the corresponding solution Ū of G(U) = C̄, the derivative G'(Ū) is singular. Otherwise another initial point C^0 must be chosen, or the path of {C^k} must be altered to reach C*. The method is restricted to a closed and bounded subset of R^n in which all solutions of the dc equation must lie; to find all the solutions, this bounded subset can be covered with balls. The method proposed in this paper is also useful in the analysis of a transistor-resistor circuit, where the important question often arises of whether the circuit will admit its correct dc bias, since this question is best understood by investigating the uniqueness of solutions of the dc equation.
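The continuation scheme described above can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the function G, its Jacobian, and the step count are hypothetical stand-ins, and C^k is simply taken at equal spacings on the straight line from C^0 to C*, with each Newton iteration warm-started from the previous solution U^k.

```python
import numpy as np

def solve_by_continuation(G, dG, U0, C0, Cstar, steps=10, tol=1e-10):
    """Move C along the segment from C0 to C*, solving G(U) = C_k at
    each step by Newton's method, warm-started from the last solution."""
    U = np.asarray(U0, dtype=float)
    for k in range(1, steps + 1):
        t = k / steps
        Ck = (1 - t) * C0 + t * Cstar       # point on the line C^0 -> C*
        for _ in range(50):                  # Newton iteration for G(U) = Ck
            r = G(U) - Ck
            if np.linalg.norm(r) < tol:
                break
            # np.linalg.solve fails here exactly when G'(U) is singular,
            # the situation the convergence condition excludes
            U = U - np.linalg.solve(dG(U), r)
    return U

# Hypothetical toy dc equation in R^2 with a diode-style exponential
# nonlinearity standing in for a transistor branch:
G  = lambda U: np.array([np.exp(U[0]) - 1 + U[0] - U[1], U[1] + 0.5 * U[0]])
dG = lambda U: np.array([[np.exp(U[0]) + 1, -1.0], [0.5, 1.0]])

U = solve_by_continuation(G, dG, U0=[0.0, 0.0],
                          C0=np.zeros(2), Cstar=np.array([1.0, 0.2]))
print(np.allclose(G(U), [1.0, 0.2]))  # True: the solution at C = C* is reached
```

Here the Jacobian G'(U) is nonsingular everywhere (its determinant is e^{U_0} + 1.5 > 0), so the straight-line path satisfies the hypothesis of the convergence result and no rerouting of {C^k} is needed.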