The dynamic properties of large, sparsely connected neural networks are investigated. The input connections of each neuron are chosen at random, with an average number of connections per neuron, $C$, that does not increase with the size of the network. The neurons are binary elements that evolve according to stochastic single-spin-flip dynamics. Similar networks have been introduced and studied by Derrida, Gardner, and Zippelius [Europhys. Lett. 4, 167 (1987)] in the context of associative memory and automata. We investigate cases where some of the neurons receive inputs only from external sources and not from the network. These inputs may be random or uniform. The relationship between the geometric properties of the networks and their collective dynamic behavior is studied. Macroscopic clusters as well as internal feedback loops appear when $C > 1$. However, the dynamic feedback is weak, as the length of the typical loops is of the order of $\ln N$. As a result, cooperative long-time behavior appears only at a value of $C$, $C = C_0$, that is higher than unity. The cooperative behavior is manifested by the existence of two distinct equilibrium phases with opposite magnetizations. In addition, when the inputs are uniform they uniquely determine the state of the network, thus destroying its bistability. Only at a higher value of $C$, $C = C_1 > C_0$, does a large fraction of the neurons become completely screened from the dynamic influence of the inputs, leading to bistable behavior even in the presence of the inputs. These results imply that the performance of these networks as input-output systems may depend critically on the degree of connectivity.
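To make the model concrete, the following is a minimal sketch, not the paper's exact construction: a network of $N$ binary neurons with a Poisson-distributed in-degree of mean $C$ (so connectivity stays finite as $N$ grows), heat-bath (Glauber) updates as one standard realization of stochastic single-spin-flip dynamics, and a fraction of neurons clamped to an external source. The names `N`, `C`, `beta`, the $\pm 1$ couplings, and the 10% input fraction are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def build_network(N, C):
    """Each neuron draws a Poisson(C) number of random presynaptic
    neurons with random +/-1 couplings (an illustrative choice)."""
    inputs, weights = [], []
    for i in range(N):
        k = rng.poisson(C)
        inputs.append(rng.integers(0, N, size=k))
        weights.append(rng.choice([-1.0, 1.0], size=k))
    return inputs, weights

def sweep(s, inputs, weights, ext, beta=2.0):
    """One sweep of single-spin-flip heat-bath dynamics. Neurons with
    a non-None entry in `ext` are driven only by the external source."""
    N = len(s)
    for i in rng.permutation(N):
        if ext[i] is not None:           # input neuron: follows the source
            s[i] = ext[i]
            continue
        h = weights[i] @ s[inputs[i]]    # local field from random inputs
        p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * h))
        s[i] = 1 if rng.random() < p_up else -1
    return s

# Usage: the magnetization m = <s_i> serves as a crude probe of
# cooperative behavior as C is varied.
N, C = 2000, 3.0
inputs, weights = build_network(N, C)
ext = [1 if rng.random() < 0.1 else None for _ in range(N)]  # uniform inputs
s = rng.choice([-1, 1], size=N)
for _ in range(50):
    s = sweep(s, inputs, weights, ext)
print("magnetization:", s.mean())
```

Under these assumptions, one can probe the abstract's claims numerically: start runs from opposite uniform initial states and check whether both magnetizations persist (bistability) or collapse onto the state dictated by the uniform inputs, sweeping $C$ to locate where the behavior changes.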