Babylonian method of computing the square root: Justifications based on fuzzy techniques and on computational complexity

When computing a square root, computers still, in effect, use an iterative algorithm developed by the Babylonians millennia ago. This is a very unusual phenomenon: for most other computations, better algorithms have since been invented - even division is performed, in the computer, by an algorithm that is much more efficient than the division methods we all learned in school. What explains the success of the Babylonians' method? One explanation is that it is, in effect, Newton's method, based on the basic ideas of calculus. This explanation works well from the mathematical viewpoint - it explains why the method is so efficient - but since the Babylonians were very far from calculus, it does not explain why the method was invented in the first place. In this paper, we provide two possible explanations for this method's origin. We show that the method naturally emerges from fuzzy techniques, and we also show that it can be explained as (in some reasonable sense) the computationally simplest technique.
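For concreteness, the Babylonian iteration referred to above can be sketched as follows: starting from any positive guess x for sqrt(a), repeatedly replace x with the average of x and a/x. The function name, tolerance, and iteration cap below are illustrative choices, not part of the original text.

```python
def babylonian_sqrt(a, tol=1e-12, max_iter=100):
    """Approximate sqrt(a) by the Babylonian (Heron's) iteration.

    Each step replaces the current guess x with the average of
    x and a/x; this is also Newton's method applied to x**2 - a = 0.
    """
    if a < 0:
        raise ValueError("a must be non-negative")
    if a == 0:
        return 0.0
    x = a if a >= 1 else 1.0  # any positive initial guess converges
    for _ in range(max_iter):
        nxt = 0.5 * (x + a / x)  # average the guess with a / guess
        if abs(nxt - x) < tol:   # stop once successive guesses agree
            return nxt
        x = nxt
    return x
```

The iteration converges quadratically: the number of correct digits roughly doubles at each step, which is the efficiency the abstract alludes to.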