When computing a square root, computers still, in effect, use an iterative algorithm developed by the Babylonians millennia ago. This is a very unusual phenomenon, because for most other computations, better algorithms have been invented - even division is performed, in the computer, by an algorithm that is much more efficient than the division methods we all learned in school. What explains the success of the Babylonians' method? One explanation is that it is, in effect, Newton's method, based on the basic ideas of calculus. This explanation works well from the mathematical viewpoint - it explains why the method is so efficient - but since the Babylonians were very far from calculus, it does not explain why the method was invented in the first place. In this paper, we provide two possible explanations for this method's origin. We show that the method naturally emerges from fuzzy techniques, and we also show that it can be explained as (in some reasonable sense) the computationally simplest technique.
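For reference, the iteration in question is the well-known Babylonian (Heron's) recurrence x_{n+1} = (x_n + a/x_n) / 2, which is exactly what Newton's method produces for f(x) = x^2 - a. Below is a minimal sketch in Python; the function name and tolerance parameter are illustrative, not from the paper.

```python
def babylonian_sqrt(a, tolerance=1e-12):
    """Approximate sqrt(a) with the Babylonian (Heron's) iteration,
    which coincides with Newton's method applied to f(x) = x**2 - a."""
    if a < 0:
        raise ValueError("a must be non-negative")
    if a == 0:
        return 0.0
    x = a  # any positive initial guess works; convergence is quadratic
    while abs(x * x - a) > tolerance * a:
        x = (x + a / x) / 2  # average the current guess with a / guess
    return x

print(babylonian_sqrt(2))  # 1.4142135623730951
```

Averaging x with a/x makes sense even without calculus: if x overestimates sqrt(a), then a/x underestimates it, so the true root always lies between them.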