Are we in love with cyber insecurity?

Almost 40 years ago, as a student, I earned extra money by coding and testing programs for the administrative processes of a building hardware company. It was still the time of punch cards. In addition to coding programs in RPG3, I had to design sets of test data and describe the expected outputs, to show the system designers and code reviewers that the test sets covered all the decision logic branches of the programs. However, a successful run on a test set after debugging a program was just a first step. When I announced that a program was ready, the system designer walked to the garbage bin adjacent to the card punch machines and collected a stack of a few hundred cards that had been rejected because they contained errors. This was the second test set. If even one of those cards was not rejected by my program, I had a difficult time. The coding standard was to perform rigorous input validation of every data field before any data could be moved to the company databases. This was one of the early examples of the principle of self-protection.

In 1978, an analysis of the almost daily crashes of our mainframe operating system led us to conclude that most of the input buffers of the system programs were unguarded. Even simple user program errors would cause buffer overflows and overwrite executable code, which in turn crashed the entire mainframe. In a major effort, we patched and secured more than 100 system utilities. The code was sent to the system manufacturer via a non-standard software error reporting route. In hindsight, this could be considered an early form of responsible disclosure.
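The flaw we kept finding, and the fix we applied, can be pictured with a minimal C sketch. It is purely illustrative: the 80-column record, the function names, and the rejection behaviour are my assumptions, not the original RPG3 or mainframe code. It contrasts an unguarded fixed-size input buffer, which an over-long record can overflow, with a guarded variant that validates the length and rejects the record, in the spirit of the self-protection principle described above.

    #include <stdio.h>
    #include <string.h>

    /* Unguarded: nothing stops a record longer than 80 characters from
       overflowing 'record' and corrupting adjacent memory, much like the
       unguarded system buffers that crashed the mainframe. */
    static void read_record_unguarded(const char *input) {
        char record[80];              /* one 80-column card image */
        strcpy(record, input);        /* no length check: overflow risk */
        printf("processing: %s\n", record);
    }

    /* Guarded: validate the input before accepting it, and reject
       anything that does not fit instead of letting it corrupt memory. */
    static int read_record_guarded(const char *input) {
        char record[80];
        if (strlen(input) >= sizeof(record)) {
            fprintf(stderr, "rejected: record too long\n");
            return -1;
        }
        strcpy(record, input);        /* safe: length checked above */
        printf("processing: %s\n", record);
        return 0;
    }

    int main(void) {
        read_record_guarded("VALID CARD DATA");
        return 0;
    }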