What humans can't do: a review of The Seductive Computer by Derek Partridge
With just a glance at the title, one might imagine that The Seductive Computer is yet another broadside against the ambitions of artificial intelligence, a twenty-first-century update of Hubert Dreyfus' 1972 classic, What Computers Can't Do. Derek Partridge's slim volume is, however, much more interesting than that. It concerns what humans can't do, and in particular the causes and consequences of human inability to understand the information technology (IT) systems that increasingly operate the human world. This is a topic that Partridge, who has spent his career researching and teaching software engineering, understands deeply and worries about not just for theoretical but also for utterly practical reasons.

Why should philosophers, cognitive scientists and AI researchers care about, much less read, a critique of software engineering practice written by a computer scientist? The short answer is that IT system development is an imaginative process, and IT system implementation enables the testing of human imagination for self-consistency in a way that few, if any, other disciplines can match. At bottom, self-consistency is a philosophical issue, and IT system development is, from this perspective, a grand exercise in experimental philosophy, a systematic if perhaps unintentional delving into the relationships between certainty and delusion, and between human agency and the world's raw insistence on evolving according to its own principles. It is, moreover, experimental philosophy done under the best possible conditions, experimental philosophy in which all statements must eventually be expressed in an uncompromising formal language and rigorously examined for grammatical correctness, self-consistency, and consistency with all statements made previously. It is experimental philosophy done as if logical positivism had won the day and imposed absolute clarity of expression across the board. Indeed, the operational success of an IT system over an extended period of use that probes its behaviour in response to a wide variety of inputs is arguably the best operational definition of conceptual self-consistency that is currently available.

All IT systems begin with a simple statement of desired behaviour, from printing out 'Hello World' to flying an airliner or running the global financial system. They proceed from desire to specification: under conditions X, do Y. As the level of detail with which the desired behaviour is described increases, the specifications inevitably get defensive. What happens if the user hits 'Alt' instead of 'Shift' when typing an upper-case letter? Do X. What happens if two users issue commands to change a database entry at exactly the same time? Do Y. What happens if the system administrator logs on from an IP address that has never been seen before? Do Z. Eventually the specifications – often tens of thousands of pages of specifications – are detailed enough to be expressed in formal, executable code. Then they can be put to the test, as the sketch below illustrates: does a machine executing the code actually perform the behaviour that was specified, and is the behaviour that was, at the end of the day, actually specified the behaviour that was originally desired? In principle,
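To make that desire-to-specification-to-test pipeline concrete, here is a minimal sketch in Python. The keystroke rule, the function typed_char, and the test cases are hypothetical, invented for this illustration rather than drawn from Partridge's book; real specifications run, as noted above, to tens of thousands of pages.

```python
# A toy walk through the pipeline the review describes: a desired behaviour,
# a defensive specification, executable code, and a test of the code against
# the specification. Everything here is invented for illustration.

def typed_char(key: str, alt: bool = False, shift: bool = False) -> str:
    """Specification: with 'Shift' held, emit the upper-case letter.
    Defensive clause: if the user hits 'Alt' instead of 'Shift', emit the
    plain lower-case letter rather than failing or emitting garbage."""
    if len(key) != 1 or not key.isalpha():
        raise ValueError(f"expected a single letter, got {key!r}")
    if shift:
        return key.upper()
    return key.lower()  # covers both 'no modifier' and the Alt slip

if __name__ == "__main__":
    # First question from the review: does the executing code perform the
    # behaviour that was specified?
    assert typed_char("a", shift=True) == "A"  # the desired behaviour
    assert typed_char("a", alt=True) == "a"    # the defensive clause
    assert typed_char("a") == "a"              # no modifier at all
    print("behaviour matches specification")
    # The second question, whether the specified behaviour was the desired
    # one, cannot be answered by any test suite; it is settled only in use.
```

Even this toy example exposes the asymmetry the review turns on: tests can confirm code against specification, but never specification against desire.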