Bart Verheij’s paper (this volume, p. 187) on argumentation support software (ASS) gives an excellent account of the past and present of ASS for legal reasoning, and offers some tantalizing glimpses of what the future may hold. In my reply, I want to focus on one particular aspect of his presentation: the use of ASS as a teaching tool, and in particular as a tool for teaching reasoning with facts and evidence.

Generally speaking, there are good reasons to be sceptical when artificial intelligence (AI) systems are presented as teaching aids. The search for commercial-strength legal expert systems that autonomously perform the tasks of human experts has so far proved largely elusive. Two related issues in particular have been identified as recurrent problems. The first is robustness, i.e. the ability to deal with new scenarios not anticipated by the developers: a system is robust if it remains operational in circumstances for which it was not designed. In the context of criminal evidence, for instance, robustness requires adaptability to unforeseen crime scenarios. This is difficult to achieve because low-volume major crimes tend to be virtually unique: each major crime scenario potentially consists of a unique set of circumstances, while many conventional AI techniques struggle with previously unseen problem settings. This in turn leads to the second problem, the knowledge acquisition bottleneck. Reasoning about evidence in legal settings is knowledge intensive, requiring input from a broad range of scientific disciplines as well as formal representations of large chunks of everyday knowledge.

In teaching environments, by contrast, the educator has control over the type of problems chosen, their complexity, and the relevant parameters and features. This seemingly brings teaching applications closer to the ‘worked examples’ or prototypes that are so often the result of the research programs run by the small teams of academics that dominate the AI and law field, including projects by the author of this reply. Verheij deserves considerable credit for resisting the temptation to see teaching applications merely as a simpler task for AI research. Of particular value is his emphasis on rigorous empirical evaluation of the effectiveness of his systems in a teaching environment, and the systematic way in which evaluations he has carried out in the past inform his theoretical analysis of the problem. This type of evidence-based approach to software-supported teaching in law has so far been missing; indeed, with few exceptions such as Hall and Zeleznikow (2001), there has been little research into the empirical evaluation of legal AI in general. His conclusions are refreshingly honest too, identifying some potential problems in his own approach and indicating a whole range of possible extensions and even wholesale revisions. My observations and comments will elaborate on these findings. In
[1] Earl Hunt et al. Individual Differences in the Verification of Sentence-Picture Relationships, 1978.
[2] W. Quine. The two dogmas of empiricism, 1951.
[3] Donald A. Norman et al. Things That Make Us Smart: Defending Human Attributes in the Age of the Machine, 1993.
[4] Herbert A. Simon et al. Why a Diagram is (Sometimes) Worth Ten Thousand Words, 1987.
[5] Henry Prakken et al. Formalising arguments about the burden of persuasion, ICAIL, 2007.
[6] S. Papert. The children's machine: rethinking school in the age of the computer, 1993.
[7] Jean-Gabriel Ganascia et al. A Structuralist Approach Towards Computational Scientific Discovery, Discovery Science, 2004.
[8] Eric Hammer et al. Towards a model theory of Venn diagrams, 1996.
[9] Henry Prakken et al. Dialogues about the burden of proof, ICAIL '05, 2005.
[10] John Zeleznikow et al. Acknowledging insufficiency in the evaluation of legal knowledge-based systems: strategies towards a broadbased evaluation model, ICAIL '01, 2001.
[11] I. Lakatos. Falsification and the Methodology of Scientific Research Programmes, 1976.
[12] Ray Welland et al. Comprehension of diagram syntax: an empirical study of entity relationship notations, Int. J. Hum. Comput. Stud., 2004.
[13] Saul M. Kassin et al. Computer-Animated Displays and the Jury: Facilitative and Prejudicial Effects, 1997.
[14] Paul Maharg et al. Law, Learning, Technology: Reiving Ower the Borders, 2000.