Computational Toxicology is Now Inseparable from Experimental Toxicology

preferred deadline, at the annual meeting of the Society of Toxicology (SOT) in San Antonio, Texas. At this year’s meeting, we had an excellent debate, on 11 March 2013, on the motion that “In the Near Foreseeable Future, Much of Toxicity Testing Can be Replaced by Computational Approaches”. The motion was proposed by an SOT member and opposed by a member of the European Societies of Toxicology (EUROTOX). At the climax of an entertaining and lively debate, about three times as many members of the audience voted against the motion as voted for it. However, the crux of the debate was not that computational toxicology is of no use, but what “near” and “foreseeable” actually mean. Both debaters, and seemingly the majority of the very large audience, agreed that computational methods will be crucial to toxicology in the future. It’s just that the issues are how we will get there, how long it will take, and who will pay for it.

This week also marked another important milestone, namely the ban, within the European Union, on the marketing of cosmetics products containing ingredients tested on animals, which came into force on 11 March 2013.1 This, combined with the deadline of 31 May 2013 for the submission of REACH (Registration, Evaluation, Authorisation and Restriction of Chemicals) dossiers to the European Chemicals Agency (ECHA) for compounds manufactured in, or imported into, the European Union at over 100 tonnes per year,2 has resulted in immense interest and research effort in computational toxicology. The European Union has resourced this research through the Framework Programmes, at a cost of over 200 million euros.3 This issue of ATLA marks the completion of one such initiative — the CADASTER project — and the aim of this editorial is to put CADASTER into the context of these broader efforts in computational toxicology.
It is acknowledged that CADASTER is focused on environmental effects and endpoints, whereas computational toxicology in toto includes human health as well. But the issues and concerns are essentially the same, whatever the endpoint. So, what is computational toxicology? The simple answer is that I am not sure that I know anymore! It is becoming an over-used and increasingly misunderstood phrase that seems much more commonplace in the new world of 21st Century Toxicology.4 It seems to mean many things, so, inevitably, this editorial will lose, confuse or simply irritate some readers. In it, I am broadly discussing aspects such as the use of (Quantitative) Structure–Activity Relationship ([Q]SAR) modelling, category formation and read-across for toxicity prediction. These are some of the main issues covered by the excellent EU CADASTER project, which is described in this issue. However, let’s not forget that modelling for toxicity prediction goes well beyond my limited knowledge and what is described here — it potentially reaches into the world of systems biology and virtual organs and organisms!

At the heart of modelling are the data. CADASTER has provided some insight into the databases available and has brought together data suitable for modelling. However, the reality is that, with a very small number of exceptions (typically in acute aquatic toxicity), there are very few examples of public data sets of toxicity values that have been developed with modelling in mind, i.e. data sets based around definable mechanisms of action or designed to explore chemical space. Consider, for instance, the analogous situation in drug discovery — medicinal chemists have spent decades probing mechanisms and hypotheses through the directed synthesis and testing of compounds. This is one possible aspect that needs to be picked up — in other words, the use of intelligent testing to better define the chemical space associated with a mechanism.
This is something now being put forward as part of the Adverse Outcome Pathway paradigm. Whilst we don’t have the exact data sets we require, we are definitely gaining access to more data and more information. To illustrate the possibilities and variety of sources, consider the depth and complexity of the OECD’s eChemPortal,5 the rich data resources of the OECD QSAR Toolbox,6 or the detailed toxicogenomics data in the Open TG-GATEs database.7 All are relatively recent innovations, all are freely available, and all provide the possibility of data mining and modelling.

ATLA 41, 1–4, 2013