Abstract
Introduction: Structure–activity modelling for toxicity prediction is now, as a discipline, 50 years old, and great strides have been made in developing methods for the physicochemical characterisation of molecules and for their toxicity evaluation, both essential stages in modelling. Computational toxicology also has huge potential for speeding up the screening and prioritisation of chemicals for further testing and for reducing the number of expensive and time-consuming conventional tests. Yet the realisation of this potential has been largely stifled by many problems inherent in developing and validating new structure–activity models of toxicity.
Areas covered: Problems of computational toxicology discussed include i) the use of inappropriate molecular descriptors and tools that are not transparent; ii) the undetected existence of chemicals that cause large changes in toxicity with only small differences in molecular structure (causing ‘activity cliffs’ in the structure–activity landscape); iii) spurious correlations between structure and activity; iv) lack of quality control of toxicity data; v) difficulties in determining predictivity for novel chemicals; and vi) an over-reliance on complex mathematics and statistics.
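The 'activity cliffs' mentioned in point ii) are commonly quantified with the Structure–Activity Landscape Index (SALI), which scores a pair of compounds by the ratio of their activity difference to their structural distance. The sketch below illustrates the idea on invented toy data (set-based fingerprints and made-up activity values, not real chemicals); it is an illustrative assumption, not a method from this article.

```python
# Illustrative sketch of activity-cliff detection via the Structure-Activity
# Landscape Index: SALI_ij = |A_i - A_j| / (1 - sim_ij). All data below are
# toy values invented for demonstration.

def tanimoto(fp1, fp2):
    """Tanimoto similarity between two binary fingerprints held as sets of on-bits."""
    union = len(fp1 | fp2)
    return len(fp1 & fp2) / union if union else 0.0

def sali(act1, act2, sim, eps=1e-6):
    """Large activity difference over small structural difference => cliff."""
    return abs(act1 - act2) / max(1.0 - sim, eps)

# Toy compounds: (fingerprint on-bits, activity, e.g. a pIC50-like value)
compounds = [
    ({1, 2, 3, 4, 5}, 4.0),
    ({1, 2, 3, 4, 6}, 8.5),  # near-identical structure, very different activity
    ({7, 8, 9}, 4.2),
]

pairs = []
for i in range(len(compounds)):
    for j in range(i + 1, len(compounds)):
        sim = tanimoto(compounds[i][0], compounds[j][0])
        pairs.append(((i, j), sali(compounds[i][1], compounds[j][1], sim)))

# The highest-SALI pair is the most cliff-like and the hardest for a model to fit
cliff = max(pairs, key=lambda p: p[1])
print(cliff)  # → ((0, 1), ~13.5): the structurally similar pair with divergent activity
```

In a real workflow the set-based fingerprints would be replaced by proper molecular fingerprints (e.g. from a cheminformatics toolkit), but the cliff-scoring logic is the same.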
Expert opinion: Greater emphasis needs to be placed on i) the selection of training and test sets of chemicals to enable both internal and external validation of models to be undertaken for more accurate assessment of model predictivity and ii) the use of recently developed techniques for characterising structure–activity landscapes.