
Computational science in drug metabolism and toxicology

, PhD (Science and Research Staff)
Pages 781-784 | Published online: 14 May 2010

Abstract

Computational tools for constructing and testing models, screening compounds and mining data on drug- and chemical-induced toxicity and metabolism have grown significantly in experimental use, helping to guide product development and to enhance certain areas of regulatory decision making. This themed issue of the journal, entitled Computational Science in Drug Metabolism & Toxicology, contains state-of-the-art review articles and perspectives covering a diversity of in silico approaches. Computational science tools have strong potential for expediting our understanding of drug metabolism and toxicity and are continually being developed and validated. The reader will gain an understanding of the current state of in silico tools and modeling approaches aimed at reducing metabolic and toxicological liabilities. This issue also covers how these tools are developed and tested for use in drug safety to support drug development efforts, and reviews how they are used to predict genotoxic liabilities. When properly validated and used judiciously, computational science tools can serve as enablers that support drug safety assessment in investigative and applied settings.

1. Introduction to the themed issue

In this issue of Expert Opinion on Drug Metabolism & Toxicology, the theme Computational Science in Drug Metabolism and Toxicology is brought to heightened attention through a series of state-of-the-art review articles addressing varied types of computational science tools. These tools are also commonly referred to as 'in silico' methods, and the experts authoring these papers offer perspectives and opinions spanning academia, industry and government public health institutions. This editorial introduces the themed issue and makes a few important points about the current state of computational methods in drug metabolism and toxicology.

This issue begins with regulatory perspectives on the use of computational toxicology tools from the European Commission's Joint Research Centre in Computational Toxicology and the FDA Center for Food Safety and Applied Nutrition (CFSAN). Although the FDA/CFSAN does not work in the realm of pharmaceuticals, it focuses on the applied use of certain computational approaches, mainly predictive quantitative structure–activity relationship (QSAR) modeling, to assist regulatory decision making on the toxicity of food additive chemicals present in food contact materials; here too, human safety considerations are paramount. QSAR predictive modeling has been researched and tested at the FDA Center for Drug Evaluation and Research for several years, and recent reviews, commentary and research reports on QSAR evaluation and use at the FDA can be found in Citation[1-3]. In the EU, there are well-recognized legislative drivers for computational approaches, notably the Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH) regulation, which entered into force on 1 June 2007 Citation[4]. These regulatory perspectives are therefore very valuable in light of the necessity for stringently validated predictive methods. Equally important is scientific oversight of the process of validating a computational model or tool, to ensure high quality, good science and accuracy, especially for models accessible to the public or affecting public health decision making Citation[1,5].

For public health regulators, evaluating the accuracy of alternative approaches such as in silico QSAR methods for predicting genetic toxicity and carcinogenicity should be of keen interest, because safety is paramount in the protection of patients and consumers. Predictive performance comparisons of in silico QSAR methods against animal carcinogen databases such as the ISSCAN database are described in this issue in the article by Dr Romualdo Benigni of the Istituto Superiore di Sanità, Rome, Italy. Such expert assessments are refreshing and necessary, ensuring that commercially available predictive QSARs are treated with caution before their use is accepted. Recent reviews on the limitations of QSARs and modeling note that it is rare and unrealistic for a QSAR prediction to support the safety of a chemical on a stand-alone basis in the US Citation[1,2,6]. The article by the FDA/CFSAN in this issue also touches on this subject Citation[7].
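Performance comparisons of the kind Dr Benigni describes typically come down to a few standard figures of merit. The following Python sketch shows how sensitivity, specificity and concordance might be estimated for a hypothetical QSAR classifier by cross-validation; the descriptor matrix, labels and choice of learner are illustrative assumptions, not the methods of the article.

```python
# Illustrative only: estimating sensitivity, specificity and concordance
# for a hypothetical QSAR carcinogenicity classifier by cross-validation.
# X and y are synthetic stand-ins; a real evaluation would use curated
# structures and calls, e.g., from a database such as ISSCAN.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32))     # 200 hypothetical compounds x 32 descriptors
y = rng.integers(0, 2, size=200)   # 1 = carcinogen, 0 = non-carcinogen (synthetic)

model = RandomForestClassifier(n_estimators=200, random_state=0)
y_pred = cross_val_predict(model, X, y, cv=5)

tp = int(np.sum((y == 1) & (y_pred == 1)))
tn = int(np.sum((y == 0) & (y_pred == 0)))
fp = int(np.sum((y == 0) & (y_pred == 1)))
fn = int(np.sum((y == 1) & (y_pred == 0)))

sensitivity = tp / (tp + fn)       # carcinogens correctly flagged
specificity = tn / (tn + fp)       # non-carcinogens correctly cleared
concordance = (tp + tn) / len(y)   # overall agreement with the database calls
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}, "
      f"concordance={concordance:.2f}")
```

With random labels all three figures hover near 0.5, which is itself the point: a model is only as good as its measured distance from chance on data it has not seen.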

Of wide interest is a review article in this issue on computational prediction of genotoxicity led by Dr Russell Naven. The article emphasizes the importance of the Ames mutagenicity test and the role and performance of structure–activity relationship (SAR)/QSAR methods for this assay. Strategies for genotoxicity prediction in drug development are described, including the value and timing of predictions at the various stages of industrial drug development. The authors also offer expert explanations of why current QSAR predictions can be limited, the outstanding issues with so-called global (multi-purpose) QSARs and where improvements should be made. The article by Dr Benigni makes similar points, echoing the issues with global QSARs for predicting mutagenicity and carcinogenicity.
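To make the rule-based end of this SAR/QSAR spectrum concrete, the sketch below flags molecules carrying simple structural alerts. The two SMARTS patterns are illustrative assumptions only, not a validated mutagenicity alert set, and RDKit is assumed to be available.

```python
# Hypothetical structural-alert screen of the kind used in rule-based SAR
# approaches to Ames mutagenicity. The alert definitions are illustrative,
# not a curated or validated set.
from rdkit import Chem

ALERTS = {
    "aromatic nitro": Chem.MolFromSmarts("c[N+](=O)[O-]"),
    "primary aromatic amine": Chem.MolFromSmarts("c[NH2]"),
}

def mutagenicity_alerts(smiles: str) -> list[str]:
    """Return the names of any structural alerts the molecule matches."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError(f"could not parse SMILES: {smiles!r}")
    return [name for name, pattern in ALERTS.items()
            if mol.HasSubstructMatch(pattern)]

print(mutagenicity_alerts("c1ccccc1[N+](=O)[O-]"))  # nitrobenzene -> ['aromatic nitro']
print(mutagenicity_alerts("Cc1ccccc1"))             # toluene      -> []
```

Alert-based rules are transparent but coarse; they say nothing about potency and can over-call whole chemical classes, which illustrates why both articles urge caution with stand-alone predictions.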

There are, however, many impeding factors in drug development. In industry, large databases containing millions of compounds are screened in silico to help identify compounds with metabolic stability and low safety liabilities. An enlightening review in this direction appears in this issue, authored by Dr Anthony Klon, with a focus on machine learning algorithms for screening against the hERG protein and CYP isoforms (a minimal illustrative sketch of such a screen appears below). The article addresses the predictive performance characteristics of machine learning algorithms, including random forests, Gaussian processes, neural networks and support vector machines, as well as regression methods used in models to predict hERG inhibition. In addition, a representative set of recent literature models for predicting inhibition and induction of the major CYP enzymes is presented. Dr Yoshifumi Fukunishi continues the discussion on in silico drug screening with an article on the importance of ensemble methods in ligand-binding poses.

The final section of this issue focuses on computational modeling of CYP enzymes. Dr Klon covers this in his article, but it is also addressed in depth in reviews led by Dr Roy Vaz and Dr Gabriele Cruciani. Dr Vaz's article provides an excellent and well-balanced overview of the various types of in silico methods for predicting CYP-mediated drug metabolism, including current uses of in silico metabolism work in drug discovery and its challenges. Dr Cruciani's article gives a crystal-clear account, for non-specialists and specialists alike, of the importance of incorporating not only the chemical reactivity of a drug but also enzyme recognition (CYP flexibility) of the drug when predicting enzyme–substrate interactions and the resulting metabolites. Using specific drug examples, his paper makes an excellent case that reliable predictions can be generated through computational simulation of P450 recognition and reactivity: the approach is not training set-dependent, is not based on fragments or the imprecise scoring functions of docking methods, and requires no databases or QSARs, being principally limited only by the need for a crystal structure of the enzyme. Importantly, the procedure relates predictions back to molecular structure, facilitating drug design, for example, when it is necessary to remove the CYP inhibition potential of a drug.
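For readers who want a feel for the machine-learning screens surveyed in the Klon review, here is a minimal, hedged sketch comparing three of the algorithm families named above on the same data. The fingerprints and hERG labels are synthetic stand-ins; a real model would be trained on measured activity data and validated far more rigorously.

```python
# Illustrative comparison of three learner families named in the review on a
# synthetic hERG screening task. Data are random stand-ins for molecular
# fingerprints and measured activity calls.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(300, 1024)).astype(float)  # stand-in bit fingerprints
y = rng.integers(0, 2, size=300)                        # 1 = hERG blocker (synthetic)

models = {
    "random forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "support vector machine": SVC(kernel="rbf", gamma="scale"),
    "neural network": MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                                    random_state=0),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: mean cross-validated ROC AUC = {auc.mean():.2f}")
```

The design point worth noting is the shared evaluation harness: when every learner is scored by the same cross-validated figure of merit, performance claims across algorithm families become directly comparable.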

2. Conclusions

Insight into which approaches are being investigated, and into how industry and regulatory bodies view and actually use computational methods for drug metabolism and toxicology, is important knowledge gathered over time and is reflected in the content of this themed issue. As there are many dimensions to computational methods, both literally (2D and 3D) and metaphorically, it is impossible to cover them all in a single issue or even a single book. Nevertheless, innovations in query tools, ADME drug screening, toxicity database mining and the prediction and modeling of molecular structure (of both drugs and CYP metabolizing enzymes), primarily with safety applications in mind, are fundamentally important in the computational sciences for pharmaceuticals.

3. Expert opinion

Interest in harnessing computer power has grown over the past 20 years and intensified recently, ultimately producing the wide spectrum of computational science tools available today. These tools serve as enablers of high-speed data movement and calculation, forming the basis of knowledge bases used in analytics, data aggregation and decision support (e.g., risk and safety assessment). Because of this crossover, and indeed its inherent reliance on multiple scientific fields (computer science, toxicology, pharmacology, molecular biology, pathology, biochemistry, chemistry and risk analysis), computational science for toxicology and drug metabolism is an integrated science. In practical terms, this means that modeling experts should rely on expert decision making and consultation from the subspecialties of toxicology and metabolism in order to construct meaningful computational models. Doing so ensures that the best, most informed and most accurate informatics systems are deployed, and it helps alleviate the 'black box' syndrome that plagues many commercial computational platforms; informed decisions then rest on an understanding of the whole modeling process. There are examples of technology innovation plans structured to provide new tools and methodologies (e.g., models) not via simple collaboration, but through a thorough understanding of the computational model.

It should also be kept in mind that every informatics system has a shelf life, as more and more data arise and new models are developed. Updating computational models and informatics systems is therefore critical; however, such updates require standards that provide a unifying and systematic approach. There are also legitimate concerns about over- and under-collection of data and about the approaches used in data standardization to ensure quality.

The cancer Biomedical Informatics Grid program, known as caBIG and led by the US National Institutes of Health/National Cancer Institute, is an excellent example of scientific community engagement and of an administrative framework designed to assist with the vexing IT problems that resonate in many institutions (URL http://www.cabig.cancer.gov/). The broad goal of caBIG is to enable personalized medicine in cancer and beyond by deploying electronic infrastructure in biomedicine and sharing data and knowledge across the cancer community. Likewise, the goals of computational science in drug metabolism and toxicology are generally to enable researchers, regulators, clinicians and patients to benefit from newly built, reliable in silico tools, via data sharing and standardization, to accelerate the synthesis of definitive answers to complex biological problems that impact human health. In this sense, the computational sciences can be thought of as a high-speed intervention. Their fundamental principles could reflect those of caBIG: open access, open development and open source. These principles are key to realizing in silico's highest potential, which is to serve as an enabler.

The FDA also recognizes the potential for computational methods to support decision making in its regulatory responsibility for human pharmaceutical products. The FDA Critical Path Initiative, launched in 2004, identified computer-based predictive models as part of the toolkit for new product development Citation[8]. It is hoped that computational tools can reduce the risk of regulated medical products by making better use of accumulated knowledge, without delaying the already rigorous review and approval process for new therapies. Consistent with this initiative, and currently moving forward, is the establishment of the FDA/CDER Computational Science Center (CDER CSC) Citation[9]. Current CDER CSC projects include developing data standards and expanding the use of electronic review tools. A recent public DIA FDA/CDER CBER Computational Science Annual meeting held in Bethesda, MD, USA (22 and 23 March 2010) described many of the ongoing CDER CSC projects in scientific computing that support the FDA's work on the efficacy, safety and product quality of pharmaceuticals over the entire product life-cycle. The basic foundation of the CDER CSC is to provide a core computing infrastructure, facilitate data management, streamline processes, and provide skilled staff and supporting tools under organizational governance. Technology alone will not decide strategic priorities; processes, public health priorities and dedicated people must be part of a complete roadmap for improving quality and efficiency in the FDA's mission of promoting and protecting public health.

There is much work to do in the computational science arena. Institutions interested in predictive models, for example, need to develop data standards for toxicity databases. A set of content standards, of the kind sketched below, would help streamline the assembly of modeling data and improve its quality (e.g., accuracy), reducing error in the computational approach. Scientists also need to be cognizant of information modeling, that is, of how models are actually conceived and constructed from the beginning: asking what data, decision making and expertise go into the whole modeling process. This sort of involvement will ensure a critical assessment of the validity of computational approaches before their predictions, and the information derived from informatics, are taken seriously. Current success stories are not necessarily the original computational platforms, which still require significant updating Citation[6].
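What might such a content standard look like at the level of a single record? The sketch below is one hypothetical, minimal shape for a standardized toxicity-database entry; every field name and controlled vocabulary here is an assumption made for illustration, not an established standard.

```python
# Hypothetical minimal schema for a standardized toxicity-database record.
# Field names and controlled vocabularies are illustrative assumptions.
from dataclasses import dataclass

ALLOWED_ASSAYS = {"Ames", "mouse lymphoma", "in vivo micronucleus"}
ALLOWED_CALLS = {"positive", "negative", "equivocal"}

@dataclass(frozen=True)
class ToxicityRecord:
    compound_id: str   # registry identifier (e.g., a CAS number)
    smiles: str        # structure, so modelers can compute descriptors
    assay: str         # controlled vocabulary, not free text
    call: str          # harmonized study outcome
    source: str        # provenance: study report or database of origin

    def __post_init__(self) -> None:
        if self.assay not in ALLOWED_ASSAYS:
            raise ValueError(f"unrecognized assay: {self.assay!r}")
        if self.call not in ALLOWED_CALLS:
            raise ValueError(f"unrecognized call: {self.call!r}")

# A record violating the vocabulary fails loudly instead of silently
# polluting a training set.
rec = ToxicityRecord("EX-0001", "Cc1ccccc1", "Ames", "negative",
                     "illustrative entry")
```

The design choice worth noting is that validation lives with the record itself, so quality control happens at data entry rather than at modeling time.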

It is important to mention that the computational sciences are evidence-based Citation[10]. According to Hartung, there are 10 defining characteristics of evidence-based toxicology, which are listed below Citation[11].

  • Promotes the consistent use of transparent and systematic processes to reach robust conclusions and sound judgments.

  • Addresses societal values and expectations and is socially responsible.

  • Displays a willingness to check the assumptions on which current toxicological practice is based to facilitate continuous improvement.

  • Recognizes the need to provide for the effective training and development of professional toxicologists.

  • Acknowledges a requirement for new and improved tools for critical evaluation and quantitative integration of scientific evidence.

  • Embraces all aspects of toxicological practice and all types of evidence used in hazard identification, risk assessment and retrospective analyses of causation.

  • Ensures the generation and use of the best scientific evidence.

  • Includes all branches of toxicological science: human health assessment, environmental and ecotoxicology and clinical toxicology.

  • Acknowledges and builds on the achievements and contributions of evidence-based medicine/evidence-based healthcare.

  • Fosters the integration of expert judgment with the best possible external evidence.

Born out of evidence-based medicine, evidence-based toxicology has clear implications for data handling and for the appraisal of the data (i.e., evidence) used in methods development and causation analysis Citation[12,13]. These characteristics remind us how enabling the computational sciences are. Science is an evolving process of hypothesis testing and development, including, yes, going back to the drawing board when needed. How in silico tools in toxicology and metabolism evolve should depend on good science; that will ultimately lead to a bright future for these tools.

Declaration of interest

The author states no conflict of interest and has received no payment in preparation of this manuscript or issue.

Acknowledgement

This research report is not an official US Food and Drug Administration guidance or policy statement. No official support or endorsement by the US Food and Drug Administration is intended or should be inferred.

Bibliography
