Guide for collecting and reporting metadata on protocol variables and parameters from slide-based histotechnology assays to enhance reproducibility

ABSTRACT

The central tenet of scientific research is the rigorous application of the scientific method to experimental design, analysis, interpretation, and reporting of results. In order to confer validity to a hypothesis, experimental details must be transparent and results must be reproducible. Failure to achieve this minimum indicates a deficiency in rationale, design, and/or execution, necessitating further experimental refinement or hypothesis reformulation. More importantly, rigorous application of the scientific method advances scientific knowledge by enabling others to identify weaknesses or gaps that can be exploited by new ideas or technology that inevitably extend, improve, or refine a hypothesis. Experimental details, described in manuscript materials and methods, are the principal vehicle used to communicate procedures, techniques, and resources necessary for experimental reproducibility. Recent examination of the biomedical literature has shown that many published articles lack sufficiently detailed methodological information to reproduce experiments. There are few broadly established practice guidelines and quality assurance standards in basic biomedical research. The current paper provides a framework of best practices to address the lack of reporting of detailed materials and methods that is pervasive in histological slide-based assays. Our goal is to establish a structured framework that highlights the key factors necessary for thorough collection of metadata and reporting of slide-based assays.

Introduction

Ever since Ioannidis published his seminal manuscript detailing the repercussions of irreproducible science, there has been a growing need to ensure reproducibility in the scientific literature [Citation1]. Surveys of various stakeholders have shown that the majority believe that a significant rigor and reproducibility problem exists in biomedical and public health research [Citation2,Citation3]. Freedman et al. defined ‘irreproducibility’ as the ‘existence and propagation of one or more errors, flaws, inadequacies or omissions that prevent the replication of results’ [Citation4]. Based on this definition, Freedman estimated that approximately $28 billion per year is spent in the USA on irreproducible biomedical science research [Citation4]. Random sampling of the biomedical literature suggested that articles generally lack sufficient rigor and transparency in one or more fundamental areas necessary to replicate research studies [Citation5–8]. Reproducibility issues can be grouped into these general categories: study design, laboratory protocols, reference materials, and data collection and analysis. Evidence indicates that lack of rigor in these areas inevitably leads to ineffective translational research and poor clinical trial outcomes [Citation6,Citation8–10].

While irreproducibility is the result of many interrelated factors, it is clear that establishing an underlying framework of best practices within the context of these categories is key to addressing experimental reproducibility in biomedical and public health research [Citation5–10]. The largest contribution to irreproducibility arises from biological reference materials, accounting for an estimated 36% of the financial impact. Combined with inadequate laboratory protocols (10%), these two areas account for approximately half of the factors that contribute to irreproducible science [Citation4]. The U.S. National Institutes of Health (NIH) has defined scientific rigor as ‘the strict application of the scientific method to ensure robust and unbiased experimental design methodology, analysis, interpretation and reporting’ (https://grants.nih.gov/policy/reproducibility/guidance.htm). Combined with Freedman’s definition, it is clear that a primary reason for experimental irreproducibility is the lack of clear-cut ‘best practices’ for documenting reagents and methods. In contrast to clinical practice, where standardization, meticulous documentation, and quality assurance are required, there are few broadly established quality assurance standards in basic biomedical research. This is not out of ignorance but because of a lack of strategies to educate scientists and students [Citation11]. In addition, resources to provide training and to put standards into practice have been missing [Citation3,Citation5,Citation12,Citation13]. Nowhere is this gap more apparent than in slide-based histopathology research. For our purposes, a slide-based assay is defined as any test in which a chemical, immunological, or molecular reaction is used to demonstrate a cellular or tissue target mounted on a glass slide. This holds regardless of the sample origin (whole cells to tissue sections), the preparation (fresh and unfixed to fixed), target modality (histochemical stain, antibody, nucleic acid probe), assay type (traditional, multiplex, in situ), detection chemistry (chemical, fluorescence, chromogenic, molecular), or assay format (automated or manual).

Before the introduction of contemporary immunohistochemical (IHC) and molecular methods, histochemical dyes were routinely used in staining procedures to enhance the contrast of biological materials undergoing microscopic analysis. Events surrounding World War I gave rise to the Commission on Biological Stains, later named the Biological Stain Commission (BSC) (https://biologicalstaincommission.org/), which established quality assurance (QA) standards for dyes used for biological and other purposes [Citation14]. To this day, the BSC continues testing dry dyes for manufacturers and vendors, checking chemical purity and certifying the dyes for biological use. It is notable that BSC dye certification has existed for over 100 years and that these dyes and their application in staining protocols have laid the foundation for modern pathology [Citation15]. Routine hematoxylin and eosin (H&E) staining is the gold standard for analytical diagnostic histopathology in both clinical and research disciplines. Starting in the early 1940s, complex histochemical and enzyme staining methods were developed to identify specific tissue components before the arrival of modern techniques. During this ‘golden age’ of histochemistry, rigorous technique was required to establish direct correlation between histochemical staining and morphological observations of cellular components. It was not until the development of transmission electron microscopy (TEM) that histochemical-based inter- and intracellular histological observations could be confirmed, ushering in the era of modern microscopic pathology [Citation16]. Because of the empirical nature of histochemical staining, protocols are often tailored to the preference of the observer in order to facilitate interpretation. As a result, reproducibility among institutions, laboratories, and personnel may be highly variable [Citation16–18]. While H&E remains the gold standard in diagnostic pathology, IHC methods have for many purposes largely replaced histochemical approaches and become the new ‘special stains.’ Nevertheless, H&E and other histochemical stains routinely used in a variety of sample formats still require the same rigorous quality control as other techniques to obtain highly reproducible results. Although not utilized as extensively as histochemical staining, enzyme histochemistry also has a long, illustrious history. Firmly established by the 1930s, enzyme histochemistry gave rise to many different methods for detecting everything from enzymes to complex molecules. Enzyme histochemical techniques established direct morphological evidence for the localization of in vitro biochemical pathways, providing insight into pathological processes [Citation19]. Nearly all enzyme histochemistry is performed on frozen tissue and has stringent processing requirements to maintain the viability of the target of interest [Citation19]. Therefore, robust documentation of processing and protocols is essential for experimental reproducibility.

In the last several decades, antibodies have become indispensable biological tools in research and clinical practice. Antibodies have widespread application and are among the most broadly used protein-binding reagents in the biological sciences [Citation20–23]. There can be no doubt that IHC, in its various forms, has revolutionized biomedical research and diagnostic histopathology since first demonstrated by Albert Coons et al. in 1942 [Citation24]. However, it is now recognized that many antibody-based research assays were poorly designed and executed, raising questions about the validity of the results and findings [Citation1,Citation7,Citation25,Citation26]. There is abundant evidence that antibodies themselves are a major source of variability [Citation27,Citation28]. The explosive growth of commercially available antibodies, detection systems, instrumentation, and methods, combined with a general lack of acceptable optimization, validation, and reporting standards, has only exacerbated the reproducibility problem [Citation3,Citation29–31]. Many basic science researchers are unaware that only a limited number of ‘clinical’ antibodies are in use. These markers undergo extensive ‘validation’ under rigorous U.S. Food and Drug Administration (FDA) oversight prior to their approval and have years of field data to support their use and performance. This misunderstanding fosters the oversimplified notion that IHC is a ‘straightforward and easily executed assay’ [Citation32–36]. Antibodies used in IHC research are rarely validated extensively or in the same manner as those used in the clinical setting [Citation37]. In fact, the lack of appropriate and reasonable validation approaches is one of the key issues that needs to be addressed in order to increase rigor across all antibody applications [Citation29,Citation31,Citation33,Citation35,Citation38]. Regardless, many scientists in basic research believe their IHC antibodies generally perform equivalently to those found in the clinical setting, despite the fact that Research Use Only (RUO) antibodies are not scrutinized in a similar fashion. This clearly demonstrates the need to develop guidance and standards for optimizing, validating, and reporting research antibodies, as well as to share information about the quality of the antibodies and the consistency of the developed protocols [Citation3,Citation29]. None of these issues are new, having been described repeatedly in the literature over the past 20 years [Citation27,Citation28,Citation35,Citation38–43].

In situ hybridization (ISH) using complementary nucleotide sequences is widely utilized to detect DNA or RNA in cells, tissue sections, and whole-mount preparations. First demonstrated by Gall and Pardue in 1969 using isotope-labeled probes, ISH has evolved into a powerful molecular technique for both research and diagnostic purposes [Citation44]. Today, a wide variety of ISH methodologies are available, all based on the principles established by Gall and Pardue. The assay has continuously evolved from detecting DNA to detecting mRNA, microRNA, and single nucleotide mutations using fluorescent (FISH) or chromogenic (CISH) in situ methods. Since the 1980s, FISH has been the standard methodology for detecting chromosomal abnormalities. However, advances in probe design, reporter molecules, amplification systems, and signal detection have significantly improved specificity and sensitivity, expanding the use of ISH techniques [Citation45–51]. As with IHC, ISH assay specificity is determined by a probe designed to target specific DNA or RNA sequences. In contrast to IHC, investigators have multiple options to directly design, construct, and label probes [Citation52,Citation53]. In 2005, the U.S. National Center for Biotechnology Information (NCBI) debuted the ‘Probe Database’ to track the large number of probes being generated at the time. The goal was to have a central repository containing probe sequences and technical reagent information from a wide variety of applications, methods, and species in order to facilitate reproducibility. However, due to a decline in probe submissions and usage, and technological advances in sequencing, the Probe Database was decommissioned in 2020. The database remains available (ftp://ftp.ncbi.nih.gov/pub/ProbeDB/), but NCBI no longer accepts new submissions. While there are a number of searchable gene expression databases that are helpful for probe design, no universal repository for ISH probes currently exists, which hampers ISH reproducibility. The application of ISH methods, specifically for RNA detection, has seen increased use in recent years. The development of branched and ‘double Z’ probe ISH detection methods has led to the commercialization and broader usage of RNA ISH [Citation49]. Probes targeting whole-genome-sequenced RNA biomarkers can now be made to order and utilized on instrument platforms, but this does not absolve researchers from ensuring quality control and documentation [Citation54]. Finally, there are few ISH-based, FDA-approved clinical assays (https://www.fda.gov/medical-devices/in-vitro-diagnostics/nucleic-acid-based-tests). Similar to IHC, clinically used ISH assays are subject to the same stringent quality control and documentation requirements in order to maintain the high levels of reproducibility necessary for patient care.

Photomicroscopy and imaging have evolved immensely since the creation of the first photomicrograph in 1834 by William Henry Fox Talbot [Citation55]. Image capture was once labor intensive, utilizing film cameras that relied on manual processing of chemically developed, light-sensitive emulsions [Citation56]. Photomicroscopy improved dramatically with the introduction of digital cameras in the late 1980s and early 1990s. Digital cameras simplified image acquisition and eliminated costly and time-consuming film processing [Citation56,Citation57]. Images were easily captured from any compound microscope fitted with a camera and connected to a computer. In the early 2000s, whole slide imaging (WSI) debuted, providing centralized image capture and storage solutions [Citation58,Citation59]. The combination of digital image capture with powerful image acquisition software and analysis tools has brought about widespread adoption and use but has also increased the volume and complexity of the data generated from these images. Digital photomicroscopy and image software solutions have created new problems, since it has become easy to adjust or modify digital image files [Citation60]. Publishers and journals such as the Journal of Cell Biology are addressing the rigor and reproducibility of published images [Citation61]. For years, qualitative and semi-quantitative data have been collected from slide-based histological preparations assessed directly from microscopic slides. Ocular micrometers and grids have been utilized as aids for morphometric analysis and semi-quantitative data collection [Citation62,Citation63]. Although these methods have been calibrated and standardized, semi-quantitative assessments are only an approximation, and interobserver variation exists in assigning diagnoses and severity grades [Citation64,Citation65]. With the adoption of WSI, manual scoring approaches have often been replaced by image analysis software. There are numerous open-source and advanced software packages with dedicated or custom algorithms for automated or semi-automated quantitative image analysis. Academic institutions as well as the pharmaceutical and biomedical industries use this software in basic research and drug development to evaluate the effects of novel treatments, but there are few standards or guides [Citation66–69]. A limited number of algorithms are FDA 510(k)-cleared and used in the clinical setting, but all require internal validation (https://www.fda.gov/medical-devices/510k-clearances/search-releasable-510k-database) [Citation70]. Although beyond the scope of this manuscript, the growing use of quantitative algorithmic image analysis and deep learning approaches has increased the importance of documenting all aspects of image acquisition to improve the transparency and reproducibility of the data generated, in addition to the need for high-quality histochemistry. These newer imaging technologies are more sensitive to subtle variations and demand rigorous quality control to establish reproducible analytical pipelines with predictive value [Citation71–74]. Guides and technical performance assessments for WSI devices are available from a regulatory agency (FDA), an accrediting professional society (College of American Pathologists, CAP), and professional societies such as the Society of Toxicologic Pathology (STP), Digital Pathology Association (DPA), and Association for Pathology Informatics (API) [Citation69,Citation70,Citation75–77]. These organizations have generated ‘best practice’ recommendations to help address validation, data accuracy, and precision for images acquired and analyses generated with digital hardware and software solutions [Citation69,Citation70,Citation75–77]. Community-driven initiatives such as Quality Assessment and Reproducibility for Instruments and Images in Light Microscopy (QUAREP-LiMi) are working to improve the reproducibility of light microscopy image data [Citation78]. As we continue to adopt and utilize digital image acquisition and analysis solutions, there will be an increasing need to document quality control, calibration, validation, and image metadata in order to improve overall image quality, standardization, and data reproducibility [Citation60,Citation61,Citation79–82].

It logically follows that to reproduce or verify a published histochemical, IHC, or ISH protocol, one must have access to a detailed description of the materials and methods used to replicate the findings. However, many studies do not include sufficient detail to uniquely identify key resources [Citation7,Citation29]. Vasilevsky et al. found that only 44% of antibodies were uniquely identifiable in a curated list of 238 manuscripts from 84 journals [Citation7]. Even more striking, authors provided catalog numbers for only 27% of the antibodies examined in that study. More recently, Menke et al. surveyed over 1.5 million open access articles from PubMed Central (http://www.ncbi.nlm.nih.gov/pmc/). Spanning more than 20 years of publications, their results confirmed that irreproducibility is widespread across multiple rigor categories [Citation83]. There are a number of guides specifically aimed at increasing rigor and reproducibility in experimental histopathology. Based on the ARRIVE (Animal Research: Reporting of In Vivo Experiments) guidelines, Scudamore et al. created the Minimum Information for Publication of Experimental Pathology (MINPEPA) recommendations [Citation84,Citation85]. However, the Scudamore guide only suggests the minimum information that should be reported [Citation85]. In addition, many journals now require a completed Materials Design Analysis Reporting (MDAR) checklist, intended to collect key information consistent with the NIH rigor and reproducibility rules (see below) [Citation86]. Cell Press revised its instructions to authors, replacing the experimental procedures section with a Structured, Transparent, Accessible Reporting (STAR) Methods section [Citation87]. The key difference is the use of one table to document all critical resources used in the manuscript, collecting the source and identifier for each item (https://www.cell.com/star-authors-guide). While journals and publishers have a responsibility to ensure that reporting guidelines are met or made more stringent, it is ultimately the researcher who must be held accountable. Reporting guidelines cannot overcome poor design, documentation, quality control, or writing [Citation83]. To replicate experimental protocols, a detailed, robust description of the experimental method is required, and it is up to each scientist to ensure that the necessary level of detail is included in the materials and methods sections of manuscript submissions. The IHC literature is notorious for inadequate methods reporting, thereby hampering reproducibility and wasting valuable resources. For researchers to replicate and build upon published literature, they should be secure in the knowledge that the materials and methods specified in a publication have been correctly identified, prepared, and applied [Citation13]. If not, fundamentally flawed science will continue to contaminate the literature and waste precious resources [Citation34].

Because current practices for reporting materials and methods in the literature are inadequate, various groups have addressed specific areas to enhance rigor and transparency. As discussed above, the BSC is a good example of improving rigor and reproducibility. Successful products receive a certification label identifying the lot, which can be used for direct tracking and for reporting unsatisfactory dye performance. In 2013, the Future of Research Communications and e-Scholarship (FORCE11), a grassroots organization, established the Resource Identification Initiative [Citation88]. It created a system for reporting research resources in the biomedical literature, enlisting publishers, journal editors, antibody manufacturers, and distributors to address the problem of incomplete resource identification. The result was the creation of Research Resource Identifiers (RRIDs) to ensure unique identification not only of antibodies but also of other key material resources such as plasmids, organisms, and cell lines [Citation88]. The RRID portal (https://scicrunch.org/resources) is connected to a number of community repositories that facilitate the process. The Antibody Registry (https://antibodyregistry.org/) can be searched for existing antibody RRIDs, outputs basic information for registered antibodies, and allows the addition of unregistered antibodies. New antibody submissions are vetted and assigned a unique identifier that is never deleted [Citation88]. In 2014, NIH announced that it had begun developing strategies to enhance rigor and reproducibility, culminating in changes to grant submission guidelines [Citation89]. This new policy included requirements for authenticating key biological or chemical resources and strict reporting of experimental details, found under Advanced Notice of Coming Requirements for Formal Instruction in Rigorous Experimental Design and Transparency to Enhance Reproducibility, Notice Number NOT-OD-16-034. In 2015, the Global Biological Standards Institute hosted the Asilomar Workshop, bringing together numerous stakeholders in an effort to generate standards for antibody validation [Citation29]. Several key themes emerged from this conference: validation practices and information sharing are important for reproducibility; reporting of both positive and negative results for antibodies ultimately improves the reproducibility of research; and users can contribute to transparency efforts by including full details of their methods. In 2016, the International Working Group on Antibody Validation (IWGAV) formed and published the first comprehensive proposal to standardize antibody validation strategies [Citation31]. The IWGAV recommendations established best practices, but because of the complexity of the suggested strategies, many are not practical or available to end users, especially those performing low-volume testing. In 2016, the Federation of American Societies for Experimental Biology (FASEB) published recommendations for Enhancing Research Reproducibility (https://faseb.org/Science-Policy-and-Advocacy/Science-Policy-and-Research-Issues/Research-Reproducibility.aspx). While these recommendations were in line with other organizational guidelines, FASEB also made specific suggestions for society publications and journals: supporting simple, common guidelines for reporting methods and reagents in publications and developing uniform instructions to authors for transparent reporting of materials and methods.
Since the NCBI Probe Database was decommissioned in 2020, no replacement has been proposed to capture ISH probe and technical details for reproducibility purposes. However, the rapid expansion of genomic technologies has led to the creation of repositories that capture a wealth of genomic data. The most comprehensive is the Encyclopedia of DNA Elements (ENCODE) (https://www.encodeproject.org/), a web portal that stores raw and processed data from a wide variety of genomic assays. Metadata are collected from a number of related projects and groups, and access to the data, including experimental details, is unrestricted [Citation90,Citation91].

Guidelines for slide-based metadata collection

To address these issues, we believe that establishing a guided set of required elements for transparent reporting of all slide-based histological materials and methods is key to increasing rigor and reproducibility. To do so, we redefine reproducibility for slide-based assays to avoid a definition that is either too broad or too narrow. Thus, we propose defining reproducibility as follows: results obtained with the same method on similar test materials in different laboratories, with different operators using different equipment, will generally be the same, with slight variations that have no consequential impact on interpretation or quantitation. Using this definition, a framework can be constructed to comprehensively capture the information necessary for reporting the materials and methods of histological slide-based assays. From a theoretical perspective, the same biophysical, biochemical, immunological, and molecular properties apply to all slide-based assays. The ability to detect target molecules across different sample formats with different detection systems is determined by the selection of specific pre-analytical, analytical, and post-analytical steps or sequences of steps (e.g., antigen retrieval) necessary to generate the appropriate signal within the particular sample format. This unifying approach avoids the confusion generated by differences between sample preparatory methods, such as describing fluorescent versus chromogenic IHC assays. By definition, the distinction between sample sources and methodology becomes irrelevant because the reporting standard is based on the technical elements required to achieve the desired result. In this way, users who want to replicate IHC results can focus on modeling the protocol or procedure with existing resources relevant to the specific sample types available. Consistent with the MINPEPA recommendations, our framework provides greater technical detail on slide-based assays and is specifically designed to facilitate translation of assays across different methodologies. Our goal is not to rewrite existing guides or journal checklists. Instead, we list the key elements, parameters, and variables for slide-based assays that need to be documented and shared. The intent is to point out 1) the many variable details that are often left out and 2) key elements and variables that are often not considered but have an impact on assay performance.

Therefore, we propose a comprehensive data collection and reporting scheme for slide-based assays that defines protocols by the essential elements required to execute or replicate the assay, rather than by sample type or technical methodology. By delineating a protocol as a set of required elements, a universal approach to generating an inclusive protocol for all slide-based methodologies becomes possible.
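
As a purely illustrative sketch (not part of the guide itself), a protocol defined this way can be captured in a simple machine-readable record, one entry per element, with its workflow phase (preanalytical, analytical, or post-analytical), its specific parameters, and an RRID wherever a key resource is involved. All field names and values below are hypothetical placeholders:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ProtocolElement:
        """One required element of a slide-based assay protocol.

        Field names are illustrative only; the tables below define the
        actual elements and parameters to capture.
        """
        phase: str                  # "preanalytical", "analytical", or "post-analytical"
        element: str                # e.g. "fixation", "primary antibody", "image acquisition"
        parameters: dict            # specific variables for this step (time, temperature, dilution, ...)
        rrid: Optional[str] = None  # Research Resource Identifier, required for key resources

    # Hypothetical example: one analytical step recorded with enough detail
    # for another laboratory to reproduce or reasonably adapt it.
    primary_antibody = ProtocolElement(
        phase="analytical",
        element="primary antibody",
        parameters={
            "dilution": "1:200",
            "diluent": "TBS + 1% BSA",
            "incubation_time_min": 60,
            "incubation_temp_C": 25,
        },
        rrid="RRID:AB_0000000",     # placeholder identifier, not a real registry entry
    )

    # A complete protocol is simply an ordered list of such elements.
    protocol = [primary_antibody]

Whether such a record lives in a spreadsheet, an electronic laboratory notebook, or a structured file, the intent is the same: every element, parameter, and identifier is stated explicitly rather than implied.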

Elements framework for slide-based methods

We have delineated the basic workflow steps into preanalytical, analytical, and post-analytical phases to be consistent with the Total Test model. The application of the Total Test principle to our element guide dictates that all elements (preanalytical, analytical, and post-analytical) be rigorously designed, documented, and executed, because the quality of an output reflects the quality of the input [Citation40,Citation92,Citation93]. Table 1 shows a list of protocol elements, element details, and the specific parameters that are critical for preanalytical biospecimen processing. Note that RRIDs are required for all key resources, including organisms, biosamples, cell lines, plasmids, and antibodies, as well as tools and other resources [Citation88]. Similarly, Table 2 captures the elements, element details, and specific parameters for analytical procedural steps.

Table 1. Guide for the collection of preanalytical slide-based metadata for transparent reporting of materials and methods.

Table 2. Guide for collection of analytical slide-based metadata for transparent reporting of materials and methods.

Table 3 collects the specific post-analytical elements and parameters. Many of the protocol elements listed will be well known to histologists and pathologists, but the significance of other specific elements may not be obvious, so supporting literature citations are provided. While not comprehensive, the citations either describe or provide evidence for the importance or relevance of specific elements to the Total Test [Citation93].

Table 3. Guide for the collection of post-analytical slide-based metadata for microscopy image capture.

Collectively, these tables serve as a template for preparing the materials and methods of a manuscript submission and, more importantly, as a guide for the collection and documentation of key metadata for slide-based assays performed in research and clinical settings. The framework of this guide is derived in part from best practices found in clinical histology laboratories, with the added flexibility of not being constrained by strict compliance requirements. The guide is sufficiently flexible to be applicable to all forms of slide-based testing, whether manual or automated, and irrespective of the target (protein, carbohydrate, lipid, DNA, or RNA). Elements are not exclusive but can be modified, expanded, or removed. This allows collection of new elements and their requirements and facilitates the addition of other assay modalities (aptamers, nanobodies), adaptation to other slide-based methods (spatial profiling), and incorporation of newer methods (imaging mass spectrometry). Element types describe each particular element within the context of the Total Test framework. The element types shown in the tables are not limited to our lists, since the guide is designed to expand or contract as needed. Required element parameters are the specific variables that can be manipulated for any given step or sequence in a protocol. Detailed parameters must be included to facilitate protocol transfer between methodologies, between research and clinical approaches, and among automated, semi-automated, and manual methods. For all elements, the information must have sufficient detail to allow an individual to systematically identify the resource and to reproduce, or reasonably duplicate, the methodology through an alternative approach. RRIDs are essential for this purpose [Citation203]. The old standard of providing just a catalog number, company, city, state, and country cannot uniquely identify products. This is especially true if the company uses multiple vendors to distribute products or if the company ceases to exist. RRID citations use a specific format (resource identifier, company, catalog number, lot number; https://scicrunch.org/resources/about/guidelines) that is uniquely recognizable and traceable [Citation88]. RRIDs are required in the MDAR checklist that is compulsory for hundreds of journals [Citation86]. NIH and STAR Methods accept and strongly encourage RRIDs as unique and unambiguous identifiers [Citation203].
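
To make the RRID citation format concrete, the short sketch below assembles a reagent citation from the fields listed above (resource name, company, catalog number, lot number, RRID). The antibody, company, and identifiers are invented placeholders, and the exact wording of such a citation varies by journal; the point is simply that every field needed for unambiguous identification is present:

    def format_reagent_citation(name, company, catalog, lot, rrid):
        """Assemble a methods-section reagent citation containing the fields
        recommended above. The wording is illustrative, not a journal-mandated
        template."""
        return f"{name} ({company}, cat. no. {catalog}, lot {lot}, RRID:{rrid})"

    # Hypothetical antibody; all values are placeholders.
    print(format_reagent_citation(
        name="anti-Ki-67 rabbit monoclonal antibody",
        company="Example Biosciences",
        catalog="EB-1234",
        lot="L0042",
        rrid="AB_0000000",
    ))
    # -> anti-Ki-67 rabbit monoclonal antibody (Example Biosciences, cat. no. EB-1234, lot L0042, RRID:AB_0000000)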

We also contend that much can be learned from clinical QA practices. Clinical laboratories must demonstrate compliance with regulatory agencies to maintain a high standard of care. While the biomedical research enterprise is not held to the same level of stringency, research core facilities should follow good laboratory practices and apply rigorous standardization and documentation that enhance reproducibility and elevate the biomedical research enterprise. Therefore, utilizing core facilities and following their best practice guidelines will enhance investigator research and increase the robustness of experimental procedures [Citation204–207]. We also argue that application of this framework is critical for collecting detailed information with the spatial profiling technologies listed in Table 4. Little is known about how biospecimen variability will affect genomic, transcriptomic, proteomic, metabolomic, and other -omic slide-based analyses. These extremely sensitive methods accumulate large amounts of data, and preanalytical processing can significantly affect interpretation and have a major influence on results [Citation208–210]. The rapid adoption of spatial profiling techniques without adherence to best practices can lead to irreproducibility issues that are not revealed for some time. As mentioned earlier, the Total Test concept should similarly be applied to image analysis and deep learning methods to generate robust and reproducible results. Since these techniques are still in their relative infancy, it is important to establish recommendations for best practices now [Citation211–213].

Table 4. Microdissection and spatial transcriptomic techniques.

Discussion

The element-based approach is a paradigm shift in the rationale used for the collection and reporting of slide-based histological metadata. The proposed framework captures the many basic protocol elements that are critical to recreate an assay in different environments with different tools. No longer encumbered by confusing, redundant terminology such as ‘fixed versus unfixed,’ ‘IHC versus IF,’ and ‘human versus murine,’ essential protocol elements can be provided so that an assay can be reproduced within the confines of available materials and methods. In contrast to providing just a ‘dye concentration’ or ‘starting dilution’ for an assay, the expanded system collects the specific element parameters critical for each assay step and allows for modification and adaptation to meet available resources. This enables the determination of a reasonable developmental starting point, minimizing assay optimization and repeat testing.

It is important that materials and methods provide sufficient detail to authenticate the products used to achieve a final slide-based stained tissue section. To start, identification and acquisition of the same key materials are necessary. Therefore, RRIDs must be included for all key resources so that direct and unambiguous identification of experimental materials is possible. Because there are numerous variations and modifications of histochemical stains used on slide-based tissue preparations, a careful reference (a publication or a textbook on histotechniques) should always be cited for any specific staining method, and modifications clearly noted. For IHC, optimization and/or validation data, as well as positive and negative controls, should be included so that resources are applied to reproducing and refining assay performance instead of reinventing the assay [Citation214]. The Human Protein Atlas (http://www.proteinatlas.org) is an excellent resource of validation information, with collected protein expression data on over 20,000 anti-human antibodies. This Swedish open access program integrates a number of omic technologies to map all human proteins in cells, tissues, and organs [Citation214,Citation215]. In contrast, YcharOS (https://ycharos.com), a Canadian public interest company, is attempting to characterize commercially available antibodies for every human protein [Citation216]. As an open science company, it provides transparent head-to-head antibody comparisons, and data collected from three fundamental techniques are directly available on its website. Similarly, ENCODE provides data on antibody-based functional genomic assays in addition to its extensive genomic data [Citation90,Citation91]. For ISH, the specific probe sequence and design methods should be included in any protocol, since a universal and unique identification system for tracking probes is not available. Probes sourced from commercial vendors should include detailed information (vendor, catalog, and lot number), analogous to RRID formatting, to facilitate identification. There are now a number of secure, web-based platforms available for documenting and sharing research protocols, and some have formed strategic partnerships with journals. Bio-protocol (https://bio-protocol.org/) collaborates with the American Association for the Advancement of Science, publisher of the journal Science, to provide free online access to peer-reviewed, citable protocols. Protocol Exchange (https://protocolexchange.researchsquare.com/) is a non-peer-reviewed open repository hosted as part of the Nature Portfolio, where authors can post their protocols directly. Protocols.io (https://www.protocols.io/) is a commercial venture that offers free space and access for published protocols and also offers institutional plans for both academia and industry. While the number of resources for documenting protocols will continue to grow, one must remember that the experimental details and content remain most important for enhancing reproducibility.

There are ongoing efforts to determine the impact of guides on rigor and reproducibility across the biomedical literature. In 2014, stakeholders gathered to propose recommendations for indicators to assess and promote transparency [Citation11]. However, a lack of available tools limited the extent and scope of assessing the biomedical literature. More recently, Menke et al. devised the Rigor and Transparency Index (RTI), based on the automated scoring algorithm SciScore (https://sciscore.com/), to evaluate the rigor and transparency of the biomedical literature using MDAR criteria [Citation83]. Based on their assessment of over two million articles published between 1997 and 2020, the RTI showed modest gains, indicating that researchers are slowly addressing the reproducibility issue. However, machine-learning approaches still need refinement, as not all indicators in all categories are captured, owing to differences among the scientific disciplines within the biomedical literature. RRIDs have been more successful in demonstrating behavioral trends and adoption of guidelines. Babic et al. showed that manuscripts reporting cell line RRIDs were much less likely to use cell lines with known issues [Citation217]. RRIDs are machine readable because of their standardized formatting, which facilitates text mining of the biomedical literature. Between 2014 and 2020, there was a 40-fold increase in the number of manuscripts reporting at least one RRID. Over 100 journals have embraced RRIDs, with many mandating their use in checklists or instructions to authors. Additional effort is necessary to develop more robust rigor and reproducibility metrics, as well as greater adherence to guidelines, and we hope that this paper will assist in this endeavor. Regardless, these results strongly indicate that efforts to enhance reproducibility are moving forward and will lead to greater rigor across the biomedical enterprise [Citation203]. While additional work needs to be done to establish best practice guidelines for assay optimization and validation, we argue that much more information can be provided in manuscript supplemental material sections. The advent of digital publishing has enabled the inclusion of large amounts of supplemental material. In fact, explicit details collected based on this guide can be included in supplemental sections, while a brief overview of the methodology appears in the body of the manuscript. Finally, the framework provides guidance on the specific protocol information to collect and report, establishing a best practice benchmark that researchers can use directly as well as share with staff and students to establish a strong foundation for rigorous, reproducible research. This guide does not exclude the use of other reporting guides or systems used by publishers but instead supplements them by detailing the additional information necessary for reporting at a high level of stringency. Although beyond the scope of this guide, non-slide-based biochemical and molecular assays (for example, ELISA, Western blots, flow cytometry, and PCR) should utilize similar comprehensive framework approaches to establish best practices for the collection and reporting of metadata, although specific guidelines for some of these already exist [Citation86].
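
As an illustration of the machine readability noted above, the minimal sketch below pulls RRID-style identifiers out of a block of methods text. It assumes a simplified ‘RRID:prefix_accession’ pattern that covers common antibody and tool identifiers; some registries use other layouts, and curation tools such as SciScore are far more sophisticated, so this is only an assumption-laden example:

    import re

    # Simplified pattern for common RRID forms (e.g. RRID:AB_2314146, RRID:SCR_018304).
    # Some registries use other layouts, so a production pipeline would need a broader grammar.
    RRID_PATTERN = re.compile(r"RRID:\s*([A-Z]+_[A-Za-z0-9]+)")

    def extract_rrids(methods_text):
        """Return the unique RRID accessions mentioned in a block of methods text."""
        return sorted(set(RRID_PATTERN.findall(methods_text)))

    example = ("Sections were stained with an anti-Ki-67 antibody (Example Biosciences, "
               "cat. no. EB-1234, RRID: AB_0000000) and scanned in the imaging core "
               "(RRID:SCR_018304).")
    print(extract_rrids(example))   # ['AB_0000000', 'SCR_018304']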

Conclusion

We present a best practices framework to capture slide-based histotechnology metadata for materials and methods that can enhance the transparency, reproducibility, and rigor of slide-based assays. The guide describes the metadata needed to document slide-based histopathology assays in sufficient detail to assess and replicate the methodology. Transparent reporting of experimental details will facilitate reproducibility and enhance the rigor of the scientific research enterprise.

Acknowledgments

The authors would like to thank Dr. Brad Bolon for his critical comments. The authors would also like to acknowledge their institutions: the NYULH Center for Biospecimen Research and Development (RRID:SCR_018304), Department of Pathology, and the Laura and Isaac Perlmutter Cancer Center (LC), and the Stowers Institute for Medical Research (YF).

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

The authors have no funding to report.

References

  • Ioannidis JP. Why most published research findings are false. PLoS Med. 2005;2(8):e124.
  • Baker M. 1,500 scientists lift the lid on reproducibility. Nature. 2016;533(7604):452–454.
  • Freedman LP, Gibson MC, Bradbury ARM, et al. The need for improved education and training in research antibody usage and validation practices. Biotechniques. 2016;61(1):16–18.
  • Freedman LP, Cockburn IM, Simcoe TS. The economics of reproducibility in preclinical research. PLoS Biol. 2015;13(6):e1002165.
  • Freedman LP, Inglese J. The increasing urgency for standards in basic biologic research. Cancer Res. 2014;74(15):4024–4029.
  • Naudet F, Sakarovitch C, Janiaud P, et al. Data sharing and reanalysis of randomized controlled trials in leading biomedical journals with a full data sharing policy: survey of studies published in the BMJ and PLOS medicine. BMJ. 2018;360:k400.
  • Vasilevsky NA, Brush MH, Paddock H, et al. On the reproducibility of science: unique identification of research resources in the biomedical literature. PeerJ. 2013;1:e148.
  • Wallach JD, Boyack KW, Ioannidis JPA. Reproducible research practices, transparency, and open access data in the biomedical literature, 2015-2017. PLoS Biol. 2018;16(11):e2006930.
  • Iqbal SA, Wallach JD, Khoury MJ, et al. Reproducible research practices and transparency across the biomedical literature. PLoS Biol. 2016;14(1):e1002333.
  • Landis SC, Amara SG, Asadullah K, et al. A call for transparent reporting to optimize the predictive value of preclinical research. Nature. 2012;490(7419):187–191.
  • Ioannidis JP, Greenland S, Hlatky MA, et al. Increasing value and reducing waste in research design, conduct, and analysis. Lancet. 2014;383(9912):166–175.
  • Freedman LP, Venugopalan G, Wisman R. Reproducibility 2020: progress and priorities. F1000Research. 2017;6:604.
  • Prager EM, Chambers KE, Plotkin JL, et al. Improving transparency and scientific rigor in academic publishing. Brain Behav. 2019;9(1):e01141.
  • Penney DP. A brief history of the biological stain commission: its founders, its mission and the first 75 years. Biotech Histochem. 2000;75(4):154–166.
  • Wick MR. Diagnostic histochemistry: a historical perspective. Semin Diagn Pathol. 2018;35(6):354–359.
  • Wick MR. The hematoxylin and eosin stain in anatomic pathology—An often-neglected focus of quality assurance in the laboratory. Semin Diagn Pathol. 2019;36(5):303–311.
  • Riva MA, Manzoni M, Isimbaldi G, et al. Histochemistry: historical development and current use in pathology. Biotech Histochem. 2014;89(2):81–90.
  • Romano LA, Pedrosa VF. Re-claiming H&E: back to the future. Postgrad Med J. 2020;96(1131):58.
  • Meier-Ruge WA, Bruder E. Current concepts of enzyme histochemistry in modern pathology. Pathobiology. 2008;75(4):233–243.
  • Baker M. Antibody anarchy: a call to order. Nature. 2015;527(7579):545–551.
  • Baker M. Reproducibility crisis: blame it on the antibodies. Nature. 2015;521(7552):274–276.
  • Bordeaux J, Welsh AW, Agarwal S, et al. Antibody validation. Biotechniques. 2010;48(3):197–209.
  • Bradbury A, Pluckthun A. Reproducibility: standardize antibodies used in research. Nature. 2015;518(7537):27–29.
  • Coons AH, Creech HJ, Jones RN, et al. The demonstration of pneumococcal antigen in tissues by the use of fluorescent antibody. J Immunol. 1942;45(3):159–170.
  • Begley CG, Ellis LM. Drug development: raise standards for preclinical cancer research. Nature. 2012;483(7391):531–533.
  • Egelhofer TA, Minoda A, Klugman S, et al. An assessment of histone-modification antibody quality. Nat Struct Mol Biol. 2011;18(1):91–93.
  • Couchman JR. Commercial antibodies: the good, bad, and really ugly. J Histochem Cytochem. 2009;57(1):7–8.
  • Saper CB. A guide to the perplexed on the specificity of antibodies. J Histochem Cytochem. 2009;57(1):1–5.
  • Rimm D, Uhlen M, LaBaer ARM, et al. Antibody validation standards, policies and practices. In: GBSI workshop report, Asilomar conference, CA, USA. Washington (DC): Global Biological Standards Institute; 2016.
  • Roncador G, Engel P, Maestre L, et al. The European antibody network’s practical guide to finding and validating suitable antibodies for research. mAbs. 2016;8(1):27–36.
  • Uhlen M, Bandrowski A, Carr S, et al. A proposal for validation of antibodies. Nat Methods. 2016;13(10):823–827.
  • Cattoretti G. Standardization and reproducibility in diagnostic immunohistochemistry. Hum Pathol. 1994;25(10):1107–1109.
  • O’Leary TJ. Standardization in immunohistochemistry. Appl Immunohistochem Mol Morphol. 2001;9(1):3–8.
  • Sfanos KS, Yegnasubramanian S, Nelson WG, et al. If this is true, what does it imply? How end-user antibody validation facilitates insights into biology and disease. Asian J Urol. 2019;6(1):10–25.
  • Taylor CR. Immunohistochemistry: growing pains, from a stain to an assay. Appl Immunohistochem Mol Morphol. 2019;27(5):325–326.
  • Torlakovic EE. How to Validate predictive immunohistochemistry testing in pathology? Arch Pathol Lab Med. 2019;143(8):907.
  • Gibson-Corley KN, Hochstedler C, Sturm M, et al. Successful integration of the histology core laboratory in translational research. J Histotechnol. 2012;35(1):17–21.
  • Goldstein NS, Hewitt SM, Taylor CR, et al. Recommendations for improved standardization of immunohistochemistry. Appl Immunohistochem Mol Morphol. 2007;15(2):124–133.
  • Anagnostou VK, Welsh AW, Giltnane JM, et al. Analytic variability in immunohistochemistry biomarker studies. Cancer Epidemiol Biomarkers Prev. 2010;19(4):982–991.
  • Baskin DG, Hewitt SM. Improving the state of the science of immunohistochemistry: the Histochemical Society’s standards of practice. J Histochem Cytochem. 2014;62(10):691–692.
  • Kalyuzhny AE. The dark side of the immunohistochemical moon: industry. J Histochem Cytochem. 2009;57(12):1099–1101.
  • Taylor CR. An exaltation of experts: concerted efforts in the standardization of immunohistochemistry. Hum Pathol. 1994;25(1):2–11.
  • Taylor CR. Predictive biomarkers and companion diagnostics. The future of immunohistochemistry: “in situ proteomics,” or just a “stain”? Appl Immunohistochem Mol Morphol. 2014;22(8):555–561.
  • Pardue ML, Gall JG. Molecular hybridization of radioactive DNA to the DNA of cytological preparations. Proc Natl Acad Sci. 1969;64(2):600–604.
  • Singer RH, Ward DC. Actin gene expression visualized in chicken muscle tissue culture by using in situ hybridization with a biotinylated nucleotide analog. Proc Natl Acad Sci. 1982;79(23):7331–7335.
  • Femino AM, Fay FS, Fogarty K, et al. Visualization of single RNA transcripts in situ. Science. 1998;280(5363):585–590.
  • Raj A, van den Bogaard P, Rifkin SA, et al. Imaging individual mRNA molecules using multiple singly labeled probes. Nat Methods. 2008;5(10):877–879.
  • Player AN, Shen L-P, Kenny D, et al. Single-copy gene detection using branched DNA (bDNA) in situ hybridization. J Histochem Cytochem. 2001;49(5):603–612.
  • Wang F, Flanagan J, Su N, et al. RNAscope: a novel in situ RNA analysis platform for formalin-fixed, paraffin-embedded tissues. J Mol Diagn. 2012;14(1):22–29.
  • Choi HM, Beck VA, Pierce NA. Next-generation in situ hybridization chain reaction: higher gain, lower cost, greater durability. ACS Nano. 2014;8(5):4284–4294.
  • Kishi JY, Lapan SW, Beliveau BJ, et al. SABER amplifies FISH: enhanced multiplexed imaging of RNA and DNA in cells and tissues. Nat Methods. 2019;16(6):533–544.
  • Leung HY, Yeung MHY, Leung WT, et al. The current and future applications of in situ hybridization technologies in anatomical pathology. Expert Rev Mol Diagn. 2022;22(1):5–18.
  • Young AP, Jackson DJ, Wyeth RC. A technical review and guide to RNA fluorescence in situ hybridization. PeerJ. 2020;8:e8806.
  • Hicks DG, Longoria G, Pettay J, et al. In situ hybridization in the pathology laboratory: general principles, automation, and emerging research applications for tissue-based studies of gene expression. J Mol Histol. 2004;35(6):595–601.
  • Gase J. Illuminating the history and process of photomicrography at the National Museum of Health and Medicine. 2019 [cited 2022 Jun 29]; Available from: https://www.medicalmuseum.mil/micrograph/index.cfm/posts/2019/photomicrography_history#:~:text=Some%20attribute%20it%20to%20Thomas,the%20first%20photomicrographs%3A%20plant%20sections
  • Morrison AO, Gardner JM. Microscopic image photography techniques of the past, present, and future. Arch Pathol Lab Med. 2015;139(12):1558–1564.
  • Riley RS, Ben-Ezra JM, Massey D, et al. Digital photography: a primer for pathologists. J Clin Lab Anal. 2004;18(2):91–128.
  • Pantanowitz L, Parwani AV. Digital images and the future of digital pathology. J Pathol Inform. 2010;1(1):1.
  • Pantanowitz L, Sharma A, Carter AB, et al. Twenty Years of digital pathology: an overview of the road travelled, what is on the horizon, and the emergence of vendor-neutral archives. J Pathol Inform. 2018;9(1):40.
  • Rossner M, Yamada KM. What’s in a picture? The temptation of image manipulation. J Cell Biol. 2004;166(1):11–15.
  • Rossner M. The JCB 2003: progress, policies, and procedures. J Cell Biol. 2003;161(5):837–838.
  • Byers HR, Bhawan J. Pathologic parameters in the diagnosis and prognosis of primary cutaneous melanoma. Hematol Oncol Clin North Am. 1998;12(4):717–735.
  • Cree IA, Tan PH, Travis WD, et al. Counting mitoses: SI(ze) matters! Mod Pathol. 2021;34(9):1651–1657.
  • Tizhoosh HR, Diamandis P, Campbell CJV, et al. Searching images for consensus: can AI remove observer variability in pathology? Am J Pathol. 2021;191(10):1702–1708.
  • Van Bockstal MR, Berlière M, Duhoux FP, et al. Interobserver variability in ductal carcinoma in situ of the breast. Am J Clin Pathol. 2020;154(5):596–609.
  • Crissman JW, Goodman DG, Hildebrandt PK, et al. Best practices guideline: toxicologic histopathology. Toxicol Pathol. 2004;32(1):126–131.
  • Gibson-Corley KN, Olivier AK, Meyerholz DK. Principles for valid histopathologic scoring in research. Vet Pathol. 2013;50(6):1007–1015.
  • Meyerholz DK, Beck AP. Principles and approaches for reproducible scoring of tissue stains in research. Lab Invest. 2018;98(7):844–855.
  • Tuomari D, Elliott G, Kulwich B, et al. Society of Toxicologic pathology position on histopathology data collection and audit trail: compliance with 21 CFR parts 58 and 11. Toxicol Pathol. 2004;32(1):122–123.
  • Evans AJ, Brown RW, Bui MM, et al. Validating whole slide imaging systems for diagnostic purposes in pathology. Arch Pathol Lab Med. 2022;146(4):440–450.
  • Chlipala E, Bendzinski CM, Chu K, et al. Optical density-based image analysis method for the evaluation of hematoxylin and eosin staining precision. J Histotechnol. 2020;43(1):29–37.
  • Wu Y, Cheng M, Huang S, et al. Recent advances of deep learning for computational histopathology: principles and applications. Cancers (Basel). 2022;14(5):1199.
  • Smith B, Hermsen M, Lesser E, et al. Developing image analysis pipelines of whole-slide images: pre- and post-processing. J Clin Transl Sci. 2021;5(1):e38.
  • Madabhushi A, Lee G. Image analysis and machine learning in digital pathology: challenges and opportunities. Med Image Anal. 2016;33:170–175.
  • Lara H, Li Z, Abels E, et al. Quantitative image analysis for tissue biomarker use: a white paper from the digital pathology association. Appl Immunohistochem Mol Morphol. 2021;29(7):479–493.
  • Zarella MD, Bowman D, Aeffner F, et al. A practical guide to whole slide imaging: a white paper from the digital pathology association. Arch Pathol Lab Med. 2019;143(2):222–234.
  • U.S. Food and Drug Administration, Center for Devices and Radiological Health (CDRH). Technical performance assessment of digital pathology whole slide imaging devices: guidance for industry and Food and Drug Administration staff. U.S. Department of Health and Human Services, Food and Drug Administration; 2016.
  • Boehm U, Nelson G, Brown CM, et al. QUAREP-LiMi: a community endeavor to advance quality assessment and reproducibility in light microscopy. Nat Methods. 2021;18(12):1423–1426.
  • Koch M, Symvoulidis P, Ntziachristos V. Tackling standardization in fluorescence molecular imaging. Nat Photonics. 2018;12(9):505–515.
  • Levenson R, Beechem J, McNamara G. Spectral imaging in preclinical research and clinical pathology. Anal Cell Pathol (Amst). 2012;35(5–6):339–361.
  • Linden MA, Sedgewick GJ, Ericson M. An innovative method for obtaining consistent images and quantification of histochemically stained specimens. J Histochem Cytochem. 2015;63(4):233–243.
  • Macville MV, Van der Laak JAWM, Speel EJM, et al. Spectral imaging of multi-color chromogenic dyes in pathological specimens. Anal Cell Pathol. 2001;22(3):133–142.
  • Menke J, Roelandse M, Ozyurt B, et al. The Rigor and transparency index quality metric for assessing biological and medical science methods. iScience. 2020;23(11):101698.
  • Kilkenny C, Browne WJ, Cuthill IC, et al. Improving bioscience research reporting: the ARRIVE guidelines for reporting animal research. J Pharmacol Pharmacother. 2010;1(2):94–99.
  • Scudamore CL, Soilleux EJ, Karp NA, et al. Recommendations for minimum information for publication of experimental pathology data: MINPEPA guidelines. J Pathol. 2016;238(2):359–367.
  • Macleod M, Collings AM, Graf C, et al. The MDAR (Materials design analysis reporting) framework for transparent reporting in the life sciences. Proc Nat Acad Sci. 2021;118(17):e2103238118.
  • Tonzani S, Fiorani S. The STAR Methods way towards reproducibility and open science. iScience. 2021;24(4):102137.
  • Bandrowski A, Brush M, Grethe JS, et al. The Resource Identification Initiative: a cultural shift in publishing. F1000Res. 2015;4:134.
  • Collins FS, Tabak LA. Policy: NIH plans to enhance reproducibility. Nature. 2014;505(7485):612–613.
  • Luo Y, Hitz BC, Gabdank I, et al. New developments on the Encyclopedia of DNA elements (ENCODE) data portal. Nucleic Acids Res. 2020;48(D1):D882–D889.
  • Sloan CA, Chan ET, Davidson JM, et al. ENCODE data at the ENCODE portal. Nucleic Acids Res. 2016;44(D1):D726–32.
  • Engel KB, Moore HM. Effects of preanalytical variables on the detection of proteins by immunohistochemistry in formalin-fixed, paraffin-embedded tissue. Arch Pathol Lab Med. 2011;135(5):537–543.
  • Taylor CR. The total test approach to standardization of immunohistochemistry. Arch Pathol Lab Med. 2000;124(7):945–951.
  • Grillo F, Bruzzone M, Pigozzi S, et al. Immunohistochemistry on old archival paraffin blocks: is there an expiry date? J Clin Pathol. 2017;70(11):988–993.
  • Bass BP, Engel KB, Greytak SR, et al. A review of preanalytical factors affecting molecular, protein, and morphological analysis of formalin-fixed, paraffin-embedded (FFPE) tissue: how well do you know your FFPE specimen? Arch Pathol Lab Med. 2014;138(11):1520–1530.
  • Brown RW, Speranza VD, Alvarez JO, et al. Uniform labeling of blocks and slides in surgical pathology: guideline from the College of American pathologists Pathology and laboratory quality center and the national society for histotechnology. Arch Pathol Lab Med. 2015;139(12):1515–1524.
  • Baena-Del Valle JA, Zheng Q, Hicks JL, et al. Rapid loss of RNA detection by in situ hybridization in stored tissue blocks and preservation by cold storage of unstained slides. Am J Clin Pathol. 2017;148(5):398–415.
  • Fitzgibbons PL. Challenges in improving preanalytic specimen handling of routine cancer biospecimens. Arch Pathol Lab Med. 2019;143(11):1300–1301.
  • Compton CC, Robb JA, Anderson MW, et al. Preanalytics and precision pathology: pathology practices to ensure molecular integrity of cancer patient biospecimens for precision medicine. Arch Pathol Lab Med. 2019;143(11):1346–1363.
  • Fitzgibbons PL, Bradley LA, Fatheree LA, et al. Principles of analytic validation of immunohistochemical assays: guideline from the College of American Pathologists Pathology and Laboratory Quality Center. Arch Pathol Lab Med. 2014;138(11):1432–1443.
  • Grillo F, Pigozzi S, Ceriolo P, et al. Factors affecting immunoreactivity in long-term storage of formalin-fixed paraffin-embedded tissue sections. Histochem Cell Biol. 2015;144(1):93–99.
  • Hojat A, Wei B, Olson MG, et al. Procurement and storage of surgical biospecimens. Methods Mol Biol. 2019;1897:65–76.
  • Gruber HE, Ingram J, Zinchenko N, et al. Practical histological methods for use with cultured cells. Biotech Histochem. 2009;84(6):283–286.
  • Rao S, Masilamani S, Sundaram S, et al. Quality measures in pre-analytical phase of tissue processing: understanding its value in histopathology. J Clin Diagn Res. 2016;10(1):EC07–11.
  • Mullink H, Henzen-Logmans SC, Tadema TM, et al. Influence of fixation and decalcification on the immunohistochemical staining of cell-specific markers in paraffin-embedded human bone biopsies. J Histochem Cytochem. 1985;33(11):1103–1109.
  • Kapila SN, Boaz K, Natarajan S. The post-analytical phase of histopathology practice: storage, retention and use of human tissue specimens. Int J Appl Basic Med Res. 2016;6(1):3–7.
  • Shidham VB. CellBlockistry: chemistry and art of cell-block making - A detailed review of various historical options with recent advances. Cytojournal. 2019;16:12.
  • McGoogan E, Colgan TJ, Ramzy I, et al. Cell preparation methods and criteria for sample adequacy. International Academy of Cytology Task Force summary. Diagnostic cytology towards the 21st century: an international expert conference and tutorial. Acta Cytol. 1998;42(1):25–32.
  • Grizzle WE. Special symposium: fixation and tissue processing models. Biotech Histochem. 2009;84(5):185–193.
  • Chung JY, Song JS, Ylaya K, et al. Histomorphological and molecular assessments of the fixation times comparing formalin and ethanol-based fixatives. J Histochem Cytochem. 2018;66(2):121–135.
  • Otali D, He Q, Stockard CR, et al. Preservation of immunorecognition by transferring cells from 10% neutral buffered formalin to 70% ethanol. Biotech Histochem. 2013;88(3–4):170–180.
  • Quintana C. Cryofixation, cryosubstitution, cryoembedding for ultrastructural, immunocytochemical and microanalytical studies. Micron. 1994;25(1):63–99.
  • Bouzari N, Olbricht S. Histologic pitfalls in the Mohs technique. Dermatol Clin. 2011;29(2):261–272.
  • Miller LJ, Argenyi ZB, Whitaker DC. The preparation of frozen sections for micrographic surgery. A review of current methodology. J Dermatol Surg Oncol. 1993;19(11):1023–1029.
  • Shi SR, Liu C, Pootrakul L, et al. Evaluation of the value of frozen tissue section used as “gold standard” for immunohistochemistry. Am J Clin Pathol. 2008;129(3):358–366.
  • Kawamoto T. Use of a new adhesive film for the preparation of multi-purpose fresh-frozen sections from hard tissues, whole-animals, insects and plants. Arch Histol Cytol. 2003;66(2):123–143.
  • Liou W, Geuze HJ, Slot JW. Improving structural integrity of cryosections for immunogold labeling. Histochem Cell Biol. 1996;106(1):41–58.
  • Litwin JA. Light microscopic histochemistry on plastic sections. Prog Histochem Cytochem. 1985;16(2):1–84.
  • Masuda T, Kawaguchi J, Oikawa H, et al. How thick are the paraffin-embedded tissue sections routinely prepared in laboratory? A morphometric study using a confocal laser scanning microscope. Pathol Int. 1998;48(3):179–183.
  • Pearse AD, Marks R. Measurement of section thickness in quantitative microscopy with special reference to enzyme histochemistry. J Clin Pathol. 1974;27(8):615–618.
  • McCampbell AS, Raghunathan V, Tom-Moy M, et al. Tissue thickness effects on immunohistochemical staining intensity of markers of cancer. Appl Immunohistochem Mol Morphol. 2017;27(5):345–355.
  • Libard S, Cerjan D, Alafuzoff I. Characteristics of the tissue section that influence the staining outcome in immunohistochemistry. Histochem Cell Biol. 2019;151(1):91–96.
  • Grube D. Constants and variables in immunohistochemistry. Arch Histol Cytol. 2004;67(2):115–134.
  • Gambella A, Porro L, Pigozzi S, et al. Section detachment in immunohistochemistry: causes, troubleshooting, and problem-solving. Histochem Cell Biol. 2017;148(1):95–101.
  • Cheung CC, Swanson PE, Nielsen S, et al. Uneven staining in automated immunohistochemistry: cold and hot zones and implications for immunohistochemical analysis of biopsy specimens. Appl Immunohistochem Mol Morphol. 2018;26(5):299–304.
  • Pinhel IF, MacNeill FA, Hills MJ, et al. Extreme loss of immunoreactive p-Akt and p-Erk1/2 during routine fixation of primary breast cancer. Breast Cancer Res. 2010;12(5):R76.
  • Blows FM, Ali HR, Dawson S-J, et al. Decline in antigenicity of tumor markers by storage time using pathology sections cut from tissue microarrays. Appl Immunohistochem Mol Morphol. 2016;24(3):221–226.
  • Economou M, Schöni L, Hammer C, et al. Proper paraffin slide storage is crucial for translational research projects involving immunohistochemistry stains. Clin Transl Med. 2014;3(1):4.
  • Fergenbaum JH, Garcia-Closas M, Hewitt SM, et al. Loss of antigenicity in stored sections of breast cancer tissue microarrays. Cancer Epidemiol Biomarkers Prev. 2004;13(4):667–672.
  • Rasmussen BB. Letter to the editor. Mod Pathol. 2005;18(8):1145; author reply 1146–1147.
  • Mirlacher M, Kasper M, Storz M, et al. Influence of slide aging on results of translational research studies using immunohistochemistry. Mod Pathol. 2004;17(11):1414–1420.
  • Wolf C, Jarutat T, Vega Harring S, et al. Determination of phosphorylated proteins in tissue specimens requires high-quality samples collected under stringent conditions. Histopathology. 2014;64(3):431–444.
  • Xie R, Chung J-Y, Ylaya K, et al. Factors influencing the degradation of archival formalin-fixed paraffin-embedded tissue sections. J Histochem Cytochem. 2011;59(4):356–365.
  • Sasaki T, Kawabata Y, Suzuki N, et al. Decreased D2-40 immunoreactivity in stored paraffin sections and methods for preserving it. Biotech Histochem. 2014;89(6):412–418.
  • Gelb AB, Freeman VA, Astrow SH. Evaluation of methods for preserving PTEN antigenicity in stored paraffin sections. Appl Immunohistochem Mol Morphol. 2011;19(6):569–573.
  • Forse CL, Pinnaduwage D, Bull SB, et al. Fresh cut versus stored cut paraffin-embedded tissue: effect on immunohistochemical staining for common breast cancer markers. Appl Immunohistochem Mol Morphol. 2018;27(3):231.
  • Omilian AR, Zirpoli GR, Cheng T-YD, et al. Storage conditions and immunoreactivity of breast cancer subtyping markers in tissue microarray sections. Appl Immunohistochem Mol Morphol. 2020;28(4):267–273.
  • Takada N, Hirokawa M, Ohbayashi C, et al. Re-evaluation of MIB-1 immunostaining for diagnosing hyalinizing trabecular tumour of the thyroid: semi-automated techniques with manual antigen retrieval are more accurate than fully automated techniques. Endocr J. 2018;65(2):239–244.
  • Prichard JW. Overview of automated immunohistochemistry. Arch Pathol Lab Med. 2014;138(12):1578–1582.
  • Valli V, Peters E, Williams C, et al. Optimizing methods in immunocytochemistry: one laboratory’s experience. Vet Clin Pathol. 2009;38(2):261–269.
  • Arihiro K, Umemura S, Kurosumi M, et al. Comparison of evaluations for hormone receptors in breast carcinoma using two manual and three automated immunohistochemical assays. Am J Clin Pathol. 2007;127(3):356–365.
  • Biesterfeld S, Kraus HL, Reineke T, et al. Analysis of the reliability of manual and automated immunohistochemical staining procedures. A pilot study. Anal Quant Cytol Histol. 2003;25(2):90–96.
  • Le Neel T, Moreau A, Laboisse C, et al. Comparative evaluation of automated systems in immunohistochemistry. Clin Chim Acta. 1998;278(2):185–192.
  • Moreau A, Le Neel T, Joubert M, et al. Approach to automation in immunohistochemistry. Clin Chim Acta. 1998;278(2):177–184.
  • Takahashi T, Ishiguro K. Development of an automatic machine for in situ hybridization and immunohistochemistry. Anal Biochem. 1991;196(2):390–402.
  • MaWhinney WH, Warford A, Rae MJ, et al. Automated immunochemistry. J Clin Pathol. 1990;43(7):591–596.
  • Cohen C, Unger ER, Sgoutas D, et al. Automated immunohistochemical estrogen receptor in fixed embedded breast carcinomas: comparison with manual immunohistochemistry on frozen tissues. Am J Clin Pathol. 1989;92(5):669–672.
  • Basu A, Chiriboga L, Narula N, et al. Validation of PD-L1 clone 22C3 immunohistochemical stain on two Ventana DISCOVERY autostainer models: detailed protocols, test performance characteristics, and interobserver reliability analyses. J Histotechnol. 2020;43(4):174–181.
  • Arnold MM, Srivastava S, Fredenburgh J, et al. Effects of fixation and tissue processing on immunohistochemical demonstration of specific antigens. Biotech Histochem. 1996;71(5):224–230.
  • Henwood AF. The application of heated detergent dewaxing and rehydration to immunohistochemistry. Biotech Histochem. 2012;87(1):46–50.
  • Pandey P, Dixit A, Tanwar A, et al. A comparative study to evaluate liquid dish washing soap as an alternative to xylene and alcohol in deparaffinization and hematoxylin and eosin staining. J Lab Physicians. 2014;6(2):84–90.
  • Premalatha BR, Patil S, Rao RS, et al. Mineral oil - a biofriendly substitute for xylene in deparaffinization: a novel method. J Contemp Dent Pract. 2013;14(2):281–286.
  • Kalantari N, Bayani M, Ghaffari T. Deparaffinization of formalin-fixed paraffin-embedded tissue blocks using hot water instead of xylene. Anal Biochem. 2016;507:71–73.
  • Faoláin EÓ, Hunter MB, Byrne JM, et al. Raman spectroscopic evaluation of efficacy of current paraffin wax section dewaxing agents. J Histochem Cytochem. 2005;53(1):121–129.
  • Paulsen IM, Dimke H, Frische S. A single simple procedure for dewaxing, hydration and heat-induced epitope retrieval (HIER) for immunohistochemistry in formalin fixed paraffin-embedded tissue. Eur J Histochem. 2015;59(4):2532.
  • Dapson RW. Dye-tissue interactions: mechanisms, quantification and bonding parameters for dyes used in biological staining. Biotech Histochem. 2005;80(2):49–72.
  • Titford M. The long history of hematoxylin. Biotech Histochem. 2005;80(2):73–78.
  • Horobin RW. Biological staining: mechanisms and theory. Biotech Histochem. 2002;77(1):3–13.
  • Kugler P. Enzyme histochemical methods applied in the brain. Eur J Morphol. 1990;28(2–4):109–120.
  • Emoto K, Yamashita S, Okada Y. Mechanisms of heat-induced antigen retrieval: does pH or ionic strength of the solution play a role for refolding antigens? J Histochem Cytochem. 2005;53(11):1311–1321.
  • Bogen SA, Vani K, Sompuram SR. Molecular mechanisms of antigen retrieval: antigen retrieval reverses steric interference caused by formalin-induced cross-links. Biotech Histochem. 2009;84(5):207–215.
  • Fowler CB, Evers DL, O’Leary TJ, et al. Antigen retrieval causes protein unfolding: evidence for a linear epitope model of recovered immunoreactivity. J Histochem Cytochem. 2011;59(4):366–381.
  • Shi SR, Shi Y, Taylor CR, et al. New dimensions of antigen retrieval technique: 28 years of development, practice, and expansion. Appl Immunohistochem Mol Morphol. 2019;27(10):715–721.
  • Boenisch T. Pretreatment for immunohistochemical staining simplified. Appl Immunohistochem Mol Morphol. 2007;15(2):208–212.
  • Ramos-Vara JA. Principles and methods of immunohistochemistry. Methods Mol Biol. 2017;1641:115–128.
  • Buchwalow I, Samoilova V, Boecker W, et al. Non-specific binding of antibodies in immunohistochemistry: fallacies and facts. Sci Rep. 2011;1(1):28.
  • Boenisch T. Formalin-fixed and heat-retrieved tissue antigens: a comparison of their immunoreactivity in experimental antibody diluents. Appl Immunohistochem Mol Morphol. 2001;9(2):176–179.
  • Gendusa R, Scalia CR, Buscone S, et al. Elution of high-affinity (>10-9 KD) antibodies from tissue sections. J Histochem Cytochem. 2014;62(7):519–531.
  • Lott RL, Riccelli PV, Sheppard EA, et al. Immunohistochemical validation of rare tissues and antigens with low frequency of occurrence: recommendations from the Anatomic Pathology Patient Interest Association (APPIA). Appl Immunohistochem Mol Morphol. 2021;29(5):327.
  • Buchwalow I, Samoilova V, Boecker W, et al. Multiple immunolabeling with antibodies from the same host species in combination with tyramide signal amplification. Acta Histochem. 2018;120(5):405–411.
  • Wang H, Su N, Wang LC, et al. Quantitative ultrasensitive bright-field RNA in situ hybridization with RNAscope. Methods Mol Biol. 2014;1211:201–212.
  • Liu W, Song H, Chen Q, et al. Recent advances in the selection and identification of antigen-specific nanobodies. Mol Immunol. 2018;96:37–47.
  • Muyldermans S. Applications of Nanobodies. Annu Rev Anim Biosci. 2021;9(1):401–421.
  • Takahashi M, Sakota E, Nakamura Y. The efficient cell-SELEX strategy, Icell-SELEX, using isogenic cell lines for selection and counter-selection to generate RNA aptamers to cell surface proteins. Biochimie. 2016;131:77–84.
  • Bukari BA, Citartan M, Ch’ng ES, et al. Aptahistochemistry in diagnostic pathology: technical scrutiny and feasibility. Histochem Cell Biol. 2017;147(5):545–553.
  • Pu Y, Liu Z, Lu Y, et al. Using DNA aptamer probe for immunostaining of cancer frozen tissues. Anal Chem. 2015;87(3):1919–1924.
  • de Castro MA, Rammner B, Opazo F. Aptamer stainings for super-resolution microscopy. Methods Mol Biol. 2016;1380:197–210.
  • Karp NA, Fry D. What is the optimum design for my animal experiment? BMJ Open Sci. 2021;5(1):e100126.
  • Roth J. Lectins for histochemical demonstration of glycans. Histochem Cell Biol. 2011;136(2):117–130.
  • Sorrelle N, Ganguly D, Dominguez ATA, et al. Improved multiplex immunohistochemistry for immune microenvironment evaluation of mouse formalin-fixed, paraffin-embedded tissues. J Immunol. 2018;202(1):292–299.
  • Bolognesi MM, Manzoni M, Scalia CR, et al. Multiplex staining by sequential immunostaining and antibody removal on routine tissue sections. J Histochem Cytochem. 2017;65(8):431–444.
  • Krenacs T, Krenacs L, Raffeld M. Multiple antigen immunostaining procedures. Methods Mol Biol. 2010;588:281–300.
  • van den Brand M, Hoevenaars BM, Sigmans JHM, et al. Sequential immunohistochemistry: a promising new tool for the pathology laboratory. Histopathology. 2014;65(5):651–657.
  • Paulsen JD, Zeck B, Sun K, et al. Keratin 19 and mesenchymal markers for evaluation of epithelial-mesenchymal transition and stem cell niche components in primary biliary cholangitis by sequential elution-stripping multiplex immunohistochemistry. J Histotechnol. 2020;43(4):163–173.
  • Black S, Phillips D, Hickey JW, et al. CODEX multiplexed tissue imaging with DNA-conjugated antibodies. Nat Protoc. 2021;16(8):3802–3835.
  • Laberiano-Fernández C, Hernández-Ruiz S, Rojas F, et al. Best practices for technical reproducibility assessment of multiplex immunofluorescence. Front Mol Biosci. 2021;8:660202.
  • Taube JM, Akturk G, Angelo M, et al. The Society for Immunotherapy of Cancer statement on best practices for multiplex immunohistochemistry (IHC) and immunofluorescence (IF) staining and validation. J Immunother Cancer. 2020;8(1):e000155.
  • Schofer C, Weipoltshammer K, Almeder M, et al. Signal amplification at the ultrastructural level using biotinylated tyramides and immunogold detection. Histochem Cell Biol. 1997;108(4–5):313–319.
  • Hunyady B, Krempels K, Harta G, et al. Immunohistochemical signal amplification by catalyzed reporter deposition and its application in double immunostaining. J Histochem Cytochem. 1996;44(12):1353–1362.
  • Skaland I, Nordhus M, Gudlaugsson E, et al. Evaluation of 5 different labeled polymer immunohistochemical detection systems. Appl Immunohistochem Mol Morphol. 2010;18(1):90–96.
  • Warford A, Akbar H, Riberio D. Antigen retrieval, blocking, detection and visualisation systems in immunohistochemistry: a review and practical evaluation of tyramide and rolling circle amplification systems. Methods. 2014;70(1):28–33.
  • van der Loos CM. Chromogens in multiple immunohistochemical staining used for visual assessment and spectral imaging: the colorful future. J Histotechnol. 2010;33(1):31–40.
  • Day WA, Lefever MR, Ochs RL, et al. Covalently deposited dyes: a new chromogen paradigm that facilitates analysis of multiple biomarkers in situ. Lab Invest. 2017;97(1):104–113.
  • Billinton N, Knight AW. Seeing the wood through the trees: a review of techniques for distinguishing green fluorescent protein from endogenous autofluorescence. Anal Biochem. 2001;291(2):175–197.
  • Viegas MS, Martins TC, Seco F, et al. An improved and cost-effective methodology for the reduction of autofluorescence in direct immunofluorescence studies on formalin-fixed paraffin-embedded tissues. Eur J Histochem. 2007;51(1):59–66.
  • Olympus. Microscopy resource center. [cited 2022 Jun 19]; Available from: https://www.olympus-lifescience.com/en/microscope-resource
  • Zeiss. Education in microscopy and digital Imaging. 2022 [cited 2022 Jun 19]; Available from: https://zeiss-campus.magnet.fsu.edu
  • Cromey DW. Avoiding twisted pixels: ethical guidelines for the appropriate use and manipulation of scientific digital images. Sci Eng Ethics. 2010;16(4):639–667.
  • Inoue T, Yagi Y. Color standardization and optimization in whole slide imaging. Clin Diagn Pathol. 2020;4(1). DOI:10.15761/CDP.1000139
  • Pritt BS, Gibson PC, Cooper K. Digital imaging guidelines for pathology: a proposal for general and academic use. Adv Anat Pathol. 2003;10(2):96–100.
  • Sasaki A. Recent advances in the standardization of fluorescence microscopy for quantitative image analysis. Biophys Rev. 2022;14(1):33–39.
  • Yagi Y, Gilbertson JR. Digital imaging in pathology: the case for standardization. J Telemed Telecare. 2005;11(3):109–116.
  • Bandrowski A. A decade of GigaScience: what can be learned from half a million RRIDs in the scientific literature? Gigascience. 2022;11. DOI:10.1093/gigascience/giac058.
  • Charalambakis NE, Ambulos NP, Hockberger P, et al. Establishing a national strategy for shared research resources in biomedical sciences. FASEB J. 2021;35(11):e21973.
  • Kos-Braun IC, Gerlach B, Pitzer C. A survey of research quality in core facilities. Elife. 2020;9. DOI:10.7554/eLife.62212.
  • Mische SM, Fisher NC, Meyn SM, et al. A review of the scientific rigor, reproducibility, and transparency studies conducted by the ABRF research groups. J Biomol Tech. 2020;31(1):11–26.
  • Restivo L, Gerlach B, Tsoory M, et al. Towards best practices in research: role of academic core facilities. EMBO Rep. 2021;22(12):e53824.
  • Decalf J, Albert ML, Ziai J. New tools for pathology: a user’s review of a highly multiplexed method for in situ analysis of protein and RNA expression in tissue. J Pathol. 2019;247(5):650–661.
  • McGinnis LM, Ibarra‐Lopez V, Rost S, et al. Clinical and research applications of multiplexed immunohistochemistry and in situ hybridization. J Pathol. 2021;254(4):405–417.
  • Eng J, Bucher E, Hu Z, et al. A framework for multiplex imaging optimization and reproducible analysis. Commun Biol. 2022;5(1):438.
  • Bergholtz H, Carter J, Cesano A, et al. Best practices for spatial profiling for breast cancer research with the GeoMx® Digital Spatial Profiler. Cancers (Basel). 2021;13(17):4456.
  • Kakade VR, Weiss M, Cantley LG. Using imaging mass cytometry to define cell identities and interactions in human tissues. Front Physiol. 2021;12:817181.
  • Porta Siegel T, Hamm G, Bunch J, et al. Mass spectrometry imaging and integration with other imaging modalities for greater molecular understanding of biological tissues. Mol Imaging Biol. 2018;20(6):888–901.
  • Berglund L, Björling E, Oksvold P, et al. A genecentric Human Protein Atlas for expression profiles based on antibodies. Mol Cell Proteomics. 2008;7(10):2019–2027.
  • Ponten F, Jirstrom K, Uhlen M. The Human Protein Atlas – a tool for pathology. J Pathol. 2008;216(4):387–393.
  • Baker M. When antibodies mislead: the quest for validation. Nature. 2020;585(7824):313–314.
  • Babic Z, Capes-Davis A, Martone ME, et al. Incidences of problematic cell lines are lower in papers that use RRIDs to identify cell lines. Elife. 2019;8. DOI:10.7554/eLife.41676.