Improving structure and transparency in reliability evaluations of data under REACH: suggestions for a systematic method

Pages 212-241 | Received 08 Jun 2018, Accepted 19 Jul 2018, Published online: 15 Jan 2019

Figures & data

Table 1. Peer-reviewed studies included in this investigation. Studies are listed according to the reliability category assigned by the registrant: category 1 = reliable without restriction, category 2 = reliable with restriction, category 3 = not reliable, and category 4 = not assignable.

Figure 1. Overview of selected studies from each reliability category fulfilling the criteria: (1) the bibliographic reference was stated as "publication", (2) the reference could be identified, (3) the study summary referred to only one bibliographic reference, and (5) the adequacy was stated. One of the three studies in reliability category 3 was exchanged, as the information in the summary did not correspond to the study stated as the reference. Instead, another study assigned reliability category 3, but with no assigned adequacy, was included. The numbers in bold indicate the number of included studies in each category (20 in total).

Figure 2. Screenshot of the online SciRAP tool, showing part of the form for evaluating methodology quality. The evaluation is recorded by selecting a pre-defined evaluation statement (fulfilled, partially fulfilled, not fulfilled, or not determined) and justifying the assessment with a comment.

Table 2. SciRAP reporting and methodology criteria considered key for evaluating reliability based on expert judgment and “red criteria” in ToxRTool.

Table 3. Principles for categorizing studies into reliability categories 1–4 based on the SciRAP evaluation.

Table 4. Reliability category and rationale for reliability as registered in the REACH registration dossier for the 20 selected studies and resulting reliability evaluation with the SciRAP tool.

Table 5. SciRAP evaluation of reporting quality criteria for the 20 studies included in this investigation, listed according to the reliability category assigned with the SciRAP tool. Each criterion was evaluated as F = fulfilled, PF = partially fulfilled, NF = not fulfilled, ND = not determined, or NA = not applicable.

Table 6. SciRAP evaluation of methodology quality criteria for the 20 studies included in this investigation, listed according to the reliability category assigned with the SciRAP tool. Each criterion was evaluated as F = fulfilled, PF = partially fulfilled, NF = not fulfilled, ND = not determined, or NA = not applicable.