Special Section: A Collection of Articles on Opportunities and Challenges in Utilizing Real-World Data for Clinical Trials and Medical Product Development

Biostatistical Considerations When Using RWD and RWE in Clinical Studies for Regulatory Purposes: A Landscape Assessment

Pages 3-13 | Received 01 May 2020, Accepted 26 Jan 2021, Published online: 10 Mar 2021

