How to assess and take into account trend in single-case experimental design data

Pages 388–429 | Received 02 Sep 2022, Accepted 07 Mar 2023, Published online: 24 Mar 2023
