Alienation Allegations and Beliefs

Harman and Lorandos’ false critique of Meier et al.’s family court study

Joan S. Meier, Sean Dickson, Chris S. O’Sullivan & Leora N. Rosen
Pages 119-138 | Received 18 Oct 2021, Accepted 26 Mar 2022, Published online: 05 Jul 2022

Abstract

Jennifer Harman and Demosthenes Lorandos purport to have identified numerous methodological flaws in our 2019 study of family court outcomes in cases involving abuse and alienation allegations (“FCO study”; Meier et al., Citation2019). At least half of the supposed flaws they itemized relate to a single claim: that they were unable to access our methods and data. They treat this claimed lack of public access as evidence that our study is unreliable, while speculating about other potential flaws. Yet we note, and they acknowledge, that most of the methodological information they sought was in fact available before publication of their article. This article responds to and refutes Harman and Lorandos’ exaggerated and unfounded condemnation of our study. In addition to pointing out that the claimed lack of information would not be a methodological flaw even if true, we explain that their other criticisms are speculative, incorrect, or insignificant. We appreciate this opportunity to clarify that the important findings of the FCO study are valid and should be taken seriously by the courts and by those interested in the fairness and safety of custody decisions when there are allegations of abuse and alienation.

In a final grant report posted online in 2019 (Meier et al., Citation2019), and in a peer-reviewed publication in 2020 (Meier, Citation2020), we reported our first set of findings from the Family Court Outcomes Study (“FCO Study”), which was funded by the National Institute of Justice (“NIJ”). Using information from published decisions, this study analyzed the effects of claims of abuse and claims of parental alienation, and their interaction, on court outcomes in custody disputes between parents. We also quantified the frequency with which mothers’ and fathers’ abuse claims were credited by the court.

Our study methods were designed to avoid bias and the study design was approved by the National Institute of Justice after outside peer reviews. For the most part, the results confirmed what practitioners and protective parents have reported: courts frequently disbelieve mothers’ claims of abuse by fathers, and fathers’ cross-claims of alienation increase this disbelief while also increasing mothers’ custody losses. These findings contradict the positions of many family court professionals and advocates—particularly proponents of parental alienation—many of whom take the position that it is fathers, not mothers, who are treated unfairly in family court.

Recently Harman and Lorandos (Citation2021) asserted that the FCO Study contains “many inaccurate and misleading statements” (p. 185). They claim that there are “at least 30 conceptual and methodological problems with the design and analyses of the study that make the results and the conclusions drawn dubious at best” (p. 185). Touting their decision to post their own study’s data and analyses on the Open Science Framework (“OSF”) website, they suggest that our failure to do the same renders our research ipso facto suspect. This response, our rebuttal to Harman and Lorandos’ (Citation2021) criticisms, seeks to allay concerns about the FCO Study by correcting their many incorrect claims about it. In another forthcoming article (Meier et al., Citation2022), we detail the many methodological and statistical errors in their own study (Harman & Lorandos, Citation2021), which vitiate both their own claimed findings and their assertion that they have refuted our study.

We begin this refutation of their critique of our study by providing essential background on the FCO Study and its methods. Then we respond to Harman and Lorandos’ criticisms, showing they are either incorrect or irrelevant to the reliability of our findings.

The family court outcomes study

Troubling family court treatment of mothers and children alleging abuse by a father has been widely documented over several decades, both domestically (Khaw et al., Citation2021; Stark et al., Citation2019; Silberg & Dallam, Citation2019; Meier, Citation2010; Bemiller, Citation2008; Berg, Citation2011) and internationally (Sheehy & LaPierre, Citation2020; #The Court Said; Council of Europe, Citation2019). Our FCO Study sought to determine the extent to which these extensive reports of unfavorable judicial responses to mothers’ abuse allegations—often involving parental alienation (“PA”) crossclaims—were indicative of an objectively measurable, national pattern in the United States. We designed the FCO Study simply to describe family court decision-making quantitatively.

Expanding from a small pilot study of alienation cases only (Meier & Dickson, Citation2017), the FCO Study utilized electronically published decisions from across the United States as a source of national data, collecting opinions in all custody cases between parents (not State-initiated) involving abuse or alienation published between January 1, 2005, and December 31, 2014. Like all NIJ grant proposals that receive funding, the FCO Study proposal was funded only after recommendation by independent peer reviewers. The framework and rationales for the codes and definitions, as well as the selection of variables for the study, are all described in the Final Summary Overview (Meier et al., Citation2019) and in the NIJ’s archives (Meier, Citation2019).

As described in the Final Summary Overview report submitted to NIJ, which was reviewed by Harman and Lorandos:

[The] purpose of the Meier et al. Study (hereafter “FCO Study”) was to bring neutral empirical data to bear on…Whether and to what extent… courts are disbelieving abuse claims and removing custody from parents claiming abuse, whether and to what extent gender impacts these findings, and how crossclaims of parental alienation affect courts’ treatment of mothers’ and fathers’ abuse claims. (Meier et al., Citation2019, p. 5).

Data would be drawn from a “search for all electronically published decisions in the U.S. in which there were (i) abuse allegations and alienation allegations; (ii) abuse allegations but no alienation allegations; and (iii) alienation allegations but no abuse allegations” (Meier et al., Citation2014). The study simply tabulated information found in the published judicial opinions, i.e., allegations and outcomes by gender, type of abuse alleged by a parent and found by the court, whether alienation was alleged/found, and other objective factors.

To ensure that our search would capture all relevant published cases, the first two authors and two law graduates retained as coders researched states’ varying terminology for family abuse and, after weeks of testing searches, generated a comprehensive 11-line search string (Meier, Citation2019). This search netted over 15,000 cases, which were reviewed and triaged by the coders against the inclusion/exclusion criteria described in detail in our User Guide (User Guide Appendix A). In total, 4,338 cases met the inclusion criteria. The coding of these cases included a rigorous training period in which both coders and one of the investigators coded the same cases and discussed all discrepancies; this training period was followed by an additional period of double-coding by both coders, with investigators’ review of all discrepancies until coders were aligned. Periodic coding checks were performed, with each coder re-coding the other’s cases and the investigators reviewing and resolving any discrepancies. Full descriptions of the iterative coding training process were included in the required regular progress reports to NIJ.

Coded data were imported into Stata and variables were constructed for the planned analyses. The complete variable construction methodology is included in the posted study documentation; this material answers many of the doubts Harman and Lorandos (Citation2021) expressed about our variable construction (Meier, Citation2019). As described in the Final Summary Overview (Meier et al., Citation2019), the data were filtered to investigate two sets of questions. The first segment analyzed and compared outcomes for abuse cases with and without alienation crossclaims (2,351 cases; Meier et al., Citation2019, pp. 10, 13). The second segment (2,794 cases; Meier et al., Citation2019, p. 20) described outcomes where abuse was alleged, irrespective of alienation claims. The majority of the findings reported are simple frequencies, such as the percentage of mothers alleging one or another type of abuse who were believed or who lost custody. In addition, odds ratios were reported for purposes of particular comparisons, e.g., court responses to allegations of domestic violence vs. child abuse, court responses when alienation is and is not crossclaimed, etc. These odds ratios were tested for statistical significance.

The FCO Study’s hypotheses were contained in the proposal that was subjected to external peer-review prior to funding (Meier et al., Citation2014), and the hypotheses were not changed in reporting the findings. However, we did find some unexpected results, which were reported (Meier et al., Citation2019; Meier, Citation2020). Given the size of our dataset and the length of the study’s research period (5 years), we focused our initial reporting just on key hypotheses—i.e., the rates at which courts credit mothers’ abuse allegations or remove their custody. Those hypotheses relevant to the current discussion and our reported findings are listed below (all quoting Meier et al., Citation2019, p. 11):

  • Allegations of IPV, CA, and/or CSA by mothers with custody are correlated with loss of maternal custody and/or loss of the case (the FCO study defines custody loss as the reversal of primary physical care of children from one parent to the other);

  • Fathers’ counter-claims of parental alienation when accused of abuse are correlated with increased losses of custody and access by mothers;

  • Parental alienation labels applied to mothers are correlated with awards of custody or unsupervised access to fathers, even after judicial findings that the father committed adult or child abuse;

  • Mothers’ allegations of domestic violence are credited more frequently than mothers’ and children’s allegations of child abuse, particularly child sexual abuse;

  • Mothers’ and children’s allegations of child sexual abuse disproportionately result in custody switches to the accused father compared to other types of abuse allegations.

As is detailed in both of our publications, most of these hypotheses were confirmed by the study, which used straightforward descriptive statistics to count frequencies of outcomes in different types of cases. These data provided the first national analysis of how often courts credit different abuse allegations, remove mothers’ custody, etc.

In brief, the study found that the majority of mothers’ claims of abuse by fathers are rejected by trial courts, and that these rejections are far more frequent when mothers allege child abuse, particularly sexual abuse, than when they allege intimate partner violence (Meier et al., Citation2019, pp. 11, 20). Over one-quarter (28%) of mothers who started with primary physical care of the children lose physical custody to the alleged abuser when they report abuse to the court. In addition, when allegedly abusive fathers crossclaim alienation, rates of rejection of mothers’ abuse claims increase, as do custody reversals to the alleged abusers (Meier et al., Citation2019, pp. 15–16).

We also discovered and reported some unanticipated findings, some of which support claims of alienation theory proponents, such as the finding that in alienation cases without abuse allegations, mothers and fathers lose custody at comparable rates; and that across abuse and non-abuse cases, when courts find that a parent is an alienator, mothers and fathers lose custody at comparable rates (Meier et al., Citation2019; Meier, Citation2020). Harman and Lorandos do not acknowledge these findings of gender parity, and even incorrectly claim that their similar finding refutes our study (Harman & Lorandos, Citation2021, p. 184).

Below, we respond first to Harman and Lorandos’ criticisms of the contents of the Final Summary Overview as a document. Then we rebut their claims that the report of our study is misleading. Finally, we explain why the 30 purported methodological flaws they claim to have found in our study are either mistaken or insignificant.

Harman and Lorandos’ criticisms of the FCO study

Harman and Lorandos often misstate our hypotheses while purporting to refute them. For instance, they assert that they tested the Meier et al. “hypotheses related to PA” (2021, p. 191) when they formulated their six hypotheses “that specifically examined whether there are gender differences in judicial outcomes” (p. 192; emphasis added). In fact, the bulk of our hypotheses and findings focused solely on outcomes for mothers alleging abuse, whether or not alienation was crossclaimed. We did make some preliminary gender comparisons (Meier et al., Citation2019, p. 18), including in alienation cases, finding some differences and some gender parity in specific contexts (pp. 18–19). We made more gender findings in the all-abuse analyses—but those did not focus on alienation (p. 18, n. 22). Thus, Harman and Lorandos did not test our hypotheses.

Criticisms of content of Final Summary Overview

Absence of description of hypotheses and of study limitations

Harman and Lorandos’ article discusses our Final Summary Overview, a report required by the National Institute of Justice (“NIJ”), rather than our published, peer-reviewed article on the FCO Study (Meier, Citation2020). The report was posted on the Social Science Research Network, an academic site established for posting of pre-publication “working papers” (Jensen, Citation2017). NIJ requires specific content and page limits for a Final Summary Overview (NIJ Research, Evaluation and Award Grant Requirements, Final Summary Overview). The required elements do not include those that Harman and Lorandos criticize us for omitting, such as study limitations (Harman & Lorandos, Citation2021, pp. 185, 205) and hypotheses (p. 187). Both of those were, however, detailed in our proposal (Meier et al., Citation2014). Study limitations were also described in Meier’s (Citation2020) published article.

Policy recommendations

Harman and Lorandos (Citation2021) express concern that the FCO study’s findings might influence policymakers (pp. 186, 191, 205). They claim our policy recommendations are illegitimate, on the grounds that they:

[go] well beyond their limited data to suggest recommendations that “warrant action,” which is a woozling [sic] strategy that entails making policy recommendations by relying on one or a few studies and ignoring other relevant research on the topic (Harman & Lorandos, Citation2021, p. 186, citing a single critic of the abuse field who invented the term “woozle”).

This criticism contradicts standard research practices. Recommendations for policy and practice were required for the Final Summary Overview by the NIJ (U.S. National Institute of Justice, Citation2017). Such recommendations are also often required by scholarly journals such as those published by the American Psychological Association (APA), including the journal in which Harman and Lorandos published their study (Lamb et al., Citation2021, pp. 293–294). The same is true of the APA’s Journal Article Reporting Standards (JARS) (Appelbaum et al., Citation2018, pp. 6, 8; Table 2, p. 12; Table 9, p. 23).

Table 1. Response to Harman & Lorandos’ List of “30 Methodological Flaws” in FCO Study. NB: Items with an asterisk are variations on a single theme – lack of information – which is not a flaw in methodology.

Absence of citations

Harman and Lorandos (Citation2021, p. 186) claim that Meier et al. use a false “consensus effect” when making statements about others’ positions without citation, such as our statement that many protective parents and their allies report destructive outcomes in family courts. However, reports from protective parents, researchers, and others were amply cited in the original proposal (Meier et al., Citation2014), and citations were not required for the Final Summary Overview. Our peer-reviewed published article (Meier, Citation2020) backs up every such statement with citations.

False claims of distortions

Harman and Lorandos (Citation2021) repeatedly invoke the term “woozle” to characterize the FCO Study. Another definition they employ for the term is a distorted “research claim” that can be “used to mislead professionals and others” (p. 185). The suggestion that our study makes distorted research claims is unfounded, as detailed below.

Claimed distortion 1

Harman and Lorandos complain (2021, p. 185) that media reports on the study that fail to describe its limitations distort the research for the lay public. This is a criticism of the media’s summary of an interview, rather than of our study.

Claimed distortion 2

Harman and Lorandos maintain that Meier et al. “misrepresent” the work of Richard Gardner when we state in passing that PAS was created primarily as a rationale for rejecting child sexual abuse claims (2021, p. 185, quoting Meier et al., p. 14). It is well established, however, in both case law and the literature, that the PAS construct was closely tied to, and widely used to refute, child sexual abuse claims (In re Fortin, 2000; Talan, Citation2003; Kelly & Johnston, Citation2001; Lavietes & June, Citation2003; Wikipedia, Citation2021). Gardner’s early publications are explicit about the linkage between child sexual abuse claims and the function of PAS, starting with the title of his first book, “[t]he Parental Alienation Syndrome and the differentiation between fabricated and genuine child sex abuse” (Gardner, Citation1987). Gardner created the “Sexual Abuse Legitimacy Scale” around the same time as he coined PAS, and both constructs used overlapping criteria (Faller, Citation1998). Gardner stated that “the frequency of false [child sexual abuse] allegations is quite high” when PAS is present (Citation1992a, p. 126). Gardner claimed that “irate” mothers have found false sexual abuse allegations to be powerful weapons against their “despised” husbands (Citation1991a, p. 24), that PAS is responsible for most accusations of child sexual abuse that are raised during custody disputes, and that “in custody litigation the vast majority of children who profess sexual abuse are fabricators” (Citation1987, p. 274). Gardner urged parents and courts to ignore possibly alienated children’s “shrieks and claims of maltreatment” (Citation1991b, p. 18). Thus, it is undeniable that PAS was intended and used to impeach claims of child sexual abuse, and to propound the idea that mothers frequently make false CSA allegations in order to harm fathers.

Claimed distortion 3

Harman and Lorandos (Citation2021) assert that “Meier and colleagues (2019)” align themselves with critics [of PAS] by supporting the belief that “all claims of abuse made by children or ‘protective parents’ should be believed” (p. 185). They rely for this vague innuendo on a generic claim by a critic of the abuse field (p. 185, citing Rand, 2013). There are several problems here. First, the FCO study (Meier et al., Citation2019) objectively coded what the courts found and decided. What [unidentified] advocates supposedly believe is and was irrelevant. Second, the authors (Harman & Lorandos, Citation2021) cite a page in our report where we specifically explain the opposite: that the study cannot and does not independently determine the truth of any allegations (Meier et al., Citation2019, p. 11). That same page of our report also cites research finding that child sexual abuse allegations are likely valid 50–70% of the time (Faller, Citation1998; Trocmé & Bala, Citation2005), and such research provides objective reason to suspect that courts’ acceptance of only 15% of such claims (Meier et al., Citation2019, p. 11) is dangerously low. This is a far cry from our taking the position that all allegations of abuse are true.

Harman and Lorandos (Citation2021, p. 186) also point to our recommendation that Guardians Ad Litem (“GALs”) and custody evaluators be trained on misconceptions about alienation and abuse. This recommendation is directly responsive to our data, which show that professionals have negative impacts on mothers’, but not fathers’, outcomes when the parent alleges abuse (Meier et al., Citation2019, pp. 22–24). Much other research has found similar bias (Saunders et al., Citation2011); recognition of the problem has led some states to require such training (Colorado General Assembly, Citation2021).

Claimed distortion 4

The authors claim (p. 186) that Meier et al. woozle the reader into believing that some of their findings were statistically significant when they were not, such as highlighting in bold numerous results for which there were no odds ratios presented (and thus not statistically significant) (see p. 19 footnote).

Harman and Lorandos do not cite a single example of the “numerous results” that were allegedly improperly bolded. The only bolded statements in the report that are not connected to odds ratios (which are included only when statistically significant) are occasional descriptive percentages that we consider striking, e.g., “[i]n cases with credited child physical abuse claims [against fathers], fathers win custody 19% of the time” (Meier et al., Citation2019, p. 13) (emphasis in original). Statistical significance is not applicable to a frequency; nor does our report anywhere suggest that bolded information is always statistically significant. Harman and Lorandos’ reference to a footnote on p. 19 of our report is in error and does not support their assertion (Meier et al., Citation2019, p. 19). And while the bulk of our findings consist of simple frequencies, p-values and statistical significance are discussed more explicitly, as appropriate, in the published peer-reviewed article (Meier, Citation2020, pp. 103–14, notes 11, 16).

Claimed distortion 5

Harman and Lorandos assert (p. 191) that Meier et al. mislead when describing odds ratios as likelihoods or probabilities, because when the frequency of the event under investigation is low, odds ratios can make the outcome seem more common than it actually is.

This criticism seems to overlook the nature of our study data. In fact, particularly in low frequency events, “odds will approximate to the risks and the odds ratio will approximate to the relative risk” (Davies et al., Citation1998). Because the FCO study dataset is a complete census of relevant cases, as noted above, and not a sample (as is Harman and Lorandos’ dataset), the odds ratios in our study reflect actual differences between groups. Therefore, our reports of outcomes such as frequencies of custody losses or crediting of abuse are correct.
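The arithmetic behind this point is simple. Writing a generic 2 × 2 comparison, with a and b counting cases and non-cases in one group and c and d counting cases and non-cases in the other (these labels are illustrative, not the study’s variable names):

$$\mathrm{RR} = \frac{a/(a+b)}{c/(c+d)}, \qquad \mathrm{OR} = \frac{a/b}{c/d} = \frac{ad}{bc}.$$

When the outcome is infrequent, a is small relative to b and c is small relative to d, so a + b ≈ b and c + d ≈ d, and therefore OR ≈ RR.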

The 30 alleged “methodological flaws”

Harman and Lorandos claim to have identified “at least 30 conceptual and methodological problems” in the Meier et al. (Citation2019) study. Upon close review, the “30 problems” they list consist primarily of a single repeated issue: that they did not obtain documentation of our methods on the schedule and in the manner they wished, and that we did not post our study on the Open Science Framework. The other items are incorrect and/or insignificant. We respond below to these claims by topic rather than individually; Table 1, above, however, responds to each seriatim.

Lack of “transparency.”

Of the thirty items in Harman and Lorandos’ list, 17 (#1–5, 9, 13, 15–18, 21, 22, 24–27) actually repeat a single criticism: that our Final Summary Overview lacked details about the FCO Study methodology. There are three problems with this criticism. First, lack of information does not equate to flaws in the study or its methods; conversely, transparency is no guarantee of quality or accuracy, as we explicate in our forthcoming article (Meier et al., Citation2022). Moreover, to the extent the authors did not know the FCO study’s methods, they had no grounds for criticizing them.

Second, the absence of details about our methods in NIJ’s required Final Summary Overview again tracks NIJ’s requirements for reporting funded research. These methods had been detailed in the proposal and are briefly described in the published article (Meier, Citation2020); repeating them in the summary report was neither required nor appropriate. Third, the criticism that the study was not pre-published on the Open Science Framework is misplaced. OSF was not even launched until 2013, shortly before the FCO Study began. NIJ was making its funded studies’ materials public long before OSF was created. The agency’s standards, requirements, advance peer reviews, and guidelines constitute a far more credible and trustworthy process than posting on OSF, which provides no guarantee of anyone’s review, let alone of quality.

Harman and Lorandos (Citation2021, p. 191) complain that Meier failed to supply the dataset and methodology when they requested it by email. But their own documentation of these communications (posted on OSF) reveals that no one representing themselves as working with Harman or Lorandos ever contacted us. Rather, an attorney who represented herself as being in private practice stated that she was interested in our data because it would be helpful to her clients. She did not appear to have any research capacity to accurately analyze data. Harman and Lorandos’ article reveals that this individual is a “Research Attorney” with Lorandos’ PsychLaw organization (Harman & Lorandos, Citation2021, p. 184, author footnote). Lacking that information, and not trusting her ability to do quantitative analyses, Meier referred her to the National Institute of Justice’s archives where the dataset and methodology were being posted for retrieval by researchers.

Harman and Lorandos also criticize us for the delay in posting of the FCO Study’s data (2021, p. 191). The National Institute of Justice archives, which are not in our control, were slow in posting the material for unknown reasons. Nonetheless, most of the documentation was available by late August 2020, four months before their article was posted online. The “User Guide” includes Appendices detailing the FCO Study’s exclusions and inclusions, including the Coding Manual with code definitions and guidance to coders; the search string; and the Codebook with raw frequencies for each of the variables. Harman and Lorandos reference their review of some of this material (2021, pp. 190–191, notes a–e) while contradictorily claiming elsewhere (cf. pp. 187–189, 191) that the information was not available.

Other criticisms of our methods

Most of their remaining criticisms fall into two categories: speculative and incorrect claims, and claims that are minor or unrelated to the reliability of the findings. The following discussion tracks Harman and Lorandos’ discussion and list of 30 “flaws.” Item(s) from their list are noted in parentheses.

Claim of “cherry-picking” (Table 1, #2)

Harman and Lorandos assert that “[o]ne of the most striking problems with Meier et al. (Citation2019) research paper is how the legal cases for two data sets were selected, leading to what may be a “cherry-picked” sample that is stacked in favor of the hypotheses that were described” (emphasis added; Harman & Lorandos, Citation2021, p. 185). Here they presume a “striking problem” based on mere speculation (“may”).

Harman and Lorandos point to our reference to the “cleanest” possible dataset as an indication of cherry-picking (2021, p. 186). In fact, our exclusions were applied solely to avoid confounding the core questions in the study by inclusion of extraneous factors. Our primary analyses consisted of comparisons of cases involving mothers’ abuse claims in cases where fathers did and did not crossclaim alienation, which are those most relevant to our key hypotheses (Meier et al., Citation2019, pp. 10, 13). As the report explains, we “excluded from the first set of analyses cases with ‘third party’ victims (e.g., a new or old partner), ‘mutual abuse’ cases, ‘non-specific’ abuse claims, and ‘AKA’ claims [claims that suggest alienation but don’t use the word]” (Meier et al., Citation2019, p. 7 and n. 7). Each of these factors would have obscured the core question—the impact of abuse and alienation allegations on court decisions. For instance, mutual abuse cases make it impossible to assess whether a mother’s or father’s individual abuse claim—or a party’s alienation crossclaim—is associated with certain outcomes, as opposed to the other party’s abuse claim. Same-sex cases complicate any gender analysis. Incarceration and relocation matters frame adjudications in particular ways, raising additional significant concerns that are not present in simple custody/visitation matters involving abuse and alienation claims (Meier, Citation2019, p. 9; Meier et al., Citation2019, pp. 6–7). None of these exclusions implicates any systemic bias in terms of our hypotheses.

We also analyzed an expanded “all abuse” population of cases containing abuse allegations, this time including cases where a parent was accused of abusing a third party (from prior or new relationships) outside the family at issue in the litigation (but continuing to exclude the other types of cases excluded from the analytic dataset; Meier et al., Citation2019, pp. 6–7). The larger all-abuse population enabled us to make some limited gender comparisons, examine the effects of guardians ad litem and evaluators, and describe outcomes for litigants without regard to whether there were alienation claims. In short, these two different sets of analyses allowed us to answer particular research questions without noise from extraneous factors that would inevitably have cast doubt on the findings.

Possible duplication/appellate/trial opinions (Table 1, #14)

Harman and Lorandos erroneously criticize the FCO Study for including trial court as well as appellate court opinions, speculating that we may have double-counted cases (2021, p. 186). However, our posted case triage and coding policies ensured that, “if there was more than one opinion in a case, only the last known opinion was included” (User Guide Appendix A). This policy was employed for any multiple opinions, trial or appellate, concerning the same parties. Since we were using both trial and appellate court opinions to analyze only trial court decisions, it would not have made sense to exclude trial-level opinions entirely, as Harman and Lorandos did. Our dataset netted several hundred electronically published trial court opinions in cases that did not go on to appeal, although the vast majority of published opinions are from appellate decisions (which describe the trial court decision).

Coders (Table 1, ## 5, 9, 15)

Harman and Lorandos (Citation2021) count a single question (who the FCO study coders were) three separate times in their list of 30 supposed flaws. The fact that two law graduates did all of the triaging and coding was not contained in the Final Summary Overview because it was not among NIJ’s requirements. It was described in the published article, which does not seem to have been reviewed by Harman and Lorandos at the time of their critique (Meier, Citation2020, p. 94). In our experience, reading and interpreting court opinions requires some legal training and understanding of the issues in the case. Without some understanding of the dynamics of custody, abuse, and alienation claims, it can be difficult to understand and code judicial decisions accurately, as it appears it was for Harman and Lorandos’ undergraduate (and other) coders: the authors belatedly discovered major coding errors (Harman & Lorandos, Citation2021, p. 194 and note 4, p. 196 and note 5).

Lack of specificity about coding processes (## 5, 9, 15–18)

Harman and Lorandos (Citation2021, p. 186) allege that the Meier et al. (Citation2019) study provided no coding details or methods used to ensure accuracy, including how we coded multiple allegations of abuse.

Once again, the paper they criticized was a summary report to a funder, not a published journal article; this level of detail did not belong in that report. And contrary to their claim, the coders, definitions, and coding process are described in detail in the Coding Manual contained in the archived User Guide, which was posted by the Archives in August 2020 (Meier, Citation2019, Appendix B). The Final Summary Overview clearly delineates how we coded multiple allegations of abuse: we coded distinct types of abuse allegations (adult abuse, child physical or sexual abuse, and mixed types of abuse claims).

Harman and Lorandos (Citation2021) further misconstrue the FCO study’s clearly defined “corroboration” code when they write “Meier et al. also stated that corroborations of abuse in their coding included arrests, protection orders, and prosecutions, without considering the possibility that the parent may later have been found innocent of their allegations” (p. 186; emphasis added). Of course, corroborating evidence does not determine innocence or guilt; corroborative evidence is additional evidence that most courts consider along with witness testimony to determine whether an allegation is true. Harman and Lorandos wrongly conflate our coding of corroborative evidence with our analysis of credited abuse. As explained in the User Guide, allegations were coded as “credited” only when a court deemed them true, or there was an admission or criminal conviction (Meier, Citation2019, p. 5). As the report notes, corroborating evidence resulted in a slight increase in the frequency of courts’ crediting of mothers’ abuse allegations (Meier et al., Citation2019, p. 21).

Data analysis: lack of information

Harman and Lorandos’ primary criticism of the FCO Study’s data analysis is that they lacked information on how it was done (2021, p. 191). In fact, a great deal of information was available in August 2020, before they published their article, when the NIJ Archives posted our Secondary Data Analysis User Guide. This material, among other things, contains the full Stata code used to create all variables used in the analyses in the report. We did not provide the code to generate the 2 × 2 crosstab tables that underlie the reported frequencies and odds ratios because they are easily generated from the raw data or coded variables. The authors’ repeated references to our failure to describe statistical “models” (2021, pp. 190–191) erroneously treat our frequencies like their logistic regressions, which were required for their study due to their own sampling approach. No “models” are needed to calculate numbers and percentages of different outcomes from the census of cases that comprises our dataset.
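As an illustration only, and not the study’s actual Stata code, the following minimal Python sketch shows how the kind of descriptive frequency and odds ratio reported in the FCO Study can be generated directly from a 2 × 2 crosstab of coded cases, with no statistical model. All variable names and counts here are hypothetical, and the sketch assumes only the pandas library.

    # Minimal sketch: a frequency and an odds ratio from a 2 x 2 crosstab.
    # Hypothetical data; not the FCO Study's variables, code, or results.
    import math
    import pandas as pd

    cases = pd.DataFrame({
        "alienation_crossclaim": [1, 1, 0, 0, 1, 0, 0, 1, 0, 0],
        "mother_lost_custody":   [1, 0, 0, 0, 1, 1, 0, 1, 0, 0],
    })

    # 2 x 2 crosstab: rows = crossclaim (0/1), columns = custody loss (0/1).
    table = pd.crosstab(cases["alienation_crossclaim"], cases["mother_lost_custody"])

    # Simple descriptive frequency: custody-loss rate among crossclaim cases.
    loss_rate = cases.loc[cases["alienation_crossclaim"] == 1, "mother_lost_custody"].mean()
    print(f"Custody-loss rate with alienation crossclaim: {loss_rate:.0%}")

    # Odds ratio and Wald 95% confidence interval computed from the four cells.
    a, b = table.loc[1, 1], table.loc[1, 0]   # crossclaim: lost / kept custody
    c, d = table.loc[0, 1], table.loc[0, 0]   # no crossclaim: lost / kept custody
    odds_ratio = (a / b) / (c / d)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    log_or = math.log(odds_ratio)
    low, high = math.exp(log_or - 1.96 * se), math.exp(log_or + 1.96 * se)
    print(f"OR = {odds_ratio:.2f}, 95% CI [{low:.2f}, {high:.2f}]")

Whether performed in Stata, in Python, or by hand, this computation is the same counting exercise; nothing in it depends on a fitted model.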

Post-hoc hypotheses

The authors criticize us for supposedly adopting a “post hoc” hypothesis and reporting findings on it when, in fact, no such hypothesis was ever proposed and no such findings were ever reported; they also provide no citation for their incorrect description of our findings (Harman & Lorandos, Citation2021, pp. 190, 192). We did not “report testing” the hypothesis they describe, nor did we make findings on the subject.

We did report unexpected and un-hypothesized findings, including two findings that contradict our own hypotheses by showing gender equality rather than bias against mothers (Meier et al., Citation2019, p. 19). These findings involved simple frequencies and/or odds ratios, not regression analyses. They also contradict complaints from the abuse field and support the alienation field, making them a surprising target for these authors’ criticism. At no time did we articulate these unexpected findings as “hypotheses” that had been confirmed by the study.

We doubt that the critics of “HARKing” (hypothesizing after the results are known, misrepresented as predictions made beforehand) are concerned with transparent reports of purely descriptive statistics such as these. The primary concern in the literature about HARKing is not that it produces false or distorted findings, but that it can undermine the replicability of a given set of findings or the clarity of the scientific process (Kerr, Citation1998). Critics do not even agree that all post hoc hypotheses are either illegitimate or bad for science; some recommend that HARKing be permissible so long as it is explicit and transparent (Rubin, Citation2017).

Improper reporting of statistical tests (Table 1, ## 23–26)

Harman and Lorandos’ primary criticisms of our statistical analyses focus on a purported failure to report our regression “models” (Harman & Lorandos, Citation2021, pp. 190–191). As noted in the Final Summary Overview (Meier et al., Citation2019, pp. 8–9), the majority of our reported findings—simple frequencies and odds ratios—did not involve regressions or models. We conducted preliminary, limited multivariate comparisons to assess whether there were systematic differences between cases that were appealed and those that were not. We did not report the results or methodology of those analyses because they addressed a side issue that was not one of our hypotheses. We plan to report more detailed regression analyses on this and other matters in future publications.

They also assert that we failed to report the p-value of .05 that was used in our odds ratio calculations. We believe most readers understand that we used the general convention of a .05 threshold, especially given that we state that we reported odds ratios only if significant, referencing both p-values and confidence intervals, and noting exceptions where the p-value threshold of .05 was not met (Meier et al., Citation2019, p. 13, n. 13; p. 19, n. 25). In context, we believe Harman and Lorandos’ criticism is, at best, a technicality.
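For readers who want the statistical background (this is a general point, not a description of the study’s specific computations), the .05 threshold and a 95% confidence interval for an odds ratio convey the same information under the usual Wald approximation. With cell counts a, b, c, and d in a 2 × 2 table:

$$\mathrm{SE}\big[\ln(\mathrm{OR})\big] = \sqrt{\frac{1}{a} + \frac{1}{b} + \frac{1}{c} + \frac{1}{d}}, \qquad 95\%\ \mathrm{CI} = \exp\big(\ln(\mathrm{OR}) \pm 1.96 \times \mathrm{SE}\big),$$

and this interval excludes OR = 1 exactly when |ln(OR)|/SE > 1.96, i.e., when the two-sided Wald p-value is below .05. Reporting an odds ratio together with its confidence interval therefore conveys whether the conventional threshold was met.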

Last, we note that their list of 30 supposed flaws also includes a number of items that do not accurately depict the FCO study report or the study. See Table 1 for responses to each of the 30 items, including some not discussed in the body of this article.

Conclusion

The discussion above demonstrates that Harman and Lorandos repeatedly criticize the FCO Study based on incorrect speculations and misstatements about our methodology and its availability. Like all research, the FCO study was not perfect, but methodological flaws that legitimately cast doubt on the validity of the FCO study’s findings have not, to date, been identified. Those findings report objective information about courts’ responses to differing allegations, in the form of simple descriptive percentages of findings and outcomes among different categories of cases, along with odds ratios when a comparison is worthwhile and statistically significant. The fact that the dataset is a census of all electronically published opinions on the matters of interest reinforces the importance and reliability of these findings.

We hope this rebuttal clarifies the lack of substance to Harman and Lorandos’ (Citation2021) critique, while substantiating the credibility of our straightforward findings. We are troubled that the invocation of a “list of 30” and use of the pejorative term “woozling,” accompanied by an array of technical jargon, can combine to create a veneer of seriousness, especially when few if any readers can be expected to dig deeply enough to discern the falsity of these claims. Therefore, we urge readers to review the Final Summary Overview and the peer-reviewed article about the study, as well as our forthcoming articles, both to resolve any lingering doubts and to learn critical information about custody court adjudications. We believe that we have clearly described an objective portrait of court findings and decisions in cases involving abuse and alienation allegations. We welcome good-faith comments or questions about the meaning and validity of those findings, and we are glad to address any remaining confusion about the foregoing. In the interests of children and parents, as well as professionals and courts, we hope future engagement on these issues will generate more light and less heat.

Acknowledgments

The authors are grateful to the Journal of Family Trauma, Child Custody and Child Development and their editors for their interest in and assistance with this article. We also appreciate the professional staff at the National Institute of Justice for their assistance with and supervision of the study, as well as Mallory Martin, Alex Tway, and Jeffrey Hayes, who did heroic work on the study but are not co-authors of this article.

Disclosure statement

The authors were awarded a grant from the National Institute of Justice to perform the study secondarily discussed herein.

Additional information

Notes on contributors

Joan S. Meier

Joan S. Meier, J.D., is the National Family Violence Law Center Professor of Clinical Law and Founder and Director of the National Family Violence Law Center at George Washington University Law School, Washington D.C.

Sean Dickson

Sean Dickson, J.D., MPH, is Director of Health Policy at West Health Policy Center, Washington, D.C. Dickson’s work on this article is in his personal capacity and does not reflect the views of the West Health Policy Center.

Chris S. O’Sullivan

Chris S. O’Sullivan, PhD, is a private consultant on domestic violence, human trafficking and sexual assault, involved in research and evaluation, training and program development.

Leora N. Rosen

Leora N. Rosen, PhD, MPH, was a Senior Social Science Analyst at the U.S. National Institute of Justice from 1998 to 2007 and is now a private consultant on research and evaluation.

References