Discussion Note

Incorrigible Science and Doctrinal Pseudoscience


ABSTRACT

I respond to Sven Ove Hansson’s [2020. “Disciplines, Doctrines, and Deviant Science.” International Studies in the Philosophy of Science 33 (1): 43–52. doi:10.1080/02698595.2020.1831258] discussion note on my (Letrud 2019) critique of his (2013) pseudoscience definition. My critique addressed what I considered to be issues with his choice of definiendum, the efficiency of the definition for debunking pseudoscience, and a problematic extensional overlap with bad science. I attempted to solve these issues by proposing some modifications to his definition. Here I address the four main points of the discussion: whether the primary definiendum ought to be ‘pseudoscience’ or ‘pseudoscientific statement’ (I make a moderate case for ‘pseudoscience’); whether ‘discipline’ is an apt category for the definiens (it is, extensionally); how to go about debunking pseudoscience (it is complicated); and, perhaps most importantly, whether Hansson’s definition of pseudoscientific statement subsumes examples of bad science, and thus science. I present a case study of efforts at correcting unreliable models proliferating in the research literature. This case demonstrates how bad science can satisfy Hansson’s criteria for pseudoscientific statement, including the criterion of deviant doctrine.

1. Hansson’s Definition of Pseudoscience

The pseudoscience demarcation project seeks to demarcate science from a particular group of nonsciences that feign scientific status and authority or present themselves as correctives or alternatives to science proper. This is one of two science demarcation problems discussed in the philosophical literature. The other problem is the general demarcation between science and nonscience. The two problems are sometimes approached as the same problem (Hansson 2021). But ‘pseudoscience’ is more normative than ‘nonscience’, and it makes good sense to address the science–pseudoscience demarcation separately from the science–nonscience demarcation. First, unlike ‘nonscience’, ‘pseudoscience’ is a highly derogatory term, and in theory this means that a line drawn between science and pseudoscience need not overlap with a less normative line drawn between science and nonscience (bad science being one potential area of contention: both bad science and pseudoscience would be expected to fail to meet scientific standards). Second, a pseudoscience demarcation is expected to inform practical and political deliberations on issues such as the public funding of alternative medical treatments, or the teaching of creationism in schools. There is thus a stronger controversy about, and sense of urgency for, a science–pseudoscience demarcation.

Pseudoscience has over the years become a subject of its own (Hirvonen and Karisto 2022), and an area for philosophical study (Pigliucci and Boudry 2013). Hansson has developed a prominent definition of pseudoscience, which he revisited and revised in 2013:

A statement is pseudoscientific if and only if it satisfies the following three criteria:

1. It pertains to an issue within the domains of science in the broad sense (the criterion of scientific domain).

2. It suffers from such a severe lack of reliability that it cannot at all be trusted (the criterion of unreliability).

3. It is part of a doctrine whose major proponents try to create the impression that it represents the most reliable knowledge on its subject matter (the criterion of deviant doctrine). (2013, 70–71)

My first point of critique (Letrud 2019, 4) was that this definition was, strictly speaking, not a definition of ‘pseudoscience’, but of ‘pseudoscientific statement’. Hansson concurred and promptly proposed a definition of ‘pseudoscience’:

A doctrine is a pseudoscience if and only if it satisfies the following two conditions:

(A) It includes at least one statement which (A1) pertains to an issue within the domains of science in the broad sense (the criterion of scientific domain), and (A2) suffers from such a severe lack of reliability that it cannot at all be trusted (the criterion of unreliability).

(B) Its major proponents try to create the impression that it represents the most reliable knowledge on its subject matter (the criterion of pretence) (2020, 49–50).

This definition follows the first definition closely, so I shall focus on the definition of pseudoscientific statement. In the following I shall elaborate, moderate and expand my original criticisms.

2. Pseudoscience or Pseudoscientific Statements

I suggested that the noun ‘pseudoscience’ ought to be the primary definiendum, and that the adjective ‘pseudoscientific’ (and phrases such as ‘pseudoscientific statement’) should be derived from the noun (Letrud 2019, 4). Hansson prefers to start with a definition of ‘pseudoscientific statement’ from which he can construct definitions of other concepts in the pseudoscience cluster (such as ‘pseudoscience’ and ‘pseudoscientist’). He suggests that this has a strategic advantage:

… pseudoscientific statements can be seen as the smallest building-blocks of pseudosciences. It is often a good strategy to get the components right before proceeding to define more complex structures. (2020, 44)

This bottom-up strategy works well for several issues. But a top-down strategy could accentuate other important aspects of pseudoscience.

His second rationale for using ‘pseudoscientific statement’ as the primary definiendum is continuity and comparability in the debate (2020, 44). My concern is that this argument favours continuity for continuity’s sake, and sticking to this approach seems unnecessarily restrictive.

Admittedly, these are lesser points of disagreement. I shall, however, raise one specific practical issue with using ‘pseudoscientific statement’ as the primary definiendum in the way Hansson does: his definition explicates ‘pseudoscientific statement’ in a way that is not fully compatible with ordinary usage. This makes the definition less suitable for empirical studies. Normally, high precision is desirable when stipulating a definition, and the scientific need for a precise terminology can trump the established usage of the term (Carnap 1962). But if the object to be studied is the usage of the term, conservativeness ought to trump precision.

‘Pseudoscientific’ is ambiguous in a similar way to the adjective ‘scientific’. ‘Scientific’ is used in (at least) two ways:[1] one stating that something is part of, or pertains to, science. Examples include ‘scientific theory’ and ‘scientific method’. We can refer to this as ‘scientific’ in a Part-of sense. But there is also a second sense, as Resembling-science. Common usage includes being systematic, careful and methodical, e.g. ‘I try to arrange things in some kind of a system, but I'm not very scientific about it.’ (Cambridge Dictionary 2023).

Similarly, ‘pseudoscientific’ can be used in (at least) two ways, one as Part-of-pseudoscience, the other as Resembling-pseudoscience. A pseudoscientific statement can thus be a statement that is part of a pseudoscience, or a statement that resembles pseudoscience. Two cinematic examples of Resembling-pseudoscience statements are:

It looks like the neutrinos coming from the sun have mutated into a new kind of nuclear particle. They're heating up the earth's core and suddenly act like microwaves. (Emmerich and Kloser 2009)

While these statements are not part of any pseudoscience (nor even claimed to be true in real life), they can be called pseudoscientific because they are ridiculous scientific-sounding statements, and thus resemble pseudosciences such as Quantum Healing:

Each healthy cell of human body emit photon, disease is an altered emission of photons. Inside and outside of our body consists of energy. Photons from our cells jump on to others and counter photons may be vibrating at other place is known as quantum entanglement and quantum tunnelling. (Shrihari 2017, 1)

Hansson explicates the phrase ‘pseudoscientific statement’ to convey a particular form of the Part-of-pseudoscience sense (‘It is part of a doctrine whose major proponents try to create the impression that it represents the most reliable knowledge on its subject matter’ (2020, 44)), thereby excluding the Resembling-pseudoscience sense. This leaves out a common usage of ‘pseudoscientific statement’, and it could potentially skew how we talk about these things. One specific problem is that using this definition for sociological studies of what laymen, scientists or philosophers consider to be pseudosciences could distort the findings. While the daily usage of ‘pseudoscientific statement’ may involve both meanings, the usages recorded will all be treated as if in the Part-of-pseudoscience sense. This could for instance inflate an observed extensional consensus on what phenomena are pseudosciences, making it appear more substantial than it really is. However, the problem can possibly be solved by splitting the concept into one for Resembling-pseudoscience and one for Part-of-pseudoscience, and introducing new definitions and terms for each of them (Hansson 2006, 7–8), e.g. ‘pseudoscientific-R’ and ‘pseudoscientific-P’.

3. Disciplines, Doctrines and Areas of Study

Attempting to make Hansson’s definition a more efficient tool for identifying and debunking pseudoscience, I argued that rather than targeting numerous standalone statements, the definition should go after whole disciplines (Letrud 2019, 5–6). By criticising core statements (‘core statements are pivotal for the identity of their epistemic disciplines, in that one cannot reject them and still be regarded as a supporter of the discipline, at least not in its present form’ (Letrud 2019, 8)), one could effectively criticise the discipline and thereby a whole range of flawed statements in one go. Hansson, conversely, considers ‘discipline’ extensionally inept, for the sciences as well as for the pseudosciences. Some disciplines, such as educational studies and biology, are purely area-defined (‘studies of … ’) and thus lack core statements: ‘An area-defined discipline, such as biology, does not have any core statements, in the sense of defining statements that are crucial for the demarcation of the discipline.’ (2020, 47). Furthermore, some pseudosciences are not even disciplines: Lysenkoism is just a doctrine, and will thus fall outside a definition based on a discipline category.

I agree that a definition of pseudoscience based on a discipline category will likely fail to include area-defined studies as well as standalone doctrines. However, area-defined studies and standalone doctrines can still be referred to as ‘pseudoscientific-R’, i.e. as Resembling pseudoscience, even though they are not pseudoscience disciplines, or parts of a pseudoscience discipline. They just need to resemble pseudoscience, which they arguably do.

4. Debunking Pseudoscience

While the discipline approach is likely compatible with the established usage of ‘pseudoscience’ and ‘pseudoscientific’, it may turn out to be less useful for countering pseudoscientific beliefs than I intended. Hansson pointed out that the discipline-based pseudoscience concept is too narrow to function as an overall debunking strategy, since it can only be applied to pseudoscience disciplines. This leaves the discipline approach less efficient, and I recognise that it will need to be supplemented with criticisms of several non-core statements.

As mentioned, I sought to ease the practical challenge of debunking pseudosciences by addressing them as disciplines. Criticising the plethora of homeopathic cures one by one would be extremely time-consuming and very likely an uphill battle. Instead, I argued, combatting pseudosciences by criticising core statements that are essential to the identity of the discipline could implicitly criticise the other statements associated with the pseudoscience. Homeopathy has a set of speculative core statements, for instance that a heavily diluted substance leaves an imprint in the water even though no molecules of it remain. Addressing these core statements could indirectly weaken public belief in claims about the efficacy of several homeopathic cures. But Hansson holds that this approach will be inefficient:

Such a strategy may seem to be efficient from an intellectual point of view, but unfortunately, it does not work from a communicative point of view. For instance, homeopaths usually do not approach the public with claims about the efficacy of extreme dilutions. To the contrary, they tend to omit information about the production process, and instead claim that the efficacy of specific homeopathic drugs is proven by practical experience (Holt and Gilbey 2009). In other words, they recruit customers with non-core statements. (2020, 47)

I suspect this is a correct description. Being transparent about their theories and methods would hardly strengthen the homeopaths’ authority and credibility among their followers, and they have no obvious rationale for communicating them. In practice, this means that efforts at debunking homeopathic claims ought to incorporate a description of these processes and theories, to inform the public about them. If this approach also has a prophylactic effect against belief in homeopathic claims, it could even be much more efficient than correcting false information once it is widespread. But at this point I can merely speculate about the efficacy of this or any approach. Advancing the discussion will require input from empirical studies on the proliferation and correction of mis- and disinformation.

5. Doctrines and Incorrigibility

I argued that Hansson’s definition subsumed examples of bad science. My original example was the unsubstantiated and unreliable Learning Pyramid model, which has proliferated in didactic research publications for more than 160 years, and thus qualified as a pseudoscientific statement according to his definition (Letrud 2019, 4–5). However, as Hansson points out, I failed to adequately address one important aspect of the criterion of deviant doctrine, namely the resistance to corrections. I shall try to rectify this shortcoming in the following.

Pseudoscience, as it is commonly conceived, involves the sustained promotion of teachings that lack scientific legitimacy (Hansson 1996, 173; Hansson 2013, 69). This is important, since the most important strength of science is its capacity for self-improvement (Hansson 2018, 64). What makes pseudoscience a much more ominous threat to epistemic and social progress than other types of bad science is its doctrinal resistance to correction. Therefore, the doctrinal component should have an essential role in an adequate characterisation of pseudoscience. (Hansson 2020, 50)

Self-correction and self-improvement are ideals for research, but at best approximate descriptions of scientific practice (Ioannidis 2012). There are doctrinal components in science as well.

To illustrate and corroborate this, I shall in sections 6–6.3 present a case study of the reception of 13 papers raising criticism against the Yerkes-Dodson Law (YDL) and the Inverted U (IU) theory. The example shows how five decades of efforts have failed to stop the proliferation of an unreliable model in the research literature. I also suggest one potential explanation for this resistance to correction: these efforts have faced an affirmative citation bias, a skew in the citations whereby those that maintain the reliability of the model outnumber those that reject it by a good margin.

The notion of affirmative citation bias distinguishes three main ways of citing debunking efforts (apart from remaining neutral): (i) agreeing with the criticism, (ii) disagreeing with the criticism (and preferably justifying the continued use of the theory) and (iii) incorrectly citing the critical paper as affirming the theory. Group (i) is likely small: when an author agrees with the criticism, the theory (as well as the critique) becomes irrelevant for the paper and is left out. If these three groups are roughly the same size, the sum of myth-affirmative citations (ii + iii) will outnumber the myth-rejecting citations (i) (Letrud and Hernes 2019). This bias was demonstrated for three highly cited attempts at debunking the Hawthorne Effect myth, and was likely associated with citation plagiarism: citing a paper solely on the basis of another citing author’s exegesis, thus potentially proliferating any misreading (or citation plagiarism) done by that author. Not only had the critics failed to have an impact on the literature, but they might even have contributed to the continued proliferation of the myth (Letrud and Hernes 2019, 2).
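The arithmetic behind the bias can be made explicit with a minimal sketch (Python; the group sizes are invented for illustration and are not data from any study): even when the three groups are equal in size, the affirmative citations outnumber the rejecting ones two to one.

```python
# Hypothetical illustration of affirmative citation bias.
# The three group sizes below are invented for the example.
agree = 50              # (i) citations agreeing with the criticism
disagree = 50           # (ii) citations defending the theory
cite_as_affirming = 50  # (iii) citations misreading the critique as support

# Myth-affirmative citations are the sum of groups (ii) and (iii).
affirming = disagree + cite_as_affirming

# With three roughly equal groups, the myth "wins" two to one.
print(affirming, agree)   # 100 50
print(affirming / agree)  # 2.0
```

Any shrinkage of group (i), as argued above, only widens this gap.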

6. The Yerkes-Dodson Law and the Inverted U

Robert M. Yerkes and John Dillingham Dodson (1908) studied the effect of stimulus and task difficulty on the rapidity of habit formation in Japanese dancing mice. The mice were tested in three sets, each with a different level of difficulty. When the mice made errors, they were given electric shocks. In two of the three sets, Yerkes and Dodson plotted a U-shape in the number of errors (Y-axis), suggesting that there was an optimal level of stimulus (strength of electric shock, X-axis) for habit formation. The study was underpowered (using at most four mice per sample) and lacked a factorial design (which is better suited for studying more than one independent variable) (Brown 1965). The outcome was furthermore contingent on the specific formulation of the habit learning criterion (correct performance 10 out of 10 times over three consecutive days) (Hancock and Ganey 2003; Hanoch and Vitouch 2004). There were inadequate differences in levels of difficulty between the sets (Hanoch and Vitouch 2004), and reanalyses of the data did not reproduce the original findings (Bäumler 1994).

From the fifties, authors started to claim that what Yerkes and Dodson had found was in fact a law describing the relationship between motivation or arousal and performance in humans. This law was subsequently combined with Donald Hebb’s inverted U-shaped relationship between arousal of the nervous system and performance. The YDL thus became a family of plastic and inconsistent laws describing various forms of arousal and stress and their effect on performance in the shape of an inverted U. Optimal performance required some, but not too much, arousal or stress (Teigen 1994).

Currently, the YDL/IU is used to describe the optimal level of stress or arousal for performance in nearly every field of human activity, such as sports, work productivity and learning. These ‘laws’ are widely cited, even though they are immune to falsification (Kerr 1985; Christianson 1992; Teigen 1994), and the supporting evidence is either lacking (Neiss 1988), a product of the experimental design (Näätänen 1973; Kerr 1985), or at best weak (Christianson 1992), scant and inconsistent (Westman and Eden 1996). Conversely, the evidence largely shows a linear detrimental effect of stress (Corbett 2015). The arousal concept itself is an over-generalised and vague construct (Kerr 1985; Neiss 1988; Hancock and Ganey 2003; Hanoch and Vitouch 2004). The YDL/IU is basically a folk model (Dekker and Hollnagel 2004).

The ubiquitous inverted Us … claimed to illustrate the Yerkes-Dodson law may be of considerable didactic, heuristic and even theoretical value; but when it comes to the question of their origin, it is tempting to answer with Brehmer’s (1980) Lockean paraphrase: ‘In one word: not from experience.’ (Teigen 1994, 543–544)

Despite the substantial criticisms exemplified above, the YDL/IU is widely cited. The original paper by Yerkes and Dodson alone has 3,946 citations in Scopus, and 10,753 in Google Scholar (search date 25 April 2023). Examining the reception of the above critical papers can help explain why these laws have persisted and proliferated in the scientific literature despite the criticisms.

6.1. Method

The above 13 papers constitute a non-exhaustive catalogue of papers presenting substantial arguments against the YDL/IU published over five decades. I retrieved 1,268 peer-reviewed papers written in English that cited at least one of the 13 critical papers and were indexed in Scopus or Web of Science.[2] I classified the papers as either affirming the YDL/IU, as neutral (i.e. ambiguous, not taking sides, or not addressing the theory), or as rejecting the YDL/IU. A couple of papers seemingly accepted the criticism but still chose to use the model as a heuristic; I classified these as de facto affirming, because they most likely contribute to the continued proliferation of the model. I also sought to establish whether the affirming citations misrepresented the cited paper as affirming the YDL or the IU. If a paper cited one of the critical papers as a general source for the YDL/IU in a non-critical context, I classified it as Citing as Affirming.

6.2. Results

Out of the 1,268 papers, 154 rejected the YDL/IU, 871 were neutral about its status (mostly not addressing it or the critique), and 243 affirmed the YDL/IU. Out of these 243, 139 cited the critics as if they affirmed the YDL/IU (Table 1).

Table 1. Citations of critics.

  Rejecting the YDL/IU                 154
  Neutral                              871
  Affirming the YDL/IU                 243
    of which Citing as Affirming       139
  Total                              1,268
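The tally and the affirming/rejecting ratio reported here can be sketched as follows (a minimal Python illustration; the label names are my own shorthand, the counts are those reported in the text):

```python
from collections import Counter

# Classification counts for the 1,268 citing papers (from the text above).
counts = Counter({"reject": 154, "neutral": 871, "affirm": 243})
citing_as_affirming = 139  # subset of the 243 affirming citations

# Sanity check: the categories partition the retrieved papers.
assert sum(counts.values()) == 1268

# Myth-affirmative vs myth-rejecting citations (groups ii + iii vs i).
ratio = counts["affirm"] / counts["reject"]
print(round(ratio, 2))  # 1.58, i.e. more than one-and-a-half to one
```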

There is a potential source of error: I did these assessments without consulting a second coder, meaning that any errors on my part would not be caught. However, the analysis consists of a very general description of the affirming/rejecting ratio, and skewing this ratio would require numerous errors. Furthermore, most papers did not mention the YDL/IU and required no assessment.

6.3. Discussion

The case study illustrates and corroborates that research is not necessarily self-correcting. Unreliable statements such as the Yerkes-Dodson Law/the Inverted U and the Hawthorne Effect not only persist but prosper in the scientific literature, despite (perhaps even helped by) decades of substantial criticisms. These statements thus qualify as pseudoscientific according to Hansson’s (2013) definition: (i) they pertain to an issue within the domains of science, (ii) they are unreliable and (iii) they constitute incorrigible doctrines. Their major proponents try to create the impression that they represent the most reliable knowledge on their subject matter, by publishing them in peer-reviewed research journals. This demonstrates that his definition of ‘pseudoscientific statement’ subsumes examples of bad yet common forms of scientific practice, thus drawing an unsatisfactory line of demarcation against science.

The study replicates to some extent the affirmative citation bias reported in Letrud and Hernes (2019). The number of affirmative citations (ii + iii) was 243, while the number of rejecting citations (i) was 154, a ratio of more than one-and-a-half to one. Of the 243 affirmative citations, 139 cited the critical papers as affirming the YDL/IU (iii). This is presumably a result of citation plagiarism.

The critics may very well have convinced most of their readers to reject the YDL/IU. But we can assume that these readers would then have little to no reason to address the YDL/IU, and would leave it out of their papers. The documented reception may thus be dominated by those who propagate the model.

7. Conclusion

Hansson’s definition of ‘pseudoscientific statement’ ought to clarify that it is a definition of ‘pseudoscientific-P statement’. And, while a pseudoscience definition based on a discipline category may not include all phenomena referred to as ‘pseudoscience’ or ‘pseudoscientific-P’, these can still be referred to as ‘pseudoscientific-R’.

Using ‘discipline’ as a category for a pseudoscience definiens would theoretically be more efficient for debunking efforts than addressing individual claims. But how best to debunk pseudoscience is ultimately a factual question, not a philosophical one. As such, the arguments ought to draw extensively on empirical studies. There is a comprehensive literature on the correction of dis- and misinformation that could shed light on the issue.

A pseudoscience definition that either classifies science as pseudoscience, or pseudoscience as science, will most likely be rejected. I argued that common scientific practices, some of them frowned upon, proliferate unreliable statements in the scientific literature. I have here shown that similar practices also hinder efforts at correcting them.

There is a general point to be made, I believe. The pseudoscience demarcation project ought to take into consideration the sometimes-imperfect side of science. Care should be taken when construing pseudoscience as something that falls short of a set of ideal scientific standards, for even the sciences occasionally fail to meet them.

Acknowledgements

I owe thanks to Lars Christie, Hedda Hassel Mørch, Anna-Sara Malmgren, and the two anonymous ISPS reviewers for helpful comments, questions, and corrections.

Disclosure Statement

No potential conflict of interest was reported by the author(s).

Notes

1 ‘… both the -ic and -ical endings have the basic meanings “of or pertaining to”, “relating to”, “resembling”, “having the quality of” or “characterised by” …’ (Kaunisto 2007, 30).

2 This work was done in late 2019.

References

  • Bäumler, G. 1994. “On the Validity of the Yerkes-Dodson Law.” Studia Psychologica 36: 205–209.
  • Brown, W. P. 1965. “The Yerkes-Dodson Law Repealed.” Psychological Reports 17: 663–666. doi:10.2466/pr0.1965.17.2.663
  • Cambridge Dictionary. 2023. Scientific. https://dictionary.cambridge.org/dictionary/english/scientific.
  • Carnap, R. 1962. Logical Foundations of Probability. Chicago: The University of Chicago Press.
  • Christianson, S-Å. 1992. “Emotional Stress and Eyewitness Memory: A Critical Review.” Psychological Bulletin 112: 284–309. doi:10.1037/0033-2909.112.2.284
  • Corbett, M. 2015. “From Law to Folklore: Work Stress and the Yerkes-Dodson Law.” Journal of Managerial Psychology 30: 741–752. doi:10.1108/JMP-03-2013-0085
  • Dekker, S., and E. Hollnagel. 2004. “Human Factors and Folk Models.” Cognition, Technology & Work 6: 79–86. doi:10.1007/s10111-003-0136-9
  • Emmerich, R. (Writer/Director), and H. Kloser (Writer/Producer). 2009. 2012 [Motion picture]. United States: Columbia Pictures.
  • Hancock, P. A., and G. C. N. Ganey. 2003. “From the Inverted-U to the Extended-U: The Evolution of a Law of Psychology.” Journal of Human Performance in Extreme Environments 7: 5–14. doi:10.7771/2327-2937.1023.
  • Hanoch, Y., and O. Vitouch. 2004. “When Less is More: Information, Emotional Arousal and the Ecological Reframing of the Yerkes-Dodson Law.” Theory & Psychology 14: 427–452. doi:10.1177/0959354304044918.
  • Hansson, S. O. 2006. “How to Define.” Princípios: Revista de Filosofia 13: 5–30.
  • Hansson, S. O. 2013. “Defining Pseudoscience and Science.” In Philosophy of Pseudoscience: Reconsidering the Demarcation Problem, edited by M. Pigliucci and M. Boudry, 61–78. Chicago: The University of Chicago Press.
  • Hansson, S. O. 2020. “Disciplines, Doctrines, and Deviant Science.” International Studies in the Philosophy of Science 33 (1): 43–52. doi:10.1080/02698595.2020.1831258.
  • Hansson, S. O. 2021. Science and Pseudo-Science. Stanford Encyclopedia of Philosophy, May 20. http://plato.stanford.edu/entries/pseudo-science/.
  • Hirvonen, I., and J. Karisto. 2022. “Demarcation without Dogmas.” Theoria 88: 701–720. doi:10.1111/theo.12395.
  • Ioannidis, John P. A. 2012. “Why Science Is Not Necessarily Self-Correcting.” Perspectives on Psychological Science 7 (6): 645–654. doi:10.1177/1745691612464056.
  • Kaunisto, M. 2007. Variation and Change in the Lexicon: A Corpus-Based Analysis of Adjectives in English Ending in -ic And –ical. Amsterdam: Brill.
  • Kerr, J. H. 1985. “The Experience of Arousal: A New Basis for Studying Arousal Effects in Sport.” Journal of Sports Sciences 3: 169–179. doi:10.1080/02640418508729749.
  • Le Fevre, M., J. Matheny, and G. S. Kolt. 2003. “Eustress, Distress, and Interpretation in Occupational Stress.” Journal of Managerial Psychology 18: 726–744. doi:10.1108/02683940310502412.
  • Letrud, K. 2019. “The Gordian Knot of Demarcation: Tying Up Some Loose Ends.” International Studies in the Philosophy of Science 32 (1): 3–11. doi:10.1080/02698595.2019.1618031.
  • Letrud, K., and S. Hernes. 2019. “Affirmative Citation Bias in Scientific Myth Debunking: A Three-in-One Case Study.” PLoS ONE 14 (9): e0222213. doi:10.1371/journal.pone.0222213.
  • Näätänen, R. 1973. “The Inverted-U Relationship Between Activation and Performance: A Critical Review.” In Attention and Performance, edited by Sylvan Kornblum, Vol. 4, 155–174. New York: Academic Press.
  • Neiss, R. 1988. “Reconceptualizing Arousal: Psychobiological States in Motor Performance.” Psychological Bulletin 103: 345–366. doi:10.1037/0033-2909.103.3.345
  • Pigliucci, M., and M. Boudry, eds. 2013. Philosophy of Pseudoscience: Reconsidering the Demarcation Problem. Chicago: The University of Chicago Press.
  • Shrihari, T. G. 2017. “Quantum Healing Approach to New Generation of Holistic Healing.” Translational Medicine 7, doi:10.4172/2161-1025.1000198.
  • Teigen, K. H. 1994. “Yerkes-Dodson: A Law for All Seasons.” Theory & Psychology 4: 525–547. doi:10.1177/0959354394044004.
  • Westman, M., and D. Eden. 1996. “The Inverted-U Relationship Between Stress and Performance: A Field Study.” Work & Stress 10: 165–173. doi:10.1080/02678379608256795.
  • Yerkes, Robert M., and John D. Dodson. 1908. “The Relation of Strength of Stimulus to Rapidity of Habit-Formation.” Journal of Comparative Neurology and Psychology 18 (5): 459–482.