Social Epistemology
A Journal of Knowledge, Culture and Policy
Volume 37, 2023 - Issue 5
Research Article

On the Inconsistency between Practice and Reporting in Science: The Genesis of Scientific Articles

Pages 684-697 | Received 23 May 2022, Accepted 25 Apr 2023, Published online: 12 Jun 2023

ABSTRACT

Scientific publications depict science as an orderly endeavour and the epitome of rationality. In contrast, scientific practice is messy and not strictly rational. Here, I analyse this inconsistency, which is recurrent, and try to clarify its meaning for the functioning of science. The discussion is based on a review of relevant literature and detailed analysis of the role of each of the three intervening elements, the scientist, his/her practice and the scientific publication, with an emphasis on the circular mode of the latter’s creation. This way, I will discuss the nature, causes and relevance of the inconsistency. That corresponds to answering three questions, respectively: ‘what are the characteristics of the inconsistency?’, ‘what are its origins?’ and ‘how could it be interpreted within a model for the structure and functioning of science?’ From this discussion it is concluded that, contrary to the negative character generally attributed to it, the inconsistency between practice and reporting is part of the production mechanism of science.

1. Introduction

The practice of science does not coincide with the way that practice is portrayed in scientific publications; the two are inconsistent. In the words of philosopher of science Paul Feyerabend, scientific ideas often do not arise from a problem but from play or other influences, and are then reframed to give the impression that they were generated in response to serious problems ([1975] 1993). This is a recurrent feature of scientific practice, one that has not yet been fully explained by philosophers and is only rarely acknowledged by scientists.

This article is aimed at clarifying the meaning of that inconsistency for science itself. This constitutes a fundamental difference in focus from previous works on the subject, namely that of philosopher of science Schickore (2008). Schickore’s work aims to clarify the meta-epistemological challenges that the inconsistency poses to the study of science. In contrast, the present article offers the reflections of a practicing researcher on science itself.

I will first establish the existence of the aforementioned inconsistency based on a short literature review (Section 2). Then, I will discuss the nature (Section 3), the causes (Section 4) and the relevance (Section 5) of that inconsistency. This corresponds to answering three questions, respectively: ‘what are the characteristics of the inconsistency?’, ‘what are its origins?’ and ‘how could it be interpreted within a model of the structure and functioning of science?’ The discussion is based on a detailed analysis of how each of the three intervening elements, the scientist, his/her practice and the scientific publication, contributes to the inconsistency. It also includes a thought experiment, which consists of exploring the hypothetical consequences of changing the functioning of each of these elements so as to eliminate the inconsistency. The main conclusions of the discussion are summarized at the end of the article (Section 6).

In what follows, ‘rationality’ is understood in its most basic sense, that is, the use of reason to make decisions and inferences, whether through logic or heuristics. It is therefore downstream from the concept of objectivity, which presupposes an (ideal) absence of cognitive bias. Here, it is used to mean scientific rationality, which fits within Max Weber’s concept of theoretical rationality (Kalberg 1980). Note that scientific rationality encompasses some plasticity because it refers to the scientific context of reference, i.e. the set of what is accepted as scientific truth, Kuhn’s paradigm ([1962] 2012). This scientific context of reference is in permanent metamorphosis, constantly undergoing construction, destruction and remodelling, which means it may not be completely stable or univocal at a given moment. Alternative or competing methods and theories may therefore coexist, all of them rational.

In a negative definition, what scientific rationality is not is wishful thinking. As Deleuze and Guattari explained, desire is the main engine of human production, and it works by producing the real ([1972] 2003): i.e. human production works through wishful-thinking processes. These are powerful and innate action-driving thought processes, and it was to see through them that philosophy and, in recent centuries, science were invented. As such, rationality may also be understood as an anti-wishful-thinking thought process.

2. Existence of the Inconsistency

Philosophers and even scientists themselves have noted the recurring mismatch between what scientists do in practice and what they later report in their articles, a puzzling contradiction in an enterprise aimed at rationality and effectiveness. However, the ideas this question has generated vary widely. Below, I summarize some that are representative of this variety.

Reichenbach (1938) suggested that the inconsistency occurs because scientific activity develops in two sequential phases with very different characteristics. The first phase corresponds to what he calls the ‘context of discovery’. This phase is not governed by the ‘strict laws of logic’; instead, ‘scientific thinking’ is at work. Indeed, Reichenbach believed that scientific thinking goes beyond logic. As he says, not without some humour, ‘the scientific genius has never felt bound to the narrow steps and prescribed courses of logical reasoning’ (5). Afterwards comes a second phase, corresponding to what Reichenbach calls the ‘context of justification’, whose main aim is, according to him, communicating the discoveries to the public. The work done during the first phase is reformulated during this second phase, acquiring a logical structure, in a transformation that Reichenbach calls ‘rational reconstruction’. Only after this rational reconstruction is scientific work able to withstand philosophical scrutiny. Reichenbach thought that philosophy of science, in particular epistemology, his discipline, should concern itself only with the second phase. He believed that what happens during the first phase, namely the thought patterns behind scientific discoveries, belongs to the field of psychology.

But other philosophers of science thought differently, most notably Feyerabend, who did not avoid looking at the messier side of scientific practice ([1975] 1993). For him, the inconsistency between the way science is practiced and the way it is reported in scientific publications derives from the imperfect, human character of scientists and from their circumstantial limitations, on account of which they act politically and lie to themselves and to society: ‘Love of truth is one of the strongest motives for replacing what really happens by a streamlined account or, to express it in a less polite manner, love of truth is one of the strongest motives for lying to oneself and to others’ (183). Indeed, Feyerabend thought that the way scientists approach problems is disorganized, questionable, ambiguous and contaminated by principles they do not know. He also criticized the ‘chauvinistic’ tendency of the scientific class to resist changes to the status quo and its recurrent use of rhetoric and persuasion. It should be noted, however, that Feyerabend was not critical of the lack of order and of a strict logical path in scientific practice. What he disagreed with was, rather, that scientists conceal the true nature of their activity by presenting it as an ordered and almost purely rational endeavour.

Scientist and Nobel laureate in Medicine Medawar (1964) also believed that the inconsistency constitutes fraud, but he did not blame scientists. He argued that science follows an inductive format in scientific articles, owing to tradition and editors’ requirements, when in practice it relies on hypothetico-deductive processes. He believed that scientists work according to an identifiable (Popperian) method, consisting of hypothesis testing, and advocated a change in the format required of scientific articles so that they could reflect scientists’ actual thought processes.

Harwood (2004) also believed that the inconsistency is rooted in misplaced requirements of scientific journals, which make it mandatory to present research according to a logical model different from the one that effectively governs research practice. Unlike Medawar, however, Harwood claims that the guidelines of scientific journals follow the hypothetico-deductive model.

Schickore carried out a literature review to try to understand the challenges posed by the inconsistency to the study of science within philosophy and the human sciences (2008). Among the authors she analysed are several anthropologists of science, most notably Bruno Latour, who claim that the inconsistency derives from a need to hide the confused, highly contextualized and opportunistic nature of scientific practice in order to please and convince sceptical and potentially belligerent peers.

The brief analysis presented in this section shows that even when the inconsistency is acknowledged, opinions about its origins vary. Reichenbach chose not to address those origins, Feyerabend attributed the inconsistency to the unethical behaviour of scientists, the anthropologists of science reviewed by Schickore point to the disordered nature of scientific practice, and the scientists Medawar and Harwood hold that the inconsistency derives from the imposition of unrealistic formats on scientific publications. The common denominator among these authors is that the inconsistency is never seen as positive. As will be seen, I take a different point of view.

3. The Nature of the Inconsistency

To analyse the nature of the inconsistency between what scientists do and what they report in their publications, I would like to begin by distinguishing its three main elements: scientific practice, scientific publications and the link between them, the scientist.

In what follows, ‘reason’ is understood as the human capacity to reflect by applying logic whereas ‘emotion’ refers to the conscious mental reactions associated with feelings and sensations.

3.1. The Scientist

Scientists are those who practice and publish science. As human beings, their actions are mostly motivated by emotional factors such as wishful thinking (Deleuze and Guattari [1972] 2003). Often, scientists themselves recognize that they are emotionally driven in their work. One famous example is Einstein’s speech to the Physical Society of Berlin on the occasion of Max Planck’s 60th birthday (Einstein 1918). Einstein states, for instance, that ‘there is no logical path to the laws [from which the cosmos can be constructed by pure deduction], only intuition’ and that ‘the state of mind which enables a man to do work of that kind is akin to that of the religious worshiper or the lover; the daily effort comes from no deliberate intention or program, but straight from the heart’, wishing at the end of the speech that ‘the love of science continues in the future to illuminate [Max Planck’s] path’.

Philosophers, too, have acknowledged the influence of emotional factors on the scientific process. We can mention, for example, Kuhn’s model of how science works at the macro-scale, which is based on the existence of two types of science, normal and extraordinary ([1962] 2012). Normal science is that ordinarily practised by the scientific community on the basis of a system of shared beliefs, the so-called paradigm. Extraordinary science arises in revolutionary periods, during which the paradigm changes. Kuhn thought that, in both cases, scientific activity is propelled by emotional factors. In his opinion, scientists are attracted to science for reasons such as ‘the desire to be useful, the excitement of exploring new territory, the hope of finding order, and the drive to test established knowledge’ (37-8). In normal science, playful instincts are then relevant. Extraordinary science, on the other hand, involves persuasion, competitiveness and defence instincts. In either case, stress and even desperation often accompany perceived failure.

Sociologists of science, too, have recognized the emotional drives of scientists in their work. Hagstrom (1965), for example, believed that the wish to gain peer recognition is a deep motivation for scientists in their practice. He also mentions other extra-scientific, emotionally loaded motivations, such as the desire to obtain material benefits and positions and the fear of losing access to scarce and expensive resources. Note that this second class of motivations was only emerging at the time, as Hagstrom himself recognizes. Since then, their relevance has increased – particularly, I am convinced, in recent times, owing to the application of business management principles to science (Jain, Triandis, and Weick 2010) and to the increasing dependence of scientists on scientific institutions (Ward 2012).

3.2. Scientific Practice

The practice of science is the activity that generates scientific results. It is often thought to comprise an ordered sequence of activities such as observing, carrying out experiments and measurements, interpreting results, making models, formulating theories and communicating findings (Schickore 2008). Of course, this is an idealized model of scientific activity, as scientists will be the first to recognize, a model that does not mirror what happens in practice. However, it is the model promoted in manuals and taught to students. And it is also the one that corresponds to the content of scientific articles (Bell 2009).

The distance that separates this idealized model of science from reality is well illustrated by the autobiographical description of the scientist and Nobel laureate in Medicine, François Jacob (1988). In his words, the practice of science is ‘a jumble of disordered efforts, of attempts born of a desperate eagerness to see more clearly, and also of visions, dreams, unexpected connections, of simplifications often childish, random soundings in every direction, never knowing where one is going to end up’ (318). This illustrates well the fact that scientific practice is often not well planned or even directed by strictly scientific goals. Scientists follow intuitions and wishes, take advantage of available opportunities and resources, improvise, make mistakes, try to remedy mistakes and follow unforeseen paths suggested by chance. The truth is that it is not possible to discern a general and consistent logical line in scientific practice, be it inductive, hypothetico-deductive, Bayesian or a combination of these. As synthesized by cell biologist Frederick Grinnell, there is a linear model of science, which corresponds to the way we teach, in which the path to discovery is guided by objectivity and the logic of the scientific method, facts are there waiting to be observed, and scientists are objective and dispassionate. But this model is a myth or, at the very least, a significant distortion of scientists’ daily practice (Grinnell 2011).

3.3. The Scientific Article

The scientific article emerged independently in France and in England during the late 17th century, in connection with the birth of the scientific journal and thus of modern science itself. The first scientific journals were the Journal des Sçavans, first published in France in January 1665, and the Philosophical Transactions, launched in England just two months later (Gross, Harmon, and Reidy 2002). The scientific article rapidly became the primary type of scientific publication, the one through which new scientific claims are made public. Its prominence received an additional boost in the 20th century when, as management techniques were imported into national scientific apparatuses, it was universally adopted as a basis for measuring the scientific performance of people and institutions and as a criterion for the distribution of resources. I will therefore concentrate the discussion on this type of scientific publication.

The typical structure of the modern scientific article is depicted in Table 1 below, which refers to the most characteristic case, the experimental research article. This structure emerged in the 19th century and still corresponds to the guidelines of scientific journals in general. Of course, there may be deviations, especially in theoretical articles. However, such articles are a minority, and in many cases the differences are not significant (Harmon 1989; Gross, Harmon, and Reidy 2002).

After being prepared by a scientist or research group, the article is sent (submitted) to a scientific journal whose editors, in turn, send it for review to other scientists. The review may have several iterations, involving a discussion between authors and reviewers with the mediation of editors. The exchange of objections and ideas may eventually result in changes to the original article.

There has been some discussion about whether scientific articles are written with the soundness of the logical argument or the opinion of peers in mind (Schickore 2008). However, this question is not relevant here. I am convinced that any combination of the two factors could in fact guide the scientist when creating the article. As François Jacob says in his autobiography, scientists need to create ‘what appears as the logical order’ to get both the article accepted and ‘a new way of thinking adopted’ (1988, 318). We could also say, by analogy, that logic is the law, but the peers are the court. The fundamental point, though, is that the structure of scientific articles reconstructs the underlying practice in a logical and therefore idealized way. As mentioned in Section 2, Reichenbach noted this as early as 1938. Revealing of this idealized nature is the fact that scientific articles rarely mention mistakes and anomalies, although these are common in practice. In fact, mistakes and anomalies are often even productive, suggesting paths or leading to unexpected results. So why do they not appear in scientific publications?

The reason is not that mistakes and anomalies are concealed. They are absent because occurrences that are irregular in relation to the original research hypotheses are reframed in scientific articles, using other (post hoc) hypotheses in relation to which the results are scientifically interesting, i.e. can be used to advance knowledge. Results that have no scientific interest are usually not published. To publish mistakes, anomalies and failed hypotheses would fill the literature with noise. These are indeed the majority because doing science is walking a path paved with failures.

This type of question does not arise during the review of the article because the discussions with reviewers address solely the consistency of the claims within the framework proposed in the article. They do not focus on the vicissitudes of practice or the original motivations of the research. Indeed, all that the article necessarily preserves from the original practice are the results themselves, typically numerical data or experimental patterns, and the methods that gave rise to them. Everything else can be recreated. How this recreation is done will be discussed in Section 4. But first let me address a few interesting objections.

3.4. Is the Inconsistency a Sign of Scientific Misconduct?

To discuss whether a non-coincidence between what a scientist does in practice and what he/she reports in publications is a sign of scientific misconduct, let us start by understanding what scientific misconduct is.

In the US, scientific misconduct is generally understood as comprising (i) the fabrication of scientific results, (ii) falsification due to incorrect or incomplete reporting of results and (iii) plagiarism, i.e. the appropriation of the ideas or work of others without giving due credit to the authors. This definition of scientific misconduct as fabrication, falsification or plagiarism (FFP) was adopted at the federal level, by the Office of Research Integrity, under the designation ‘research misconduct’ (ORI 2021). However, scientific institutions often go beyond this definition by including in their institutional codes aspects outside the strict scope of scientific ethics. The policies of 183 such institutions in the US, public and private, were analysed by Resnik et al. (2015), who identified several additional aspects, namely disciplinary ones (such as violations of confidentiality, misrepresentation of one’s credentials, failure to disclose significant financial interests, concealment of misconduct, false accusations, or retaliation against those who make true misconduct allegations), unethical authorship (for example, the inclusion as authors of people who did not participate in the work or the exclusion of real authors), and legal aspects (non-compliance with laws and regulations, misappropriation of funds and theft of property).

In Europe, an attempt to reach supranational consensus on the definition of scientific misconduct resulted in a document called the European Code of Conduct for Research Integrity (ALLEA 2017). According to this code, scientific misconduct comprises fabrication, falsification and plagiarism, which are considered particularly serious, but also multiple other ethical aspects, as well as many professional and legal ones. The code, indeed, seems to aim at an almost total regulation of the activity and even the thinking of scientists. An ideal researcher is evoked, who is guided by principles of reliability, honesty, responsibility and respect (for society, ecosystems, cultural heritage and the environment) and who takes into account the safety and well-being of his/her research collaborators and of the whole community. Further, the code stipulates that scientific research is a collaborative process governed by objectives and that researchers should therefore ‘agree at the outset on the goals of the research’. The broad normative scope of the code extends to scientific content itself. According to this document, researchers should, among other things, take into account ‘the most recent knowledge’, comply with ‘codes and regulations relevant to their discipline’ and report their results in a way that is ‘compatible with the standards of the discipline’. But the regulatory ambition of this code has not had much resonance in European countries. A recent analysis of national codes of good scientific practice in different European countries reveals that, in Europe too, consensus exists only in relation to the FFP categories (Desmond and Dierickx 2021).

We can therefore conclude that scientific misconduct essentially and primarily consists of FFP practices. This means that the inconsistency discussed in this article, between the practice of science and scientific publications, can hardly be seen as scientific misconduct. Indeed, fabrication and falsification (plagiarism does not apply to the present discussion) are practices that affect the content of the Results or Materials and Methods sections of an article. However, the reformulation mechanisms that give rise to the inconsistency do not affect these sections. Results can be fitted into a different narrative and used to answer questions different from those that motivated the original research without their integrity being affected. Likewise, the materials and methods are preserved in such a process.

Here, it is important to make a note regarding the European Code of Conduct for Research Integrity (ALLEA 2017). Its requirement that the objectives be agreed upon by all partners at the outset of the research implies an obligation to adhere rigidly to the initial research objectives, in which case the reintegration of research results into a different narrative would, in fact, constitute misconduct. However, adhering rigidly to the initial research objectives would, for example, make it impossible to use accidental or extra-disciplinary discoveries to answer scientific questions. In any case, the code did not achieve consensus among European countries, and the mentioned requirement was not transposed into the national codes.

It is thus possible to conclude that the inconsistency can hardly be considered as misconduct. Rather, as argued throughout this article, it is a structural feature of science.

3.5. Could the Inconsistency Derive from the 21st Century Pressures on Scientists?

The question here is whether the inconsistency could derive from the pressures exerted on scientists by neoliberal features such as resource scarcity and work precariousness. Another possible source of pressure to consider in this context is the use of bibliometric indicators as (misguided) measures of quality in science (Gingras 2021; Marewski and Bornmann 2018). These policies, imposed upon individual scientists in many institutions, result in the extreme forms of ‘publish or perish’ found in today’s research and potentiate scientific misconduct. However, as discussed in Section 3.4, the inconsistency is not misconduct. Furthermore, the existence of the inconsistency clearly predates the 21st century and neoliberalism, as can be seen from the literature review presented in Section 2. Therefore, the answer to the initial question is no.

3.6. Reporting Biases and HARKing: Science vs Applied Science

Another doubt that may arise is whether the reconstruction of scientific practice in scientific publications is at least questionable, given that it can give rise to reporting biases and constitute HARKing. I argue that this is not the case and that this hypothesis stems from a confusion between science and applied science. Let us see why.

Reporting biases are distortions due to selective reporting of findings from scientific research. They may, for example, lead to misunderstandings about the effects of a particular medical treatment, since studies with negative results tend to be published less often, especially when the treatment is new. Concern about the consequences of these distortions has led some authors to propose solutions consisting of variations on the rule that researchers should stick to their initial hypotheses (Rennie and Flanagin 1992; Dwan et al. 2013).

In a more targeted approach to this problem, Kerr defined what he saw as a special type of questionable research practice, so-called HARKing, which consists of presenting a post hoc hypothesis as if it were an a priori hypothesis (1998). Some of the main negative consequences of HARKing are ‘statistical abuses’, which can lead to the aforementioned misrepresentation of the effects of products and treatments being tested. Kerr puts forward possible ways of preventing HARKing, devising a top-down intervention that consists of convincing (through education), encouraging (by means of changes to the editorial and peer-review processes) or coercing (through penalties to be introduced into professional codes of conduct) researchers to maintain their initial hypotheses.

I will not comment here on all the dimensions of the mistake of trying to impose authoritarian control on an intrinsically spontaneous system such as the scientific system. Most researchers will be able to foresee the likely consequences of such an action, by analogy, for example, with what currently happens with the imposition of bibliometric indicators as measures of scientific quality (Gingras 2021; Marewski and Bornmann 2018). Rather, I will focus on the main reason for that mistake, which is a confusion between science and applied science.

The root difference between science and applied science is that science, the object of this article, aims at producing (scientific) knowledge, whereas applied science uses scientific knowledge to help solve practical problems. Applied science is one of the dimensions of engineering, medicine and other technological activities, coexisting with the other dimensions these activities also have, such as practical know-how, law and regulations, etc. This difference between science and applied science is not merely formal: they have different working mechanisms and respond to different incentives. Therefore, what is productive for one may be destructive for the other. Sticking to prior hypotheses may make some sense for applied science, which relies strongly on statistics and where causation is by default inferred through correlation heuristics. But it is destructive for science, which relies on precisely the opposite mechanism: the possibility of reintegrating results according to new hypotheses.

The confusion between science and applied science stems from the fact that applied science has been increasingly imposed on scientific communities, in such a way that knowledge-producing science is often forced to survive in its interstices. Kerr’s perspective in 1998 is already symptomatic of this marginalization of science: almost en passant, in a couple of secondary sentences of his lengthy article, science is called ‘theory’, and a kind of exception to the authoritarian anti-HARKing model is foreseen for those who ‘develop coherent theory’ (it is not clear how these researchers would be recognized in advance).

The truth is that reporting biases, provided they do not constitute a falsification of research results (for example, leaving out data so that the results fit a desired conclusion), are a problem only for applied science, i.e. the science that is grounded in its initial objectives (the problem to be solved). This includes HARKing. The concept simply does not apply to science, the activity that serves to produce scientific knowledge and is the object of the present article. Why would it be a problem to adopt more interesting objectives suggested by research results, which may even arise by chance? This happens all the time. On the contrary, from a purely scientific point of view, it would be questionable to disregard useful data and interesting ideas just because they do not fit a preliminary hypothesis that did not live up to expectations. The crucial ability of a scientist is actually to be able to imagine ways of using their results to advance knowledge.

Of course, one should not lie about research design. However, as explained in the previous sections, that is not the issue. Indeed, the process of reintegration is not a matter of lying because scientific reporting is abstract, not a simple narrative of events.

4. Causes of the Inconsistency: The Integration Process

With the hypotheses of scientific misconduct and questionable practice ruled out, the question arises as to what mechanisms underlie the reconstruction of scientific research that happens when a scientific article is created. That reconstruction transforms messy scientific practice into a rational account that presupposes intentionality and is able to add something new to the existing body of scientific knowledge, stripping out extra-scientific influences, randomness, dead ends and mistakes. I believe that two distinct but simultaneous and interdependent mechanisms underlie this metamorphosis. I will call them rationalization and reframing, respectively.

The first mechanism, rationalization, corresponds to the rational reconstruction that Reichenbach pointed out systematically occurs in science (1938). Rationalization is typically achieved by fitting the scientific work into an idealized structure such as the one described in Table 1 and then subjecting it to successive reviews by the authors themselves or their peers. Indeed, as noted by Holmes (1987), scientific reports do not necessarily preserve the temporal order or logic of the underlying thought processes, nor the steps of the experimental methodology.

Table 1. Typical structure of a scientific article and the inter-relations between its main parts.

But Holmes’ description is conservative. In reality, what remains of the original research at the end of the article production process may be just a few selected results and the methods used to obtain them. Everything else can be recreated, including the very aims and objectives of the research. This is possible because, in addition to rationalization, there is a reframing of the scientific results into an appropriate context. This second mechanism is surprisingly simple: it is based on changing the order in which the article sections are created relative to the order in which they appear in the article. Indeed, as scientists know, the preparation of a scientific article does not begin with the Introduction; it begins in the middle, in the Results section. This allows the scientist to assess the potential of his/her results before investing more in the subject. If the assessment is positive, he/she moves on to the State-of-the-Art section. Here, the researcher tries to understand, by analysing articles from other scientists, if and how his/her results can be used to add something new and relevant to the body of scientific knowledge. Of course, this process can take place before the actual writing of the article, but it may also happen, and to some extent always does, during the writing. Only after concluding that the results can, in some way, improve existing knowledge will the researcher proceed to the Discussion section. Iteratively then, based on the analysis of the results and of the literature, the scientist builds the three core sections, Results, State-of-the-Art and Discussion, which contain the structured body of objectives and hypotheses of the article. Only afterwards is he/she able to approach the Introduction and the Conclusions. These are also generated iteratively, as they constitute the outer envelope of the work and must therefore be well interconnected. The Methods section is typically the last one to be produced because it depends on the results, and these are selected for inclusion in the article according to the value they revealed in building the previous sections.

The reconstruction of the research that takes place when a scientific article is created, and its two mechanisms, rationalization and reframing, can be glimpsed in the autobiographical accounts of scientists. François Jacob, for example, says that writing an article requires scientists ‘to purify the research of all affective or irrational dross, to get rid of any personal scent, any human smell’. He also mentions that it involves ‘[replacing] the real order of events and discoveries by what appears as the logical order, the one that should have been followed if the conclusion were known from the start’ (Jacob 1988). These two statements perfectly illustrate the mechanisms of rationalization and reframing, respectively.

Both rationalization and reframing have critical functions within the scientific process. The first ensures that the scientific article becomes a model of rationality. But the circular mode of production, through which the selected results are reframed, is no less vital. It secures the other structural quality of the scientific work: its effectiveness, i.e. the assurance that it will contribute to the advancement of knowledge. This is achieved by the a posteriori definition of a context within which the results actually obtained are relevant to the existing body of knowledge.

None of this constitutes fraud. Rather, it stems from the integrative nature of the scientific enterprise. I propose the term ‘integrative’ to denote the communitarian dynamic by which each scientist seeks to productively integrate his/her results into the whole of scientific knowledge, which is therefore permanently under construction. Scientists strive, first, to produce results with a potential for integration and, second, to create justifying contexts within which their results contribute to the communitarian construction of the body of scientific knowledge. The justifying context can follow any logical line: hypothetico-deductive, inductive, probabilistic or even purely falsificationist.

The results actually obtained, not the initial objectives of the research, are thus the driver of science. This is a structural difference between science and applied science. What matters in applied science is the problem that needs to be addressed, i.e. the initial objectives. Finding creative integration contexts for unexpected results, as science does, is therefore usually not an option. Rather, when things do not go as planned, it is crucial to understand why something did not work and to correct it so as to reach the objectives. This commitment to the initial goals also makes it essential to follow well-planned, down-to-earth research paths in applied science, even if at the expense of creativity and reach.

Kerr (1998) argued that the reconstruction of scientific practice in scientific publications creates an ‘inaccurate, distorted model of science’. However, if, as argued in this article, the reintegration of scientific results under new objectives is a mechanism underlying the fundamental aim of science, i.e. advancing knowledge, what attitude should we adopt? Should we try to understand such a mechanism, however strange and counterintuitive, or struggle to impose on it a bureaucratized model of science that will ultimately destroy it in the name of aesthetic coherence and a (possibly misguided) common interest? I think we should follow the first path.

5. Relevance of the Inconsistency: The Production of Science

So far, I have focused on establishing the existence, identifying the constitutive elements and exploring the mechanisms underlying the inconsistency between what scientists do in practice and what they report in their publications. Let us now try to understand the relevance of this inconsistency. For that, I propose a thought experiment that tries to answer the following question: under what conditions could the inconsistency not exist?

I will analyse the possibilities for each of the three elements, scientist, scientific practice and scientific publications.

Scientists, as we saw in the previous section, are subject to extra-scientific influences, many of an emotional nature. But could we, through careful and rigorous selection, find potential scientists with more ideal characteristics, able to follow strictly rational paths in their practice? The answer is no. The reason is that theoretical rationality (which includes scientific rationality) is inoperative per se, as the cognitive sciences have amply demonstrated. Damásio (1994), for example, illustrates the idea with the case of Phineas Gage, a competent railway construction foreman. Gage suffered an accident that damaged his left frontal lobe, an area of the brain responsible for managing emotion. Although his capacity for theoretical rational thinking was not affected, he lost the ability to work or even to manage his life at a basic level. Indeed, purely theoretical rational mental functioning is in reality a non-functional aberration, a disabling illness. It could, therefore, never cope with a demanding activity like science, which involves exploring complexity, the unknown and the unexpected, and using creativity to build at the frontier of the possible.

But couldn’t scientific practice be organized? Note that this question is not merely rhetorical. The organization of scientific activity has become an objective with the spread of managerial philosophies, the growing interest in the capitalist appropriation of science and its use as social justification. Habermas ([1967] 2018), for example, noticed the trend very early. Organization has been pursued through management approaches that aim to make research ever more focused (‘applied’), for example by subjecting it to planning and evaluation methods within which achieving the initial objectives becomes crucial. The aim is, of course, to obtain more potentially profitable results with less investment. However, the practical effect of this path has been an increasing bureaucratization of scientific activity, ethical distortions and scientific inefficiency, with a de facto transfer of resources from science to applied science, i.e. technology. This is because, as discussed in the previous section, science is based on the possibility of creatively reframing results into a context within which they can add to the body of knowledge, and that context is often different from the one initially hypothesized. Overall, the body of scientific knowledge is built collectively in directions that cannot be exactly predicted because they emerge from what Deleuze and Guattari ([1991] 1994) creatively called the chaos of infinite possibilities. That, of course, is incompatible with fixed predetermined objectives, which belong to technology, not science. Imposing such objectives does not allow the chaos of infinite possibilities to be creatively explored. It would make it hard, for example, to transform accidental occurrences and errors into possibilities, which is what enabled fundamental discoveries such as penicillin or radioactivity (Grinnell 2011). So the answer is no: it is not possible to eliminate the inconsistency between what scientists do and what they report in publications by organizing scientific practice.

What about scientific articles? Could their structure and production mode be changed so as to eliminate the inconsistency? We could, for example, describe the practice more faithfully, centre the work on the initial objectives, follow chronological lines, mention errors and dead ends. But would such a change be of interest?

The answer to this question has three angles. The first is that the processes of rationalization and reframing of results according to appropriate (often new) objectives take place, as discussed in Section 4, in order to build two fundamental characteristics of science, rationality and effectiveness, respectively. And these characteristics would be compromised if, instead, the article remained attached to the initial objectives and followed the convoluted, sometimes chaotic, lines of the underlying practice.

The second angle concerns the nature of the scientific contribution conveyed by the article. Indeed, science is not the raw experimental results that come out of practice. Science is those results in a context. It is not just the numerical data or experimental patterns; it is those data and patterns framed according to hypotheses and in relation to reference theories. The same results can even acquire different meanings depending on the context. The scientist’s prior knowledge, for example, influences how he/she views the data (Bell 2009). And it is during the writing of the article that much of this process, the creative process of science, actually takes place (Holmes 1987; Michaelson 1987). Indeed, writing itself facilitates complex thinking, restructures ideas and allows demanding cognitive tasks to be performed (Menary 2007). This means that scientific publications are not just a means of communicating results or persuading peers, although the contrary has often been claimed (Reichenbach 1938; Harmon 1989; Schickore 2008; Bell 2009; Howitt and Wilson 2014). Scientific publications are the science itself. Changing the structure of scientific articles, making them more descriptive, is therefore not a mere formal issue; it is changing science itself.

A third angle is that of the access of peers to information. In their current form, scientific publications are anchored to practice through the Results section, around which the article is built, and the Methods section, which stores the experimental details relevant to understanding how the results were obtained. The contextualization of these results in relation to existing theories and facts happens in the other sections of the article (Table 1). But this was not always so. In ancient science, and even during the early days of modern science, there were no standard rules. There was not even a clear distinction between scientific writing and literature. Natural philosophers used poetic images and rhetorical strategies to convince readers. Findings were communicated through short news stories, sometimes including personal observations unrelated to the research, or through longer articles, which often followed a historical narrative in which experiments were linked to specific moments and places (Harmon 1989). The current standardized structure emerged with professionalization and the consequent increase in the number of active scientists in the 19th century. It was aimed at facilitating the writing of articles and, at the same time, their understanding by peers (Harmon 1989). The latter is fundamental because modern science is a complex communitarian enterprise. Whether it would make sense to return to a more primitive science, one more closely describing practice and the underlying lines of thought, at the expense of these qualities, therefore needs to be carefully considered.

In sum, the discussion above shows that it is not possible to eliminate the inconsistency between what scientists do in their practice and what they report in their articles by changing any of the three elements involved: the scientist, the scientific practice or the scientific publication. Fifty years ago, faced with the messy reality of the practice of science, Feyerabend ([1975] 1993) argued that scientific rationality does not exist. I believe he reached this conclusion because he did not clearly distinguish the three elements just mentioned. I argue, instead, that scientific rationality exists but is a characteristic of the scientific publication, not of the scientist or of his/her practice. The inconsistency we have been discussing between scientific practice and scientific publications underlies the process through which the rationality of science is produced. The same applies to the other fundamental characteristic of science, effectiveness. Science can be said to be effective because the scientific results presented in articles contribute to the solution of problems. However, as discussed, this is only possible because scientific results can be disconnected from their initial objectives, with respect to which they are often inconclusive, and reframed so as to respond to other pertinent problems. This change of objectives contributes to the inconsistency, showing that, in this respect too, the inconsistency is a necessary part of the production process of science.

6. Conclusion

Scientific articles are not a means of communicating science as it is practiced, but the result of a process that transforms emotional forces such as desire, without which nothing can be produced by humans, into science. The inconsistency between what scientists do in practice and what they report in their articles is the difference in potential underlying that transformation. The inconsistency is generally attributed to negative factors, ranging from unethical behaviour of scientists to the disordered nature of their practice or the imposition of unrealistic publication formats by scientific journals. However, the analysis carried out in this article indicates that this negative judgement is not necessarily correct. The inconsistency is actually part of the science production process. On the one hand, we have scientists and their emotional drive, reflected in a practice that is not necessarily objective or logical. That occurs because theoretical rationality per se is inoperative and because, in reality, there is no other way to explore the unknown in its infinite possibilities. It is a process where even mistakes and chance can turn out to be productive. On the other hand, we have the scientific article, the archetype of order and rationality, which presents the scientific results and explains their meaning in relation to existing knowledge. Between one and the other lies the science production process, which bridges the inconsistency and links the three elements involved, the scientist, his/her practice and the scientific publication, transforming a creative practice into science.

The production of science is based on a reconstruction of the research through two mechanisms: (i) rationalization and (ii) the reframing of results according to post hoc objectives. This dual process typically occurs during the preparation of scientific articles because writing favours complex thinking and the logical structuring of ideas. This means that scientific articles are not just a vehicle for communicating science; they are the science itself. Science, indeed, is not the raw results that come out of practice, but those results contextualized in a way that contributes to the construction of scientific knowledge. The results, regardless of how they came about, and not the initial objectives, are the driver of the scientific production process. This represents a fundamental difference in relation to operative activities, such as engineering and technology, and reveals a quality of the scientific enterprise that I propose to call ‘integrative’: the scientific enterprise has an integrative dynamic, in which each scientist seeks to productively integrate his/her results into the whole of scientific knowledge. The integration is made possible by the two mechanisms, rationalization and reframing, through which the raw results of scientific practice are embedded in a context that makes them relevant to scientific knowledge.

We can therefore conclude that the inconsistency between what scientists do and report is not a hindrance or an aspect to be corrected. It is a structural characteristic of modern science.

One final note on the science/applied science dichotomy. Applied science is not science; it is a dimension of technology. The two live together, but they neither have the same objectives nor respond to the same incentives. If this is taken into account, their coexistence can be synergistic rather than mutually destructive. It would therefore be important to stop trying to make science conform to initial objectives, and to stop demanding scientific articles from applied science as if they were indispensable to technological activities. Note that applied science can easily be corrupted because it has neither the security mechanism of science (the possibility of reintegrating results into a new context in which they do advance knowledge) nor the umbilical connection to noumenal reality that the practical dimensions of engineering and medicine have. Let us therefore refrain from expecting applied science to operate in the same manner as science while simultaneously providing practical solutions as technology. It will not happen. The result will be to fill the scientific literature with wishful thinking.

Acknowledgments

I am grateful to Olga Pombo for the interesting discussions and helpful suggestions.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Notes on contributors

Teresa Diaz Gonçalves

Teresa Diaz Gonçalves is a researcher at the National Laboratory for Civil Engineering (LNEC), in Lisbon. She graduated in civil engineering in 1992 and obtained a PhD from Instituto Superior Técnico (IST) in 2007. She was awarded the Manuel Rocha research prize and is the author or co-author of nearly two hundred publications, including journal articles, conference communications, and confidential consultancy reports. Her research interests focus on ecomaterials, such as those based on raw earth or vegetal fibers, vernacular building practices, the conservation of architectural heritage, in particular the decay of porous building materials due to salt crystallization, and the physics of drying. She is also interested in methodological and philosophical issues in the practice of science and technology.

References

  • All European Academies (ALLEA). 2017. The European Code of Conduct for Research Integrity. Berlin: ALLEA. https://www.allea.org/wp-content/uploads/2017/05/ALLEA-European-Code-of-Conduct-for-Research-Integrity-2017.pdf.
  • Bell, Randy L. 2009. “Teaching the Nature of Science: Three Critical Questions.” Best Practices in Science Education 22: 1–6.
  • Damásio, António. 1994. Descartes’ Error. Emotion, Reason, and the Human Brain. New York: Avon Books.
  • Deleuze, Gilles, and Felix Guattari. [1972] 2003. Anti-Oedipus. Capitalism and Schizophrenia. London: The Athlone Press.
  • Deleuze, Gilles, and Felix Guattari. [1991] 1994. What is Philosophy? New York: Columbia University Press.
  • Desmond, Hugh, and Kris Dierickx. 2021. “Research Integrity Codes of Conduct in Europe: Understanding the Divergences.” Bioethics 35 (5): 414–428. doi:10.1111/bioe.12851.
  • Dwan, Kerry, Carrol Gamble, Paula R. Williamson, Jamie J. Kirkham, and Reporting Bias Group. 2013. “Systematic Review of the Empirical Evidence of Study Publication Bias and Outcome Reporting Bias—An Updated Review.” PloS One 8 (7): e66844. doi:10.1371/journal.pone.0066844.
  • Einstein, Albert. 1918. “Principles of Research.” Address to the Physical Society, Berlin, for Max Planck’s Sixtieth Birthday. https://www.site.uottawa.ca/~yymao/misc/Einstein_PlanckBirthday.html.
  • Feyerabend, Paul. [1975] 1993. Against Method: Outline of an Anarchistic Theory of Knowledge. London: Verso.
  • Gingras, Yves. 2021. “‘Science’ Has Always Been Evaluated … and Will Always Be.” Social Science Information 60 (3): 303–307. doi:10.1177/05390184211025204.
  • Grinnell, Frederick. 2011. Everyday Practice of Science: Where Intuition and Passion Meet Objectivity and Logic. New York: Oxford University Press.
  • Gross, Alan G., Joseph E. Harmon, and Michael S. Reidy. 2002. Communicating Science: The Scientific Article from the 17th Century to the Present. West Lafayette, Indiana: Parlor Press.
  • Habermas, Juergen. [1967] 2018. Ciência e Técnica como Ideologia. Lisbon: Edições 70.
  • Hagstrom, Warren O. [1965] 2019. The Scientific Community. Internet Archive. https://archive.org/details/scientificcommun0000hags.
  • Harmon, Joseph E. 1989. “The Structure of Scientific and Engineering Papers: A Historical Perspective.” IEEE Transactions on Professional Communication 32 (3): 132–138. doi:10.1109/47.31618.
  • Harwood, William S. 2004. “A New Model for Inquiry: Is the Scientific Method Dead?” Journal of College Science Teaching 33 (7): 29–33.
  • Holmes, Frederic L. 1987. “Scientific Writing and Scientific Discovery.” Isis 78 (2): 220–235. doi:10.1086/354391.
  • Howitt, Susan M., and Anna N. Wilson. 2014. “Revisiting ‘Is the Scientific Paper a Fraud?’: The Way Textbooks and Scientific Research Articles are Being Used to Teach Undergraduate Students Could Convey a Misleading Image of Scientific Research.” EMBO Reports 15 (5): 481–484.
  • Jacob, François. 1988. The Statue Within: An Autobiography. New York: Cold Spring Harbor Laboratory Press.
  • Jain, Ravi, Harry C. Triandis, and Cynthia W. Weick. 2010. Managing Research, Development and Innovation: Managing the Unmanageable. New Jersey: John Wiley & Sons.
  • Kalberg, Stephen. 1980. “Max Weber’s Types of Rationality: Cornerstones for the Analysis of Rationalization Processes in History.” The American Journal of Sociology 85 (5): 1145–1179. doi:10.1086/227128.
  • Kerr, Norbert L. 1998. “HARKing: Hypothesizing After the Results are Known.” Personality and Social Psychology Review 2 (3): 196–217. doi:10.1207/s15327957pspr0203_4.
  • Kuhn, Thomas S. [1962] 2012. The Structure of Scientific Revolutions. Chicago: The University of Chicago Press.
  • Marewski, Julian N., and Lutz Bornmann. 2018. “Opium in Science and Society: Numbers.” arXiv Preprint. arXiv: 1804.11210.
  • Medawar, Peter. 1964. “Is the Scientific Paper a Fraud?” BBC Talk. http://www2.fct.unesp.br/docentes/carto/enner/PPGCC/Redacao/artigos/Is%20the%20scientific%20paper%20a%20fraud%3F.pdf.
  • Menary, Richard. 2007. “Writing as Thinking.” Language Sciences 29 (5): 621–632. doi:10.1016/j.langsci.2007.01.005.
  • Michaelson, Herbert B. 1987. “How Writing Helps R&D Work.” IEEE Transactions on Professional Communication, PC-30 (2): 85–86. doi:10.1002/9781119134633.ch43.
  • The Office of Research Integrity (ORI). 2021. Definition of Research Misconduct. https://ori.hhs.gov/definition-research-misconduct.
  • Reichenbach, Hans. 1938. Experience and Prediction. Chicago: University of Chicago Press.
  • Rennie, Drummond, and Annette Flanagin. 1992. “Publication Bias: The Triumph of Hope Over Experience.” JAMA 267 (3): 411–412.
  • Resnik, David B., Talicia Neal, Austin Raymond, and Grace E. Kissling. 2015. “Research Misconduct Definitions Adopted by US Research Institutions.” Accountability in Research 22 (1): 14–21. doi:10.1080/08989621.2014.891943.
  • Schickore, Jutta. 2008. “Doing Science, Writing Science.” Philosophy of Science 75 (3): 323–343. doi:10.1086/592951.
  • Ward, Steven C. 2012. Neoliberalism and the Global Restructuring of Knowledge and Education. New York: Routledge.