Research Article

A construct validity analysis of the concept of psychological literacy

Article: 1922069 | Received 30 Apr 2020, Accepted 24 Jul 2020, Published online: 23 May 2021

ABSTRACT

Objective

Psychological literacy has become influential as a concept to promote the value of a psychology degree to potential students and employers, particularly in the United States, the United Kingdom, and Australia. This influence is based upon an assumption that the concept of psychological literacy is valid. The objective of this paper is to examine relevant literature, identifying possible issues in providing evidence of validity for the construct.

Method

Messick’s unified validity framework was utilised to evaluate threats to the overall construct validity of psychological literacy. Broad literature, including empirical and case studies, reports, and opinion papers, was included as source material for analysis. A content analysis was conducted to determine the level of consensus for proposed psychological literacy attributes.

Results

There was limited consensus for most attributes in the examined literature, which compromises construct validity according to Messick’s framework. However, five terms were cited in most papers. Consolidating these terms provides a conceptualisation of psychological literacy as the ability to apply scientific principles to psychology concepts in work and personal contexts.

Conclusion

Possible solutions to resolve construct validity threats are offered. Refining the concept requires further exploration of perceptions among key stakeholders such as psychology teachers, students, and employers.

KEY POINTS

What is already known about this topic:

  1. A substantial body of literature has been published that discusses the construct of psychological literacy, but limited research (n=7) has measured the construct.

  2. A systematic narrative review of psychological literacy detailed concerns over multiple conceptualisations in studies that measured the construct.

  3. These prior findings revealed a need to evaluate the validity of the construct.

What this topic adds:

  1. This paper addresses the need for a construct validity assessment as identified in a previous systematic review.

  2. The validity assessment includes broader literature that was not included in the previous systematic review of measurement studies.

  3. A roadmap for future research is provided, identifying areas that must be addressed for the construct of psychological literacy to have validity.

Psychology is a STEM+ discipline, purported to enable the development of a range of scientific, interpersonal, and communication skills (Trapp et al., Citation2011). As a result, psychology graduates should be well placed to effect change in a range of contexts including personal and professional levels (Hulme et al., Citation2015). However, proposed funding cuts to university programmes in Australia (including the behavioural sciences) have increased pressure to demonstrate the employability and “job-readiness” of psychology graduates (Department of Education, Citation2019; Department of Education, Skills and Employment, Citation2020). Thus, clear evidence of the skills, knowledge, and abilities that psychology graduates bring to the workforce is essential (Goedeke & Gibson, Citation2011).

Psychological literacy refers to the set of skills and attributes that students develop over the course of an undergraduate psychology degree (McGovern et al., Citation2010). This concept has been gaining traction over the last decade and is a potentially unifying concept to articulate what a psychology degree uniquely adds, beyond what is developed in other disciplines. It could provide a lens to consider the value that a programme has to the workforce, allowing the discipline to rationalise courses and resources. To substantiate the concept, and demonstrate construct validity, evidence is needed to show how a psychology degree is distinct from other degree programmes. There must be a clear agreement in the attributes that equate with literacy in psychology, so we can be sure that our claims about the uniqueness and values of our programmes are valid.

Psychological literacy has been embraced as a useful concept by accreditation bodies such as the Australian Psychology Accreditation Council (APAC) and the British Psychological Society (BPS). These organisations have been influenced by the concept in the development of standards that must be met for a psychology degree to be accredited. These standards stipulate the “knowledge, skills and values that underpin students’ psychological literacy and which enable them to apply psychology to real life contexts” (British Psychological Society [BPS], Citation2019, p. 19). By influencing accreditation requirements, the concept of psychological literacy drives curricula. Where concepts that dictate how programmes and learning outcomes are designed are not valid, the ramifications may be profound and serious.

There is an assumption that a standardised understanding of psychological literacy exists. A broad set of psychological literacy attributes is provided by McGovern et al. (Citation2010): (1) having a well-defined vocabulary and basic knowledge of the critical subject matter of psychology; (2) valuing the intellectual challenges required to use scientific thinking and the disciplined analysis of information to evaluate alternative courses of action; (3) taking a creative and amiable sceptic approach to problem-solving; (4) applying psychological principles to personal, social and organisational issues in work, relationships and the broader community; (5) acting ethically; (6) being competent in using and evaluating information and technology; (7) communicating effectively in different modes and with many different audiences; (8) recognising, understanding and fostering respect for diversity; and (9) being insightful and reflective about one’s own and others’ behaviour and mental processes. Although the McGovern definition is influential, it is not the only conceptualisation of psychological literacy. Results from a recent systematic review (Newell et al., Citation2019) identified different conceptualisations in studies that aimed to measure psychological literacy. The extent to which different conceptualisations exist beyond measurement studies is unknown and is an important area for investigation, as attributes must be consistent in the literature for the concept of psychological literacy to be useful and valid.

Construct validity must incorporate various areas of validity and “subsumes other forms of validity evidence (e.g., content, convergent, discriminant validities)” (Stephens et al., Citation1995, p. 1022). Messick (Citation1989) developed an umbrella construct validity framework that includes these types of validity as well as concerns for reliability. Messick explains the framework as “an integrated evaluative judgment of the degree to which empirical evidence and conceptual rationales support the adequacy and appropriateness of inferences” (p. 13). As Poldner, Simons, Wijngaards and van der Schaaf explain, “in the educational measurement literature, the conceptual framework of construct validity by Messick is universally used … especially in the case of complex cognitive constructs” (Poldner et al., Citation2011, p. 23). The framework is well known and employed in education research studies (e.g., Baartman, Bastiaens, Kirschner & van der Vleuten, Citation2007; Poldner et al., Citation2011). In the current study, Messick’s framework was used to assess issues for establishing validity of the construct within papers, and between conceptualisations. The approach of identifying threats to construct validity has been applied in various disciplines (Crooks et al., Citation1996), particularly in education policy review (Neall & Tuckey, Citation2014; Stobart, Citation2001).

Aim and research questions

A recent systematic review paper (Newell et al., Citation2019) recommended an analysis of psychological literacy construct validity as a priority for future research. This paper directly addresses this research gap by identifying any threats to Messick’s (Citation1995) six aspects of construct validity across a broad range of sources. We investigated the current state of construct validity in all sources, including an analysis of conceptualisations in measurement studies, reports, case studies, editorials, opinion pieces, and discussion papers. The aim was to understand what psychological literacy researchers agree are the unique attributes of psychology degrees, providing clarity over agreement between sources. This was to determine Messick’s “content” aspect. A consequence of this analysis is also the identification of what is not consistent between conceptualisations. If this is substantially different between authors, it is indicative of a threat to construct validity, and consequently, means that the concept cannot be used in a meaningful way. We also aimed to determine threats to gathering evidence about psychology’s point-of-difference against other disciplines.

Method

Data collection

An expert librarian was consulted to construct a strategy of search terms and relevant databases. A title and abstract search of the terms “psychological literacy” and “psychologically literate” was conducted in selected databases (Scopus, ProQuest, PsycINFO [Ovid], Web of Science and Google Scholar) in September and November 2018, September 2019, and February 2020. Authors were also contacted in May 2020 to identify relevant papers and papers in press. In total, 35 papers were identified for analysis.

Analysis procedure

Evidence of threats to the construct validity of psychological literacy was evaluated using Messick’s (Citation1995) six validity aspects, which are outlined below. For the content aspect, a two-step process involved a manual scanning of documents for their conceptualisations of psychological literacy, as well as a text search for the terms “psychological literacy” and “psychologically literate” to ensure rigour in the document search (matching the database search strategy). For the remaining five aspects, a collection table for each aspect was created to store cases of threats. From these data tables, the most representative case(s) of threats to establishing validity in psychological literacy studies (i.e., what authors classed as important issues to address in future research) are outlined in the Results.
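As an illustration of the text-search step, the sketch below scans a folder of extracted paper texts for the two search terms. It is a minimal sketch only: the `papers/` directory and plain-text files are assumptions for illustration, not part of the study’s actual materials.

```python
# Minimal sketch of the text-search step, assuming each paper has been
# saved as a plain-text file in a hypothetical "papers/" directory.
from pathlib import Path

TERMS = ("psychological literacy", "psychologically literate")

def papers_mentioning(corpus_dir: str) -> list[str]:
    """Return the names of files whose text contains either search term."""
    hits = []
    for path in sorted(Path(corpus_dir).glob("*.txt")):
        text = path.read_text(encoding="utf-8").lower()
        if any(term in text for term in TERMS):
            hits.append(path.name)
    return hits

print(papers_mentioning("papers"))
```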

Content aspect

This aspect requires specification of the knowledge, skills or dispositions that lie within the boundaries of psychological literacy. To satisfy the content aspect, there must be clear specification of what is relevant and representative of psychological literacy (Messick, Citation1995), with the exclusion of irrelevant content (Baartman et al., Citation2007).

A content analysis of conceptualisations was conducted (Appendix), using Qualitative Content Analysis (QCA; Schreier, Citation2012). QCA categories were selected (see Table 1) by manually identifying relevant terms used across all 35 sources. The conceptualisations collated in the Appendix were searched using the search terms for each category. Sentences that included reference to these categories were collected (with author details) in separate data tables for each category. Where synonyms were needed to capture a category, for example, “knowledge”, “principle”, “underst”, “content”, “subject”, these results were collated into one table, with duplicates removed. The frequency of papers in each category was counted. The list of frequencies is presented in Table 1 with the search terms for each category (i.e., “appl” retrieved applies, apply, application, applications). The second and third authors confirmed that the search terms were appropriate extractions from the conceptualisations in the Appendix. Where the first author determined that synonyms were required to capture the category of interest, the second and third authors confirmed that these terms were appropriate to combine into one category. Categories were also manually checked by the first author for relevance, with irrelevant entries removed. For example, the category of “mental” aimed to identify conceptualisations discussing mental health or mental processes, but “compartmentalised” was also returned during the search, which was deemed irrelevant and not coded to the category. Analyses also considered the different nature of categories, with conceptualisations from the Appendix relating to both attributes and contexts listed as relevant to psychological literacy.

Table 1. Qualitative content analysis frequencies
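To make the counting procedure concrete, the sketch below reproduces the stem-based matching in miniature. The category stems follow the examples given above, but the two conceptualisation strings are invented placeholders, not extracts from the reviewed papers.

```python
# Minimal sketch of stem-based category counting, assuming invented
# conceptualisation texts; category stems follow examples in the text.
CATEGORIES = {
    "knowledge": ["knowledge", "principle", "underst", "content", "subject"],
    "application": ["appl"],
    "science": ["scien", "empiri"],
}

# One entry per paper: (author key, conceptualisation text).
conceptualisations = [
    ("Author A", "Applying psychological principles to everyday problems."),
    ("Author B", "A scientific understanding of behaviour."),
]

def category_frequencies(entries, categories):
    """Count papers mentioning at least one stem per category (duplicates removed via set)."""
    freqs = {}
    for name, stems in categories.items():
        papers = {author for author, text in entries
                  if any(stem in text.lower() for stem in stems)}
        freqs[name] = (len(papers), sorted(papers))
    return freqs

for cat, (n, authors) in category_frequencies(conceptualisations, CATEGORIES).items():
    print(f"{cat}: n = {n} ({', '.join(authors)})")
```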

Substantive aspect

“Substantive” construct validity appraises whether measures authentically capture the construct of interest. For example, if psychological literacy is conceptualised as including “problem solving”, then the measure must objectively assess “problem solving” (e.g., ability to actually solve a problem), rather than a proxy (e.g., self-reported perception of the ability to problem-solve). Papers were examined for threats to this aspect and a data table was developed to contain all cases and literature extracts. For the following four aspects, data tables were similarly constructed.

Structural aspect

The structural aspect evaluates whether the approach to measuring a construct matches an accepted conceptualisation of that construct. This approach should also include any proposed relationships between psychological literacy attributes in any conceptualisation.

Generalisability aspect

Generalisability is concerned with reproducibility of results in other conditions or with other participants (Poldner et al., Citation2011), or the ability to generalise performance assessment across measures of different aspects of the same attribute (Linn et al., Citation1991; Messick, Citation1994).

External aspect

“External” refers to evidence of convergent (correlation between the same psychological literacy attribute across related studies) and discriminant validity (distinguishing unrelated constructs from psychological literacy) (Campbell & Fiske, Citation1959).
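The convergent/discriminant logic can be stated as a simple correlational check, sketched below on simulated data (the latent trait and noise levels are assumptions for illustration): two measures of the same attribute should correlate highly, while a measure of an unrelated construct should not.

```python
# Hedged sketch of the Campbell and Fiske (1959) logic on simulated data:
# convergent validity  = high correlation between two measures of the same attribute;
# discriminant validity = low correlation with an unrelated construct.
import numpy as np

rng = np.random.default_rng(0)
n = 200
trait = rng.normal(size=n)                         # latent attribute, e.g., "application"
measure_a = trait + rng.normal(scale=0.5, size=n)  # scale A of the attribute
measure_b = trait + rng.normal(scale=0.5, size=n)  # scale B of the same attribute
unrelated = rng.normal(size=n)                     # conceptually distinct construct

convergent_r = np.corrcoef(measure_a, measure_b)[0, 1]
discriminant_r = np.corrcoef(measure_a, unrelated)[0, 1]
print(f"convergent r = {convergent_r:.2f} (should be high)")
print(f"discriminant r = {discriminant_r:.2f} (should be near zero)")
```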

Consequential aspect

This aspect relates to the consequences of assessing psychological literacy, such as improved student learning, and includes concerns for bias and social demands related to testing. To satisfy the consequential aspect, results cannot be attributed to biases due to method design or demands arising from testing conditions.

Results

Evidence of threats to construct validity was collated for each aspect. Where threats to Messick’s six aspects were identified, at least one case is presented in detail, with implications for establishing psychological literacy construct validity outlined.

Test of threats to the content aspect

An analysis of the various conceptualisations of psychological literacy (as presented in the Appendix) revealed inconsistency in how psychological literacy is conceptualised. Five attributes were common among 50% or more of papers (more than 17 papers): knowledge, application, employ, science, self/person. What is gleaned from the qualitative content analysis is that the majority of authors agree on a version of the following as a conceptualisation of psychological literacy: apply scientific principles to psychology concepts in work and personal contexts. Table 1 presents the categories, search terms, frequency of authors and a list of authors for each category. Knowledge (knowledge, principle, understanding, content, subject) was referenced in 26 papers (see Table 1 for the author list). Appl (application, apply, applied, applies) was referenced by 24 papers in their conceptualisation of psychological literacy. Employ (employed, employment, employable, employability, employer, employ), organis (organisation, organisational), profession (profession, professional, professions), work (working, work, worker) and career (career, careers) were terms used collectively by 22 papers to conceptualise psychological literacy. Within these 22 references (Table 1), n = 15 refer to a context (i.e., a work/professional setting), and n = 10 reference the term as an attribute: employability (n = 2), psychology as a helping profession (n = 2), professional/career capabilities, recognise value to potential employers, professional values, communicate in a professional manner, pre-professional communication skills, knowledge of the profession, professional project work. Scien (scientific, science, scientist) and empiri (empirical, empiricist, empiricism) were referenced (collectively) by 18 authors (Table 1). Self (self) and person (personal, person, persons) were identified in 18 papers; n = 16 references refer to the term as a context and n = 6 reference the term as an attribute: self-awareness (n = 4), self-management (n = 2).

Various attributes or skills related to professional conduct were identified, such as generic, generalisable, work-ready/integrated skills or employability. For example, Turner and Davila-Ross (Citation2015) conceptualise psychological literacy as the “ability to communicate research findings and scientific opinions to others in an accurate and professional manner” (p. 64). Some authors treat employability as a synonym for psychological literacy (Hill & Rennoldson in Taylor & Hulme, Citation2018a). Kent and Skipper (Citation2015) recognise employability as an aspect of psychological literacy, by using psychological “knowledge to help themselves manage the recruitment and selection process” (p. 12). Taylor and Hulme (Citation2018b) introduce case studies by Cachia et al. and by Reddy that view psychological literacy as “an approach to employability” (p. 366). Also, Cranney et al. (Citationin press) discuss professional knowledge, skills and abilities as reflective of psychological literacy, stating “professional/career KSAs are the evidence-based skills that are relevant to employability (e.g., interpersonal skills, team-work, communication, critical thinking)” (p. 7). Two more categories were close to a majority, with “problem-solving” (search terms consisting of: issue, problem, solv, solution) identified in 16 conceptualisations, and 17 papers including “community” (society, social, societal, community, communities) as a relevant context to demonstrate psychological literacy.

Given that only a minority of attributes were common to 50% or more of papers, there is considerable variation in attributes. For example, perspective, team, and wellbeing were each mentioned by only one paper; other attributes were mentioned only marginally more often: cultur(e,al) (n = 2), divers(e,ity) (n = 7), respect(ful) (n = 7). Citizens/citizenship was mentioned in the conceptualisation of psychological literacy of six papers, and the role of psychology as a helping profession in four (n = 4). Less consensus exists for “values” (n = 3), “attitudes” (n = 1), and “dispositions” (n = 2). The content aspect of construct validity requires that measures are both relevant and representative of psychological literacy. The results shown in Table 1 suggest that there is little agreement over which attributes are relevant and representative of psychological literacy, which is problematic for the construct validity of published research.

Test of threats to the substantive aspect

A serious threat is the differing ways of defining psychological literacy (as presented in the Appendix). The limited consensus outlined in the content aspect (above) is problematic for the establishment of substantive construct validity, as this aspect requires that assessment methods align with what was intended to be measured. Without a unified conceptualisation of psychological literacy, it is difficult to ensure that the methods selected to assess it align with the construct. However, other threats to the construct validity of papers will be presented to identify future research considerations.

Psychological literacy skill development assessed by self-report represents a major threat to substantive construct validity. In Newell et al. (Citation2019) the majority of studies utilised self-report measures of psychological literacy (Boneau, Citation1990; Burton et al., Citation2013; Heritage et al., Citation2016; Morris et al., Citation2013; Roberts et al., Citation2015; Taylor & Coady, Citation2019; Tomcho & Foels, Citation2017). Skill development measured by self-report does not provide evidence that students can demonstrate skills, as it is unclear whether student perception of self-competency equates to competency without skill demonstration.

Test of threats to the structural aspect

Two papers (Burton et al., Citation2013; Roberts et al., Citation2015) aimed to address the need for clarifying the structure of psychological literacy attributes through factor analyses, with a comparison of their results presented in Table 2. On inspection, Burton et al.’s “general psychological literacy” factor resembles Roberts et al.’s “generic graduate attributes”. There is agreement over “applying psychological principles to personal, social, and organisational issues in work, relationships, and the broader community” (attribute four; McGovern et al., Citation2010, p. 11) being unique to psychology; it is reported as “domain specific” in Burton et al. and as a separate latent factor (“psychology as a helping profession”) in Roberts et al. However, discipline knowledge (attribute one) has been classified as general in Roberts et al., but domain specific in Burton et al. Two issues for construct validity are then evident: a lack of conceptual agreement (as presented in the content section above), and limited agreement on the structure of psychological literacy from empirical studies. There is some agreement between the conceptualisations in the Appendix and the factor analyses in Table 2. A scientific approach and information literacy are classed as generic (Burton et al.; Murdoch, Citation2016; Roberts et al.). An area of contradiction, however, is the classification of “application”. Application has been classified as psychology-specific in factor analyses, but as a generic skill by Murdoch. This depends on the definition of “application”, as Murdoch views this skill as involving communication, group process skills and process thinking. However, there is consensus that communication skills are generic, as documented in Burton et al., Cranney, Morris et al. (Citation2012), and Murdoch. These results demonstrate that more clarification is needed to identify what is unique to psychology, and what is a complementary, generic attribute.

Table 2. Comparison of factor structures
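For readers less familiar with the technique, the sketch below shows the general form of such an exploratory factor analysis on simulated item responses. It is illustrative only: the two latent factors, six items, and the use of scikit-learn are assumptions, not the published studies’ data or software.

```python
# Illustrative exploratory factor analysis on simulated responses:
# six items constructed to load on two latent factors ("generic" vs
# "psychology-specific"), then recovered with a two-factor model.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
n = 300
generic = rng.normal(size=n)    # latent generic-attribute factor
specific = rng.normal(size=n)   # latent psychology-specific factor

def noise():
    return rng.normal(scale=0.6, size=n)

items = np.column_stack([
    generic + noise(), generic + noise(), generic + noise(),
    specific + noise(), specific + noise(), specific + noise(),
])

fa = FactorAnalysis(n_components=2, rotation="varimax", random_state=0)
fa.fit(items)
print(np.round(fa.components_.T, 2))  # item-by-factor loading matrix
```

Changing the simulated item set changes the recovered loadings, which mirrors the concern discussed below that extracted factors are an artefact of the selected measures.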

Test of threats to the generalisability aspect

Generalisability is concerned with reproducibility. Replication is a problem in part because of the paucity of psychological literacy research. In the studies where replication has been attempted, there is insufficient evidence for replicability. There was only one attempt at replication reported in a review of measurement studies (Newell et al., Citation2019), where Roberts et al. (Citation2015) attempted to replicate the factor structure from their first sample but were unable to replicate these results in their second sampling phase.

Test of threats to the external aspect

Convergent validity

This aspect relies on adequate conceptualisation of the concept; specifically, a succinct delineation of the structural dimensions of the concept: a consistent understanding of relationships, and any hierarchy within the conceptualisation. A notable issue lies in the case study literature (Taylor & Hulme, Citation2018a, Citation2018b), where studies aimed to develop a myriad of skills (broadly defined under the banner of psychological literacy). The varied beliefs about what psychological literacy includes (as outlined in the Appendix) make the exploration of relationships between attributes unachievable. Although some studies aimed to outline these relationships, factor analyses demonstrate limited convergence, as observed in Table 2, and represent a threat to convergent validity. Additionally, as reported previously (Newell et al., Citation2019), there is a lack of congruence between measurement approaches, with Roberts et al. (Citation2015) reporting a lack of convergent validity between single measures from Burton et al. (Citation2013) and multiple measures matched to each of McGovern et al.’s (Citation2010) attributes. This example is relevant to the assessment of construct validity as the lack of convergent validity is a notable threat.

Discriminant validity

In Newell et al. (Citation2019), there was a reported lack of discriminant validity, which has implications for the establishment of psychological literacy construct validity. In summary, a threat to discriminant validity was observed in a study that aimed to use psychological literacy measures to discriminate between psychology students and speech pathology students (Heritage et al., Citation2016). The measures selected for this study only weakly differentiated between psychology and non-psychology cohorts, so the external aspect is limited by this threat. Recent research conducted by Taylor and Coady (Citation2019) also aimed to demonstrate discriminant validity for psychological literacy between students of five disciplines (business, health, IT, law, and psychology). The measures selected for the study were not able to significantly differentiate between disciplines, except for “knowledge of psychological concepts”, where business students self-rated significantly higher than other disciplines (including psychology). Whether the self-report measure represented an appropriate proxy is discussed under the substantive aspect, but this study did not provide definitive evidence of discriminant validity for psychological literacy. Reports that suggest students should develop “x” attribute do not discuss how non-psychology students differ in the development of the attribute. Similarly, case studies do not mention whether other cohorts or disciplines are also exposed to skill development of the desired attribute(s).

Test of threats to the consequential aspect

When developing a measure of psychological literacy, the methodology used is important for the management of bias or social threats. Some studies have employed a rating of concepts that the researchers deemed to be essential to psychology (Boneau, Citation1990; Tomcho & Foels, Citation2017). Boneau did permit the addition of concepts for rating, but Tomcho and Foels did not permit raters to add to their prescribed lists of concepts available for rating. This is important for validity as there may have been concepts more important to raters than those presented. The approach taken by Tomcho and Foels represents a threat to the consequential aspect by not allowing educators to add to the list of concepts. If fewer concepts are provided, the chance of inflated ratings for the included concepts increases.

A second case demonstrates a consequential aspect threat: Morris et al.’s (Citation2013) participants included first- to third-year psychology students. First- and third-year psychology students completed cornerstone and capstone courses labelled as “specialist units”, where “the concept of PL [psychological literacy] is introduced. In the third-year capstone unit, students are asked to document in detail their development of each GA, and in their final journal task they are asked to explicitly reflect on PL, which has been discussed in the lecture and practical classes” (p. 55). As the importance of psychological literacy and graduate attributes was emphasised in cornerstone and capstone courses, the authors expected that results were “enhanced by completing specialist psychology units” (p. 55). The results refer to students’ self-report measures of awareness, development of, and importance of psychological literacy and graduate attributes. Morris et al. report that “a greater proportion of MajorSp [“major in psychology and had completed the specialist cornerstone unit, and in the case of Year 3 students, the capstone unit”, p. 56] students were aware of the concept of PL than was the case in the other two groups [NoMajor (no psychology major); Major (major in psychology, but without the cornerstone unit completed)]” (pp. 57–59). Similar results were reported between students that had taken the capstone/cornerstone units (labelled as “MajorSp”) compared to psychology majors that did not take these courses: “MajorSp students tended to rate development higher than did Major students” (p. 59). As those students that did not take the specialist courses (namely second-year students, for whom a specialist unit was not developed) reported lower awareness and development of PL, it represents a circular finding: this group of students had less knowledge presumably because they had not been taught this material. There is also a question of the dual role of teacher and researcher (Boileau et al., Citation2018; Ferguson et al., Citation2004; Thomas et al., Citation2019). It has been suggested that the power imbalance from an educator also acting as researcher could affect data quality (Thomas et al.), as it may have been difficult for students not to rate psychological literacy as important while enrolled in a course that introduced the importance of psychological literacy. It is not completely clear whether this is an issue for Morris et al.’s study, but it is worth reporting as a consideration for studies that utilise student populations.

Discussion and recommendations

To determine the validity of psychological literacy, studies were evaluated against Messick’s (Citation1995) six aspects of construct validity. Analyses show heterogeneity in the skills and attributes ascribed to psychological literacy, with more variability evident in the additional reports and theoretical papers evaluated in this study. This variation in how psychological literacy is conceptualised prevents comparisons between studies and limits the establishment of an evidence base for the validity of psychological literacy. Content validity was difficult to establish, which had implications for subsequent aspects. However, identifying issues through the lens of the other aspects allowed for a broader consideration of validity and for recommendations that can progress research by resolving these issues. Recommendations for resolving threats aligned with each aspect are presented below.

Content

Examining the psychological literacy literature revealed broad views over what should constitute psychological literacy. However, there was a focus on application as central to psychological literacy, which reflects shifts in accreditation bodies towards competency-based standards. This is a move towards identifying outputs (evidence of what students can do), rather than specifying curriculum inputs and hoping for outputs (Cranney, Morris et al., Citation2012; Hulme et al., Citation2020). A focus on competency-based assessment is influenced by funding models, such as Australian government funding of programmes, which requires accountability in relation to value for work readiness, or a general need to evidence “fit-for-purpose”.

The diversity of psychological literacy conceptualisations makes it difficult for authors to claim that their conceptualisation is valid, with little agreement over included attributes. The boundaries between psychological literacy and related ideas such as “employability” are not defined, with the Australian Psychological Society outlining employability as a psychology attribute (Australian Psychological Society [APS], Citation2012). Several authors also included employability in their conceptualisations of psychological literacy (Cranney & Botwood, Citation2012; Cranney, Morris et al., Citation2012; Hulme et al., Citation2015). Developing graduates who can participate in employment should be an aim for all disciplines, but the inclusion of “employability” in psychological literacy makes it more difficult to establish a standardised conceptualisation.

Messick (Citation1995) stresses that the relevance and representativeness of attributes should be appraised by expert judgment. As psychological literacy is not defined in a standardised way, it is difficult to determine which attributes are relevant or irrelevant to the construct. Future research should aim to achieve agreement on desired psychological literacy attributes; conceptual development of psychological literacy therefore constitutes the most pressing avenue for future research and curriculum development.

Conceptualisations were taken from any point in a paper where the authors outlined their view of what psychological literacy is. These were usually located in the introduction, but for Roberts et al. (Citation2015) the factor analyses represented a refinement of their conceptualisation and were used instead. This procedure involved some judgment, and it is possible that the original authors of the manuscripts may have held an alternate view. However, we aimed to address this by discussing the content of the Appendix among all authors of this paper, reviewing conceptualisations, and cross-checking original sources.

Low consensus between conceptualisations for values, dispositions and attitudes suggests that these kinds of attributes are contentious in the literature. Geiss (Citation2019) advocates that “we must nevertheless help our students to develop these critical psychological attitudes without indoctrinating them” (p. 50). McGovern et al.’s (Citation2010) statement, “cognitive and affective insight must go hand-in-hand with behavioral changes” (p. 24), has been critiqued by Murdoch (Citation2016), who suggests that “this moves dangerously close to indoctrination if our students have to use their psychological literacy in ways approved by ‘us’” (p. 195). There was also less observed preference to view psychology as involving “helping” or “supporting”, highlighting the tension related to the purpose of a psychology degree. Roberts et al. (Citation2015) present their conceptualisation of psychological literacy as involving a “supporting and caring dimension”, which they elaborate as “the helping aspects of psychology as a profession” (p. 7). Taylor and Hulme (Citation2018b) suggest “this dimension is inherent in many learning and teaching activities in psychology and ultimately a motivator for many students to study psychology where psychology is viewed as a helping profession” (p. 362). Many more papers reference science (n = 18), including Beins et al. (Citation2011), who propose that “increases in beliefs that psychology is a science is analogous to improvements in psychological literacy” (p. 13). The inclusion of terms like “science” in many conceptualisations suggests a preference for how the included papers want psychology education to be focused or viewed. Cranney, Morris et al. (Citation2012) define this tension as a challenge for developing scientific literacy in students who do not possess a science background, as they are expecting to learn “how to help people” (p. 104). This is an area for future research. Examining a range of stakeholder opinions about the purpose and outcome(s) of a psychology degree, including students, teachers and employers, will allow the definition of psychological literacy to be explored further.

Substantive

Difficulties in establishing content validity have substantial implications for subsequent aspects. This is especially true when considering the broad range of literature consulted in this paper, as it reveals greater heterogeneity in how psychological literacy is understood. In Newell et al. (Citation2019), the main concern with self-report was the ability of students to make a calibrated estimation of their own skill development. In Messick’s (Citation1994) conceptualisation of construct validity, self-report measures present concerns for the substantive aspect: whether measures are an accurate representation of what we say we are measuring. Essentially, the question is whether self-reported psychological literacy is an appropriate proxy for the assessment of psychological literacy. We argue that what is being measured is student belief of, and/or awareness of, psychological literacy, or student perception of skill development. Measures of psychological literacy that prioritise demonstration of skill development (matched to a standardised conceptualisation of psychological literacy) are required to satisfy this aspect (Messick, Citation1994). Taylor and Coady (Citation2019) offer a method of comparing self-reported psychological literacy with measures matched to a conceptualisation of psychological literacy. Comparing proxies and self-report is useful to determine whether self-reported psychological literacy is a valid representation of the construct. However, the measures selected in Taylor and Coady make comparisons difficult, as the proxies were scales that also required self-rating; for example, the proxy for “confidence in evaluating information” was the “information literacy self-efficacy scale” (Kurbanoglu et al., Citation2006). This scale requires participants to self-rate their confidence in “synthesising newly gathered information with previous information” (Taylor & Coady, p. 3). This design should instead be used with proxies that observe the application of a skill, such as an assessment of students’ ability to evaluate information, rather than the reporting of a skill.
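The comparison this paragraph recommends amounts to correlating self-ratings with a demonstrated-performance score. The sketch below illustrates the idea on simulated data; the miscalibrated self-ratings and the scored task are assumptions for illustration, not data from any of the reviewed studies.

```python
# Hedged sketch: is self-report an adequate proxy for demonstrated skill?
# Correlate simulated self-ratings with a simulated performance score.
import numpy as np

rng = np.random.default_rng(2)
n = 150
true_skill = rng.normal(size=n)
performance = true_skill + rng.normal(scale=0.4, size=n)        # scored task
self_report = 0.4 * true_skill + rng.normal(scale=1.0, size=n)  # miscalibrated ratings

r = np.corrcoef(self_report, performance)[0, 1]
print(f"self-report vs performance r = {r:.2f}")
# A low r would suggest self-report is a poor proxy for the construct.
```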

If there is no unified conceptualisation to draw from, then there is no clarity about what constitutes the construct and, therefore, what to assess. Studies that have aimed to clarify this through factor analysis have raised additional questions. Newstead (Citation2015) suggests that results from psychological literacy factor analyses were an artefact of the selected measures. As the extracted factors change with different measures selected, these analyses demonstrate a threat to the construct validity of psychological literacy. There is a need to develop the conceptual basis of psychological literacy and then conduct testing of the concept.

Structural

The structure of psychological literacy was explored by factor analysis in two studies (Burton et al., Citation2013; Roberts et al., Citation2015), with inconsistencies between attributes classified as general and domain-specific psychological literacy attributes highlighted in Table 2. There are possible explanations for the different factor structures of psychological literacy. Newstead (Citation2015) suggests that results were an artefact of the selected measures, as extracted factors change with different measures selected. So, even though both studies selected measures to align with the same definition (McGovern et al., Citation2010), there may have been differences in how the definition was interpreted. These results also highlight a current issue in psychological literacy research: conceptual development is driven by theorists with limited testing of these ideas. The lack of consistency between factor analyses should be further examined through additional research exploring how each attribute is defined. Specifically, the issue is whether attributes are classified as general or specific to psychology (see Table 2), and a clearer picture of the attributes that differ across disciplines would help resolve this. This is discussed further under the external aspect below.

Generalisability

Problems with the reproducibility of psychological literacy factor analyses suggest there is not enough evidence for the generalisability aspect. This is problematic as it suggests that psychological literacy factor structures may differ in additional samples, so trust in the reliability of measures and generalisability of findings are reduced. Conceptual development may provide evidence for generalisability, depending on result consistency in future research.

External

The lack of discriminant and convergent validity remains a significant barrier to establishing construct validity of the included papers. A possible resolution is Murdoch’s (Citation2016) recommendation to identify which attributes or skills are unique to psychology. Unfortunately, this would likely preclude many useful skills that psychology students develop within their degree. Furthermore, it may become apparent that psychology can lay claim to very few specific skills/attributes, which would be a challenge to the identity of the discipline at the undergraduate level. Findings of this nature would mirror Cacioppo’s (Citation2007) understanding of psychology as a hub science, based on Boyack et al.’s (Citation2005) mapping of a million journal citations. Cacioppo’s conclusions revealed that “public health, neuroscience, neurology, radiology, cardiology, and genetics are among the sciences that fall between psychology and medicine, whereas education and gerontology fall between psychology and the social sciences” (p. 1).

Rather than just being related to knowledge from other disciplines, Murdoch (Citation2016) considers literacy in psychology as hierarchical: “a higher order literacy that requires and incorporates other essential literacies … reading, numeracy (statistics), scientific (methodology, physiology, biology, and neuroscience), information and data, computer, emotional intelligence, cultural, and multicultural literacies” (p. 191). An example is provided in Murdoch’s paper: information literacy (as psychology-specific) includes the knowledge of “appropriate search terms”, and for scientific literacy this would entail knowledge of double-blind methodologies as relevant to psychology. Coulson and Homewood (Citation2016) also considered psychological literacy to include different hierarchical capabilities, namely “the capacity for meta-metacognition, for adaptive application of psychological knowledge and for self-awareness” (p. 2). Other authors also suggest a form of sequential attribute development, concerning the category of “citizenship” (from Table 1), stating that “high end” psychological literacy constitutes psychologically literate citizenship (Cranney & Botwood, Citation2012, p. 9). The issue of unspecified boundaries and relationships within the concept results in difficulty in determining how psychology students differ from others. Reports that suggest students should develop “x” attribute do not discuss how non-psychology students differ in the development of the attribute, and case studies do not mention whether other cohorts or disciplines are also exposed to skill development of the desired attribute(s). The extent to which psychological literacy can be separated from higher-order capabilities, and from knowledge of other disciplines aligned through the view of psychology as a hub science, is not yet understood. This is an area for future research. Determining what is unique to psychology education, according to a range of stakeholders, will provide a means to explore convergent and discriminant validity.

Consequential

Threats to the consequential aspect included concerns about the methods applied to gather data about psychological literacy. In the presented cases, it was suggested that both studies could have employed additional safeguards to ensure that data were a true reflection of participants’ beliefs. Regarding the case of Tomcho and Foels (Citation2017), participants should have been offered the opportunity to suggest concepts they believed were relevant to psychology. Future research should adopt a more in-depth approach to canvassing potential concepts and attributes that could be added, lending more rigour to a conceptualisation of psychological literacy.

Morris et al.’s (Citation2013) study concluded that students who enrolled in specialist cornerstone (first-year) and capstone (third-year) courses that introduced psychological literacy had greater awareness and appreciation of this concept, compared to a content-naive cohort (students that had not taken these specialist units). This approach is methodologically flawed, given that the findings seem self-evident. Future studies that involve teacher-student assessments of psychological literacy should ensure that independent evaluators are employed to ensure distance between educator and researcher. Concerning the potential for bias, which is the focus of the consequential aspect, following this process may reduce the likelihood of socially desirable responses. Alternatively, Bartholomay and Sifers (Citation2016) recommend that independent researchers could be responsible for participant recruitment.

Conclusions

Researchers and scholars have not been able to show consensus on the majority of proposed psychological literacy attributes. Consequently, the sources have not demonstrated construct validity. Due to variations in the attributes purported to represent the construct, it is difficult to provide evidence for content validity. As a result, threats to determining the substantive, structural, and external aspects were observed. These issues can be overcome with future research, such as defining the skills to be assessed (to satisfy substantive construct validity) and obtaining a clear definition of the structure of attributes and their classification as generic or discipline-specific (as evidence of the structural aspect). The external aspect could be satisfied with a clarification of the correlations present between proposed attributes, which would highlight the attributes that are synonyms, or that are developed in tandem or sequentially.

As demonstrated in our content analysis, literature is one source of information that can be examined to determine consensus. It does appear that a small pool of researchers produces most of the work in this area, so it is important to consider additional perspectives on what the concept is. Understanding the views of a more diverse group is critical, which could involve consulting teachers and students, as well as other stakeholders such as employers, being mindful to mitigate socially desirable responses to satisfy the consequential aspect. Investigations into the extent to which other disciplines are exposed to the same skills will then allow us to refine what is uniquely developed in a psychology degree. This is key to psychological literacy’s usefulness as a concept.

Australian government grants from the former Office of Learning and Teaching (Cranney, Botwood et al., Citation2012) culminated in a set of recommendations to APAC to inform the development of future accreditation standards. A revision of graduate attributes was proposed, to “emphasise psychological literacy as the primary outcome of UG [undergraduate] psychology education” (Cranney, Botwood et al., Citation2012, p. 27). Psychological literacy is said to capture graduate attribute lists in Australia (Cranney et al., Citationin press), and recent recommendations propose viewing psychological literacy as an “umbrella” outcome for programmes (Cranney et al., Citationin press). Internationally, “the momentum to develop psychological literacy provision has been growing” (Cranney et al., Citationin press, p. 19). With psychological literacy’s increasing influence, it has been claimed that the concept has the potential to determine the distinctiveness of the psychology undergraduate degree (Hulme et al., Citation2020). It provides a tool for universities to justify the value of a psychology degree to future employers, in an era of declining government funding. As such, it is imperative to ensure that the concept is valid. The threats identified in this paper, and suggestions to remedy them, represent tangible steps towards the development of a valid psychological literacy definition.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Supplementary material

Supplemental data for this article can be accessed at https://doi.org/10.1080/00049530.2021.1922069

Additional information

Funding

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
