
The development, validation and use of an interprofessional project management questionnaire in engineering education

Pages 502-517 | Received 16 Jan 2022, Accepted 27 Dec 2022, Published online: 03 Feb 2023

ABSTRACT

Professional skills of project planning, risk analysis, ethical design, communication, and working in interprofessional teams are now recognised as core engineering skills. Frequently, they are addressed in engineering education through team projects. However, these skills can be difficult for students to learn as they are often not well defined (making it difficult for students to know where to focus their attention), and team projects often lack the reflective opportunities required for their development. This paper describes the development and validation of the Interprofessional Project Management Questionnaire (IPMQ) which has been designed for use in engineering education to provide a tool for reflection on, and clarification of, the learning goals related to these skills. Two studies to assess the reliability and validity of the IPMQ are reported. The instrument shows good validity and reliability in both French and English and as such is suitable for use with students. It is also suitable for research in engineering education and for providing feedback to faculty on student learning of professional skills in team projects. Suggestions on the use of the tool to enable the kinds of reflection that will help students to learn these skills are provided.

Introduction

Over the last forty years, the role of teams in workplaces has been increasingly identified as important, and, consequently, it is now widely accepted that engineering students need to learn to work effectively as part of interdisciplinary and interprofessional teams during their studies (Crawley et al. Citation2014). ABET (Citation2019), for example, requires that students learn a broad set of transversal or professional skills such as the ability to: produce solutions which meet a range of criteria including ‘public health, safety, and welfare, as well as global, cultural, social, environmental, and economic factors’; ‘communicate effectively with a range of audiences’; ‘recognise ethical and professional responsibilities’; and ‘function effectively on a team whose members together provide leadership, create a collaborative and inclusive environment, establish goals, plan tasks, and meet objectives’. Similar requirements are found in European engineering accreditation bodies (e.g. CTI Citation2020). Developing these skills, which are important for working with other disciplines, is often seen as an important reason for including interdisciplinary work within engineering education (Klaassen Citation2018; Van den Beemt et al. Citation2020). While these are variously referred to as ‘soft’, ‘transversal’ or ‘professional’ skills (Berdanier Citation2022), we use the term ‘professional skills’, which is also the terminology used in the Engineering Education Research Taxonomy (Finelli Citation2021).

Although students are often expected to learn team and project skills by working on projects in teams (Colbeck, Campbell, and Bjorklund Citation2000; Lehmann et al. Citation2008), these skills can be difficult to learn in this context: they are often not explicitly taught, and as a consequence it can be unclear to students (and indeed teachers) what is meant in practice by terms like ‘collaborative and inclusive environment’ or ‘provide leadership’. Valid and reliable self-assessment questionnaires can play a role in helping learners to become more aware of their own thinking and, consequently, in making choices about their learning and behaviour (Coffield et al. Citation2004). Building on this idea, this paper describes the development, validation and use of an Interprofessional Project Management Questionnaire (IPMQ), a self-efficacy beliefs assessment instrument developed for use in engineering education with the goal of providing students with a reflection opportunity to learn skills (Kolb Citation1984) related to efficiently and productively managing team projects. While initially developed for students engaged in interdisciplinary projects, the questionnaire can also be (and has been) administered to students engaged in disciplinary projects. When administered in a pre–post fashion, the IPMQ can also help teachers to self-assess the impact of their courses on student learning.

The research question which this paper addresses is whether the IPMQ has a valid and reliable structure such that it is suitable for use as a reflection tool for students and teachers in engineering education. To answer this question, a latent factor structure in the IPMQ was identified using Exploratory Factor Analysis (EFA) with one sample in English. To add further evidence of validity, a Confirmatory Factor Analysis (CFA) using the factor structure emergent from the EFA was performed on a second sample of respondents and in a different language (French). Together these two studies provide substantial support for the validity and reliability of the IPMQ.

This paper first explores the prior literature on interprofessional project management skills in engineering education, highlighting the need for more explicit teaching approaches for professional skills. Second, it discusses the use and validity of self-report instruments in these settings. Third, it reports on two validations of the IPMQ through its use in engineering education courses. The final section highlights some of the ways in which the IPMQ may be used in engineering education settings, gives examples of how it has been used, and points to some limitations.

Integrating interprofessional project management skills in engineering education

Questions as to the definition of ‘discipline’, ‘interdisciplinary’, ‘professional’ and ‘interprofessional’ have a long history in curriculum studies, in the sociology of work, and in studies of the interface between higher education and the labour market (see, for example, Becher and Trowler Citation2001; Freidson Citation2001; Trowler, Saunders, and Bamber Citation2012). Without reopening these large and complex fields, in this paper ‘discipline’ is taken to refer to an (often continually developing) body of knowledge and epistemologies which is embodied in the practices of an academic community that is generally recognised as such both by itself and by other academic groups. Different disciplines are often (but not always) linked to different ‘professions’, defined here as occupations which are often subject to regulation, which involve a high degree of specialised knowledge and skill acquired through long periods of study, and which are typically exercised in complex and uncertain circumstances that require a high degree of judgement. ‘Interdisciplinary education’ refers to situations in which the knowledge, epistemologies and practices of two or more disciplines are applied to a shared educational endeavour. ‘Interprofessional work’ refers to the analogous situation in which problems are addressed by teams or groups drawn from multiple professions. In engineering education, a key goal is that students develop the skills to work in interprofessional project teams, and so the focus of this questionnaire is on students’ self-efficacy beliefs regarding these skills. Students are often expected to learn these skills through working in interdisciplinary courses (indeed, as we will describe below, the IPMQ was originally designed for use in such a context). However, since the focus is on the skills (interprofessional project management) rather than on the learning context (interdisciplinary courses), the IPMQ can also be used in other contexts (e.g. single-disciplinary projects, professional settings, etc.).

Interdisciplinary engineering education (IEE) has received increased attention during the past decade (Lattuca et al. Citation2017; Lattuca et al. Citation2017). Such interest is legitimised by the increasing complexity of the challenges faced by engineers and the inherent necessity to imagine and develop solutions across disciplines and professions (Van den Beemt et al. Citation2020). Accreditation bodies, such as ABET or CTI, have integrated into their frameworks the skills demanded by ever-changing socio-economic and socio-technical environments, including elements pertaining to ethical considerations, effective communication and effective functioning in teams. But developing interdisciplinary skills in general, and interprofessional management skills in particular, remains easier said than done (Richter and Paretti Citation2009). Interdisciplinary education not only requires a well-thought-through pedagogy; our understanding of the barriers to, and impact of, such learning also remains limited (Klaassen Citation2018).

Engineering curricula still tend to focus on technical rather than professional or transversal skills (Pant and Baroudi Citation2008). Professional skills have historically been devalued in engineering education (Berdanier Citation2022) and are often ill-defined or poorly understood. This means that, while students may be able to identify when they need to improve their technical skills, they may not know exactly what is required to demonstrate these professional skills and, as such, may not realise what and when they need to improve. To take a practical example, the learning goal ‘communicate effectively in an interprofessional team’ may be understood by a student (or indeed a teacher) as meaning little more than ‘talk loudly and a lot about your opinions’ unless the learning goal is articulated in more specific terms such as ‘understand perspectives of other team members’, ‘make sure all the information is shared with other team members’ and ‘explain myself effectively’.

Finally, many of the skills directly or indirectly required in such projects (e.g. project management, teamwork, feedback, etc.) take practice and time. Effective acquisition of these skills requires reflexive moments (Tormey and Isaac Citation2021: 53–66). Structuring such reflection can be challenging, however: ‘reflection’ involves a process of serious thought which aims to make sense of an experience (Ryan Citation2013) and to question our own prior assumptions about a situation (Brookfield Citation1998). Such reflective work can seem incompatible with the dominant epistemology of engineering education (Lönngren Citation2021). Reflection therefore needs to be scaffolded, and tools which can act as a (metaphorical) mirror, allowing us to see our experience from another perspective, are helpful in this process. One type of ‘mirror’ which has often been used to aid reflection is the questionnaire. For example, in the area of approaches to learning and cognition, Coffield et al. (Citation2004) have identified that self-assessment questionnaires may be useful to promote thought and reflection about approaches to learning and study, provided they are valid and reliable. Similarly, a valid and reliable self-assessment tool may be useful in supporting thought and reflection on interprofessional project management skills. With this in mind, we sought to identify or develop such a tool.

The rationale for and development of the interprofessional project management questionnaire

The IPMQ originated in the need to assess to what extent a 3-month-long capstone project around product development, involving students from engineering, social sciences and design, contributed to the development of interdisciplinary or interprofessional skills (Laperrouza Citation2018). Put differently, it aimed to assess whether the large amount of resources deployed by teachers and students was a good investment. Initial versions of the questionnaire built on the Readiness for Interprofessional Learning Scale (Parsell and Bligh Citation1999), on learning goals identified by accreditation bodies such as ABET and CTI, and on literature on project management skills both in general and in the specific context of product development (Kerzner Citation2009; Ulrich and Eppinger Citation2016). Based on these sources, a questionnaire was developed to assess skills which were identified as important: namely, (1) scoping a project, (2) planning the project, (3) analysing risk, (4) managing communication and (5) managing conflict. Analysis of the factor structure of this original questionnaire, however, identified an emergent factor structure different from that which had been proposed based on our initial literature review: scoping and planning appeared to merge into one factor (planning), while analysis of risk separated into two (risks to the project, and awareness of risks caused by the project to others, i.e. ethical sensitivity). Communication and conflict factors also merged to produce a single factor (communication), while interprofessional competence items which had been woven through each of the proposed factors appeared to form a separate factor (interprofessional competence). While this experience allowed us to see a potentially appropriate structure which might map onto that which emerged from our work with students (planning, assessment of risk, ethical evaluation, communication, and interprofessional competence), the factor structure of this first version of the questionnaire was not valid or reliable enough to suggest it could be used in practice. Alongside working on our own questionnaire, we also continued to search for alternative measures which would assess these features.

Unsurprisingly, numerous questionnaire instruments already exist to assess project management, interdisciplinary and transversal skills taken separately (Lattuca, Knight, and Bergom Citation2013). Many of them deal with more than one of the dimensions we wished to explore. For instance, Blomquist, Farashah, and Thomas (Citation2016) have looked at project management self-efficacy and performance. Direito, Pereira, and Duarte (Citation2012) examined how undergraduates rate their current proficiency in a range of ‘soft skills’, and their perceived importance for future employment. Instruments along the same lines have been developed by different authors (Chan and Luk Citation2021; Chan, Zhao, and Luk Citation2017; Cruz et al. Citation2021). For their part, Verdín, Godwin, and Benedict (Citation2020) have investigated self-efficacy beliefs related to innovation for first-year engineering students and how those beliefs might differ by gender and engineering discipline. Some of these instruments were built from scratch while others built on competency domains developed by industry (Cruz et al. Citation2021).

None of the questionnaires we explored, however, matched the range of factors which had emerged as meaningful from our work with students. While it would have been possible to mix and match different measures, this approach could easily have produced a set of questions with messy and overlapping factor structures and caused confusion through inconsistent question formats. Hence, we instead developed a new instrument, the IPMQ, which differs from other instruments in a number of ways. First, it is specific to project management and built around the processes that are integral to project management and which have been found to be meaningful to students in our prior work with them: planning, assessment of risk, ethical evaluation, communication, and interprofessional competence. In particular, the integration of an ethical dimension as a factor to take into consideration when assessing interprofessional management skills is notably different from what is found in many other questionnaires currently used in engineering education (see Cruz, Saunders-Smits, and Groen Citation2020 for an extensive review of such instruments). Second, rather than assessing interprofessional competence and project management as distinct competence areas (such as, for example, through using the Readiness for Interprofessional Learning Scale alongside a project management self-efficacy beliefs questionnaire), the instrument takes an integrated approach which includes interprofessional dimensions throughout the project management process. It therefore provides a single, short questionnaire which can assess self-efficacy beliefs in relation to multiple areas of competence linked to running interprofessional projects.

Self-efficacy beliefs have been used in a number of studies across different engineering education fields, but not without controversy (Paul et al. Citation2018; Ponton et al. Citation2001). One such controversy relates to assessment. In particular, self-report instruments have sometimes been criticised as being susceptible to bias (e.g. Cruz, Saunders-Smits, and Groen Citation2020; Douglas et al. Citation2014), such as the Dunning-Kruger effect (Kruger and Dunning Citation1999). There are two responses to such concerns. First, more recent explorations of the Dunning-Kruger effect suggest that it is not an actual psychological phenomenon but rather a statistical side effect (Nuhfer et al. Citation2017). Second, many of the instruments described above are not intended as self-reports of competence but rather are measures of self-efficacy beliefs in relation to the skill areas. Self-efficacy beliefs are defined as a mechanism of personal agency consisting of individuals’ beliefs regarding their performance capabilities in a particular domain (Bandura Citation1977). In this sense, self-efficacy can be defined as a prospective competence-based variable that predicts action (Direito, Pereira, and Duarte Citation2012). Put differently, perceived self-efficacy is a judgement of capability to execute given types of performances (Bandura Citation2006).

The IPMQ is, then, a short questionnaire designed to assess in an integrated way the self-efficacy beliefs of people in a number of competence areas directly linked to the management of interprofessional projects: planning, risk assessment, ethical sensitivity, communication and interprofessional competence. It was designed to be a tool that would act as ‘a mirror’ to aid students (and teachers) in the reflective process which is required for the development of such skills. Such a tool is only likely to be of value, however, if it is valid and reliable.

The central empirical question which this paper aims to address is whether the IPMQ is a valid and reliable instrument. As Cruz, Saunders-Smits, and Groen (Citation2020) have identified, however, fewer than half of the existing studies measuring transversal skills in engineering education present evidence of validity and reliability. We therefore turn our attention in the next sections to the validity and reliability of the instrument.

Validating the IPMQ: study 1 – exploratory factor analysis

Methodology

The current version of the IPMQ is the result of multiple iterations and testing of questionnaires aimed at assessing the self-efficacy beliefs of students. While we hypothesised a factor structure for the questionnaire, this hypothesis was initially tentative, and so we undertook Exploratory Factor Analysis (rather than Confirmatory Factor Analysis) as a first step in validation.

Participants: The Interprofessional Project Management Questionnaire (IPMQ) was administered to 147 students in engineering or technical programmes in a European university. Demographic data were not collected, but the students were in programmes in which the percentage of female students ranges from 12% to 30%. The students were all studying selected courses in their second, third or fourth year in which interprofessional project management was deemed particularly relevant – courses involving either a design project or an inquiry laboratory activity. Ethical approval was obtained from the Institutional Human Research Ethics Committee. Since the data were anonymous at the point of collection, there was no trace of which students did and did not participate, and so students were protected from feeling compelled to take part. Although the students sometimes completed the questionnaire at both the start and the end of the course, only data from the first completion of the questionnaire are included in the dataset. Although the first language of the vast majority of the students was not English, the students were all studying advanced scientific courses in English and so the questionnaire was administered in English. The students were offered the opportunity to take the IPMQ as part of a reflective activity designed to encourage them to think about their existing skills and to plan for their learning during the course.

Instrument: The Interprofessional Project Management Questionnaire (IPMQ) is a self-report questionnaire which was originally designed to assess self-efficacy beliefs in five domains related to interprofessional project management. The questionnaire begins with the following statement ‘Imagine you are working on a project which is complex and requires inputs from a number of different professions (for example, people with legal training, engineers, social scientists, designers, etc.). Please indicate how much you agree with each of the following statements’. This is followed by 24 items across five domains:

  • Project planning, 5 items

  • Risk management, 5 items

  • Ethical sensitivity, 4 items

  • Team communication, 5 items

  • Interprofessional competence, 5 items

The full text of the items is provided in Table 1 (below). Each item is built on a common stem (‘I am good at … ’) and addresses a specific aspect of the domain in question. This structure is appropriate for assessing self-efficacy in that it directly addresses respondents’ beliefs that they are capable in a particular, specific domain. Each item is scored on a Likert scale ranging from 5 (strongly agree) to 1 (strongly disagree).

Table 1. The Interprofessional Project Management Questionnaire Items.

Design/Procedure: The goal of this study was to assess the factor structure of the IPMQ. Data were collected through a number of different electronic platforms (including Moodle and a custom-built online platform) from students.

An exploratory factor analysis (EFA) using the MinRes method was carried out on the data using the open-source statistics program ‘R’, with the ‘psych’ and ‘GPArotation’ packages. Although the use of principal component analysis (PCA) is now widespread as an alternative to factor analysis and typically gives very similar results, differences between the results of the two methods are sometimes evident when there are fewer than 30 variables and where some communalities are relatively small. The data here were analysed using both approaches and, in this case, the results were quite similar. Therefore, the less contentious (Field, Miles, and Field Citation2012) EFA is reported here.

The Kaiser-Meyer-Olkin test for sampling adequacy gives KMO = 0.8. KMO scores of about .8 indicate that the sample size is very good (Field, Miles, and Field Citation2012, 770). All items had KMO scores in the range described as acceptable to very good (.62 to .86). Bartlett’s test (Chi-square = 1082, df = 276, p < 0.001) also indicates that the sample is of more than adequate size to assess the factor structure (Field, Miles, and Field Citation2012, 770–771). The determinant test for multicollinearity (correlation matrix det = 0.0005, which is > 0.00001) also indicates that the sample is suitable for factor analysis.
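These adequacy checks can be reproduced in R with a few lines. The sketch below is illustrative rather than the script actually used, and assumes the 24 item responses are held in a data frame named ipmq (a hypothetical name).

```r
# Minimal sketch (not the authors' original script) of the sampling adequacy checks,
# assuming the 24 IPMQ item responses are the columns of a data frame `ipmq`.
library(psych)

KMO(ipmq)                                    # Kaiser-Meyer-Olkin measure of sampling adequacy
cortest.bartlett(cor(ipmq), n = nrow(ipmq))  # Bartlett's test of sphericity
det(cor(ipmq))                               # determinant check for multicollinearity
```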

Findings

Descriptive statistics for all variables are provided in Table 2.

Table 2. Descriptive statistics for the IPMQ items.

A number of different procedures are recommended in the literature for extracting the optimal number of factors from a dataset, including Kaiser’s or Jolliffe’s criterion for eigenvalues or a qualitative review of a scree plot (Field, Miles, and Field Citation2012, 762). Horn’s parallel analysis method has been found to be among the most accurate methods, especially since sample size can impact considerably on the reliability of other methods (Warne and Larsen Citation2014). Parallel analysis indicated four factors as being most appropriate. Since the intended theoretical model was a five-factor model, and since the emergent four factors largely aligned with this five-factor model (albeit with two of the five factors combined into one), this suggested that the four-factor model was worthy of exploration.
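For illustration, Horn’s parallel analysis is available in the ‘psych’ package; the line below is a sketch under the same assumption of a data frame named ipmq, not the original analysis code.

```r
# Minimal sketch: parallel analysis to suggest the number of factors to retain.
library(psych)
fa.parallel(ipmq, fm = "minres", fa = "fa")  # compares observed eigenvalues with those from random data
```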

A four-factor extraction was applied. Because there was reason to think that the factors may be correlated with each other, an oblique rotation (oblimin rotation) was applied (Field, Miles, and Field Citation2012; Pedhazur and Schmelkin Citation1991). As Table 3 shows, the four-factor model resolved to a simple structure (with a cut-off of .3 for presenting factor loadings). The four emergent factors are very similar to the five originally proposed factors:

  • Factor 1 addresses ‘Project planning’

  • Factor 2 addresses ‘Risk assessment’ (with one risk management item loading instead on ‘Project planning’)

  • Factor 3 addresses ‘Ethical sensitivity’

  • Factor 4 addresses ‘Interprofessional communication’ (and includes items from both the proposed ‘Team communication’ and ‘Interprofessional competence’ scales).

Table 3. Factor loadings for IPMQ items and Factor standardised Cronbach α scores.

The standardised Cronbach’s alpha scores (see Table 3) for each factor are over the 0.7 cut-off which is typically taken to represent acceptable reliability (Field, Miles, and Field Citation2012; Kline Citation2000). Twenty of the 24 items which we originally proposed are used in this four-factor solution. The remaining four items do not load onto any of the four factors. The factor structure from Table 3 is reproduced with the question items in Appendix 1.
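A sketch of how such an extraction and the reliability checks can be run in R is given below. The item groupings passed to alpha() are placeholders; the actual item-to-factor assignment is that shown in Table 3 and Appendix 1.

```r
# Minimal sketch: four-factor MinRes EFA with oblique (oblimin) rotation, plus scale reliability.
library(psych)
library(GPArotation)

efa4 <- fa(ipmq, nfactors = 4, fm = "minres", rotate = "oblimin")
print(efa4$loadings, cutoff = 0.3)   # suppress loadings below .3, as in Table 3
efa4$Phi                             # factor correlation matrix (cf. Table 4)

# Cronbach's alpha for one emergent scale; replace the column names with the items
# that actually load on that factor (the names shown here are hypothetical).
alpha(ipmq[, c("Q1", "Q2", "Q3", "Q4", "Q5")])
```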

As can be seen from Table 4, the correlations between the factors are moderate to strong. This indicates that the decision to use an oblique rotation rather than an orthogonal one was correct (see Pedhazur and Schmelkin Citation1991).

Table 4. Correlation matrix between extracted factors.

Overall, the EFA indicates that a four-factor model is a good fit for the data. Three of these factors are close to three of the originally proposed factors, while a fourth combines elements of two of the proposed factors but nonetheless produces a result which is logically coherent.

Validating the IPMQ: study 2 – confirmatory factor analysis

Following the identification of the four-factor model from the exploratory factor analysis, the model was re-tested using a different group of students and in a different language. The logic here is that availability in multiple languages is likely to be useful in a European context, and the re-confirmation of the model with a different dataset adds confidence that it is a robust model that may be applicable more widely.

Methodology

Participants: The IPMQ was administered to 168 students in an engineering programme (13% were female) at a European university. The students were all studying selected third- or fourth-year courses in which interdisciplinary project management was deemed particularly relevant. As with study 1, the students were offered the opportunity to take the IPMQ as part of reflective activities designed to encourage them to think about their existing skills and to plan for their learning during the course. As before, only data from the first completion of the questionnaire are included in the dataset. As the courses were taught in French, a French-language version of the instrument was used.

As in study 1, the sample adequacy was assessed. The KMO = 0.83 (which rates as very good, according to Kaiser’s criteria). The determinant of the correlation matrix is 0.006 (greater than the recommended cut-off of 0.00001). Bartlett’s test yields a chi-square = 1488 (df = 276, p < 0.001). All of these indicate that the sample size and structure are more than adequate for factor analysis.

Instrument: The IPMQ, as described in Study 1, was translated into French using a back-translation procedure: the questionnaire was first translated into French, then translated back into English, and the ‘new’ English version was compared with the original. This allowed a number of points of confusion or potential lack of clarity to be identified. The questions were then verified by native speakers before being used.

Design/Procedure: The goal of this study is to assess whether the factor structure emergent from the EFA on the English-language version is a good fit for the French-language dataset. Data were collected through a questionnaire in Moodle.

The data were analysed using the open-source statistics program ‘R’, using the Maximum Likelihood method within the lavaan package (0.6-5) for Confirmatory Factor Analysis (CFA). How well a model fits the data is assessed through a range of different measures, including Chi-square, the Comparative Fit Index (CFI), the Tucker-Lewis Index (TLI), the Root Mean Square Error of Approximation (RMSEA) and the Standardised Root Mean Square Residual (SRMR). These measures of goodness of fit are controversial (Barrett Citation2007; Hayduk et al. Citation2007) and, although a range of cut-off points has been proposed for each measure, these should not be applied rigidly. Rather, a range of measures should be reported for each model, with specific values used as reference points rather than cut-off points. For Chi-square, a p > 0.05 is proposed; for CFI, a cut-off of .9 or .95 has been proposed; for TLI, a cut-off of .9 or .95 is proposed; for RMSEA, a value of 0.06 or lower is preferred; and, for SRMR, a value of 0.08 or less is proposed (Hu and Bentler Citation1999). Since the chi-square measure depends on sample size and number of variables, particular care should be taken in relying on it as a measure. The TLI has also been found to over-reject true population models unless the sample size is quite large (Hu and Bentler Citation1999) and so should be treated with caution.
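For illustration, a CFA of this kind is specified and estimated in lavaan roughly as follows. The item names and item-to-factor assignments shown are placeholders (the model actually tested used the 20 items and structure from study 1), and the data frame name ipmq_fr is hypothetical.

```r
# Minimal sketch of a simple four-factor CFA in lavaan, with each item loading on one latent variable.
library(lavaan)

model_simple <- '
  planning      =~ Q1 + Q2 + Q3 + Q4 + Q5
  risk          =~ Q6 + Q7 + Q8 + Q9
  ethics        =~ Q10 + Q11 + Q12 + Q13
  communication =~ Q14 + Q15 + Q16 + Q17 + Q18 + Q19 + Q20
'
fit_simple <- cfa(model_simple, data = ipmq_fr, estimator = "ML")
fitMeasures(fit_simple, c("chisq", "df", "pvalue", "cfi", "tli", "rmsea", "srmr"))
```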

Among the problems which occur in CFA research are the cherry-picking of only those fit measures which confirm the authors’ proposed model, and the reporting of a single model as a ‘finished product’ without drawing the reader’s attention to the process of model modification during the CFA. To avoid these issues, a range of measures has been reported, and the original and modified models have also been described in each case.

Findings

The descriptive statistics for all 24 items are described in Table 5.

Table 5. Descriptive statistics for the French-language IPMQ items.

The four latent variables identified in study 1 (Project planning, Risk assessment, Ethical sensitivity, Interprofessional communication) were used, along with a simple loading of each item onto one of these latent variables (Chi-square = 337, df = 164, p < 0.001; CFI = 0.834; TLI = 0.808; RMSEA = 0.079 [CI90%: 0.067–0.091]; SRMR = 0.074). Only the SRMR measure reached the proposed reference value in this case, indicating that a simple loading of each item did not provide a good fit for the data. In line with the normal procedure in CFA, a number of modifications to the model were then considered. These included allowing the item ‘I am good at sharing responsibility with the other professions in the team for the overall success of a project’ to load onto ‘Interprofessional communication’ as well as ‘Project planning’, and allowing the item ‘I am good at recognising that other team members’ definition of what it means for something to “go wrong” may be different from my own’ to load onto ‘Risk assessment’ as well as ‘Project planning’. A number of items were also identified as having covariance with other items which loaded onto the same latent variable (Q11∼Q12; Q13∼Q14; Q15∼Q18; Q18∼Q19). This notably improved the model fit (Chi-square = 259, df = 159, p < 0.001; CFI = 0.91; TLI = 0.892; RMSEA = 0.059 [CI90%: 0.045–0.073]; SRMR = 0.061). With this model, the CFI, RMSEA and SRMR measures all reach the proposed reference points, indicating that the slightly modified four-factor model was a good fit for the French-language data.
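In lavaan syntax, modifications of this kind are expressed as additional loadings and residual covariances. The sketch below shows the form of such a modified model; which items carry the two cross-loadings is as described in the text above, and the placeholder item names remain illustrative only.

```r
# Minimal sketch of the modified model: two illustrative cross-loadings plus the
# residual covariances reported in the text (Q11~~Q12, Q13~~Q14, Q15~~Q18, Q18~~Q19).
library(lavaan)

model_mod <- '
  planning      =~ Q1 + Q2 + Q3 + Q4 + Q5
  risk          =~ Q6 + Q7 + Q8 + Q9 + Q2                        # planning item also loading on risk
  ethics        =~ Q10 + Q11 + Q12 + Q13
  communication =~ Q14 + Q15 + Q16 + Q17 + Q18 + Q19 + Q20 + Q1  # planning item also loading on communication
  Q11 ~~ Q12
  Q13 ~~ Q14
  Q15 ~~ Q18
  Q18 ~~ Q19
'
fit_mod <- cfa(model_mod, data = ipmq_fr, estimator = "ML")
fitMeasures(fit_mod, c("chisq", "df", "pvalue", "cfi", "tli", "rmsea", "srmr"))
```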

Overall, the IPMQ has been found to be reliable and valid using two different groups of students, in two different languages (English and French), and using two different statistical approaches (EFA and CFA) to assess factorial validity. There is good reason therefore to see it as suitable for use in pedagogical contexts which aim to teach professional skills related to project management to engineering students.

Discussion of studies 1 and 2

The goal of these studies was to investigate the factor structure underpinning the IPMQ and, in particular, to see if the questionnaire provided valid and reliable measures of self-efficacy beliefs in a number of domains related to the management of such projects.

Study 1 indicated a four-factor model based upon 20 questions, with the originally proposed factors of interprofessional competence and team communication coming together in a single factor which we name ‘interprofessional communication’. While a number of the items which explicitly reference ‘different professions’ (Q20, Q21 and Q22) are removed in this process, references to ‘different professions’ remain in a number of other question items, as well as in the questionnaire instructions which apply to all questions. Thus, the questionnaire retains its ‘interprofessional’ character (even if a number of the scales could well be used in relation to project management contexts more widely). While the emergent structure is a little different from that originally proposed, the differences are a matter of nuance and emphasis rather than of fundamental structure. Study 2 replicated this finding, confirming that the four-factor model was a good fit for the data with a different sample and in a different language. The four-factor model also produced reliable scales, with all Cronbach’s α measures for the proposed scales being > 0.7. The fact that a similar valid and reliable factor structure emerges from two different datasets in two different languages adds considerably to the view that the questionnaire is suitable for wider use and, if appropriately translated and tested, in a variety of linguistic contexts.

Based upon these findings, it seems that it would be reasonable for others to use the four-factor version of the IPMQ, in either French or English.

It is worth drawing attention to the fact that the two studies reported here provide replication of the factor structure and therefore add considerably to confidence in the findings – if studies which adequately report validity and reliability are rare (Cruz, Saunders-Smits, and Groen Citation2020), studies which report repeated replication of validity and reliability measures are even rarer. It could have been tempting to combine the data from the two studies into a single study based on a larger sample, especially in light of the often-cited rule of thumb suggesting that one needs 10 participants per question for factor analysis (which would mean a required sample size of 240 in this case). In fact, empirical analysis of the sample sizes (using the KMO measure) showed that each sample was large enough to be regarded as very good (or, in Kaiser’s terms, ‘meritorious’). Determinant tests and Bartlett’s tests confirmed that each dataset individually had a structure which made it suitable for factor analysis. The fact that the two datasets related to different languages also meant that combining the data was questionable. For all these reasons, a replication approach was most appropriate.

Overall, we can conclude from these two studies that the IPMQ appears to have strong factorial validity and reliability. Other types of validity are, however, also of interest. It would add to confidence in the measure if, for example, it could be shown that students’ scores increase after training or if it could be shown that experienced professionals score higher than novices on the measure. In fact, other studies with the instrument have shown this to be the case (Picard et al. Citation2022). Combined with the psychometric data presented here, this provides a strong rationale for suggesting that the measure can be used with some confidence.

As with any study, these studies have some limitations. Both studies were carried out in one university, and so it would be useful to see how the instrument responds in a wider range of settings. Data on possible gender differences could not be reported because of how the data were collected. While the psychometric properties appear consistent across English and French, validity should be reconfirmed if the instrument is used in a different cultural context or language. The issue of self-reporting has been treated in some detail above: the instrument does not claim to measure skill or competence directly but rather to measure self-efficacy beliefs. As noted above, self-efficacy beliefs are in themselves important in learning. If, however, a direct measure of skill is required, then other tests should be used.

Using the IPMQ

Papers reporting on the factorial validity and reliability of instruments will often end once these issues have been addressed. This approach is problematic, however, since the design and development of psychometric instruments are often fraught with difficulties, one of which is that once the measure has been released into the world, the designers are no longer in a position to ensure that the tool is used appropriately (Tormey Citation2021). Measures designed to give feedback to students and teachers may, for example, be used to evaluate the students or teachers. This, in turn, can have extremely negative consequences if, for example, a questionnaire designed to help students identify skills to be developed is instead used to provide them with a ‘trait profile’ which the students understand as being fixed and immutable. Cognisant of these risks, instrument designers have, therefore, an ethical responsibility for addressing how and in what ways the instrument should and should not be used.

The IPMQ is not designed to be a tool for summative assessment. Rather, it is designed to be a tool which can provide formative feedback, acting as a mirror to clarify goals and to aid student reflection as part of a structured learning process. Where students have fixed mindsets, there is a risk that they may interpret psychometric tools as giving feedback on the type of person they are rather than on the levels of self-efficacy beliefs that they have. When giving feedback to students using the IPMQ, we recommend including a clear statement along the following lines: ‘The questionnaire has been evaluated as being valid and reliable, but as always with questionnaires of this type, the best judge of your competence in this area is yourself and people who know you well. Therefore, you should always regard such questionnaires as a source of feedback and reflection, rather than as a definitive statement of your skills. Skills in these areas can also be learned and developed’. Such feedback should ideally be embedded in a reflective planning process in which students make a plan for the skill area(s) they would like to target for development.

Teachers can use the IPMQ without considerable time investment to raise students’ awareness of a number of challenges they may face, and of the specific skills they need to develop, when working in an interprofessional setting. Peer feedback using the instrument could also play a role in supporting student learning. The IPMQ could also be more systematically integrated into a course’s pedagogy. A more advanced use may look something like this:

  • Administer the IPMQ to students at the beginning of a team project and give feedback as proposed above. Students then reflect in writing on (i) which skills they actually want to develop during the team project and (ii) what opportunities they will have to develop such skills. These reflections may be discussed with other team members.

  • During the course of the project, students are provided with opportunities to reflect on their skills development. This may happen through, for example, the use of a training portfolio or alternatively by being asked to review and add to the reflections they completed at the outset of the project.

  • At the end of the project, students review their IPMQ scores and reflect upon (i) what skills they feel they have developed and what helped them to develop those skills, and (ii) what skills still need work and in what contexts they may be able to further develop those skills in the future.

In light of the additional complexity that comes with interdisciplinary courses, another possible use of the IPMQ is in giving feedback to teachers on the impact of their courses. Used as a pre- and post-course instrument, the IPMQ can give teachers feedback on how students’ self-efficacy beliefs have developed over the timespan of the course. This can be done at the same time as using the tool for student reflection. Such data may well be more useful to teachers in reflecting on their courses than are traditional instruments for student feedback on teaching (Picard et al. Citation2022). If used in this way, we would recommend supplementing the pre–post measure with qualitative measures, such as interviews focusing on the items and building on examples drawn from students’ own experiences with project management. As with students, it should not be used as a tool to evaluate teachers’ performance, but rather as a tool to support teachers’ reflection.
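By way of illustration (and as an assumption about workflow rather than something reported in the studies above), a teacher who is able to match pre- and post-course responses might compute the change on one scale as follows; all object and column names are hypothetical.

```r
# Minimal sketch of a pre/post comparison for one IPMQ scale, assuming `pre` and `post`
# are data frames of matched item responses from the same students (hypothetical names).
pre_planning  <- rowMeans(pre[,  c("Q1", "Q2", "Q3", "Q4", "Q5")])
post_planning <- rowMeans(post[, c("Q1", "Q2", "Q3", "Q4", "Q5")])
t.test(post_planning, pre_planning, paired = TRUE)  # direction and size of change over the course
```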

Given its psychometric properties, it does seem that the tool is appropriate to use as a research instrument to assess self-efficacy beliefs of students in relation to interprofessional project management.

Conclusion

As Berdanier (Citation2022) has recently identified: ‘Teamwork, conflict management, oral communication, written communication, social justice, equity, and ethical reasoning are some of the core competencies required for today's engineers’. If we are to ensure that our engineers develop these competencies, we will need to find ways of clarifying for students what they mean in operational terms as well as ways of structuring reflection opportunities to allow them to learn these competencies. The IPMQ was developed as one strategy for doing just this.

The IPMQ has been found to have factorial validity and to be reliable. Other research has found that scores on the instrument tend to increase with training and with experience (Picard et al. Citation2022). Hence, there is good reason for thinking that the instrument could be valuable as a pedagogic tool (as well as being an appropriate instrument in engineering education research). This paper provides open access to the full text of the IPMQ as well as its scoring system, with the intention that it is used by engineering educators and that its performance in other contexts can be assessed by engineering education researchers.

Disclosure statement

No potential conflict of interest was reported by the authors.

Additional information

Notes on contributors

Roland Tormey

Roland Tormey is a Senior Scientist in Learning Sciences and the Head of the Teaching Support Centre at the Ecole polytechnique fédérale de Lausanne (EPFL) in Switzerland. A sociologist by training, he researches diversity issues, emotions, and active learning. He has previously worked in teacher education and now focuses on researching teaching and learning in engineering education.

Marc Laperrouza

Marc Laperrouza is a Scientist and Lecturer affiliated with the College of Humanities at EPFL in Switzerland. In the framework of his teaching activities, he has developed different pedagogical scenarios with a particular emphasis on interdisciplinarity, immersion and experiential learning.

References

  • ABET. 2019. Criteria for Accrediting Engineering Programs, 2020-2021. Baltimore, MD: Engineering Accreditation Commission.
  • Bandura, Albert. 1977. “Self-efficacy: Toward a Unifying Theory of Behavioral Change.” Psychological Review 84 (2): 191–215. doi:10.1037/0033-295X.84.2.191
  • Bandura, Albert. 2006. “Guide for Constructing Self-Efficacy Scales.” In Self-efficacy Beliefs of Adolescents, edited by Frank Pajares, and Timothy C. Urdan, 307–337. Greenwich, CT: Information Age Publishing.
  • Barrett, Paul. 2007. “Structural Equation Modelling: Adjudging Model fit.” Personality and Individual Differences 42 (5): 815–824. doi:10.1016/j.paid.2006.09.018
  • Becher, Tony, and Paul Trowler. 2001. Academic Tribes and Territories: Intellectual Enquiry and the Culture of Disciplines. 2nd ed. Philadelphia, PA: Open University Press.
  • Van den Beemt, Antoine, Miles MacLeod, Jan Van der Veen, Anne Van de Ven, Sophie van Baalen, Renate Klaassen, and Mieke Boon. 2020. “Interdisciplinary Engineering Education: A Review of Vision, Teaching, and Support.” Journal of Engineering Education 109 (3): 508–555. doi:10.1002/jee.20347
  • Berdanier, Catherine G. P. 2022. “A Hard Stop to the Term “Soft Skills”.” Journal of Engineering Education 111 (1): 14–18. doi:10.1002/jee.20442
  • Blomquist, T., A. D. Farashah, and J. Thomas. 2016. “Project Management Self-Efficacy as a Predictor of Project Performance: Constructing and Validating a Domain-Specific Scale.” International Journal of Project Management 34 (8): 1417–1432. doi:10.1016/j.ijproman.2016.07.010
  • Brookfield, Stephen. 1998. “Critically Reflective Practice.” Journal of Continuing Education in the Health Professions 18 (4): 197–205. doi:10.1002/chp.1340180402
  • Chan, Cecilia K. Y., and Lillian Y. Y. Luk. 2021. “Development and Validation of an Instrument Measuring Undergraduate Students’ Perceived Holistic Competencies.” Assessment & Evaluation in Higher Education 46 (3): 467–482. doi:10.1080/02602938.2020.1784392.
  • Chan, Cecilia K. Y., Yue Zhao, and Lillian Y. Y. Luk. 2017. “A Validated and Reliable Instrument Investigating Engineering Students’ Perceptions of Competency in Generic Skills.” Journal of Engineering Education 106 (2): 299–325. doi:10.1002/jee.20165
  • Coffield, Frank, David Moseley, Elaine Hall, and Kathryn Ecclestone. 2004. Learning Styles and Pedagogy in Post-16 Learning; A Systematic and Critical Review. London: Learning and Skills Research Centre.
  • Colbeck, Carol L., Susan E. Campbell, and Stefani A. Bjorklund. 2000. “Grouping in the Dark.” The Journal of Higher Education 71 (1): 60–83. doi:10.1080/00221546.2000.11780816.
  • Crawley, E., J. Malmqvist, S. Ostlund, D. Brodeur, and K. Edström. 2014. Rethinking Engineering Education. Cham: Springer.
  • Cruz, Mariana Leandro, Gillian N. Saunders-Smits, and Pim Groen. 2020. “Evaluation of Competency Methods in Engineering Education: A Systematic Review.” European Journal of Engineering Education 45 (5): 729–757. doi:10.1080/03043797.2019.1671810
  • Cruz, Mariana Leandro, Maartje E. D van den Bogaard, Gillian N. Saunders-Smits, and Pim Groen. 2021. “Testing the Validity and Reliability of an Instrument Measuring Engineering Students’ Perceptions of Transversal Competency Levels.” IEEE Transactions on Education 64 (2): 180–186. doi:10.1109/TE.2020.3025378.
  • CTI. 2020. Référentiel 2020 Bachelor. Commission des titres d’ingénieur.
  • Direito, Inês, Anabela Pereira, and A. Manuel de Oliveira Duarte. 2012. “Engineering Undergraduates’ Perceptions of Soft Skills: Relations with Self-Efficacy and Learning Styles.” Procedia - Social and Behavioral Sciences 55: 843–851. doi:10.1016/j.sbspro.2012.09.571
  • Douglas, K. A., R. E. H. Wertz, M. Fosmire, Ş Purzer, and A. S. van Epps. 2014. “First Year and Junior Engineering Students’ Self-Assessment of Information Literacy Skills.” Proceedings of the 121st ASEE Annual conference & exposition, Indianapolis, Indiana.
  • Field, Andy P., Jeremy Miles, and Zoë Field. 2012. Discovering Statistics Using R. London, Thousand Oaks, Calif: Sage.
  • Finelli, C. J. 2021. “EER Taxonomy Version 1.3.” University of Michigan. http://taxonomy.engin.umich.edu/taxonomy/eer-taxonomy-version-1-3/.
  • Freidson, Eliot. 2001. Professionalism: The Third Logic. Chicago: University of Chicago Press.
  • Hayduk, Leslie, Greta Cummings, Kwame Boadu, Hannah Pazderka-Robinson, and Shelley Boulianne. 2007. “Testing! Testing! One, two, Three–Testing the Theory in Structural Equation Models!.” Personality and Individual Differences 42 (5): 841–850. doi:10.1016/j.paid.2006.10.001
  • Hu, Li-tze, and Peter M. Bentler. 1999. “Cutoff Criteria for fit Indexes in Covariance Structure Analysis: Conventional Criteria Versus new Alternatives.” Structural Equation Modeling 6 (1): 1–55. doi:10.1080/10705519909540118
  • Kerzner, Harold. 2009. Project Management: A Systems Approach to Planning, Scheduling, and Controlling. 10th ed. Hoboken, NJ: John Wiley & Sons.
  • Klaassen, Renate G. 2018. “Interdisciplinary Education: A Case Study.” European Journal of Engineering Education 43 (6): 842–859. doi:10.1080/03043797.2018.1442417
  • Kline, Paul. 2000. The Handbook of Psychological Testing. 2nd ed. London; New York: Routledge.
  • Kolb, David A. 1984. Experiential Learning: Experience as the Source of Learning and Development. Englewood Cliffs, NJ: Prentice-Hall.
  • Kruger, J., and D. Dunning. 1999. “Unskilled and Unaware of it: How Difficulties in Recognizing One's own Incompetence Lead to Inflated Self-Assessments.” Journal of Personality and Social Psychology 77 (6): 1121–1134. doi:10.1037/0022-3514.77.6.1121
  • Laperrouza, M. 2018. “Assessing Transversal Skills in an Interdisciplinary Programme.” 46th SEFI Annual Conference, Copenhagen, Denmark.
  • Lattuca, Lisa, David Knight, and I. M. Bergom. 2013. “Developing a Measure of Interdisciplinary Competence.” International Journal of Engineering Education 29: 726–739. https://www.ijee.ie/contents/c290313.html
  • Lattuca, Lisa R., David B. Knight, Hyun Kyoung Ro, and Brian J. Novoselich. 2017. “Supporting the Development of Engineers’ Interdisciplinary Competence.” Journal of Engineering Education 106 (1): 71–97. doi:10.1002/jee.20155
  • Lattuca, Lisa R., David Knight, Tricia A. Seifert, Robert D. Reason, and Qin Liu. 2017. “Examining the Impact of Interdisciplinary Programs on Student Learning.” Innovative Higher Education 42 (4): 337–353. doi:10.1007/s10755-017-9393-z
  • Lehmann, M., P. Christensen, X. Du, and M. Thrane. 2008. “Problem-oriented and Project-Based Learning (POPBL) as an Innovative Learning Strategy for Sustainable Development in Engineering Education.” European Journal of Engineering Education 33 (3): 283–295. doi:10.1080/03043790802088566
  • Lönngren, Johanna. 2021. “Exploring the Discursive Construction of Ethics in an Introductory Engineering Course.” Journal of Engineering Education 110 (1): 44–69. doi:10.1002/jee.20367
  • Nuhfer, Edward, Christopher Cogan, Steven Fleisher, Karl Wirth, and Eric Gaze. 2017. “How Random Noise and a Graphical Convention Subverted Behavioral Scientists’ Explanations of Self-Assessment Data: Numeracy Underlies Better Alternatives.” Numeracy 10 (1): Article 4, doi:10.5038/1936-4660.10.1.4.
  • Pant, Ira, and Bassam Baroudi. 2008. “Project Management Education: The Human Skills Imperative.” International Journal of Project Management 26 (2): 124–128. doi:10.1016/j.ijproman.2007.05.010
  • Parsell, G., and J. Bligh. 1999. “The Development of a Questionnaire to Assess the Readiness of Health Care Students for Interprofessional Learning (RIPLS).” Medical Education 33 (2): 95–100. doi:10.1046/j.1365-2923.1999.00298.x
  • Paul, D., B. Nepal, M. D. Johnson, and T. J. Jacobs. 2018. “Examining Validity of General Self-Efficacy Scale for Assessing Engineering Students’ Self-Efficacy.” International Journal of Engineering Education 34 (5): 1671–1686. https://www.ijee.ie/contents/c340518.html
  • Pedhazur, Elazar J., and Liora Pedhazur Schmelkin. 1991. Measurement, Design, and Analysis: An Integrated Approach. Hillsdale, NJ: Lawrence Erlbaum Associates.
  • Picard, Cyril, Cécile Hardebolle, Roland Tormey, and Jürg Schiffmann. 2022. “Which Professional Skills do Students Learn in Engineering Team-Based Projects?” European Journal of Engineering Education 47 (2): 314–332. doi:10.1080/03043797.2021.1920890.
  • Ponton, Michael K., Julie Horine Edmister, Lawrence S. Ukeiley, and John M. Seiner. 2001. “Understanding the Role of Self-Efficacy in Engineering Education.” Journal of Engineering Education 90 (2): 247–251. doi:10.1002/j.2168-9830.2001.tb00599.x
  • Richter, David M., and Marie C. Paretti. 2009. “Identifying Barriers to and Outcomes of Interdisciplinarity in the Engineering Classroom.” European Journal of Engineering Education 34 (1): 29–45. doi:10.1080/03043790802710185
  • Ryan, Mary. 2013. “The Pedagogical Balancing Act: Teaching Reflection in Higher Education.” Teaching in Higher Education 18 (2): 145–155. doi:10.1080/13562517.2012.694104.
  • Tormey, Roland. 2021. “Rethinking Student-Teacher Relationships in Higher Education: A Multidimensional Approach.” Higher Education 82 (5): 993–1011. doi:10.1007/s10734-021-00711-w
  • Tormey, Roland, and Siara Isaac, with Cécile Hardebolle and Ingrid LeDuc. 2021. Facilitating Experiential Learning in Higher Education: Teaching and Supervising in Labs, Fieldwork, Studios and Projects. London: Routledge.
  • Trowler, Paul, Murray Saunders, and Veronica Bamber. 2012. Tribes and Territories in the 21st-Century: Rethinking the Significance of Disciplines in Higher Education. International Studies in Higher Education. London, New York: Routledge.
  • Ulrich, Karl T., and Steven D. Eppinger. 2016. Product Design and Development. 6th ed. New York, NY: McGraw-Hill Education.
  • Verdín, D., A. Godwin, and B. Benedict. 2020. “Exploring First-Year Engineering Students’ Innovation Self-Efficacy Beliefs by Gender and Discipline.” Journal of Civil Engineering Education 146 (4): 1–14. doi:10.1061/(ASCE)EI.2643-9115.0000020.
  • Warne, Russell T., and Ross Larsen. 2014. “Evaluating a Proposed Modification of the Guttman Rule for Determining the Number of Factors in an Exploratory Factor Analysis.” Psychological Test and Assessment Modeling 56 (1): 104–123. https://www.psychologie-aktuell.com/fileadmin/download/ptam/1-2014_20140324/06_Warne.pdf

Appendix: Four factor Interprofessional Project Management Questionnaire Structure.

Imagine you are working on a project which is complex and requires inputs from a number of different professions (for example, people with legal training, engineers, social scientists, designers, etc.). Please indicate how much you agree with each of the following statements.