Research Article

Children’s understanding of well-being related questions: results of cognitive interviews in four European countries


ABSTRACT

This paper presents the results of cognitive interviews with 8-year-old children from four European countries – Croatia, France, Finland, and Ireland. The aim of the interviews was to pre-test a selection of well-being-related questions as part of questionnaire development for the first European multinational birth cohort study – Growing Up in Digital Europe (GUIDE)/EuroCohort. Unlike most previous studies, we focused on a younger and more age-homogeneous sample, as well as a more diverse set of well-known questionnaires. A total of 68 children participated in the study. The main suggestion for the interviewing procedure is to create a safe environment while minimizing the parents’ interference in answering. Questionnaires should use child-friendly vocabulary and tangible examples, and avoid complex sentence structures and negative statements. The use of timeframes in questions should be minimal. Children can use Likert-type scales, but the number of different scales in the questionnaire should be limited.

Introduction

Recently there has been increasing recognition of the importance of children’s perspectives in research. Children can offer unique insights into their lives, which are often very different from the perspectives of their parents or other people close to them (L. G. Irwin & Johnson, Citation2005). Therefore, researchers should aim to investigate children’s viewpoints directly. With that in mind, the first European cross-national birth cohort study – Growing Up in Digital Europe (GUIDE)/EuroCohort – plans to examine a sample of 8-year-olds across Europe. The study will inquire into children’s outlooks on their well-being, development, and the context in which they are growing up, in order to inform future social policies that promote well-being. The children will be surveyed cross-nationally using a harmonised questionnaire and methodology. Specifically, the study will employ the computer-assisted personal interviewing (CAPI) procedure, in which an interviewer reads out questions to the child and codes the child’s answers, similar to other large-scale cross-national studies of children such as Young Lives. GUIDE/EuroCohort will, therefore, be highly valuable as it will provide longitudinal data on children’s self-reported well-being across European countries.

However, collecting data directly from children, especially in a cross-national context, is not without its problems. Firstly, children do not always understand the context of research and are generally reserved when talking to strangers, making the interviewing process more complex than when interviewing adults (L. G. Irwin & Johnson, Citation2005). Secondly, children can have difficulties in correctly interpreting survey questions because their cognitive, communication, and social skills are not yet fully developed (Bell, Citation2007). This is especially true for younger children, who have just started to learn how to read. Most questionnaires for children are designed so that children can answer them on their own, and they are validated on such samples. However, little is known about the appropriateness of even well-established questionnaires for children who are not yet proficient readers, that is, children at the lower end of the age group the questionnaire targets. Therefore, this study aims to additionally scrutinize the appropriateness of diverse well-known questionnaires focused on children’s well-being on a sample of 8-year-olds. The problem of children having difficulties understanding questionnaire items can be further intensified in cross-national research if the survey was not adequately adapted to children’s primary languages. Hence, another contribution of the study is the validation of questionnaires for young children in multiple languages, for some of which no translation had previously been validated.

One method that can help assess and improve the appropriateness and validity of items and their translations is the cognitive interview. Cognitive interviewing (CI) is a methodological tool for pre-testing questionnaires that focuses on the mental processes respondents use to answer survey questions (Collins, Citation2015b). The aim of cognitive interviewing is to provide evidence that the survey questions meet measurement objectives, namely that respondents can provide meaningful answers (P. Beatty, Citation2004; Collins, Citation2003). It involves in-depth interviews by a trained interviewer who attempts to understand the participant’s cognitive process when answering a certain item (G. B. Willis, Citation2005). This evidence helps the question designer to decide whether the original item or its translation is problematic and how to revise it if necessary. The use of CIs improves questionnaire design (Berthelsen et al., Citation2014; G. B. Willis, Citation2005) and enhances measurement validity (P. C. Beatty & Willis, Citation2007; Collins, Citation2003; Drennan, Citation2002).

Cognitive interviewing in studies on children

When used with children, the main goal of CIs is to examine the appropriateness of an instrument for their age group. The interviewing results should provide answers about the readability of an item, its comprehension, and the ability of children in a targeted age group to respond to it validly. Compared to adult participants, children have more limited vocabulary, reading skills, attention, and cognitive ability to represent and manipulate constructs. Consequently, they have issues with word recognition, misunderstanding of content, response option incongruence, and misapplication of response options to content (Bowen, Citation2008). CIs can access the cognitive processing of children answering self-report questionnaires and serve as an important tool for designers of instruments for children.

Cognitive interviewing of children has been applied in several studies, such as the Patient-Reported Outcomes Measurement Information System (PROMIS) project on children aged 8 to 17 (D. E. Irwin et al., Citation2009), the study conducted by Rebok et al. (Citation2001) on children aged 5 to 11, the Elementary School Success Profile (ESSP) assessment on 3rd, 4th, and 5th graders (Bowen, Citation2008), EU Kids Online (Ogan et al., Citation2012) on children aged 9 to 16, and a project with 7- and 8-year-olds (Franc et al., Citation2017). The results of these studies show that cognitive interviews with children do serve as an adequate data-gathering tool to change and enhance the quality of an instrument (Bell, Citation2007; D. E. Irwin et al., Citation2009). Some common findings of these studies are that items for children should be short, use simple vocabulary and structure, and be as concrete as possible; that time references should be avoided, or should refer to shorter and specific recall periods; and that a limited number of response categories gives better results.

While these studies provide useful recommendations, most of them aimed to validate a specific new scale or set of survey questions for children on samples of diverse ages, rather than examining the appropriateness of a variety of well-established children’s questionnaires in a younger age group. Hence, our study is able to provide deeper insights into the cognitive processes of early reading-age children when answering a more diverse set of items.

Cognitive interviewing in cross-cultural studies

Another specificity of our sample is that it includes children from four European countries. While there are numerous challenges when conducting research and comparing results across multiple countries (for an overview, see Kish, Citation1994), we decided to focus on issues such as the translation of the items, participants’ understanding of the same concepts in different countries, and the cultural appropriateness of certain questions. CIs are a particularly useful tool in the development of cross-cultural survey questions and for the early detection of translation problems (Forsyth et al., Citation2007; Levin et al., Citation2009; Nápoles-Springer et al., Citation2006; G. Willis & Zahnd, Citation2007; G. B. Willis et al., Citation2008). They can be used as a tool for the semantic enhancement of multi-lingual versions of adapted tests. Semantic enhancement is the process of scrutinizing the technical qualities of translations, the connotative meaning, and the psychological impact of items on respondents across different language versions (Gray & Blake, Citation2015). The number of potentially flawed items detected during field testing greatly impacts the likelihood of reaching overall equivalence between multilingual versions of a test (Grisay, Citation2003). Several studies have reported the use of cognitive interviewing for developing multi-lingual versions of national census surveys (e.g. Fitzgerald et al., Citation2011; Goerman & Caspar, Citation2010; Martinez et al., Citation2011) and of health-related surveys (Fujishiro et al., Citation2010; G. B. Willis et al., Citation2008). However, the application of this technique to cross-cultural research is relatively new (G. B. Willis et al., Citation2008), and although guidelines for the application of CIs in cross-cultural studies exist (Fitzgerald et al., Citation2011), many studies still lack standardisation (Kudela et al., Citation2004; Lee, Citation2014). Therefore, there is a need for more research reporting on the use of CIs for the adaptation and development of cross-cultural instruments. Hence, we took special care with the methodological approach and sampling to allow for cross-cultural comparability of the study results.

Methodological approaches

Cognitive interviewing can be divided into two main methodological approaches: think-aloud and verbal probing (Forsyth & Lessler, Citation1991; G. B. Willis, Citation1999). In the think-aloud method, respondents are instructed to elaborate on what they are thinking as they answer survey questions; hence, they provide free-form answers. While this widely used technique minimises interviewer-imposed bias and requires little interviewer training, it undermines standardisation and comparability. Therefore, in this study we relied on the verbal probing method. The verbal probing method involves the interviewer asking specific questions, or probes, which are designed to elicit how the respondent went about answering the question. After the interviewer asks the survey question and the subject answers, the interviewer asks for information relevant to the question or the given answer. In general, the probes inspect the basis of the respondent’s response. Examples of such probes are: ‘Why did you respond that way?’; ‘What does that mean to you?’; or ‘Please tell me in your own words what I was asking.’. G. B. Willis (Citation2005) distinguishes between four probing variants: (1) anticipated probes – scripted or at least roughly defined based on anticipated problems with the question; (2) spontaneous probes – flexible, unscripted probes, asked by interviewers who decide to search for potential problems on their own initiative; (3) conditional probes – pre-determined probes that are applied only if triggered by particular participant behaviours (e.g. a participant’s hesitation to answer); (4) emergent probes – flexible, unscripted, and reactive probes which the interviewer selects in response to something a participant said (e.g. something that indicates a problem). Hence, most of the probes can be standardised, meaning that this method can be replicated across different research teams, languages, and cultures, and enables the coding and classification of problems. Therefore, it is more appropriate when pre-testing in a cross-national context.

Sampling in cognitive interviews

Sampling in CIs should be purposive (G. B. Willis, Citation2005). Thus, it is not designed to be representative of a larger population, but to reflect the detailed thoughts and problems of a few respondents from a relevant population (DeMaio et al., Citation1998). Consequently, such sampling cannot directly determine the extent to which the questionnaire might be problematic for the general population, but it can identify question characteristics that are believed to pose problems with unspecified frequency. Usually, 10 CIs per country are considered the minimum necessary to allow within- and between-country analyses (Fitzgerald et al., Citation2011). However, it is recommended that the sample represents diverse social and demographic patterns of respondents, and therefore in some cases quotas are used in the sampling procedure (G. B. Willis, Citation2005). Yet, even when quotas are used, analysis of CIs does not produce generalizable findings in a statistical sense. Rather, it provides an explicit exploration of response processes that could lead to response error. Therefore, we aimed to recruit sizable and diverse samples that would enable cross-national comparisons.

Present study

Ultimately, the aim of the study is to examine a selected set of GUIDE/EuroCohort survey questions, using cognitive interviews with samples of 8-year-old children in four European countries (Croatia, Finland, France, and Ireland), as well as to assess the interview protocols which will be used in the main study.

In accordance with previous findings, the focus of testing in our study was on the comprehension and appropriateness of instructions, items, and exact wording; recall and judgement in the given timeframes; and recall and judgement in using the given response formats. However, unlike previous studies, this study assesses the validity of diverse well-established questionnaires for children on a more age-homogeneous, younger sample of children. The study was conducted cross-nationally, examining the translations and cultural appropriateness, strengthening the generalisability of the results, and validating some of the questionnaires in certain languages for the first time. In order to be able to draw comparisons between the countries, special care was given to standardisation, which was somewhat lacking in previous cross-national uses of cognitive interviews.

Furthermore, the interviewing procedure itself was also observed and assessed, since the CAPI procedure that the main study will employ has many similarities to the procedure of CIs. Hence, the study also offers insights into the challenges of interviewing children more generally.

Method

Questionnaire content and translation procedure

The questionnaire pre-tested with the CIs focused on well-being, covering children’s hedonic and eudaimonic well-being, various aspects of development, as well as satisfaction with different life domains crucial for children, such as family, school, and friends. Some of the items were constructed by the research team; however, the majority were adopted from well-known scales and previous surveys such as the Positive and Negative Affect Schedule for Children-Short Form (PANAS-C-SF; Ebesutani et al., Citation2012), the Patient Reported Outcome Measurement Information System (PROMIS; Forrest et al., Citation2019), the Basic Psychological Need Satisfaction and Frustration Scale (BPNSFS; Van der Kaap-Deeder et al., Citation2020), the Kidscreen-52 (Kidscreen project, Citation2011), the Emotion Regulation Questionnaire for Children and Adolescents (ERQ-CA; Gullone & Taffe, Citation2012), the Strengths and Difficulties Questionnaire (SDQ; Goodman, Citation1997), the Self-Description Questionnaire-I (SDQI; Marsh, Citation1988), the Student Engagement Instrument-Elementary (SEI-E; Carter et al., Citation2012), the Multidimensional Students’ Life Satisfaction Scale (MSLSS; Huebner, Citation1994), and the EU Kids Online survey (Smahel et al., Citation2020).

The items used eleven different scales: three different 5-point agreement scales and a 3-point agreement scale, three different 5-point frequency scales and a frequency scale in hours, a 5-point satisfaction scale, dichotomous (yes/no) answers, and a specific list of answers consisting of different family members. The scales were adopted from the original questionnaires.

The research team selected a total of 31 items, covering all domains assessed in the questionnaire, for the CIs. We included items with different topics and instructions, items with different formats, and potentially problematic items, particularly where cross-cultural adaptations appeared difficult. For each item, we formulated aims of pretesting, that is, what aspects of the item would be assessed within the interview.

The questionnaire was created in English and translated into the national languages using the Translation, Review, Adjudication, Pretest, and Documentation (TRAPD) method (Harkness, Citation2003). The TRAPD method involves at least two translators separately translating the source questionnaire. Those translations are then discussed with an adjudicator who decides on the final wording. Afterwards, the translation can be further changed and adapted based on a pretest. In the study, each national team translated the questionnaire following this procedure. Since most items were part of well-established questionnaires, some had already been translated and validated in previous studies. Translations for such items were adopted if publicly available. The CIs served as a method of pretesting the translations, since they can be used to identify participants’ issues in interpreting items as intended, terms participants do not understand, or culturally specific meanings of terms (Forsyth et al., Citation2007). The national teams translated the source questionnaire into the majority language of their country – Croatian, Finnish and French – while the Irish team used the source questionnaire in English. Although we expected some children participating in the CIs to be from families who speak a minority language at home (15% in the overall sample), we decided not to translate the questionnaires into the minority languages for two reasons. Firstly, we wanted to assess whether the children could follow the questionnaire in the majority language for the purposes of future research. Secondly, since we expected very few children whose families did not speak the majority language to be included in the sample, any conclusions about the translations into those languages would have been based on a highly limited sample.

Sample

A sample of 68 children was established, with children from Croatia (n = 20), France (n = 18), Finland (n = 10), and Ireland (n = 20) participating in the interviews. The children were recruited through the personal contacts of the researchers and advertising on various social media sites. The participants’ ages ranged from seven to nine, with the majority of children being eight years old (46 children, 68%). The sample used a gender quota and was therefore gender-balanced, with 33 boys (49%) and 35 girls (51%). While no other quotas were imposed when recruiting participants, special care was given to the diversity of the sample based on characteristics we assumed could be connected to participants’ understanding of survey questions, as per the recommendation of G. B. Willis (Citation2005). Therefore, we considered children’s family structure, academic achievement, parents’ education, and mother tongue. Due to the large number of items tested in the CIs, the sample was randomly split in two: half of the children (subsample one) were tested on the first 18 items of the questionnaire, and the other half (subsample two) on the remaining 13 items. The exception was Finland, where the interviews were divided into two sessions and all children participated in testing the full set of items. Thus, in total 40 children participated in subsample one and 38 in subsample two.

Interviewing procedure

Ethical approval for the CIs was provided by the national research institutions in the four countries (see Note 1). Before each child’s interview, the researchers obtained a signed consent form from the parents and the child. Both the parents and the child were informed about the interview’s content, purpose and length, their rights as participants, and the fact that the interview would be audio recorded. They were given an opportunity to ask the interviewer questions if there was a need for clarification. They were reminded that they could stop the interview at any time and object to the audio recording. To encourage children’s active participation, researchers read out motivational instructions explaining the importance and value of participation in the study at the beginning of the interview, while at the end small incentives were given to children for their participation. The children were handed gift cards worth around 15 euros for a bookshop or convenience store in Croatia, Finland and Ireland, while in France they were given a puzzle. These incentives were age-appropriate and followed the ethical and financial legislation of the project, institution, and/or country. The gathered data are anonymised, treated confidentially, and stored securely.

The interviews with children were conducted in person in quiet and private rooms at the respondent’s home (France) or at the research institute’s premises (Croatia), or remotely via the Teams or Zoom online platforms (Finland and Ireland). The parents could be present during the interview if they chose to, but were instructed not to interfere with the children’s answers. All interviews, aside from those in Finland, were audio-recorded. The Croatian and Finnish teams conducted the interviews with two interviewers at the same time: interviewer A led the interview and asked questions and probes, while interviewer B took notes, recorded the child’s answers, and observed the interview process. The French and Irish teams had a single interviewer who conducted the interview and coded it from the audio recording later. All interviewers were trained using harmonised training materials (an example can be found in the Supplementary material).

The CIs were based on the verbal probing method. The respondents were asked each item: the item and the response options were read by the interviewer, and a showcard with the answer scale was provided. The respondent answered the specific item, and the interviewer asked the scripted probes related to the testing objective. If a child showed difficulties in understanding the instruction or the question, or in answering with the specific response options, conditional probes were asked. In addition to these scripted probes, interviewers could ask spontaneous probes on their own initiative to clarify unexpected issues.

Data organisation and analysis

All the narratives for each item were organized in the form of standardized protocols. The protocols were created based on the framework matrices proposed by d’Ardenne and Collins (Citation2015). The protocols provide uniform categories for coding and noting participants’ sociodemographic characteristics and answers to probes, as well as the interviewer’s observations for each tested item (an example can be found in the Supplementary material). After coding and noting the data, each national team analysed the data for their country in two steps: descriptive analysis and explanatory analysis (Collins, Citation2015a). Firstly, the teams identified and categorized issues associated with each item. Afterwards, for each question and each respondent, the researcher suggested potential causes of the detected problems (e.g. too complex or inaccurate instructions; unknown terms; unfamiliarity with certain words; ambiguous concepts), wrote notes and recommendations, and suggested changes. For each country, the detected problems, their interpretation, the supposed causes of the problems, and recommendations were compiled into a standardised summary table. The summary tables for each country were then compared in a cross-national analysis, resulting in final conclusions about the questions and recommendations.

Results

The cognitive interviews raised multiple concerns and recommendations about the interview procedure, questionnaire items, and answer scales.

The procedure of the interviews

Training for interviewers

The CIs highlighted that both the interviewers conducting CIs and those who will conduct the survey in the future should be trained. Specifically, they should be well-acquainted with the content of the questionnaire and the most frequent problems they might encounter when interviewing children, since most countries reported consistent difficulties in the interviewing process and with survey questions. The use of motivational instructions appeared to be well accepted, since children reacted positively when the interviewer explained the importance of their contribution to the study and encouraged them to participate in it. This was reflected in interviewers’ high ratings of children’s cooperation on a five-point scale: in all countries, children’s cooperation was rated as very good or excellent (M = 4.45–4.75). However, interviewers noted that they needed some time to build rapport with children, emphasizing the importance of the opening part of the interview.

Location and duration of interviews

Quiet and private locations should be secured for the interviews in both in-person and online settings, within ethical approval guidelines. Interviewers reported that children who were interviewed in the presence of other people (parents, siblings) often had problems focusing due to outside interference. On average, the CIs lasted 35 minutes (Table 1). However, after about 30 minutes, interviewers noted that children became restless and lost attention.

Table 1. Length of cognitive interviews in each country.

Presence of the parents

Although the presence of parents was helpful in keeping the child relaxed during the interview, it was also found to be occasionally disruptive to the child’s answering of the questions. Parents intervened to ‘correct’ their child’s answers, particularly those concerning abilities or skills, or the children themselves requested a parent’s assistance to answer a question. Additionally, some children were uncomfortable answering certain questions in front of their parents, especially those concerning their family life.

Questionnaire format and content

The general problems and recommendations that arose from the CIs are summarised in the following sections, while a more detailed analysis of items can be found in the Supplemental material.

Understanding instructions and questions

The CIs indicated that complicated and sophisticated vocabulary should be avoided (e.g. ‘disabled’, ‘income’, ‘honest’). While most children understood these more complicated words, a few in each country had difficulty with them. The participants showed their lack of understanding either by immediately asking the interviewer to explain the word or by saying they did not know what it meant when asked. Some concepts were overly complex or abstract for children (e.g. ‘Is your life filled with meaning?’, ‘Can you reach your goals?’), and the majority of the interviewed children in all countries either did not know how to explain them, only paraphrased the question, or understood the question incorrectly. For example, when asked what the phrase ‘life filled with meaning’ means, children gave answers such as: ‘That life has meaning.’, ‘When it [life] has meaning.’, ‘That life has meaning because nothing weird is happening.’ or ‘When it [life] is organized: I go to school, I go to sleep on time.’. Furthermore, instructions and questions should be as brief and as simple as possible to avoid confusion and a decrease in attention. Interviewers noted that almost none of the more detailed instructions contributed to children’s understanding; rather, they decreased children’s focus.

Our results show that some children have problems understanding questions phrased in a negative manner. Children found it easier to answer using a response scale when a greater number on the scale indicated a better outcome. For example, when asked the question ‘Are you worried about the way you look?’, a few children needed more time to conclude which answer (1-never to 5-always) indicated satisfaction with their looks and expressed opinions about the scale such as: ‘5 should mean the best’. Similarly, when asked how they would formulate the question if they were asking a friend, they gave the following suggestions: ‘Do you think you look nice?’ or ‘Do you like the way you look?’. Hence, such questions should be rephrased in a positive manner. Moreover, very lengthy questions and ones containing double negatives or conditions (e.g. ‘When you want to feel less bad, do you think about something different?’) should be avoided since they confuse children. Children often asked the interviewer to repeat such questions, as they did not understand them when read only once, and afterwards had difficulty rephrasing them.

Furthermore, children often had difficulty understanding timeframes. For example, when they were asked to answer a question in reference to the last week, they had problems defining which days the last week includes, even though they knew that a week consists of seven days. An example that describes this issue well is the following:

Interviewer: ‘How many days are there in a week?’

Participant: ‘Seven.’

Interviewer: ‘So if today is Thursday, which days would you consider to be in the last week?’

Participant: ‘From Monday.’

Children also had problems averaging values. For example, when asked ‘Do you get a lot of headaches, stomach-aches, or sickness?’, some children answered separately for each symptom, for instance: ‘I never have headaches, but sometimes I have stomach-aches, and I often feel I’m going to throw up in the car.’. A similar issue occurred when children were asked about frequencies. To illustrate, when asked ‘How long do you normally use your device on most days?’, children gave answers such as: ‘I spend one and a half hours [using electronic devices] on weekends and one and a half hours [using electronic devices] on schooldays’, ‘Sometimes ten minutes, sometimes half an hour […] but it’s not every day’, or ‘From one hour to five hours’. Therefore, approximation and averaging should be avoided.

Response format

Generally, the interviews found that children were able to use 3-point and 5-point Likert-type scales, frequency scales, etc. The children were asked to explain why they chose a particular answer on the scale or the differences in meaning between the descriptors on the scales, and were able to provide correct and elaborate answers: ‘ … a little is small amount, […] moderately is in the middle and extremely is a lot.’ or ‘Often means a lot of time, but not every time, and always means every time.’. The only exception was the middle response category, which in most questions implied a neutral opinion or level of satisfaction. Specifically, response categories such as ‘neither unhappy nor happy’ are not recommended, since children were often confused by the mention of both ‘being unhappy’ and ‘being happy’ in the same descriptor. For instance, when asked what this descriptor means, children answered: ‘It means I’m not satisfied.’ or ‘When someone would not like to do it, they would have answered 3 [neither unhappy nor happy]’. Hence, it seems that children do not interpret such a formulation as neutral but rather as negative.

Furthermore, interviewers reported that some children needed time at the beginning of the interview to learn how to answer the questions, or to adapt to a new scale when one was introduced. Children reacted positively to showcards. It was observed that showcards should contain, and the interviewer should read out, both the numbers and the descriptors on the scale, since the CIs showed that children understand the descriptors better in relation to the corresponding numbers. For example, it was easier for them to understand that ‘somewhat’ is less than ‘quite a bit’ when these descriptors were accompanied by the numbers three and four, and they sometimes answered using numbers rather than words.

Translation and cross-national adaptation

The CIs indicated cross-national equivalence of the questionnaire, as most issues found concerned all countries rather than specific ones. The most frequent issues in cross-national adaptation were translations of specific words. It seems that although these words were simple and understood by children in the English original, the translations sometimes used overly complex equivalents. For example, children had problems understanding the words fizička aktivnost (physical activity) and prihod (income) in Croatian, hilpeä (cheerful) and naapurusto (neighbourhood) in Finnish, and préoccupé (worried), apparence (appearance) and nausée (sickness) in French. A further difficulty was that translations were sometimes overly vague or overly specific. For example, while there were no issues in the Irish sample, in all translations the question ‘Can you do things well?’ proved problematic, since children had problems assessing their abilities in such a general manner. Similarly, the question ‘Can you reach your goals in life?’ was translated overly specifically in French and Croatian, since children answered only with regard to their school achievement.

In terms of cultural differences, some questions proved less salient to children from some countries. For example, when asked whether they have to take care of a person with a disability or somebody who is sick, most children found the question somewhat complicated; however, children from Finland found it highly difficult, since such care is provided by social services rather than the family. Similarly, when asked about free-time activities, children from Ireland, Finland and France did not know what religious education and Sunday school were, while children from Finland and France had difficulty answering questions about electronic devices and online activity, since they often did not possess their own device or were forbidden from going online. However, most questions seem to be applicable to children from all countries.

Discussion

The aim of this study was to pre-test questions using the cognitive interview method as part of the development of survey questionnaires for the first European cross-national birth cohort study – Growing Up in Digital Europe (GUIDE)/EuroCohort – focusing on 8-year-old children’s perspectives on their well-being. Most other studies that used CIs have focused only on the quality of items rather than the interview process, since the items were intended to be used as part of self-completion questionnaires for children (e.g. Franc et al., Citation2018; D. E. Irwin et al., Citation2009). Our main study, however, will follow the CAPI procedure. Therefore, it was of the utmost importance to examine how children reacted to being interviewed. Furthermore, previous research mainly included older and less age-homogeneous samples; therefore, our findings provide important insights for creating questionnaires for children who are not proficient readers. While we pre-tested items from well-known questionnaires for children, some items and scales did not seem appropriate for this age group; hence, they should be used with younger children with caution or with minor changes.

The main conclusion about the interview procedure drawn from the present study is that the child’s comfort during the interview should be prioritised. L. G. Irwin and Johnson (Citation2005) point out that building rapport is the key to the overall success of the interview. Thus, we recommend focusing on how to create a relationship with the child that enables them to relax and talk openly. In our study, the use of motivational instructions, emphasising children’s importance and role in the research, and the use of rapport-building questions proved useful in gaining children’s cooperation. Similarly, the use of practice interviews, done by the Finnish team, was useful for interviewers in preparing for their role.

Aside from making the child comfortable, the interviewer should ensure that the child is focused. The interviewers in our study reported that children tended to lose attention faster when the location was not private and when the interview exceeded half an hour. In previous studies, CIs with children of a similar age lasted from 45 minutes to an hour (Franc et al., Citation2017; D. E. Irwin et al., Citation2009); however, in the context of our findings, such interviews seem too lengthy.

Another interviewing challenge is creating an open environment when parents are present. Although parents’ presence is key to creating a feeling of safety and comfort, as well as to encouraging children to answer (L. G. Irwin & Johnson, Citation2005), in our study some parents interfered with children’s answers by suggesting responses to their child or correcting them. Some children also felt uncomfortable answering certain questions in front of their parents, which is in line with the finding that children are more prone to socially desirable answering in the presence of parents (Ogan et al., Citation2013). Thus, the survey design needs to strike the right balance between ensuring a comfortable environment for children and minimising biases linked to parental presence. The Irish team successfully introduced briefs for parents which explained that the interviewers were interested in the children’s perspective and thus requested that the parents not participate. Another recommendation could be to use self-completion modules for sensitive questions, meaning that children do not have to answer the question aloud, but rather choose an answer on a tablet. However, results from some countries suggested that children may require the assistance of the interviewers in reading self-completion modules.

Our findings regarding questionnaire design were in line with previous studies that pre-tested questionnaires using CIs with children of this age group. The most common recommendation from the evaluation process was that items should have simple wording and simple concepts. Specifically, child surveys should use age-appropriate language (Franc et al., Citation2017, Citation2018; D. E. Irwin et al., Citation2009; Ogan et al., Citation2012) and concrete rather than abstract or ambiguous language or ideas (Bell, Citation2007; Bowen, Citation2008; Franc et al., Citation2017, Citation2018; Rebok et al., Citation2001). In light of this, it is unsurprising that children had problems understanding sophisticated and abstract vocabulary. Complex concepts were also difficult for children to understand. This finding supports earlier reports that measuring variables such as autonomy might be hard in this age group (Franc et al., Citation2018). To ensure that children have a better understanding of questions and instructions, such problematic words and phrases need to be removed, further explained, or changed into more child-friendly language, as recommended by Bowen (Citation2008).

Another linguistic concern was complex sentence structure. The results show that children had problems understanding longer sentences, sentences containing conditions, and negative statements, while also losing focus or getting bored when longer instructions were read out. Similar problems were recorded in previous CI studies (Bell, Citation2007; Franc et al., Citation2018; L. G. Irwin & Johnson, Citation2005). Therefore, instructions need to be shortened to contain only necessary information, and the syntax of the questions simplified.

An additional concern about the instructions arose regarding timeframes. The literature shows mixed results on this subject: some research shows that children can understand relatively lengthy time references (D. E. Irwin et al., Citation2009; Rebok et al., Citation2001), while other research shows that they do not (Bowen, Citation2008; Franc et al., Citation2018). Our results indicate that children have problems using certain timeframes; namely, they struggle to concretely define the period the instruction refers to. Furthermore, Franc et al. (Citation2018) concluded that even very specifically defined timeframes (e.g. ‘Monday to Friday’) can pose a problem for some children. Therefore, it is recommended to avoid timeframes where possible or to define them precisely.

In line with the literature, children had little trouble with Likert-type scales (Franc et al., Citation2018; D. E. Irwin et al., Citation2009; Rebok et al., Citation2001). The only pronounced difficulty with these types of scales was understanding the middle descriptor. Children did not interpret such descriptors as neutral, because they were confused by two emotions or opinions being mentioned within one statement (e.g. neither unhappy nor happy). Hence, descriptors such as ‘in the middle’ or ‘no feelings about it’ might be more appropriate. Furthermore, response options requiring estimation and averaging also posed a difficulty. Previous studies have also concluded that children have problems remembering and assessing frequencies (Fuchs, Citation2005; Ogan et al., Citation2012). Thus, such answers should be avoided.

The interviewers noted that children needed a certain amount of time to adapt to each new scale; therefore, the use of eleven different scales in our study was impractical. The use of different scales should be limited, and children should have the opportunity to practise using the scales with practice questions at the beginning of the interview. Since previous CIs indicated that children have problems remembering a long list of response options (Bell, Citation2007), it is recommended that the interviewer reads out the response options and shows the child the response options on a printout or screen. Although some previous research suggests that children understand verbal descriptors better than numeric ones (Bell, Citation2007), our CIs suggest that children found the combination of both easiest to follow.

Most identified issues were present cross-nationally, meaning they could be attributed to poor source questionnaire design rather than to translation or national characteristics (Fitzgerald et al., Citation2011). However, a few cross-national differences were found. Following the cross-national error source typology (CNEST; Fitzgerald et al., Citation2011), they could be classified as translation problems resulting from translational error and from limited cultural portability. Translation problems resulting from translational error refer to translators’ incorrect or improper translations (Fitzgerald et al., Citation2011). These were the most common types of error discovered in our CIs, generally involving overly complex translations of some words. Such errors could be eliminated by using more appropriate words and providing examples or additional explanations (Fitzgerald et al., Citation2011). Only a few observed errors were connected to cultural portability, meaning that some concepts do not exist in all of the countries (Fitzgerald et al., Citation2011). In our case, children from different countries had various experiences with free-time activities and rules concerning electronic devices. Fitzgerald et al. (Citation2011) propose that items with such issues could be amended. In our case, better explanation of certain terms or more varied response options could clarify items and make them more culturally appropriate.

Conclusion

Despite the identified problems, the procedure and the questionnaire were altogether evaluated as appropriate for the group of 8-year-old children. The findings are consistent with previous research that employed CIs. However, the CIs demonstrated that even well-established questionnaires should be pretested and somewhat adapted when used with younger samples or cross-nationally (Franc et al., Citation2018). Hence, future research should continue to use this method in assessing the properties of children’s questionnaires, to develop useful new instruments and evaluate the quality of well-established ones, particularly when applied to new contexts. Since our CIs were conducted with young children and in a limited number of European countries, future studies should look into the generalizability of the findings for children of various ages and different cultural contexts.


Disclosure statement

No potential conflict of interest was reported by the author(s).

Supplemental data

Supplemental data for this article can be accessed online at https://doi.org/10.1080/13645579.2024.2312621.

Additional information

Funding

This work was supported by the European Union’s Horizon 2020 research and innovation programme under Grant No. 101008589.

Notes on contributors

T Babarović

Toni Babarović is a research advisor at the Ivo Pilar Institute of Social Sciences in Zagreb, and also an associate professor at the University of Zagreb, Croatia. His main research interests are in the field of vocational psychology, in particular in the areas of career development, counselling and guidance of adolescents. He has also participated in several research projects related to children’s and youth well-being and educational aspirations and attainment. He has published over 50 refereed journal articles and book chapters.

E Krpanec

Eta Krpanec is a research assistant at the Institute of Social Sciences Ivo Pilar in Zagreb. Eta is currently a psychology doctoral student at the University of Zagreb. Her research interests include children’s well-being, the relationship of teaching styles and students’ school engagement, as well as youth political participation.

M Blažev

Mirta Blažev holds a master’s degree in psychology and economics and is a European-licensed integrative psychotherapist. She works as a postdoctoral researcher at the Ivo Pilar Institute of Social Sciences in Croatia. So far, she has worked on seven scientific research projects, published eleven scientific papers and a book chapter, and participated in numerous scientific conferences (about 35 proceedings).

I Dević

Ivan Dević is a research associate at the Institute of Social Sciences Ivo Pilar, Zagreb. His research field includes children’s and young people’s well-being, education and STEM achievement. He is a co-author of 11 articles in scientific journals, 3 book chapters, and one book. He has presented over 40 papers at conferences. Ivan has participated in 33 research projects, including three FP7 and HORIZON 2020 funded projects. Ivan teaches as part-time adjunct faculty at the Faculty of Medicine at Josip Juraj Strossmayer University in Osijek, University College Aspira in Zagreb, and the University of Applied Health Studies in Zagreb.

S Downey

Sinead Downey is an aspiring educational psychologist who has completed a BA Honours in Psychology from Trinity College Dublin, an MSc in Children, Education, and Youth from University College Dublin (UCD), and a post-graduate certificate in Neurodiversity from UCD. She currently works as a research associate through the Geary Institute for Public Policy and the School of Education, both in UCD. Her research interests include neurodiversity, autism in girls and women, special educational needs, and child wellbeing.

I Huttunen

Ida Huttunen’s research interests include socio-emotional skills, school wellbeing and academic motivation. She is interested in research that aims to promote equal learning opportunities among children and adolescents. Currently she is conducting research on middle-school students’ socio-emotional skills and school engagement.

L Panico

Lidia Panico is a Professor at the Center for Social Inequalities Research (CRIS) at Sciences Po Paris, and an associated researcher at the French Institute for Demographic Studies (INED). Previously, she was a research fellow at the London School of Economics and Political Science (LSE) and at University College London (UCL). She obtained a PhD in Sociology from UCL, where she was based in the International Centre for Lifecourse Studies. Her research, at the crossroads of demography, epidemiology and sociology, aims to describe and explain socio-economic inequalities in well-being, with a focus on child outcomes and family processes. Her research uses longitudinal data, notably birth cohorts, and comparative methods.

Z Perron

Zoé Perron is project manager for the GUIDE/EuroCohort survey at the French Institute for Demographic Studies (INED). Previously, she worked on measures of child well-being at the French School of Public Health (EHESP), and on access to autonomy for foster children, which has led to several publications in French journals.

A Santos

Aurélie Santos has been a study engineer in the French Institute for Demographic Studies (INED) Survey Department since 2018. She has accompanied research teams in the development of many survey protocols and has focused her interest on “hard to reach” populations and network sampling, in urban studies and migration studies (Trajectories and Origins, Chinese immigrants in the Paris region, etc.).

L. K Taylor

Laura K. Taylor is a researcher at the School of Psychology, University College Dublin. Her research focuses on peacebuilding in children and young people. Challenging the narrative that youth are either victims or perpetrators of political violence, she studies how conflict-affected young people may make positive contributions to society. Her creative approach integrates developmental psychology and peace studies, summarized by the Developmental Peacebuilding Model (Taylor, 2020), with the potential to impact interventions and policy globally. Taylor has published over 80 articles and two books with collaborators in Colombia, Croatia, Israel, Ireland, Kosovo, North Macedonia, Northern Ireland, Pakistan, and South Korea.

K Upadyaya

Katja Upadyaya is an associate professor at the Faculty of Educational Sciences, University of Helsinki, Finland. Her research interests include student engagement, academic motivation, research methodology, cultural context of learning, and lifelong learning. She is also interested in conducting research on teacher-student and parent-child interaction, and children’s and school community’s well-being. Currently she is conducting research on students’ situational experiences while learning, socio-emotional skills, and parental burnout.

J Symonds

Jennifer Symonds is Professor of Human Development at University College London. Jennifer’s research focuses on the development of individuals’ well-being and engagement as they transition into and through education across the life course. Jennifer is Scientific Director of Growing Up in Digital Europe: the first pan-European cohort study of children and youth. Jennifer is also Director of CLOSER, the UKRI’s Centre for Longitudinal Studies Enhancement Resources. CLOSER works to promote longitudinal population studies through data discoverability, education and training, and knowledge mobilisation.

G Pollock

Gary Pollock is Professor of Sociology at Manchester Metropolitan University. He has worked in the field of youth and childhood for three decades and is currently working on a range of projects developing the first Europe-wide birth cohort survey. He is a survey specialist and has published on child wellbeing, youth political participation, the transition from school to work, and statistical methodology.

Notes

1. For Croatia the approval was given by the Ivo Pilar Institute Ethical Committee (reference no. 11–73/21–1963), for Finland by the University of Helsinki Ethical Committee (statement 70/2021), for France by the INED Data Protection Officer (reference no. 2021‐DPD‐0036) and for Ireland by the UCD Human Research Ethics Committee – Humanities (reference no. HS-E-21-182-Taylor).

References

  • Beatty, P. (2004). The dynamics of cognitive interviewing. In S. Presser, J. M. Rothgeb, M. P. Couper, J. T. Lessler, E. Martin, J. Martin, & E. Singer (Eds.), Methods for testing and evaluating survey questionnaires (pp. 45–66). John Wiley and Sons. https://doi.org/10.1002/0471654728.ch3
  • Beatty, P. C., & Willis, G. B. (2007). Research synthesis: The practice of cognitive interviewing. Public Opinion Quarterly, 71(2), 287–311. https://doi.org/10.1093/poq/nfm006
  • Bell, A. (2007). Designing and testing questionnaires for children. Journal of Research in Nursing, 12(5), 461–469. https://doi.org/10.1177/1744987107079616
  • Berthelsen, H., Lönnblad, A., Hakanen, J. J., Kristensen, T. S., Axtelius, B., Bjorner, J. B., & Westerlund, H. (2014). Cognitive interviewing was used in the development and validation of Copenhagen psychosocial questionnaire in Sweden. Proceedings of the 7th Nordic Working Life Conference. Stream 26: Methodological challenges for working life and labour market studies, Gothenburg, Sweden. https://muep.mau.se/bitstream/handle/2043/18257/Hanne-Berthelsen.pdf?sequence=2
  • Bowen, N. K. (2008). Cognitive testing and the validity of child-report data from the elementary school success profile. Social Work Research, 32(1), 18–28. https://doi.org/10.1093/swr/32.1.18
  • Carter, C. P., Reschly, A. L., Lovelace, M. D., Appleton, J. J., & Thompson, D. (2012). Measuring student engagement among elementary students: Pilot of the student engagement instrument—elementary version. School Psychology Quarterly, 27(2), 61–73. https://doi.org/10.1037/a0029229
  • Collins, D. (2003). Pretesting survey instruments: An overview of cognitive methods. Quality of Life Research, 12(3), 229–238. https://doi.org/10.1023/A:1023254226592
  • Collins, D. (2015a). Analysis and interpretation. In D. Collins (Ed.), Cognitive interviewing practice (pp. 162–173). SAGE Publications Ltd. https://doi.org/10.4135/9781473910102
  • Collins, D. (2015b). Cognitive interviewing: Origin, purpose and limitations. In D. Collins (Ed.), Cognitive interviewing practice (pp. 3–27). SAGE Publications Ltd. https://doi.org/10.4135/9781473910102
  • d’Ardenne, J., & Collins, D. (2015). Data management. In D. Collins (Ed.), Cognitive interviewing practice (pp. 142–159). SAGE Publications Ltd. https://doi.org/10.4135/9781473910102
  • DeMaio, T. J., Rothgeb, J., & Hess, J. (1998). Improving Survey Methods Through Pretesting. US Bureau of the Census. http://www.asasrms.org/Proceedings/papers/1998_007.pdf
  • Drennan, J. (2002). Cognitive interviewing: Verbal data in the design and pretesting of questionnaires. Methodological Issues in Nursing Research, 42(1), 57–63. https://doi.org/10.1046/j.1365-2648.2003.02579.x
  • Ebesutani, C., Regan, J., Smith, A., Reise, S., Higa McMillan, C., & Chorpita, B. F. (2012). The 10-item positive and negative affect schedule for children, child and parent shortened versions: Application of item response theory for more efficient assessment. Journal of Psychopathology and Behavioral Assessment, 34(2), 191–203. https://doi.org/10.1007/s10862-011-9273-2
  • Fitzgerald, R., Widdop, S., Gray, M., & Collins, D. (2011). Identifying sources of error in cross-national questionnaires: Application of an error source typology to cognitive interview data. Journal of Official Statistics, 27(4), 569–599.
  • Forrest, C. B., Bevans, K. B., Filus, A., Devine, J., Becker, B. D., Carle, A. C., Teneralli, R. E., Moon, J., & Ravens-Sieberer, U. (2019). Assessing children’s eudaimonic well-being: The PROMIS pediatric meaning and purpose item banks. Journal of Pediatric Psychology, 44(9), 1074–1082. https://doi.org/10.1093/jpepsy/jsz046
  • Forsyth, B. H., Kudela, M. S., Lawrence, D., Levin, K., & Willis, G. (2007). Methods for translating an English-language survey questionnaire on tobacco use into Mandarin, Cantonese, Korean and Vietnamese. Field Methods, 19(3), 264–283. https://doi.org/10.1177/1525822X07302105
  • Forsyth, B. H., & Lessler, J. T. (1991). Cognitive laboratory methods: A taxonomy. In P. P. Biemer, R. M. Groves, L. E. Lyberg, N. A. Mathiowetz, & S. Sudman (Eds.), Measurement errors in surveys (pp. 393–418). Wiley. https://doi.org/10.1002/9781118150382
  • Franc, R., Sučić, I., & Babarović, T. (2017). Young children’s understanding of the well‐being survey’s questions – findings from cognitive interviews in Croatia. In I. Burić (Ed.), Book of proceeding from 20th psychology days in Zadar (pp. 41–50). BOSP.
  • Franc, R., Sučić, I., Babarović, T., Brajša-Žganec, A., Kaliterna-Lipovčan, L., & Dević, I. (2018). How to develop well-being survey questions for young children: Lessons learned from cross-cultural cognitive interviews. In G. Pollock, J. Ozan, H. Goswami, G. Rees, & A. Stasulane (Eds.), Measuring youth well-being: How a pan-European longitudinal survey can improve policy (pp. 91–109). Springer.
  • Fuchs, M. (2005). Children and adolescents as respondents. Experiments on question order, response order, scale effects and the effect of numeric values associated with response options. Journal of Official Statistics, 21(4), 701–725.
  • Fujishiro, K., Gong, F., Baron, S., Jacobson, C. J., Jr., DeLaney, S., Flynn, M., & Eggerth, D. E. (2010). Translating questionnaire items for a multi-lingual worker population: The iterative process of translation and cognitive interviews with English-, Spanish-, and Chinese-speaking workers. American Journal of Industrial Medicine, 53(2), 194–203. https://doi.org/10.1002/ajim.20733
  • Goerman, P. L., & Caspar, R. A. (2010). A preferred approach for the cognitive testing of translated materials: Testing the source version as a basis for comparison. International Journal of Social Research Methodology, 13(4), 303–316. https://doi.org/10.1080/13645570903251516
  • Goodman, R. (1997). The strengths and difficulties questionnaire: A research note. Journal of Child Psychology and Psychiatry, 38(5), 581–586. https://doi.org/10.1111/j.1469-7610.1997.tb01545.x
  • Gray, M., & Blake, M. (2015). Cross-national, cross-cultural and multilingual cognitive interviewing. In D. Collins (Ed.), Cognitive interviewing practice (pp. 220–242). SAGE Publications Ltd. https://doi.org/10.4135/9781473910102
  • Grisay, A. (2003). Translation procedures in OECD/PISA 2000 international assessment. Language Testing, 20(2), 225–240. https://doi.org/10.1191/0265532203lt254oa
  • Gullone, E., & Taffe, J. (2012). The Emotion Regulation Questionnaire for Children and Adolescents (ERQ–CA): A psychometric evaluation. Psychological Assessment, 24(2), 409–417. https://doi.org/10.1037/a0025777
  • Harkness, J. A. (2003). Questionnaire translation. In J. A. Harkness, F. J. R. Van de Vijver, & P. Ph. Mohler (Eds.), Cross-cultural survey methods (pp. 35–56). Wiley.
  • Huebner, E. S. (1994). Preliminary development and validation of a multidimensional life satisfaction scale for children. Psychological Assessment, 6(2), 149–158. https://doi.org/10.1037/1040-3590.6.2.149
  • Irwin, L. G., & Johnson, J. (2005). Interviewing young children: Explicating our practices and dilemmas. Qualitative Health Research, 15(6), 821–831. https://doi.org/10.1177/1049732304273862
  • Irwin, D. E., Varni, J. W., Yeatts, K., & De Walt, D. A. (2009). Cognitive interviewing methodology in the development of a paediatric item bank: A patient reported outcomes measurement information system (PROMIS) study. Health and Quality of Life Outcomes, 7(3), 1–10. https://doi.org/10.1186/1477-7525-7-3
  • Kidscreen project. (2011). The Kidscreen-52. https://www.kidscreen.org/english/questionnaires/kidscreen-52-long-version/
  • Kish, L. (1994). Multipopulation survey designs: Five types with seven shared aspects. International Statistical Review / Revue Internationale de Statistique, 62(2), 167–186. https://doi.org/10.2307/1403507
  • Kudela, M. S., Levin, K., Tseng, M., Hum, M., Lee, S., Wong, C., McNutt, S., & Lawrence, D. (2004). Tobacco use cessation supplement to the Current population survey Chinese, Korean, and Vietnamese translations: Results of cognitive testing. National Cancer Institute. https://docplayer.net/8867172-Tobacco-use-supplement-to-the-current-population-survey-chinese-korean-and-vietnamese-translations-results-of-cognitive-testing.html
  • Lee, J. (2014). Conducting cognitive interviews in cross-national settings. Assessment, 21(2), 227–240. https://doi.org/10.1177/1073191112436671
  • Levin, K., Willis, G. B., Forsyth, B. H., Norberg, A., Kudela, M. S., Stark, D., & Thompson, F. E. (2009). Using cognitive interviews to evaluate the Spanish-language translation of a dietary questionnaire. Survey Research Methods, 3(1), 13–25.
  • Marsh, H. W. (1988). The Self-Description Questionnaire (SDQ): A theoretical and empirical basis for the measurement of multiple dimensions of preadolescent self-concept: A test manual and research monograph. Psychological Corporation.
  • Martinez, G., Marín, B. V., & Schoua-Glusberg, A. (2011). Analyzing cognitive interview data using the constant comparative method of analysis to understand cross-cultural patterns in survey data. Field Methods, 23(4), 420–438. https://doi.org/10.1177/1525822X11414835
  • Nápoles-Springer, A. M., Santoyo-Olsson, J., O’Brien, H., & Stewart, A. L. (2006). Using cognitive interviews to develop surveys in diverse populations. Medical Care, 44(11), S21–S30. https://doi.org/10.1097/01.mlr.0000245425.65905.1d
  • Ogan, C., Karakuş, T., & Kurşun, E. (2013). Methodological issues in a survey of children’s online risk-taking and other behaviours in Europe. Journal of Children and Media, 7(1), 133–150. https://doi.org/10.1080/17482798.2012.739812
  • Ogan, C., Karakus, T., Kursun, E., Cagiltay, K., & Kasikci, D. (2012). Cognitive interviewing and responses to EU kids online survey questions. In S. Livingstone, L. Haddon, & A. Gorzig (Eds.), Children, risk and safety on the internet: Research and policy challenges in comparative perspective (pp. 33–43). The Policy Press. https://doi.org/10.2307/j.ctt9qgt5z
  • Rebok, G., Riley, A., Forrest, C., Starfield, B., Green, B., Robertson, J., & Tambor, E. (2001). Elementary school-aged children’s reports of their health: A cognitive interview study. Quality of Life Research, 10(1), 59–70. https://doi.org/10.1023/a:1016693417166
  • Smahel, D., Machackova, H., Mascheroni, G., Dedkova, L., Staksrud, E., Ólafsson, K., Livingstone, S., & Hasebrink, U. (2020). EU kids online 2020: Survey results from 19 countries. EU Kids Online. https://doi.org/10.21953/lse.47fdeqj01ofo
  • Van der Kaap-Deeder, J., Soenens, B., Ryan, R. M., & Vansteenkiste, M. (2020). Manual of the Basic Psychological Need Satisfaction and Frustration Scale (BPNSFS). Ghent University.
  • Willis, G. B. (1999). Cognitive interviewing: A ‘‘how to’’ guide. Research Triangle Institute.
  • Willis, G. B. (2005). Cognitive interviewing: A tool for improving questionnaire design. Sage.
  • Willis, G. B., Lawrence, D., Hartman, A., Kudela, M. S., Levin, K., & Forsyth, B. (2008). Translation of a tobacco survey into Spanish and Asian languages: The tobacco use supplement to the current population survey. Nicotine & Tobacco Research, 10(6), 1075–1084. https://doi.org/10.1080/14622200802087572
  • Willis, G., & Zahnd, E. (2007). Questionnaire design from a cross-cultural perspective: An empirical investigation of Koreans and Non-Koreans. Journal of Health Care for the Poor and Underserved, 18(4), 197–217. https://doi.org/10.1353/hpu.2007.0118