
‘Capturing the magic’: grassroots perspectives on evaluating open youth work

Pages 486-502 | Received 13 Oct 2021, Accepted 17 Nov 2022, Published online: 08 Dec 2022

ABSTRACT

Youth work’s informal and youth-centred nature raises challenges for evaluation, challenges that are intensified by the growing dominance of measurement, market values and surveillance in the context of the neoliberal restructuring of youth services. This article builds on Griffiths’ (2012, Why Joy in Education Is an Issue for Socially Just Policies. Journal of Education Policy 27 (5): 655–670) philosophical argument for valuing the intrinsic contribution of education, thus conceptualising evaluation as encompassing more than measuring outcomes. It reports the findings of a three-year qualitative study in eight open youth work settings in England that investigated the perspectives of 143 young people, youth workers and policy makers on evaluation in youth work. While young people and youth workers had often participated in evaluations they found meaningful, some approaches to impact measurement were experienced as too formal, intrusive, insensitive and burdensome. The article argues that evaluation and accountability processes must be practice-informed, youth-centred, and anti-oppressive. It recommends the participatory and collaborative development of diverse methods and approaches to evaluation that ‘capture the magic’ of youth work while enabling further reflection and development of practice.

Introduction

How should youth work be evaluated? This has been a controversial question in youth work research and practice internationally over the last decade (Duffy 2017; Lovell et al. 2016; Ord 2014; Taylor and Taylor 2013). Aiming to contribute to this debate by providing evidence of the perspectives and experiences of young people and youth workers, this article draws on a three-year qualitative study investigating impact measurement and evaluation in youth work in England (2018–2021).

Our discussion is rooted in an international policy context in which requirements to ‘prove’ impact can constitute a barrier to equitable and community-centred practices (Baldridge 2019; Coultas 2020). Dominant mechanisms of accountability in public services are based on a neoliberal logic of predefined, standardised and measurable outcomes that encourage competition and comparison between services, and the conversion of data into claims of ‘value for money’. This provides a simplistic and individualised view of how diverse experiences and relationships contribute to young people’s lives in a wider political context of social inequalities (de St Croix, McGimpsey, and Owens 2020). Within this international context, England provides an interesting focus due to its long history of youth work contrasted by severe cuts in recent years (YMCA 2020), alongside the introduction of a new ‘youth impact agenda’ that was - at least initially - based predominantly on performance targets, predefined outcomes, and impact measurement (de St Croix 2018).

These challenges are particularly acute in open youth work, a practice of informal education encompassing youth clubs, detached or street-based youth work, and online group work. Open youth work is open-ended in terms of who participates, how, why, when, and for how long; a crucial aspect is that young people attend by choice (Davies 2021). Its guiding logic conflicts with traditional impact measurement methods based on predefined individual outcomes, measured through standardised questions asked of young people before and after an intervention. Open youth work is not an intervention, but rather a practice of indeterminate length; young people engage flexibly on their own terms and outcomes emerge naturally through this process of engagement. While funders and impact measurement advocates are increasingly aware of these tensions (Hill, Scanlon, and Anderton 2019) and there is a diverse range of practice in the evaluation field, this article is concerned with how the growing impetus towards the measurement of outcomes and impact is experienced on the ground.

In the next section we draw on literature to elaborate further on the disjuncture between youth work practice and dominant approaches to evaluation, first discussing the policy context, and then conceptualisations of evaluation beyond outcomes. We then introduce our study and its methodology. In the findings section, we discuss what evaluation looks like in youth work, the challenges experienced by young people and youth workers, and how they navigate these challenges. We finish by arguing for a practice-informed, youth-centred, anti-oppressive approach to evaluation in policy and practice, both within and beyond youth work.

Background

We situate this article in conversation with scholarship and practice that engages critically and politically with evaluation, impact measurement, and accountability, within and beyond youth work, in England and beyond. We start from a concern that is widely discussed in youth work practice, research and activism: a disjuncture between the informal and process-oriented nature of youth work, and requirements for prescriptive monitoring and evaluation (e.g. Duffy 2017; Fusco et al. 2013; Gee 2020; Ord 2014; Taylor and Taylor 2013). The literature highlights key tensions: (1) predefined outcomes and ‘pre and post’ measurement are incompatible with the open timescale and purpose of youth work; (2) anti-oppressive and youth-centred principles are undermined by labelling, surveillance, and externally defined outcomes; and (3) bureaucratic systems of measurement clash with informality. These tensions cannot simply be resolved by selecting ‘better’ evaluation tools, because they raise political questions: Who decides how youth work is evaluated? How is evaluation shaped by structural inequality? How does it challenge and/or reproduce inequalities? How does evaluation challenge and/or enable neoliberal processes of marketisation, competition and performativity? Thus, it is important to start with a focus on policy and politics.

Evaluation in youth policy

In policy terms, youth work is often seen as a ‘Cinderella service’, lacking recognition, status, and resources. While operating ‘under the radar’ enables some flexibility, it renders youth work particularly vulnerable to economic downturns and policy trends. Both in England and internationally, youth work and related practices (such as afterschool, youth development, youth organising, and community education) rely on multiple sources of income including from local and national government, charities, schools, housing providers, philanthropy, fundraising, and revenue (e.g. room rental). This means that policy and practice on evaluation are influenced by multiple resource holders, and by an overall context of precarious and short-term funding.

Evaluation in youth work is currently shaped by three broad interlinking policy agendas. The first and most visible are austerity and cuts. In England, where our study took place, the effects are stark, particularly for low income, working class, ethnically minoritized and rural communities. Local authority spending on youth work in England in 2018 was almost £1 billion (70%) lower than in 2010 (YMCA 2020). In London, 130 of 300 youth centres closed between 2011 and 2021; the average number of youth workers per borough fell from 48 to 15 (Berry 2021). Austerity acts discursively as a rationale for evaluation, exhorting organisations to provide stronger justifications of impact to survive. Yet cuts are also an impediment to evaluation, reducing staff time and creating an atmosphere of insecurity that militates against evaluation for learning and honest communication of impact and its limitations. Community representatives have drawn attention to the ‘data burden’ that especially affects smaller organisations (Darking et al. 2016; see also Lovell et al. 2016); it is important to note that these organisations are more likely to be embedded within working class, Black, ethnic minority, disabled, and/or LGBTQ+ communities.

The second policy agenda shaping evaluation in youth work is the ‘youth impact agenda’: a broad agreement amongst influential individuals and organisations that youth services must measure their impact, with a common (even if not universal) preference for quantitative techniques and validated psychology tools. This agenda grew in the wake of a House of Commons (2011) enquiry on services for young people that criticised the youth sector for a lack of ‘credible’ evidence of impact. Although the youth impact agenda reflects and reproduces neoliberal managerialism, its adherents are more likely to justify it in terms of science and truth (Duffy 2017). Yet there are debates here about whose truth counts; youth worker and activist Farzana Khan has pointed out that services based on Eurocentric or white methods ‘reproduced harm for marginalised communities’ (interviewed by Akram 2019) and youth programmes are too often ‘framed as sites of containment and control for Black and other minoritized youth’ (Baldridge 2019, 15). Impact measurement implies a shared understanding of what constitutes positive change, often neglecting the effects of unequal social and economic conditions (Recovery in the Bin 2016). In recent years, impact advocates have generally moved away from a focus on ‘proving’ the value of youth work towards broader concerns of practice improvement (Hill, Scanlon, and Anderton 2019); however, quantitative measurement remains dominant.

This prominence is partly incentivised by the third policy agenda influencing evaluation in youth work: decision-making based on ‘social value’ or ‘social return on investment’. In the UK, the interaction of Treasury-led policy and the evaluation practices of a large, short-term youth programme (the National Citizen Service) can be seen as a ‘social investment machine’ that creates new and experimental ways of calculating financial impact from outcomes measures (de St Croix, McGimpsey, and Owens 2020). The results are beginning to be seen in evaluation reports, where youth work’s benefits are ‘ascribed a monetary value that signals its relative value compared to other things’ (Kenley and Pritchard 2019, 5; see also McMahon 2021 on ‘value for money’ discourses in Irish youth work).

These three policy agendas are mutually reinforcing, while producing tensions and enabling contestation. Economic social value claims generally rely on standardised measurement tools to assess young people’s skills or behaviours at ‘baseline’ and on completion of a programme. Austerity is used to justify both social value and impact measurement agendas. Yet it is not as simple as impact measurement only being imposed by funders or government; organisations might measure their impact to mark themselves out as competitive (Arvidson and Lyon 2014). Technologies of measurement, comparison and control create cultures of performativity, which restrict what individuals and organisations can do, while rewarding engagement in these processes (Ball 2003; de St Croix 2018). Thus, dominant approaches to impact measurement and evaluation reproduce neoliberalism through incentivising competition, cost-unit comparisons, and individual progress narratives. This tends to devalue young people’s and practitioners’ understandings of the value of youth work, as we now go on to discuss.

Conceptualising evaluation beyond outcomes

Outcomes-based measurement has been criticised by practitioners, activists and researchers for relying on an outside imposition of what counts as ‘success’ and narrowing the complexities of human experience (e.g. Coultas 2020; Lowe 2013; Recovery in the Bin 2016; Taylor and Taylor 2013). The value of youth work is demonstrably wider and more nuanced than what can be measured. Ord et al. (2021) draw on 844 young people’s stories of youth work impact in six European countries to argue that many of the most prominent themes identified by young people – such as enjoyment, friendship, and atmosphere – are neglected in policy or seen as merely instrumental:

Friendship in policy terms is often framed as a means to an end – something that is important in order for youth work to be successful and achieve its policy priorities – whereas young people see it as an end in itself. For young people, friends and friendship are at the very heart of youth work. (Ord et al. 2021, 12–13)

The impact of youth work beyond outcomes has been highlighted in several qualitative studies, including in this journal. McPherson’s (2020) interviews with young people demonstrate the central importance of relationships between young people and youth workers, as one working-class young woman explained: ‘They see something in you that your teachers didn’t or your parents don’t … They trust you from day one and respect you, like, as a person … It’s just made me more confident that I can do stuff … They’re honestly the best people I’ve met in my life’ (in McPherson 2020, 316). In this account, ‘confidence’ might be seen as an outcome; yet the young woman’s account also refers to aspects that have intrinsic value, such as feeling trusted and respected, and spending time with youth workers. Similarly, Ritchie and Ord’s (2017) article highlights the importance of association with other young people, with one youth club member saying, ‘you gave me a second chance no one’s done that for me before’ and another saying ‘I was really shy [when I started at club], I thought people were going to take the piss out of me for being deaf. But they don’t’ (276).

To support our thinking on the intrinsic value of youth work, we are informed by Morwenna Griffiths’ (2012) philosophical discussion of social justice and joy in education. Griffiths argues that the value of education must be limited neither to its instrumental outcomes (such as economic benefits), nor even to individual outcomes (such as autonomy and citizenship), but should include the intrinsic, integral purpose of education as:

 … good in itself, as part of what makes a good life good, not just as part of what is needed to produce the good life. And here the experience, the process is significant. Aristotle argued convincingly that the good life is experienced as pleasurable as well as being good … . (Griffiths 2012, 656)

Griffiths argues that accounts of social justice in education must include attention to the just distribution of intrinsic aspects such as curiosity, satisfaction, delight, and joy in learning, rather than being limited to a concern for equality of outcome (important though this may also be). Although her argument relates to formal education, it has considerable salience for youth work, which has always been concerned with the intrinsic alongside the instrumental, demonstrated through a concern with young people’s experiences in the here and now, not only their future lives or transitions (Davies 2021; Spence 2004). Social justice thinking in youth work tends to be critical of a focus on individual outcomes; instead, it learns from social movements, uses the language of ‘anti-oppressive practice’ (Chouhan 2009), and shifts the focus from individual behaviour towards the impact of social inequalities on young people (Ginwright and Cammarota 2002). It is based on a vision of youth work as ‘volatile and voluntary, creative and collective – an association and conversation without guarantees’ (In Defence of Youth Work 2009).

These central and integral aspects of youth work can be undermined by funder requirements for evaluation. For example, in Denmead’s (2021) ethnographic research at a youth arts centre in the USA, young people from marginalised communities valued ‘chillaxing’ in the centre, explaining that resting and being unproductive was necessary for them to recover from racist and classist experiences at school. Yet the centre was required to use an auditing tool that measured aspects such as whether materials were ready, activities were clearly explained, and ‘students’ completed activities without distraction. While such requirements might appear harmless, they militated against the open structure and timing that young people preferred. While we acknowledge that some funders take a more sensitive and practice-based approach, we focus in this article on the key challenges and tensions in evaluation, as they are experienced by young people and youth workers.

Methodology

This article draws on a three-year study (2018–2021) that investigated how impact measurement and evaluation tools and processes are experienced and enacted by young people and practitioners in youth work settings. The study took a qualitative approach based on 87 interviews and focus groups with 143 young people, youth workers and policy influencers in England (16 of whom took part in two or more interviews or focus groups), alongside 73 sessions of participant observation. Research took place in eight open youth work settings, purposively selected to encompass a diversity of youth work approaches, locations, and organisation types (see Table 1). This article draws on the perspectives of the 58 young people (mostly aged 13–19) and 59 youth workers who took part, and on fieldnotes from participant observation.

Table 1. Participating organisations.

The research began in the first half of 2019, when we made several visits to each of the eight youth work settings. We participated in youth work sessions, debriefs and team meetings, and undertook in-depth semi-structured interviews with 14 managers and administrators, and 22 focus groups with 29 youth workers and 37 young people. We then selected two of these settings (Melham and Seaside) for more in-depth research, enabling us to build a deeper contextualised understanding of evaluation and monitoring in these contrasting settings over time. Our longer engagement in these settings from December 2019 to October 2020 enabled greater relationship building, fluidity, collaboration and creativity. Alongside further regular participant observation in youth work sessions, and interviews and focus groups with 40 young people, youth workers and managers, we were able to use methods that emerged from young people’s needs and interests, and the rhythm of work in these organisations. This included a tour of a youth club; photograph and music elicitation, through the sharing and discussion of photographs and songs that relate to youth work (Levell 2021; Varvantakis and Nolas 2021); and a ‘paper chatterbox’ with questions selected and asked in collaboration with young people. This approach was akin to what Batsleer and Duggan (2021) have called ‘youth work as method’, drawing on principles of anti-oppressive youth work practice and youth participatory research to engage creatively and flexibly with young people and youth workers, in accordance with their wishes and interests.

From March 2020 the research was impacted by the Covid-19 pandemic; as a result, some of the participant observation, interviews and focus groups at Seaside and Melham took place online. The implications of the pandemic and lockdowns are not the focus of this article; yet it is important to note that, while online interviews and focus groups worked well where we had already built relationships pre-pandemic, it was more difficult to engage with new participants (see Arya and Henn 2021). In addition, while Seaside Youth Club quickly moved online and we were welcomed to continue our participant observation, Melham’s detached work was more seriously disrupted in the initial months of the pandemic, when even outdoor social contact was not permitted.

The research was guided by situated principles of ethical youth research and based on principles of trust, respect and informed consent (Batsleer and Duggan 2021; te Riele and Brooks 2013). While written information sheets and consent forms were provided for participants and young people’s parents and guardians, verbal discussion of the research process was of vital importance in the informal context of youth organisations, and consent was seen as an ongoing process rather than a one-off event (Hill 2017). The names of youth organisations and their participants were pseudonymised (some participants selecting their own pseudonyms), and any details likely to identify organisations or individuals were omitted. Interviews and focus groups were audio recorded and professionally transcribed. We wrote fieldnotes immediately after each session, following a loosely structured format (noting thoughts before, our role, what happened in the session, any observations on evaluation and monitoring, key moments, key words and emerging themes); fieldnote extracts have been minimally edited for clarity but without changing substantive details.

We took a collaborative approach to data analysis, aiming to be reflective, fluid, intuitive and responsive to the data whilst remaining grounded in a consistent approach (Gewirtz 2001). We used selected tools from constructivist grounded theory, including coding, theme identification, and written analytical memos (Charmaz 2006). Initial manual coding identified potential themes and questions, which we discussed in detail to generate broad baseline codes. We used coloured coding stripes and annotations on a selection of paper transcripts using these codes, reflecting on this process before transferring all fieldnotes and transcripts onto data analysis software (NVivo), where we shared out data for analysis as well as coding and comparing a selection of the same transcripts and fieldnotes to ensure consistency. Regular data analysis meetings offered a systematic way to reflect on our analysis and identify tensions, contradictions, omissions, questions and linked themes. As researchers with professional backgrounds in youth work practice, we engaged with social justice driven imperatives while remaining vigilant to our own preconceptions and perspectives, supported through a process of personal and collaborative reflection, and ongoing engagement with wider communities of youth scholarship and practice.

Findings

Throughout our study, youth workers and youth work managers expressed enthusiasm for evaluating their work, and commitment to making this evaluation meaningful for young people and for themselves as practitioners. The title of this article is inspired by part-time youth workers at Journeys, discussing an evaluation activity with young people:

Fern: … we posed these questions for a conversation, and we ended up having a really rich conversation come out of it with people really expressing a huge diversity of opinions, which if we’d just gave them a form to fill out, none of these discussions would’ve happened. And there wouldn’t have been the benefit for the young people of the conversations. It would’ve very much felt like ‘ah, but we’re doing this because the youth workers have asked us to do it, and then we’ll just forget about it’.

Zayn: Yeah. Were you saying, George, about capturing the magic of the work, yeah?

George: Yeah […] you did a write up of that, and that captured a bunch of this stuff that went on. But also it’s like, just knowing the magic is happening when you’re doing an evaluation is pretty special.

The phrase ‘capturing the magic’ evokes the challenges and potential joys in evaluating youth work. It is almost an oxymoron, as it suggests the capture of something unknown and mysterious. Yet despite the challenges, the youth workers here suggest that magic can happen in evaluation too. In this section we draw on interviews, focus groups and observations to first provide a snapshot of current evaluation practices in open youth work, then share some key challenges and tensions, and finally discuss how these tensions were navigated by youth workers and young people.

Current approaches to evaluating youth work

We start by outlining the main approaches used by the eight organisations in our study to evaluate open youth work (Table 2). The table is not exhaustive, and does not include details of frameworks or processes underpinning evaluation (e.g. theory of change; participatory design), nor the databases used to record attendance, participation, demographic data, outcomes and/or activities. It clearly shows that, while there are some commonly used methods, there is also great diversity.

Table 2. Evaluation in youth work settings.

Each organisation’s approach to evaluation was shaped by multiple considerations, including the requirements of resource providers, pragmatic aspects (resource constraints, staff confidence, or young people’s willingness), and organisational and professional principles and values, such as a concern for ethics, anti-oppressive practice, and young people’s perspectives. In post-austerity England, it is perhaps unsurprising that funding and managerial factors were highly influential. Some external funders engaged with practice realities and enabled flexibility; for example, involving grantees in developing evaluation frameworks, or enabling them to choose their own outcomes and evaluation methods. In contrast, some youth services, funders and commissioners required extensive monitoring that was regarded by youth workers as labour intensive and often inappropriate.

In each organisation, conversations played a central role in evaluation, including meetings and discussions with young people, informal chats during sessions, and staff debriefs. Every organisation used databases to record attendance and participation. Most incorporated creative and participatory approaches, case studies and/or storytelling. Questionnaires were common, but these were rarely validated impact measurement tools; the Life Effectiveness Questionnaire at Opal was an exception (and was seen by the manager, Wilson, as unsuited to an open youth work context).

As discussed earlier, we take an open and expansive view of ‘what counts’ as evaluation. Clearly, evaluation has diverse purposes – including organisational learning, practice and professional development, and accountability to resource providers. In some settings, evaluation focused on data collection for upwards accountability rather than learning and development; in others, there was an emphasis on reflective practice and little organised or formal evaluation; and in some organisations, extensive thought and effort was expended in developing creative, youth-centred approaches that enabled learning and reflection alongside providing information to funders. A common theme was that youth work is a particularly challenging arena for evaluation, as we now go on to discuss.

Challenges in evaluation practice

During our study, we experienced many versions of youth work: busy, chaotic, noisy, ‘drop in’ youth clubs; workshops where trans young people enjoyed snacks while reflecting thoughtfully on their hopes and dreams; lively discussions outside the shops after school; relaxed chats while customising old clothes. The youth work process clearly relied on a complex interaction of skilled youth workers, relationships with peers and adults, stimulating conversation, interesting activities, and the chance to chat, relax, take up space, and simply ‘be’. Given this diversity and complexity, it is hardly surprising that youth work is difficult to evaluate. Here, we highlight young people’s and youth workers’ perspectives on the main challenges inherent in evaluating youth work. We conceptualise these as: formalising the informal; intrusive and inappropriate questions; and bureaucratic burden.

Formalising the informal

Most young people and youth workers emphasised that youth work evaluation should suit the distinctively informal nature of youth work. This was commonly characterised by a preference for conversation over form-filling:

Me, I hate writing on paper. So like being able to speak, and like literally just speak thoughts straight out of your mind it’s so much easier. Because like otherwise you have to think of words. And I don’t like thinking of words. (Delilah, young person, Seaside)

Despite saying she doesn’t like ‘thinking of words’, Delilah was highly articulate, active in discussions, and designed flyers for events at the youth club. Yet resistance to ‘paperwork’ was reflected throughout our data, suggesting it needs to be taken seriously:

… young people don’t want to fill out paper, they are like, ‘why are we doing this?’ But if we sort of had like a board where they could write their own quotes, or captures like random words, I feel that that would be more effective … (Mel, youth worker, Melham)

The problem may not be writing per se: a young woman at Dove Street spoke proudly of writing two pages on how youth work had affected her; and we read moving accounts written by young people at Journeys on how youth work had changed their lives. But where writing was welcomed by young people, our observations made clear that it was always a choice, not something imposed or expected. We suggest that it is not paper or writing that formalise youth work settings, but rather that extensive or repeated use of forms (whether paper or digital) can evoke ‘school’:

… school is not necessarily a hugely positive experience for all of them. […] And youth club doesn’t have to be like that for them. (Nicole, youth worker, Opal)

… there’s so many different creative ways but form filling, it does just, it wouldn’t fit with here. It would make it feel like school and it would, I dunno, take away from just what this space and what the agenda is all about. And about it being youth led … (Holly, youth worker, Seaside)

Intrusive and inappropriate questions

A related issue was the reductive and labelling nature of some forms of evaluation and monitoring that conflicted with anti-oppressive practice, especially where these were standardised and not co-designed with youth workers and young people. This led to some awkward moments:

In the ‘gender’ section, James [the manager] had indicated for people to circle M or F. One young person requested ‘prefer not to say’. This caused some discussion later as James hadn’t considered that might come up. I reflected on how brave it was for the young person to flag this omission … (Fieldnote, Melham, detached session)

During the discussion, a youth worker explained that it would help with funding if young people could provide their postcode and tick ‘yes or no’ to whether they or their family receive benefits. There was an uncharacteristic silence as young people passed the paper round. Despite it being made clear that it was optional and confidential, it felt out of kilter with the sensitive and inclusive approach in the rest of the session. (Fieldnote, Journeys, Saturday session)

Questions that might be seen as appropriate in psychology research could be experienced by young people as labelling, stigmatising and sometimes triggering. While young people were keen to discuss personal issues with youth workers, this was in the context of trusting relationships and sensitive discussion, whereas standardised forms experienced by young people elsewhere (mental health services in particular) were seen as negative and inappropriate for youth work contexts:

… some days you don’t really want to think about that question, and it can make you think too far into it, and then you can be left thinking about it for the rest of the day […] sometimes the things that it says on those questionnaires can actually give you ideas, instead of helping you. (Luna, young person, Seaside)

Bureaucratic burden

Evaluation and monitoring created significant demands on organisations, youth workers and young people. This demand became a burden where methods appeared unfit for purpose, unwieldy, and disproportionately time-consuming:

Sheldon mentioned so many different evaluation and monitoring tools … there was almost a sense of fatigue as he listed them … I'm struck by the weight of this sort of bureaucratic burden … not only the way organisational change happens but by the pressure put on staff to embrace, respond, enact ever-changing systems of evaluation without ever being part of their design. (Fieldnote, Melham, detached session)

The frustration for youth workers was that these systems took time away from work with young people, without providing opportunities for reflection:

Yeah, so [council database] is like a swear word in the youth work world. … you’re gonna put them on this ridiculous database that takes fucking ages, and use a lot of your capacity which could be working with young people … (Dawn, youth worker, Seaside)

It used to take up a really long time. And then it would be frustrating cause you wouldn’t be able to be doing your youth work, but you’d be trying to evidence youth work that you didn’t have time to do. (Nora, youth worker, Riverpath)

At Melham, monitoring and evaluation requirements were literally a burden. Youth workers were required to carry a heavy laptop on detached youth work sessions to enable them to complete the database, regardless of the risks of carrying valuable equipment on remote estates late at night. After two sessions of youth work and returning the youth bus to its depot, they had to fill in a repetitive online form at a supermarket café or in a vehicle, despite poor internet reception. This appeared to be managerial surveillance rather than evaluation for learning; senior managers checked that the database had been completed and social media updated, but without acknowledging the content:

… it’s very longwinded now … I think some of the questions kind of say the same thing but just in a different language, so that can take time. So yeah, you debrief and then that goes onto [database], but I don’t really think the managers really look at it. (Mathew, youth worker, Melham)

In some organisations, practitioners’ disengagement with ever-changing forms of evaluation and monitoring was palpable:

Luke (manager) listed several new forms that would be introduced and a new system in which each staff member was expected to identify two young people in advance of each session and an intended outcome, writing up what happened afterwards. This was agreed to be a bad idea … Throughout the discussion, the part-time workers (who will need to implement this new system) looked really fed up – arms folded, frowning, looking away or at the floor. (Fieldnote, Fairlight, staff meeting)

Throughout our research, youth workers were keen to evaluate their work. They wanted to learn, reflect, hear young people’s views and work alongside them to think of new ideas. Yet they were critical of evaluation that was inappropriately formal, intrusive, burdensome and reductive of the complexity of young people’s lives:

You arrange quite an informal environment … and then at some point someone steps in and goes, ‘cool, ok, how can we turn this chaotic organic madness into data?’ (Dylan, volunteer, Seaside)

It sometimes seems a shame to have to translate life and experiences and great conversations and stuff into something that answers a question. (Gareth, youth worker, Dove Street)

These and other similar perspectives from our research resonate with our earlier argument, drawing on Griffiths (Citation2012), that evaluation must centre on the intrinsic value and process of informal education, rather than focusing predominantly on outcomes. The reductive potential of some forms of evaluation made them, in the eyes of young people and youth workers, particularly unsuitable and unethical in youth work contexts. However, in every setting we saw youth workers engaging with young people and colleagues to evaluate their work thoughtfully and meaningfully, as we will now go on to discuss.

Navigating the challenges

In this final section, we share how the challenges of evaluation are navigated in practice:

During a chat with two young women that ranged from school to K-pop, Jenny mentioned that she needed to write up a report for the funder, and asked how they would prefer to feed in. They were unsure so she gave examples: a questionnaire, a recorded conversation, film, putting photographs on the wall and adding notes or captions, or using big paper on tables. One said a questionnaire would be ok; the other looked doubtful and suggested a conversation. They discussed how a conversation could be captured, such as by filming or audio recording. (Fieldnote, Dove Street young women’s group)

Being youth-centred and anti-oppressive means responding to the situation in that setting, at that time, with those young people, rather than seeing some methods as fundamentally better than others. Having said that, some methods were recommended by youth workers and young people as having worked well in open youth work contexts – see Box 1.Footnote3
Box 1. Evaluation methods recommended by youth workers and young people.

  • Group conversations.

  • Flipchart sheets with questions, post-its, coloured pens.

  • Creative methods (e.g. video/audio/photography).

  • Fun activities to rate statements/questions.

  • Human thermometer (hands indicate level of agreement).

  • Anonymous suggestion box.

  • ‘Speed-dating’ conversations between young people and funders.

  • Storytelling.

  • Simple, flexible questionnaires.

  • Staff debriefs and reflection.

Standardised, validated impact measurement forms were rarely favoured, as they did not allow for flexibility and were sometimes experienced as inappropriate or triggering, as discussed above. However, some settings had positive experiences with ‘tick box’ questionnaires, and some young people said they were happy to complete them if they were short, infrequent, and non-intrusive. At the Vaults, youth workers used active and creative methods such as young people throwing balls in numbered buckets to answer questions. At Journeys, a one-page form asked young people to rate themselves against three outcomes and add optional comments. Youth workers recommended that questionnaires should be used rarely (once or twice a year); questions and any intended outcomes should be reviewed regularly by youth workers and young people; young people should be able to choose whether or not their participation was tracked over time;Footnote4 and questionnaires should be quick, optional and relevant:

… the user feedback is a list of like twelve questions asking things like, do you trust staff? Do you feel respected? […] We try and rotate the question, so we don’t ask the same question every time. So we’ll try and ask say two questions in a session. We don’t always get all of them to answer it either, so if we’re doing an activity with them there might be questions behind the stuff and we’ll go right, just do a yes or a no, do a tick or a cross and they’ll quickly fill it out. So the idea of that one, it takes thirty seconds for the young person to do. (Zara, Programme Manager, Vaults)

In some settings, youth workers worked alongside young people to think critically about what constitutes impact and how it might be demonstrated. Crucially, this was enabled by the sensitive and flexible practices and requirements of some funders. The impact questionnaire at Journeys was based on outcomes co-designed with young people and youth workers. Dove Street adapted one of their intended outcomes after discussion with young people and staff. Seaside involved young people in writing bids, in this case responding to their request for more music events:

Aaron [manager] suggested they think about their experience of recent youth club gigs and asked them their views individually. For the question ‘how will you know if it is successful?’, Archie (who makes electronic music) suggested that at a successful gig, young people would watch all the acts, including types of music they don’t normally listen to and performers they don’t know, and that they would enjoy it. Aaron asked how we would know if young people had enjoyed it; Archie suggested they would appear to be listening, would stay in the gig space, might dance, and would encourage each other. (Fieldnote, Seaside, online youth club)

The group became restless within a few minutes and the session moved on; Aaron said he would bring the funding application back another time. Yet despite the unconducive online setting, this conversation resulted in creative evaluation ideas that may not have been identified by adults. Youth-centred evaluation must be dynamic and dialogical. At best, it is a collaborative process that – whether it happens in conversation, in writing, or through artistic and creative methods – is magical in its own right.

Conclusion

Throughout our study, youth workers sought to make evaluation meaningful, ‘capturing the magic’ in the sense of recording the impact of practice, as well as creating the possibility of magic through evaluation. Such evaluation is meaningful, participatory, flexible, and enables young people and youth workers – particularly those from marginalised social groups – to take part in the co-creation of knowledge. This is a critical, dialogical process; it is not based on an assumption that youth work is always and only positive in its contribution to young people’s lives, but starts from a perspective that evaluation is about reflection, mutual learning, process and practice development, rather than surveillance, top-down managerialism, categorisation and extraction. It is a process of growth, challenge and change that seeks to shape policy and funding by centring the perspectives and experiences of young people, and to shape practice by creating space for reflection and learning.

However, the potential for meaningful, participatory evaluation can be constrained by inappropriate demands for data, and through evaluation and monitoring processes that are experienced as boring, meaningless, time-consuming, triggering, intrusive, or reinforcing of inappropriate categorisations and individual progress narratives. That this happens even in youth work, with its central tenets of informality and openness, highlights the hegemonic nature of impact measurement in a neoliberal context. If evaluation in youth work is to be practice-informed, youth-centred and anti-oppressive, young people and practitioners must be at the heart of thinking not only about methods but also about the wider policy context – the conditions that enable good evaluation practice to take place.

Earlier in this article, we engaged with Griffiths’ (Citation2012) discussion of the ‘joys and delights’ of educational experiences, the integral ‘good’ of education beyond what it produces for individuals or communities. In our study, we observed and experienced joys and delights, fun and inspiration, as well as moments of dreariness, conflict and frustration – a valuable, messy range of experiences and emotions. To enable youth work that is truly magical, it is essential to listen closely and carefully to what young people and youth workers value about youth work, and how evaluation can best support this. While this article has focused on examples of youth work that are highly valued by young people, we recognise that evaluation may have a role to play in identifying and preventing problematic or harmful practice. This is outside the scope of this article, yet in brief, we are doubtful that the metrics-based systems of evaluation preferred in policy are particularly effective in harm prevention, and we continue to argue that young people and practitioners must be central to thinking about these issues. As we have argued elsewhere, evaluation should suit the setting, challenge inequalities, and capture and value the everyday and intrinsic elements of practice as well as emerging outcomes (Doherty and de St Croix Citation2019). This involves taking an expansive, holistic, flexible, and inclusive view of what ‘counts’ as impact, celebrating creative and experimental approaches, while challenging wider social injustices in policy and practice. We suggest that this way of looking at impact is relevant beyond youth work, to other practices with young people and communities more generally.

Before finishing, we return to the part-time youth workers from Journeys, who were asked what they would recommend to policy makers and funders:

George: Leave your desk and come and see us. [Others: yeah!]

Harry: Give us access to funding without the criteria. Let us make the criteria because we do know what we’re doing […] And like, we ask the young people what they want as well. Like just let us make the criteria, rather than telling us what we’re meant to be doing.

George: We’re a great bunch, we’re gonna like make some cool evaluation stuff, that would be a good idea.

Zayn: Yeah, I’d like to kind of urge everyone to kind of take a leaf out of our radical youth work and do evaluation more creatively, and in different ways, and you know immediately in the space, with the people involved afterwards, and you know in discussions and writing it down, and just capturing stories. All that stuff, it’s so rich … […] it compels us, it touches us. And that’s what makes us want to keep going, through that whole cycle of evaluation, it takes us through to the next thing.

We endorse the argument that youth workers must be given the freedom to engage creatively with young people and colleagues to evaluate their work appropriately. This requires a political will that accepts that we will never consistently, entirely, scientifically ‘capture the magic’ of youth work, and that overly formal attempts to do so can be damaging. Perhaps the magic can only be captured partially, fleetingly, some of the time. Reflecting on this, we thought of the musical The Sound of Music, in which the nuns sing about the spirited novice Maria: ‘how do you hold a moonbeam in your hand?’ Maybe we don’t need to hold the moonbeam. There is value in scientists understanding how a moonbeam is created and what it means, and in artists evoking a moonbeam in songs and paintings. But for most people, most of the time, we simply want to experience the moonbeam – perhaps using it to light our path, or feeling joy as it glistens on the river, or just walking in its glow without thinking about its impact on us at all.

Acknowledgements

This article is based on research funded by the ESRC, reference ES/R004773/1. The authors would like to thank Sharon Gewirtz and Sorele Cohen for commenting on an earlier draft, PALYCW and Nancy Stephenson for supportive writing retreats, anonymous peer reviewers for useful feedback, and, most of all, our research participants and the many other youth workers, young people, evaluation practitioners and others who have informed this study, whether through our Advisory Group, feedback at events, or informal conversations. A partial dataset associated with this study (excluding fieldnotes and transcripts where consent was not provided or anonymity might be compromised) is available at UK Data Service Reshare to enable wider use of the data; see https://reshare.ukdataservice.ac.uk/855316/, doi:10.5255/UKDA-SN-855316.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was supported by Economic and Social Research Council: [Grant Number ES/R004773/1].

Notes

1 Ethics approval was granted by King’s College London’s Research Ethics Office, refs LRS-18/19-8799 and HR-19/20-13776.

2 The Journeys manager, Ira, said in an interview that this reporting was ‘really not suited to our organisation’, and that they would not reapply to this funder.

3 A practice resource accompanying this article shares some of these methods in more detail, suggesting questions for reflection.

4 For example, Journeys can assign young people a code number, enabling the organisation to track individual scores for three outcomes over time (showing ‘distance travelled’). However, young people can choose to participate in evaluations without this tracking aspect.

References