
Exploring hope and fears for AI for those working in Youth Justice

Insights from NEC Digital Studio workshops, conducted in conjunction with the Association of YOT Managers (AYM), exploring their hopes and fears around the use of AI in practice.


The use of AI amongst the general public has risen exponentially in recent years. Whilst Forbes estimates that 79% of those working in the UK have used some form of AI in their workplace (Forbes Advisor – AI Trends), significant concerns remain about its use, including the risk of decision-making without human intervention, the loss of human skills, and the ethical implications (UK Artificial Intelligence (AI) Statistics And Trends In 2025 – Forbes Advisor UK). Sundar Pichai, CEO of Google parent company Alphabet, has warned against the AI investment boom, urging people to “learn to use these tools for what they’re good at, and not blindly trust everything they say”. (Don’t blindly trust what AI tells you, Google boss tells BBC – BBC News)

In recent work with practitioners in the youth justice space, we ran four workshops, in conjunction with the Association of YOT Managers (AYM), at which we facilitated interactive discussions about AI, focussing on how those working in youth justice felt about the increased use of AI and what their hopes and fears were for AI in their practice.

What did we learn from these workshops? 

There were 152 fears shared by the individuals who attended, which we grouped into 29 themes, and 116 hopes, which we grouped into 16 themes. From our analysis, we then developed 7 How Might We questions to identify what organisations working in Youth Justice need to ask themselves when considering the use of AI and supporting their teams in its use:

  • How might we help users to identify where AI can support their current practice and where the boundaries are for appropriate use? 
  • How might we improve governance & ways of working around the use of AI within individual organisations?  
  • How might we educate people on appropriate use of different AI tools? 
  • How might we help people trust AI? 
  • How might we help people trust AI’s outputs? 
  • How might we help people to use AI to increase their efficiency and save time on certain tasks? 
  • How might we use AI to enable more face-to-face relational time? 

How might we help users to identify where AI can support their current practice and where the boundaries are for appropriate use? 

Individuals were open to the idea of AI supporting their current practice; however, there was significant fear across the groups that AI would lead to the loss of many human elements of their work, including emotion, personal judgement, creativity, self-reliance and personalisation, and could ultimately lead to the loss of skills and jobs.

Regardless of an individual’s level of apprehension or enthusiasm about AI, everyone agreed on the importance of clear boundaries between professional practice and AI practice. We discussed how, within Youth Justice, building relationships and supporting young people hinges closely on empathy, personal skills, professional knowledge and judgement, none of which can be replicated effectively by AI. Whilst AI can provide useful support, and even inspiration, for specific types of work, it was felt across the groups that AI cannot replace the interpersonal skills needed to work with young people, so maintaining boundaries around its use was seen as incredibly important.

How might we improve governance & ways of working around the use of AI within individual organisations & how might we educate people on appropriate use of different AI tools? 

We heard concerns about the way AI is used, and uncertainty about the ever-growing number of AI tools and how their uses should differ. Youth Justice case workers are often working with sensitive personal data about young people and their networks, and individuals had questions and concerns around what data should be used in AI tools, how that data is then used, and how securely it is stored.

Even among those more confident in the use of AI and the various tools, there was discussion about whether everyone understands the risks and implications for data security. There was agreement that there needs to be comprehensive guidance and training on the use of AI, and clear governance around which tools can be used, and when, particularly in the Youth Justice context. Individuals also felt it was important to identify and stipulate the role that humans must take in checking AI outputs, and the need to consider what data and prompts are entered into AI tools.

More broadly, some individuals talked about the need for clarity, and better education, on the economic and environmental impacts of using AI, as an important additional driver to inform people on how and when to use AI appropriately.

How might we help people trust AI?  

Some of the fears around AI lay in a lack of trust. In some cases, this is leading to substantial fears surrounding safety and an increase in crime. It appears that many of these fears stem from the speed at which AI has evolved and the lack of education surrounding it. It was clear from conversations that staff and teams would value their organisation taking steps to consider how best to support colleagues in increasing their understanding of AI, whilst considering whether to enforce its use and the impact this might have on trust levels.

How might we help people trust AI’s outputs? 

Alongside the wider mistrust of AI lay a lack of trust in AI outputs, where there were concerns over the accuracy of information and the inability to differentiate between a person and AI, a concern shared by many others, including within the technology industry. Sundar Pichai, CEO of Google’s parent company, urges people to use AI alongside other tools, as many AI models are prone to errors. (Don’t blindly trust what AI tells you, Google boss tells BBC – BBC News)

How might we help people use AI to increase their efficiency and save time on certain tasks? 

Individuals are most hopeful that AI can increase their efficiency, saving time on the tasks that feel less meaningful, such as admin. This links closely to the following How Might We question.

How might we use AI to enable more face-to-face relational time? 

Individuals fear that the rise of AI will lead to a reduction in human interaction, whilst at the same time being hopeful that, by using AI to be more efficient and save time, they will have more time to spend on the ‘important’ elements of their job that AI cannot do, largely face-to-face relational time.

 

Whilst our findings were based solely in the youth justice space, we feel that the overarching questions that emerged could be considered across a range of sectors and organisations.

What questions have you and your organisations been asking yourselves when considering the use of AI within your teams?