Finding the edges: Shining a light on the manosphere
In recent years, ‘the manosphere’ – a collective term for online spaces promoting masculinity, misogyny and opposition to feminist ideas – has come under increased focus, with reflection on the challenges facing young men and the harm such influences cause.
The Greater London Authority (GLA), the devolved regional governance body for Greater London, needed to understand the priority concerns for young men in the capital, what was influencing their narratives and opinions, and how those concerns were manifesting in the digital world.
The GLA wanted to know what it was about ‘manosphere’ narratives that appealed to young men, and what it could do to support young men’s sense of identity and belonging.
To get to the root of a sensitive issue – with a four-week fieldwork period and budget constraints – researchers from 2CV had to take a bespoke approach to methodology.
“We needed to think about something that would be able to lend itself to sensitive and complex questions, and that there wouldn't need to be an element of human imagination,” says Kate Owen, group research director at 2CV.
“We also went in with a bit of an assumption – or, you could say, a bit of bias – about how young boys might be feeling about this and whether they would want to open up to qualitative researchers in the traditional way and share darker thoughts that they might be harbouring but didn't feel like they could share.”
This assumption led 2CV to consider mixing methods and how AI could add value to the project. Hypothesising that they could use AI to draw out participants’ potential extreme views more effectively, the researchers decided to lean into its robotic nature. “We thought AI could be great because it’s a sort of black box tool and we could pitch it [almost] as no one’s really listening and we downplayed the human bit of AI – that it’s a robot and you’ll be able to tell us anything you want to,” says Owen.
On a practical level, the researchers worked with the platform to minimise its ‘human’ elements – such as removing avatars – to suggest a greater sense of anonymity.
Owen says: “We thought that when they think it’s a robot, they'll be really, really honest and they'll be able to share these darker thoughts that they don't feel that they can speak out in more conventional ways.”
Alongside interviews via the AI platform, participants completed digital diaries designed to explore their behaviours in their digital and offline lives. Owen explains: “We really needed to understand their digital life, their sense of belonging and identity, what sort of support that they had. We needed to understand how some of this stuff played out in the behaviours. It was not just grasping the attitudes – which we knew the AI could do really well – but also how do you live those attitudes on a day-to-day basis through the behaviours that you might have, so that might be the stuff that you watch or the conversations that you have with your friends.”
Of course, not all participants harboured ‘manosphere’ views. “In practice, some people had really healthy attitudes towards gender and their role, and we knew that there were these differences, because we weren’t starting from scratch,” says Matt Holt, associate director at 2CV. “There was a lot of literature – the GLA had done the evidence review ahead of time, and so AI seemed like a really good thing to do. There was an element of ‘we know that this is what’s going on globally and we know that this is what’s going on in the UK’. We almost just needed a springboard saying, ‘is this what’s happening in London as well?’ That was the attitudinal contextualising piece, and we could do it quickly.”
Context needed
To evaluate the AI-based approach, the researchers developed a series of questions and probes through the AI, and when they piloted it with interns, they found that the interns weren’t “grasping the context” of some of the questions, according to Owen.
“As a qualitative researcher, there is so much in the conversation that you have in your head, so when you're moderating, you can see where participants are not following the same train of thought and you can help by giving a little bit of context – and with the AI, you just don't have that luxury,” she says.
To address this, the researchers revisited the prompts and added context and stimulus – an attitudinal statement – to each question.
It was an interesting lesson for the researchers, according to Holt. Because the use of AI was about uncovering darker truths, as opposed to being purely exploratory, the focus was on how to get young men feeling comfortable enough to reveal what they actually think and feel.
Says Holt: “Perhaps reassuringly, we saw that not all young men do think and feel this, and so blanket statements without some sort of stimulus didn’t probe thought. This is not actually something young men have all thought about – it’s not front of mind, and interestingly, the more into the manosphere you are, the more front of mind it is. And so, people who could just respond very easily, who had thought about it – that was actually an indicator in the end that they had been reading a lot of this content.
“But for some 16 year old young men, when you're asking about men and women’s roles in society, these weren't things that they had thought about, and so they really needed that stimulating attitudinal statement, to almost force a reaction and then that forced a reflection.”
Finding the outliers
In addition to the qualitative expertise needed to contextualise, frame and probe the questions on the AI platform, the researchers had to look at the data in a specific way in the back end.
One of the limitations of the AI platform’s synthesis of the data was that it looked for big themes and found general common grounds – but didn’t initially show the outliers.
Owen explains: “What we needed to do was essentially go back to every single participant and look for those outliers because our gut reaction was: ‘We didn't get the juicy stuff’. It was only when we went back and looked at individual participants that we realised – ‘you are the outlier’.
“You've got to be able to interrogate that data and look at it in quite imaginative ways to find outliers and stories. By doing that, we could see that there was a spectrum of how young boys thought about it. Some were really deep into the manosphere and quite happy to tell you about it. Some were sitting more in that sort of middle ground and weren't sure, and some weren't even thinking about it at all. And that led us to go down much more of a segmentation route that we were able to see play out in the online community.”
Holt adds: “Traditionally in qual research, we look for a range and diversity, not prevalence and trends. So, from an analysis point of view, there were a lot of implicit associations more than explicit – with the structure of the diary and the AI together, you could see that young men with more offline support, for example, were more resilient to what they were encountering online.”
Through the diaries, the researchers could then jump on any differences or something that seemed “a bit discordant”, adds Holt.
“AI works well, but it’s like when you're doing your qualitative training and they say that when you're at the analysis stage, if you didn't collect it during the interview, you can't use it. You can't assume. You're so close and you can see that this is probably it. But if you didn't collect it at the time, you can't make those inferences, and I suppose that was a very big difference. But fortunately we had both working side by side, so any gaps that we saw in the AI we could fill in the diaries – and that’s why the AI on its own … it would [show] a great snapshot of attitudes in London, which is what it set out to do, but it didn't answer the GLA’s questions.”
Informing future approaches
The project has offered insights for Owen and Holt on the type of value AI can add to research – as well as its limitations.
Owen notes that the project reinforced the need for the humanity of the researcher – drawing on their lived experience, particularly to look for points of tension and for contradictions.
She says: “AI just isn't a self-serve tool, and the value comes when you bake in human time to really think about the questions and the probes that you're asking, because if you put generic stuff into it, you'll get generic stuff out. The quality in AI comes when you spend the time both upfront – how you're designing the questions and the probes – and also in how you're interrogating and then analysing the data.
“Yes, AI is here. But let’s just absolutely make sure that we bring all those years and years of research experience to make it work at its very best.”
Holt draws comparisons with previous existential struggles in qual. “We went through this years ago, trying to justify our existence from quant, where the tag line was that anyone could talk to someone – sure, but not well. It’s very similar – this is the big conversation on AI now: who is doing it well and using it well.”
“In terms of what I've learned to implement, I found it quite validating, but also – it is another tool in the toolbox, and it’s no more special than the other 50. It’s just that there’s a time and a place.”
Research Live is published by MRS.