The realities of an English-speaking research world


There are various reports on our industry’s global size, such as those from ESOMAR and BDO. Most agree that the UK accounts for approximately 15% of the world’s market research revenue, and the US for around 45%. Together, that suggests some 60% of global research revenue comes from English-speaking markets – meaning a substantial share of all research is likely conducted in English.


That might not raise any immediate alarm bells for us – until we consider the following:

If we extrapolate these figures to all countries in which we do research, on the understanding that there is a portion of each country’s population that does not speak its official language(s), we realise there are myriad unanswered questions in our industry.

How much integrity can we guarantee for NatRep samples?

Are we alienating potential participants?

Are we producing skewed data through misinterpretation?

In short, what impact does the language we use have on our research?

The answer is equally brief – we simply don’t know. We do know that for children, mother tongue is optimal for learning and education (UNESCO, 2008a). However, there is very little understanding of how market research data is affected by speaking to participants in languages other than their mother tongue.

To complicate the language decision further, even when studies are translated into each country’s ‘official’ language(s), we still need to know if our chosen participants use it as their primary language.

What’s more, multilingual countries may apply languages differently depending on the context. For example, English is preferred for education in South Africa, but Zulu, Xhosa, Afrikaans and others are used frequently for home and lifestyle.

In an ideal world, we make research accessible to all potential participants in our target markets through specialised recruitment, and quality translation into their preferred language.

In reality, global researchers know all too well that this can be enormously resource-consuming.

Can we balance rich, meaningful data with an acceptable Cost Per Interview?

One cost-saving option available to our industry is Machine Translation (MT). If applied correctly, MT can work well for post-fieldwork response analysis thanks to developments in Neural Machine Translation. However, it remains demonstrably lacking for survey translation – to the point that engagement rates and response data may be no better than if the surveys had simply been written in English in the first place.

Compounding the quality issue is time. Language is clearly perceived as the linchpin of successful research: weeks are spent going back and forth over the precise wording of a written survey, and countless hours of researcher training programmes focus on survey writing and behavioural linguistics. Yet localisation agencies are often given just 24–48 hours to translate.

Another technique – weighting data – would pose significant logistical, and even ethical, problems. Are we saying that a participant’s language skills determine the value of their input? Even if so, weighting does nothing to circumvent the risk of misinterpretation.

Do we need insight into Insight?

As diversity, inclusivity and accessibility take their rightful place at the centre of industry conversations, and while machine translation continues to drag its heels, it has become clear that what we are sorely lacking around inclusivity in research is – ironically – research.

We have an ever-growing need to understand, in tangible and measurable terms, the impact of approaching participants in a language other than their mother tongue. Only then can we devise solutions that increase the value of our hard-earned results, in the knowledge that our industry is only as good as the integrity of our data.

Ruth Partington, CEO of Empower Translate | Member of the MRS Representation in Research Steering Group | Member of MRS ACP Council | Chair of the UK Association of Translation Companies

1 Comment

2 years ago

I'm glad to see this topic come up, as the issue goes even further. Researchers usually have many years of education, reading and writing experience. This means our questionnaires are written for people like us, not for people who finished their education at or before high school/secondary school. That's A LOT of people whose native language is English, but whose reading skills aren't being accommodated. As you said, just because they couldn't understand what we wrote, or couldn't articulate a response in writing, doesn't mean their opinions aren't extremely valuable.
