FEATURE | 17 February 2020
The route to local understanding
For market research to yield genuine and accurate insights, cross-cultural understanding is essential. This requires considerable translation expertise, as Tim Phillips reports.
What’s the Russian for ‘doing a Kane’?
To be fair, it’s not immediately obvious what the English is for that phrase, which is unlikely to make it into a translation dictionary in future.
But briefly, during England footballer Harry Kane’s progress towards winning the Golden Boot at the 2018 World Cup, the phrase popped up in global research, and became another challenge to solve for Ruth Partington’s team at market research localisation agency Empower (formerly RP Translate).
Since the 1990s, Empower has been translating and localising market research, both the surveys and the responses, for research agencies (and the clients of those agencies) that want to work across many markets. Partington initially combined her knack for languages, good international contacts, an understanding of what market research was trying to achieve, and a knowledge of WordPerfect, to earn regular work with agencies like Hall & Partners and Virtual Surveys (now Join the Dots).
But the demands on Empower’s much expanded translation team are now far greater, she says. “There are now three layers to market research translation. First, you need to produce invisible translations for the survey; it must be a translation that doesn’t feel like a translation.
“Second, you need to know the industry being translated for. To understand FMCG or B2B environments, and the type of language that people are using in those environments. And only then do you add the market research layer.”
Since Google Translate launched in 2006, we’ve seen a dramatic increase in the capability of machine translation, and a consequent raising of expectations that we will soon be able to automate much cross-market research.
Google Translate, for example, now handles 103 languages, and will translate 37 languages if you show it a photo, 32 languages using voice, and even 27 in real-time video. So, in a market research industry in which speed of response and automation are competitive advantages, can we build a machine to translate research? Or will dogged translation agencies, worrying over ‘doing a Kane’, or finessing the translation of ‘Turkish toilet’ so that Germans could understand what the question was about (another recent Empower challenge), always be in demand?
Apparently, they will – not least because the researchers who specialise in international markets understand just how subtle and nuanced the challenge is. Join the Dots, for example, now has 11 offices around the world and was recently bought by Belgium-based agency InSites Consulting; but it is still a client of Empower, two decades on.
“We have done research across maybe 50 countries,” says Gavin Holt, commercial director at Join the Dots. “For example, we work for GlaxoSmithKline in 10 countries. It is important that we are getting it right, not making a faux pas in the way that we reach out to people, and so the translation process has evolved into many layers of quality checks.”
Join the Dots, in its early years running online communities, experienced client pressure for low cost and speed. It meant there was constant pressure to run international online communities only in English. The researchers found they got responses, but not engagement. “Just because the respondents in the Netherlands or Germany could speak English, it doesn’t mean they wanted to,” Holt recalls.
“You get very superficial feedback; our moderators need to make a connection with participants, to create a space where people felt they belonged.”
That has led to a careful and nuanced translation process, working with Empower’s local language experts, as well as Join the Dots’ internal ‘culture and trends’ team. The initial translation of a guide for an online community is sense-checked, not just to make sure that it is accurate, but that the tone is consistent with the original (and appropriate to the culture), and that even regional dialect choices are noted – for example, to make both northern and southern Italians feel their responses are valued.
Localisation extends to methodology. A recent project for Diageo in five African markets presented the usual challenges of translation and cultural references – but also the appropriateness of the app. “Internet use data costs are not an issue for our online communities in developed-world markets. But in the markets we needed to go to, sending community members an app that loaded a lot of data is expensive and slow,” Holt says. Being sensitive to the constraints and preferences of the people whose opinions they were trying to elicit was not, in this case, just a choice of words.
Thinking about international research in this depth requires a three-way negotiation, says Alun Byles, the head of quality and resourcing at Engine Transformation – the research, data and tech consultancy of Engine. “In the translation process, there are three stakeholders: the translators, the agency that designed the survey tool, and the client – and they need to work together. But, very often, what will happen is the agency and client work together, and the agency and the translator talk, but not all three.”
This, he says, is understandable if you believe the translator’s job is to give a technically accurate document in another language. But even when all three parties are well-intentioned, the outcome can still be bad research.
“Very often, the corporate office has commissioned a global programme. Our job is to take the global agenda and apply it to all markets. It may be a very important business question for the head office, but the cultural context tends to be ignored, and so an agency needs to engage the local stakeholders.”
The agency also has the responsibility to manage the responses of those local stakeholders, and make the client aware of them, Byles argues. This creates a tension: the corporate client desires standardisation and comparability, but sometimes even the client’s country office will push back, to say that the survey is either meaningless (for example, because it asks questions on topics or issues that have little weight in the cultural context) or, worse, potentially offensive.
In one example, a recent survey fielded by Engine on behalf of a European client adopted a breezy, informal tone and began with a casual ‘Hey! How’s it going?’ Good for most countries, but far too informal for Japan. Had the tone not been changed, poor-quality responses would have been inevitable, and it would likely have offended a large group of customers.
Byles, who also uses Empower to give feedback, argues that the agency in that case has a responsibility to be an advocate of quality in localisation. It’s not enough to rely on the translation team alone to solve these problems at the end of the survey design process, because “if you just give it to a translator, then translators will do their job: translate what’s in front of them”.
One of the most challenging countries in which to conduct research is China. David Joseph, the founder of Hub of China, has undertaken research for Unilever and Avon, among other brands, in the region (see box: Focus groups in China). He emphasises the need to listen to local expertise, not only within the country, but within each city.
“Recently, there has been increased interest in the lower-tiered cities (as the appetite and purchasing power for western goods increases), but a rookie mistake is failure to appreciate differences between the tiers and various cities in China. For example, someone earning 7,000 RMB a month in Mianyang (a third-tiered city) would be considered upper middle class. However, in Shanghai this is more likely to be the salary of a taxi driver.”
Joseph is also frequently asked to film his qual sessions, which in China is guaranteed to make respondents uneasy and, often, prompts them to give answers designed to make whoever is paying for the research feel good about themselves.
The challenges of conducting international research on video are felt every day at Watch Me Think, which calls itself a ‘consumer empathy’ agency. To make good on that promise, it must take great care over how it interprets both the words and the conduct of its respondents.
Its panellists in 50 countries record videos of themselves performing everyday tasks. Sometimes, they are briefed and asked questions. Sometimes, they are just asked to do a task with no guidance. Either way, responses and actions are translated and indexed so the client can find videos from the set of respondents around the world with a certain response, location or profile.
“It’s as close as possible to being a fly on the wall,” says Simon Thomas, ‘thinker champion’ at the company. “All of our films are transcribed, translated, time-stamped and indexed – and so the quality of the transcription is extremely important. We need to ensure consistency. We transcribe actions as well as words because that’s equally important – to allow searches on what they are doing as well as saying.”
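The searchable index Thomas describes can be pictured as a small data structure: transcript segments of speech and action, time-stamped and tagged with respondent details, that can be filtered by what was said or done and by where. The sketch below is a hypothetical illustration in Python; the field names and sample entries are invented for the example, not Watch Me Think’s actual system.

```python
# Minimal, hypothetical sketch of a searchable transcript index:
# each segment records who, where, when in the video, whether it is
# speech or an action, and the translated text. Sample data is invented.

from dataclasses import dataclass

@dataclass
class Segment:
    respondent: str
    country: str
    start_sec: int   # time-stamp within the video
    kind: str        # "speech" or "action"
    text: str        # translated transcription of words or behaviour

index = [
    Segment("R101", "Thailand", 42, "action", "rinses rice twice before cooking"),
    Segment("R101", "Thailand", 95, "speech", "I always buy this brand at the roadside stall"),
    Segment("R214", "Germany", 12, "speech", "the instructions were too long to read"),
]

def search(segments, keyword=None, country=None, kind=None):
    """Return segments matching any combination of keyword, country and kind."""
    return [
        s for s in segments
        if (keyword is None or keyword.lower() in s.text.lower())
        and (country is None or s.country == country)
        and (kind is None or s.kind == kind)
    ]

# e.g. every filmed *action* from Thai respondents:
for s in search(index, country="Thailand", kind="action"):
    print(s.start_sec, s.text)
```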
Cultural sensitivity applies in several ways. The first is, when translating the guides for respondents, making sure the phrasing is appropriate for the experience in their context.
“If we are doing some filming in store in India or Thailand, it might not be what we consider to be a store. It might be a stand on the side of the road, at the table with things hanging up behind it,” Thomas says.
Turning around research quickly is desirable, but these videos are a window on intimate details of family life, and that window is not always open. Clients wanting to capture routine chores in the US need to avoid Thanksgiving; and don’t try to research family mealtimes in Muslim countries during Ramadan, Thomas points out.
Respecting and recognising cultural differences, however, may be what elicits the differences in activity that the client needs to discover. For example, Asian markets tend to follow the guide closely, while Europeans are less rigorous. This has a business meaning too: Watch Me Think has conducted research into how respondents use product manuals, yielding the same insight.
There’s also a practical challenge for DIY video: filming a roadside vendor isn’t going to be useful if the commentary is obscured by the sound of mopeds. A quiet video of a household chore might not have the same impact if the respondent has left the fan or a television on. It’s also not realistic to expect Vietnamese subjects to speak their mind with the enthusiasm of the average American, especially with no moderator or facilitator to draw them out.
This highlights how difficult it is to establish a baseline – whether the response is the rating in a survey, conduct in a video, or sentiment in natural language. Is the average German going to rate a product lower or higher than the average Italian, given the same level of approval?
Creating a common baseline is essential to avoid misunderstandings or misallocations. Experience can make those adjustments trivial in some cases, but it can also highlight when the search for equivalence is difficult, or little more than a guess.
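One common way to construct such a baseline (not attributed to any of the agencies in this piece) is to standardise each respondent’s score against their own market’s average and spread before comparing markets. The sketch below assumes invented ratings and uses simple within-country z-scores; real studies may layer on more sophisticated corrections for cultural response styles.

```python
# Illustrative sketch: put ratings from different markets on a common
# baseline by standardising against each market's own mean and spread.
# The figures below are invented for the example.

from statistics import mean, stdev

ratings = {
    "Germany": [6, 7, 6, 5, 7],   # hypothetical 0-10 product ratings
    "Italy":   [8, 9, 8, 7, 9],
}

def standardise(scores):
    """Return within-market z-scores, centring each market on its own norm."""
    m, s = mean(scores), stdev(scores)
    return [(x - m) / s for x in scores]

for market, scores in ratings.items():
    print(market, [round(z, 2) for z in standardise(scores)])

# After standardisation, an Italian 9 and a German 7 may both sit roughly one
# standard deviation above their market's norm: similar enthusiasm expressed
# on different scales.
```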
It’s not only China where recording on video is sensitive. German privacy laws meant that Watch Me Think’s respondents’ self-recorded visits to a supermarket had to be done as audio commentary. It’s a reminder that GDPR isn’t the only privacy regulation to bear in mind when collecting data in more than one market.
This is especially relevant when attempting to create international metrics from social data, one area in which using automation and machine learning is inevitable. Ten-year-old Socialbakers now performs global social media analysis for more than 3,000 clients, and employs 500 people based in Prague, with 11 other offices around the world. It promises to apply machine learning to expose, in a comparable way, differences in response in near real time.
An important aspect of this is to centralise data collection and processing. “Traditionally, marketers were using multiple tools to track this data,” says Yuval Ben-Itzhak, CEO of Socialbakers. “Each tool has its own metrics. We offer a unified platform where all the data is processed in the same way and presented in the same way. So, an example would be one of our largest fashion brands that has 25 sub-brands, each operates in different countries around the world… we’re tracking more than $3bn ad spend across social channels. We can tell you in every region in the world, in every vertical, what the performance is of that advertising.”
In this case, social data demonstrates that different markets have different personas that respond to these brands, and those personas have different values and motivations in each market.
The company promises insights from testing messages in local markets in two or three hours, because it is not focused primarily on understanding the market before acting. As all the markets in which a brand operates are constantly in flux, Ben-Itzhak argues that monitoring them constantly, rather than doing a research project once, is essential.
Social media also helps Socialbakers clients pinpoint that important local insight: the influencer. As influencers are, by definition, rooted in the culture whose tastes they help to shape, locating them is the object of much international trend research. Ben-Itzhak claims that social data can short-circuit this search, because the data rigorously exposes who those local influencers are.
“I spoke with a CMO of one of the largest sports brands in the world, and he wanted to promote a new sport shoe in Australia. The brand spent six months and $200,000 just finding an agency to recommend the influencers to them. I could point them out on our platform; you don’t need to do that kind of work any more.”
“How many of you eat noodles for breakfast?”
That was the question asked by Emily Porter-Salmon, associate director at cultural insight agency Sign Salad, in a meeting about the meaning of breakfast.
The clients, overwhelmingly European and American, whose business was built around selling stock cubes to make evening meals, did not. Porter-Salmon showed a world map and pointed to the right-hand side. “Everyone on this side of the world has noodles for breakfast,” she said.
Intimacy with the communities being researched is an essential part of the methodology used by Sign Salad when it seeks to “change a cultural paradigm”, in the words of Alex Gordon, CEO. He argues that an essential part of cross-market research is finding out what is happening outside the boundaries that marketers have drawn around their brands, something that even the biggest data finds it hard to do.
In this case, a roomful of people suddenly stopped considering their product as a dinnertime food, and started to create new ways to market, package and sell it as a result. But the search for the meaning that other cultures give to brands, things and experiences is far broader.
Sign Salad specialises in semiotics and language analysis, and its team of semioticians has worked in 47 markets from “Mexico to South Korea and Sweden to Australia”.
But most importantly to Gordon, many of them identify with more than one culture. “My colleague lived and worked in China. We have Arabic spoken in our office, Brazilian Portuguese… our semioticians are inhabiting two worlds simultaneously and so they can see the world through multifocal glasses; they are not dominated by the cultural paradigms of one culture – because one culture is not the truth. It’s just a lived experience,” Gordon says.
Sign Salad uses networks of local semioticians to uncover unexpected meaning and nuance that go beyond the brief: for example, a recent investigation of the indulgence associated with chocolate in local markets highlighted that it has religious significance in Turkey, where it is used as a gift at the end of Ramadan.
To turn these insights into actionable messages that resonate with Sign Salad’s global clients such as Pepsico, Unilever or Danone, the agency pairs its semioticians: one local, and one with the agency.
“We need to make sure we have a balance of local knowledge, but also objectivity. It’s an objective outsider, working in partnership with a subjective insider. This creates a beautiful balance to raise and ask the right sort of questions. This means we give informed responses to those questions, but we can challenge clients to think about markets in ways they haven’t thought about before,” Gordon adds.
The insider-outsider model is the link between hyperlocal insights, which may be interesting but not immediately actionable, and the brand’s need to identify a common base or set of products, but rethink the packaging, messaging or recipe for local tastes.
Gordon argues that globalisation has allowed many brands to achieve scale, but that this is no longer an advantage in itself. Indeed, it may be a disadvantage, because local brands now have a lower barrier to entry in many markets, and instinctively understand local preferences, values, associations and nuances.
Understanding an international brand’s cultural relevance, and mapping that to new opportunities, is the challenge for those clients that are struggling for, or want to deepen, their relevance or authenticity. “Large brands are not fighting for market share as much as fighting for cultural relevance. You cannot choose solely to be a global brand,” says Gordon.
At Join the Dots, some of Holt’s clients are similarly engaged. In the past, he says, there was a tendency to create a product or service and test it across local or regional markets. “In Europe, for example, some brands would say ‘now you’ve just got to sell this’.”
Instead, some of his more progressive clients are starting with the cross-market research and building the proposition “bottom up”, as he describes it. That means investigating preferences and trends and creating something that is sensitive to the needs of that market. Some products might be relevant only in two countries, but this is preferable to parachuting in a product that is a poor fit and leaving it to the local marketing team to sort it out.
For Crowd DNA, 70% of projects are international by design, and very few cover just one market.
Matilda Andersson, the London managing director, has a similar desire to understand local markets with empathy. For her, finding generic differences in attitude or preferences among cultures is just the first step in international research: “It doesn’t tell you anything new and doesn’t help you really understand what’s going on.”
More meaningful to her is research that finds out how participants experience common social activities – dating, for example – in different cultural contexts, because that exposes both what is common across markets (and the subcultures within those markets), and non-obvious nuances and differences.
To help do this, Crowd DNA also uses a local network but, unlike Sign Salad’s, its members could be anything from street-culture experts and neuroscientists to fashion specialists and entrepreneurs. The agency calls them its ‘kin’.
The Crowd DNA researchers also try not to think about ‘problem’ or ‘difficult’ countries for research, because “it means that we’re already starting from a point that isn’t empathetic”. “We’re treating them as ‘the other’, from a standpoint of not actually really understanding what’s going on,” Andersson says.
So respecting local norms – for example, when looking into a woman’s wardrobe in an Arab country, it would mean there would be no men in the room – is not just good manners, it is an essential part of identifying with that participant’s culture.
The reward is a deeper, richer involvement with shifting culture. One result has been that Crowd DNA can identify what it calls “hybrid states”, in which some groups may identify in ways that seem contradictory to outsiders: for example, progressive in attitudes to gender, while traditional in the interpretation of family values.
That spirit of empathy, common to all successful international research, is what Partington seeks out in her local translators. “On in-depth interviews, it’s not just providing a transcription or translation. They also give a running commentary on tone of voice – did they get the impression that the participant was upset, or maybe aggravated?” she says. “You can’t know that from reading transcripts.”
Organisers of commercial market research can attempt to adjust self-reported quant scores when comparing different cultures. However, this is tougher in public policy research, when adjustments to baselines must be justified in some way. The best example of this problem is probably ‘happiness economics’.
Some economists advocate that happiness rather than economic growth should be the goal of public policy. Happiness, so the argument goes, should become the way we rate our country’s relative success compared with the rest of the world. They reference the Kingdom of Bhutan, which has for decades orientated domestic policy to maximise Gross National Happiness.
Since April 2011, the Office for National Statistics has compiled surveys in which the UK population rates itself out of 10 for happiness. Initially, this was 7.29, but by 2018, it had risen to 7.52. Good news! But are Brits happier than lower-scoring Germans? If we had the same hospitals and schools as high-scoring Finns, would we be as happy as they seem to be?
While several surveys compile and compare detailed international statistics on self-reported happiness (Gallup’s World Happiness Report is the largest commercial effort), it is unclear what these numbers really mean, beyond the banal insight that income, health, friends and family, and stability make us happier, to some degree.
It’s also hard to explain comparative country rankings coherently. University of Oxford economist Wilfred Beckerman, who has spent 60 years examining cross-country comparisons, points out that happiness cannot explain why there is persistent migration from high-ranking countries, such as Costa Rica, Guatemala and Romania, to lower-ranked countries. Why would we choose to move to a country if it makes us less happy?
There are several explanations why the country rankings seem flawed.
The most important for international comparisons is probably the ‘Veblen Effect’, or, more informally, ‘keeping up with the Joneses’. Put simply, we are happy when we are as well-off or better-off than our immediate neighbours, but we don’t compare our lives to those in other cultures.
This effect is fatal for international comparisons. It might also explain why, as Diane Coyle of the University of Cambridge puts it, Bhutan’s 40-year national happiness policy has created “one of the poorest and most authoritarian countries in the world”.
More than 36% of all UK research is international. In 2020, these studies are facing increasing levels of global political disruption, legal red tape, and time and cost pressures brought about by rapidly advancing technologies. The most difficult part of it all? Not speaking the participants’ language.
From my experience, it is widely accepted that language is vital to the quality of insights. Whether it be the effectiveness of an invitation email, the consistency of questions across multiple markets, or understanding response data, the most seemingly insignificant faux pas can cause a fiasco. Linguistic blindness forces global researchers to rely on external providers for the accuracy and actionability of their research. This means that global studies are naturally riddled with risk and can become time-consuming, or even costly, when mishandled at any point in the project’s chain.
It is almost impossible for a global researcher to learn the language of every market involved in their studies to a native level (never mind updating that language-learning each day to account for relentless socio-cultural changes that impact terminology and grammar). As a result, when linguistic mistakes do happen, researchers are still dependent on fieldwork agencies, translation providers, freelancers or even their own clients to fix them.
Ultimately, the opacity of translation makes it the ‘dark matter’ of global research: we understand its nature only by the effect it has on everything around it.
I founded Empower (formerly RP Translate) 25 years ago, to counter poor translations in the marketplace and bring peace of mind to researchers. A drop in the ocean of the translation world – but needed in the global research space.
By making time to celebrate high-end, empowered thinking around something as crucial as language in research, it is possible for researchers to better leverage technology, traverse cultural seas and increase participant engagement.