FEATURE
12 November 2013

Lost in translation?

A recent study by Research Now suggests that cultural differences can affect survey response styles. Melanie Courtright and Kartik Pashupati explore what this, alongside other factors, might mean for multi-country studies.

The growth of online data collection brings many advantages, including the ability to field surveys simultaneously in multiple countries. Data collected in this manner can be used to make inter-country comparisons, as well as other types of subgroup comparisons.

When interpreting the results of multi-country surveys, researchers sometimes wonder whether the data reflect true differences in respondents' opinions, or whether the observed differences are caused by extraneous factors, such as the cultural characteristics of the respondents' countries or the structure of the response scales themselves.

What are response styles?

At Research Now, we are keen to understand the way in which multiple factors can affect survey responses and data quality. So, in a recent research-on-research study, we decided to examine response styles, and find out what factors do indeed have an effect on them.

Stylistic responding is defined by Delroy Paulhus as “a person’s tendency to systematically respond to questionnaire items regardless of [question] content” (Paulhus 1991). The three most commonly recognised response styles in market research are: Extreme Response Style (ERS, the tendency to answer toward the extreme points of a scale), Midpoint Response Style (MRS, the tendency to answer toward the middle points of a scale), and Acquiescence Response Style (ARS, the tendency to agree with items regardless of content, also known as agreement tendency, yea-saying or positivity).

In the past, researchers have hypothesised that response styles are influenced by three sets of factors: the socio-demographic characteristics of individual respondents; the cultural characteristics of a respondent's country; and the characteristics of the measurement scales themselves, such as the number of points on the scale.

The study

In order to get to the bottom of this issue, we conducted a seven-minute attitudinal survey among respondents in 10 countries. The survey included eight multi-item scales that had previously been tested for reliability in the academic literature. The scales measured a variety of globally relevant topics, including materialism, attitudes toward advertising in general, and respondents’ acceptance of telling lies (measured through the “lie acceptability scale”).

Each respondent answered questions on all eight topics, using one of four scale alternatives: 4-point, 5-point, 7-point or 10-point scales. Around 500 respondents in each country received each version of the scale, resulting in a total sample of 20,000 respondents across 10 countries. Response styles were assessed through the use of indices measuring extreme responding, midpoint responding and acquiescence (yea-saying).
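The article does not spell out the exact formulas behind these indices, but indices of this kind are often computed as simple proportions of a respondent's answers that fall into particular regions of the scale. The sketch below (in Python, with illustrative function names of our own, not the study's actual code) shows one common, proportion-based way to calculate them for a single respondent.

```python
# Illustrative sketch only: proportion-based response style indices for one
# respondent answering agree/disagree items on a k-point scale. These are
# common textbook-style definitions, not necessarily the exact indices used
# in the Research Now study.

def response_style_indices(answers, scale_points=5):
    """Return extreme (ERS), midpoint (MRS) and acquiescence (ARS) indices
    as proportions of the respondent's answers."""
    n = len(answers)
    low, high = 1, scale_points
    midpoint = (scale_points + 1) / 2      # only meaningful for odd-numbered scales
    agree_threshold = midpoint             # answers above this count as agreement

    ers = sum(a in (low, high) for a in answers) / n         # endpoints chosen
    mrs = (sum(a == midpoint for a in answers) / n
           if scale_points % 2 == 1 else 0.0)                # exact midpoint chosen
    ars = sum(a > agree_threshold for a in answers) / n      # agreement regardless of content
    return {"ERS": ers, "MRS": mrs, "ARS": ars}

# Example: a respondent who mostly agrees and often picks the top scale point
print(response_style_indices([5, 4, 5, 3, 5, 4, 5, 2], scale_points=5))
```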

Cultural differences do impact response styles…

Some of the most significant differences in this study were found in response styles across countries. Respondents from Brazil and Mexico have the highest tendency to give extreme responses (i.e. responses toward either end of the scale), followed by respondents in India, Russia and China. Consistent with the findings of previous studies, respondents from Japan have significantly lower extreme responding indices, and a significantly higher midpoint index. Our results underline the importance of examining data at the country level, rather than assuming that there are regional similarities. For example, while China and Japan are both Pacific Rim Asian countries, there are significant differences in responding styles between them.

But gender (and age?) do not…

There were no significant differences between men and women in terms of response styles. There were significant differences in response styles across age groups, but the differences were not systematic; for example, extreme responding appears to increase with age, but it is lower for the 65+ age group.

Using scales effectively

The most actionable results from the study concerned the use of scales. Our data showed that using 7- and 10-point scales can help reduce extreme responding, yea-saying and midpoint responding. For 5-point scales, we found that labelling each point of the scale (rather than just the end-points) helped to reduce stylistic responding. Accordingly, we recommend that, wherever possible, researchers should use 7-point or 10-point scales instead of 5-point scales. If a 5-point scale is used, each point should be fully labelled.

Extension to mobile data collection

In a follow-up study using a sample from the USA, we investigated whether mobile surveys produce different response styles from traditional online surveys administered on desktop and laptop computers. We found significant differences in all three response styles between mobile and computer responses, but there was no consistent pattern to these differences; in other words, there was not enough evidence to conclude that mobile data collection influences responding styles in one predictable direction. Within mobile data collection, we found that mobile sliders did produce lower levels of midpoint responding and higher levels of extreme responding.

The takeaway?

Based on the current data, we cannot conclude whether or not mobile devices result in any systematic differences in responding styles. The decision to use mobile devices for data collection should therefore be guided by other factors, such as the objectives of the study, and the type of data that will be most useful to aid in marketing decision-making.

Conclusion

When analysing results from cross-cultural surveys, researchers need to be aware that the data may reflect differences in responding style, in addition to true differences in the underlying constructs being measured (such as attitudes and behaviour). Researchers may even be able to apply some type of quantitative adjustment to correct for different responding styles. This requires that response style indicators be carefully calibrated, using measurement scales that have previously been tested for reliability.
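As an illustration of what such a correction might look like, rather than the authors' own method (which the article does not specify), the sketch below applies within-respondent standardisation, sometimes called ipsatisation: each answer is centred and scaled by that respondent's own mean and standard deviation, which dampens individual tendencies toward the extremes or the midpoint before groups are compared.

```python
# A minimal sketch of one possible correction: within-respondent
# standardisation (ipsatisation). This is an illustrative technique,
# not the specific adjustment proposed by the study's authors.
import statistics

def ipsatise(answers):
    """Centre and scale a respondent's ratings by their own mean and
    standard deviation, dampening individual response-style tendencies."""
    mean = statistics.mean(answers)
    sd = statistics.stdev(answers)
    if sd == 0:                      # straight-liner: no variation to rescale
        return [0.0 for _ in answers]
    return [(a - mean) / sd for a in answers]

# Example: an extreme responder and a more moderate responder give the same
# rank order of answers; after ipsatisation their profiles look alike.
print(ipsatise([5, 1, 5, 1, 5]))
print(ipsatise([4, 2, 4, 2, 4]))
```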

Response styles may sound like the dry end of market research, but in our search for data quality, we need to be sure every survey is of the best possible design. Knowing how to deal with response styles means researchers can rest assured that what they think is a difference, really is a difference — which can make all the difference.

Melanie Courtright and Kartik Pashupati work for Research Now

Reference:

Paulhus, Delroy L. (1991). Measurement and control of response bias. In Robinson, J.P., Shaver, P.R. and Wrightsman, L.S. (eds.), Measures of Personality and Social Psychological Attitudes, Vol. 1. San Diego, CA: Academic Press.

7 Comments

10 years ago

Old news this, unless I am missing something obvious? I think country/cultural differences have been very top of mind for many a year to most researchers. What would have been good would have been to see within-country trends, Rural vs Suburban India/China for instance? Huge markets with no doubt huge differences in cultural response within them?

10 years ago

We find the basics bear retesting, refreshing, and repeating, especially as we continue to evolve our modes and methods. The key was to remind researchers to look at their multi-country data through corrective lenses, especially when generating standard banners and tabs.

10 years ago

Interesting piece, and a good reminder. I'm inclined to agree that this isn't *new* new data - from experience we know that French and English people answer questions (particularly scales) differently, for example. But it is useful to validate that there is still this difference, particularly as we see other areas of consumer life growing more similar in the internet age. What would be most valuable is some kind of application of the findings. What does one do with a multi-market study involving countries with these different styles of response, when trying to separate material differences from cultural ones?

10 years ago

Research Now should change its name to Research Then. Last week they announced that the internet is changing shopping habits, with people using their mobile devices to interact with retailers (yawn), and now this piece. Are they trawling through their archives for stuff to post?!

10 years ago

Don't confuse cultural differences with country differences... as one person has already commented, the study would have provided far more insight had it looked at within-country differences.

10 years ago

We thank Shaun and the first anonymous poster for the suggestion regarding in-country differences. The cross-country results are just the first step in analyzing a very rich dataset. The contributions of this study lie in updating, replicating and extending previous findings. Replication is critical in advancing our understanding of the world, even if the results are not necessarily earth-shattering.

10 years ago

Thx, NickD. We considered writing up our notes about correction factors, but felt we needed more testing across different scales and categories. Stay tuned for that. However, this framework does allow researchers to develop their own correction models.
