OPINION
9 September 2009

Online is the future for national statistics


Report from Internet Survey conference in Korea: day 1. Cutting cost and improving response through better survey design.

I’m at the First International Workshop on Internet Survey in Daejeon, Korea. It is hosted by Statistics Korea (or Kostat), which has put together an impressive roster of presentations on leading-edge thinking in using online research for public policy and other nationally representative surveys: eighteen speakers, fourteen from around the world, and a nice fat 320-page book of scholarly papers to accompany the event.

My own talk was on software and technology (what else?) and how appropriate technology can help control measurement and non-response error. But unlike at many of these events, I did not find myself the pariah for talking technology. Throughout the first day of this two-day event there has been explicit acknowledgment of the need for researchers to be more discriminating and more demanding of the technology they use, in order to improve response, reduce respondent burden and control error more effectively, as well as to reduce cost.

The event started with Yi Insill, the Commissioner of Statistics Korea, who predicted “a significant increase in demand for Internet Surveys” in National Statistics work in Korea. “We are expecting them to reduce non-participation and make them engaging for participants,” she stated. She also acknowledged that national statisticians had been reluctant to use online surveys because they were not based on random probability samples and “have been criticised for poor quality”, but that was now changing as the methodology was being understood and tested. Preparations were well advanced for the 2010 e-Survey in Korea, and we heard more of this later on.

One good paper followed another, but I will pull out a few highlights. Frederik Funke (Tübingen University) showed how Visual Analogue Scales (VAS) can dramatically reduce measurement error in online surveys, whereas conventional 5-point scales, applied online by convention (and possibly for no better reason), can force measurement error on participants by restricting their options, to the extent that a VAS will produce different, and apparently more accurate, results.
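To make the contrast concrete, here is a minimal sketch of how a VAS item might be rendered in the browser, assuming a plain HTML range input stands in for the purpose-built widget Funke described; the element wiring and the 0–100 normalisation are illustrative only, not his implementation.

```typescript
// Minimal sketch of a Visual Analogue Scale (VAS) item, assuming a plain
// HTML range input stands in for a purpose-built VAS widget. The 0-100
// range is an illustrative choice, not taken from Funke's paper.
function renderVas(container: HTMLElement, question: string): HTMLInputElement {
  const label = document.createElement("label");
  label.textContent = question;
  const slider = document.createElement("input");
  slider.type = "range";
  slider.min = "0";
  slider.max = "100"; // a near-continuous scale instead of five forced boxes
  slider.step = "1";
  container.append(label, slider);
  return slider;
}

// Reading the answer as a value on a near-continuous scale, rather than one
// of five categories, is what lets a VAS avoid the coarsening error that a
// conventional 5-point scale imposes.
function readVas(slider: HTMLInputElement): number {
  return Number(slider.value);
}
```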

Surveys that leak cash

Lars Kaczmirek (GESIS, Mannheim) followed with three practical changes to survey design that would improve response and reduce error. He presented experimental results showing that, compared with the effect of offering an incentive or not, some simple changes to survey design were actually more effective. In other words, you could drop the incentive, improve the design, and still be slightly better off in terms of response.

Kaczmirek was also critical of the way new technology is sometimes applied to surveys uncritically, even when it increases non-response. One example was the automatic progress bar: inaccurate or misleading progress bars, particularly those that jump because of routing, are such a turn-off to respondents that removing them altogether will often improve response. Accurate bars, or bars where jumps are smoothed and averaged out, do better than no bar at all.
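As a minimal sketch of the smoothing idea, the function below blends a routing-aware raw estimate toward completion so the displayed bar only ever moves forward in modest, believable steps; the specific blending rule is my assumption, not taken from Kaczmirek's paper.

```typescript
// Minimal sketch of a smoothed progress estimate: instead of showing raw
// page position (which jumps when routing skips pages), cap each movement
// so the bar advances smoothly. The 10-point cap is illustrative only.
function smoothedProgress(
  previousShown: number,  // last value displayed, in the range 0..1
  pagesAnswered: number,
  pagesRemaining: number  // routing-aware estimate for this respondent
): number {
  const raw = pagesAnswered / (pagesAnswered + pagesRemaining);
  // Cap any single jump at 10 percentage points...
  const capped = Math.min(raw, previousShown + 0.1);
  // ...and never let the bar move backwards
  return Math.max(previousShown, capped);
}
```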

Boxes for Goldilocks

Marek Fuchs (University of Kassel) gave us the latest thinking on verbatim response box size and design in online surveys: getting the size right can mean more characters and, potentially, more concepts. Like Goldilocks and the porridge, boxes should be neither too small nor too large. Adding a Twitter-style count of how many characters remain can also boost response length, provided the starting number is realistic (a couple of hundred characters, not a thousand). However, too much trickery, such as dynamically appearing or extending boxes, will send any gains into reverse. As with the wonky progress bars, the point is that any feedback must be realistic and honest if it is to act as a positive motivator.
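Here is a minimal sketch of such a remaining-character counter, assuming a realistic budget of a couple of hundred characters in line with Fuchs's advice; the element wiring and the 250-character figure are illustrative assumptions.

```typescript
// Minimal sketch of a Twitter-style remaining-character counter for an
// open-ended question. The 250-character budget is an illustrative value
// in the "couple of hundred" range Fuchs suggested, not from his paper.
const CHAR_BUDGET = 250;

function attachCounter(box: HTMLTextAreaElement, counter: HTMLElement): void {
  const update = () => {
    const remaining = CHAR_BUDGET - box.value.length;
    counter.textContent = `${remaining} characters remaining`;
  };
  box.maxLength = CHAR_BUDGET;          // honest limit: the count never lies
  box.addEventListener("input", update); // refresh on every keystroke
  update();                             // show the full budget up front
}
```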

Questionnaires with added AJAX

Peter Clark (Australian Bureau of Statistics) talked us through the 10 per cent uptake of the online option in Australia's 2006 Census, and the plans being made to increase this to 25 per cent for the 2011 Census. The ABS had appointed IBM as its technology partner for 2006 and again for 2011. IBM had pioneered adding browser-based processing in AJAX (a Web 2.0 technology) to the 2011 e-Census form, to cut down server load. It has saved a fortune in hardware requirements, as the server load is now a third of what it was. For the many participants on slower dial-up connections, the form took longer to load, but once loaded it was actually faster, as all further traffic to the server was minimal and therefore very fast for the user.
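The general idea can be sketched as follows, assuming validation and routing run in the browser with a single small submit at the end; the endpoint, field names and validation rules here are hypothetical, not the ABS/IBM implementation.

```typescript
// Minimal sketch of browser-based (AJAX-style) form processing: checks that
// would otherwise need a server round-trip per page run locally, so the
// server is only touched once per completed form. All names are hypothetical.
interface CensusAnswers {
  householdSize: number;
  ages: number[];
}

function validateLocally(a: CensusAnswers): string[] {
  const errors: string[] = [];
  if (a.householdSize < 1) errors.push("Household must have at least one person");
  if (a.ages.length !== a.householdSize) errors.push("One age per person required");
  return errors; // no server traffic needed for any of these checks
}

async function submitCensus(a: CensusAnswers): Promise<void> {
  const errors = validateLocally(a);
  if (errors.length > 0) throw new Error(errors.join("; "));
  // The only server traffic after the initial page load: one small POST
  const res = await fetch("/ecensus/submit", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(a),
  });
  if (!res.ok) throw new Error(`Submit failed: ${res.status}`);
}
```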

Australia, like Singapore and Estonia, whose speakers also described their e-census strategies, had added an online option to the national census as a means of reducing cost. For obvious coverage reasons, the e-census is offered as an option to back up self-completion by mail, with face-to-face interviews as a last resort for non-responders.

Pritt Potter (Webmedia, Estonia) spoke of his work on novel respondent validation methods for the forthcoming e-census in Estonia, which included using trusted third parties such as banks to verify respondents through their normal online banking security and then pass key identification data on to the census bureau. Another method offered to respondents is mobile phone verification (provided the phone is registered). Both methods have the advantage that the public can respond to ads in the media, visit a website and self-verify, instead of the census bureau having to send out numerous unique passcodes.
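A minimal sketch of the self-verification flow might look like the following, assuming a one-time code sent to a registered mobile number; the function names, code format and in-memory store are my assumptions, not Webmedia's design.

```typescript
// Minimal sketch of mobile self-verification: the respondent requests a
// one-time code sent to a registered number, then quotes it back to prove
// identity, so no unique passcodes need to be mailed out. Everything here
// (names, six-digit format, in-memory store) is a hypothetical illustration.
import { randomInt } from "crypto";

const pending = new Map<string, string>(); // phone number -> one-time code

function requestCode(phone: string, sendSms: (to: string, msg: string) => void): void {
  const code = String(randomInt(100000, 1000000)); // random six-digit code
  pending.set(phone, code);
  sendSms(phone, `Your census verification code is ${code}`);
}

function verifyCode(phone: string, code: string): boolean {
  const expected = pending.get(phone);
  if (expected !== undefined && expected === code) {
    pending.delete(phone); // single use: a code cannot be replayed
    return true;
  }
  return false;
}
```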

And there is more in store tomorrow…
