FEATURE | July 2008

Optimus reviewed


Want to catch out fraudulent respondents? Tim Macer takes a look at a new option from Peanut Labs

Optimus is a standalone software-as-a-service or ASP solution for tackling fraudulent respondents that will work with any sample source and, effectively, any internet interviewing system. It comes from Peanut Labs, an online sample provider, though the service is not in any way tied to their samples.

If you happen to use Confirmit, SPSS Dimensions or Ztelligence, it is easy to set a command at the beginning and end of your interview to link your survey to the Optimus service. If you use other software, you will need to do a small amount of ad hoc web programming to link it in each time. Essentially, the link is achieved using a ‘redirect’, where the survey momentarily hands control over to the Optimus server, which then probes the respondent’s browser, gathers some information and then hands back to the server running the survey. None of this to-and-fro is visible to the respondent. Neither is any personally identifiable data involved. All that Optimus holds on your behalf is your respondent ID, so you can later identify problem respondents. It does not use email addresses or cookies.
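In essence, the hand-off amounts to building a redirect URL that carries nothing more than an opaque respondent ID. A minimal Python sketch of the idea, using hypothetical endpoint addresses and parameter names (the real Optimus URLs and parameters are not published):

```python
from urllib.parse import urlencode

# Hypothetical endpoints -- the real Optimus URLs and parameter
# names are not public, so these are illustrative only.
OPTIMUS_URL = "https://optimus.example.com/check"
SURVEY_URL = "https://survey.example.com/resume"

def optimus_redirect_url(respondent_id: str, survey_id: str) -> str:
    """Build the redirect that momentarily hands the browser to Optimus.

    Only an opaque respondent ID travels with the request -- no email
    address, no cookie, no personally identifiable data.
    """
    params = {
        "rid": respondent_id,   # your respondent ID, held on your behalf
        "sid": survey_id,       # which survey the check belongs to
        "return": SURVEY_URL,   # where Optimus hands control back to
    }
    return f"{OPTIMUS_URL}?{urlencode(params)}"
```

The survey software issues this URL to the browser at the start and end of the interview; Optimus records what it needs and immediately redirects to the `return` address, so the respondent never notices the round trip.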

The real strength of the software, and the single reason you would wish to use it, is the firm’s proprietary digital fingerprinting technology, through which it builds up a database of every individual PC it has encountered, in your samples and in anyone else’s too. It relies on the fact that any web browser will reveal a large amount of information about the configuration and resources of the PC it runs on – and there is enough variation for this to be almost as good as reading the manufacturer’s serial number. None of this information is personally identifiable. But once logged against a panellist ID, Optimus is able to start pointing the finger at suspect respondents for a variety of reasons.
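In spirit, such a fingerprint is a stable hash over whatever configuration details the browser reveals. A hedged sketch with invented attribute names, since the signals Optimus actually collects are proprietary:

```python
import hashlib

def device_fingerprint(browser_attrs: dict) -> str:
    """Collapse browser-revealed configuration into one stable ID.

    The attribute names used below are illustrative, not Optimus's
    actual signals. The point is that the combined variation across
    PCs is almost as distinctive as a serial number, while no single
    attribute identifies a person.
    """
    # Sort keys so the same configuration always hashes identically.
    canonical = "|".join(f"{k}={browser_attrs[k]}" for k in sorted(browser_attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

pc_a = {"user_agent": "Mozilla/5.0 (Windows NT 5.1)", "screen": "1280x800",
        "timezone": "UTC+1", "plugins": "flash,java"}
pc_b = dict(pc_a, screen="1024x768")   # one attribute differs
```

The same PC produces the same fingerprint on every visit, so a duplicate entry from a second panel account on the same machine is immediately visible, even though no cookie or email address is ever stored.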

Optimus collects two other factual measures: interview completion times and IP location. Speeding is detected by comparing the time taken to complete the interview against an anticipated time set by the researcher, and unusually short interviews are logged as potential speeding violations.
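The speeding check itself reduces to a single comparison. A sketch, where the half-the-anticipated-time cut-off is an illustrative default rather than anything Optimus prescribes:

```python
def is_speeding(completion_secs: float, anticipated_secs: float,
                threshold: float = 0.5) -> bool:
    """Flag interviews finished suspiciously fast.

    The 0.5 default (less than half the anticipated time) is an
    assumed value for illustration; in practice the threshold is
    whatever the researcher configures against the anticipated
    completion time they set for the survey.
    """
    return completion_secs < anticipated_secs * threshold
```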

The IP address of the ISP or company network the respondent uses to access the internet contains some useful high-level geographical information, which will pin the respondent down to a country, if not to a city. This can then be used or ignored as you choose. A panellist on a consumer survey in France is unlikely to be using an ISP in the Philippines, for example, though a business executive could be, if using the wireless network in their hotel bedroom, which could as easily be in Manila as Manchester.
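The Geo-IP rule therefore amounts to a country comparison that can be switched off for samples, such as travelling executives, where it would misfire. A sketch using a toy lookup table in place of a real GeoIP database:

```python
# Toy lookup table keyed on the first two octets -- a real check
# resolves the full IP address against a commercial GeoIP database.
IP_COUNTRY = {"81.56": "FR", "112.198": "PH"}

def geo_ip_violation(panel_country: str, ip_address: str,
                     enforce: bool = True) -> bool:
    """Flag respondents whose ISP location contradicts the panel record."""
    if not enforce:                   # e.g. B2B respondents in hotel rooms
        return False
    prefix = ".".join(ip_address.split(".")[:2])
    ip_country = IP_COUNTRY.get(prefix)
    return ip_country is not None and ip_country != panel_country
```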

From this raw data, Peanut Labs deduces six measures of suspect behavour: duplicates, Geo-IP violators, hyperactive respondents, respondents belonging to multiple panels, speeding and a sin-bin category of ‘repeat offenders’, where the respondent has repeatedly transgressed in the past.

When you log into the system, you have options to register new surveys and also the different panel sources or companies you wish to use. The ‘controls’ area is where you define your own rules of what constitutes suspect behaviour. You can switch any of the rules on or off for your own samples, and you also have considerable flexibility over adjusting the threshold for each one. For example, for hyperactive respondents, you can set an absolute limit on how much multiple participation is acceptable to you, set a period, and choose whether you restrict this just to your projects or across all projects by all users of the service. It is a pity that you can only have one set of rules for all your projects: the rules for a B2B survey could be very different to what you allow in consumer research, for example.
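A hyperactivity rule of the kind described, with its absolute limit, period and project scope, might be modelled like this (the field names are my own invention, not the Optimus configuration):

```python
from dataclasses import dataclass

@dataclass
class HyperactiveRule:
    enabled: bool = True
    max_completes: int = 10      # absolute participation limit (illustrative)
    period_days: int = 30        # over this rolling window
    scope: str = "my_projects"   # or "all_projects": every user of the service

def violates(rule: HyperactiveRule, completes_in_period: int) -> bool:
    """True when a respondent exceeds the configured participation limit."""
    return rule.enabled and completes_in_period > rule.max_completes
```

As the review notes, there is only one such rule set per account; per-project or per-client variants of it are not currently possible.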

There are two principal outputs from the system: reports and files containing the IDs of violators, determined by your rules, together with the type of violation recorded, either to update your own panel database or to seek replacements and refunds from sample providers.

The range of largely graphical reports is well presented. The main ones chart each type of violation day by day, and you can filter them by project or sample source. But reporting choices are limited, and there really need to be more options available – for example, to allow comparisons between different surveys or between different sample sources.

It is also worth considering the effect of scheduled maintenance on the service which, though minimal, tends to fall in prime time on Monday mornings in Europe; while the service is down, your interviewing is unprotected.

Ultimately, the success of the solution will depend on the volume of traffic passing through it, so it achieves the critical mass of fingerprinted PCs to be able to differentiate clearly between the responsible and the abusive survey-taker.

Client perspective: Kristin Luck, VP at online data collection firm Decipher Inc
Decipher started to use Optimus in April of this year, to control sample quality when using sample from multiple sources on client projects.

“The system is designed to track respondents from any sample source. Where it really comes in handy is where you are using a multiple source sample approach and you want to track people who are trying to enter the survey multiple times, either from a single source or from multiple sources.”

“Some of the other solutions on the market are tied to a particular sample provider. What was appealing to us about Optimus was that it was a technology we could use even if we were not working with Peanut Labs for sample on a particular study.”

Decipher uses Optimus with its own in-house web interviewing solution. Although this means Decipher does not benefit from a direct program interface, as some mainstream packages do, linking in a new survey takes very little time. “We currently have to use a programmer to connect into Optimus,” Kristin explains, “and the first time it was about an hour’s work, but it is a pretty short learning curve, and we now have it down to about 15 minutes on a new project. In the future we will be able to implement without the use of a programmer.”

Another attraction was that the web-based interface can provide controlled access to the data to their clients, so that the entire quality control process is transparent to everyone. “It is really easy to use” says Kristin. By using the service, Decipher has identified and removed around 11% of the sample from multiple sources.

“We have found some panel providers where 21% or more of their sample has a problem, and we have others where it is 8% or less,” Kristin states. “We tend to see lower percentages from the companies that have been making a lot of noise about panel quality, and higher percentages from those that have been largely silent about this.”

Being able to specify their own rules to determine fraud is another advantage for Kristin, as Decipher tends not to exclude hyperactive respondents. However, Kristin would like more granularity in how rules are applied, so that a client or a project can have its own particular rules; currently this is not possible without a manual programming process.

The verdict:
Web-based suite of interview fraud detection measures which can be applied to any online panel source, including panel providers or your own samples.

Ease of use – 4.5 out of 5

Cross-platform compatibility – 5 out of 5

Value for money – 3.5 out of 5

From $2,500 to scan 5,000 completes, with discounts for higher volumes.


• Highly accurate detection of the most common types of internet fraud

• User can determine the level of policing

• Interfaces directly with Confirmit, Market Tools Ztelligence and SPSS Dimensions

• Works with most browsers, Windows or Mac


• Some programming involved if using an unsupported interviewing package

• Does not detect all kinds of fraud, such as straightlining and ‘satisficing’

• Rules are system-wide: cannot vary them by project or client

• Fraud not detected during scheduled or unscheduled downtime of the Optimus server

Further info
