top of page
Writer's pictureMegan Peitz

Research Data Quality Tips from Market Research Experts

Updated: Nov 1

Research data quality is a major concern in market research.


Unfortunately, advances in automation and AI have made it easier for fraudsters to create and deploy survey bots.  These bots are becoming increasingly intelligent and difficult to detect with traditional approaches to data cleaning.


This creates significant challenges for researchers trying to ensure that only genuine participants contribute to survey data.


To quote The Flaming Lips album "Yoshimi Battles the Pink Robots": "Those evil-natured robots, they are programmed to destroy us... it would be tragic if those evil robots win."

We care a lot about this and even won Best Paper at the Sawtooth Software Analytics & Insights conference for our work on developing new approaches to combat survey bots and ensure reliable data. 


Why traditional methods are falling short


The industry has long focused on identifying and eliminating "cheaters" in surveys. Common techniques include using ghost brands, requiring detailed open-ended responses, and incorporating trap questions with opposing statements.


Researchers also check for inconsistencies in responses, look for straight-lining and speeding, and calculate fit statistics for specialized questions like MaxDiff or trade-off exercises.


And while these methods may be effective at catching fraudulent human respondents, they are becoming less and less effective in catching increasingly sophisticated bots. 


For example, a basic data quality question might ask respondents to identify a vegetable from a list surrounded by fruit options. But these basic questions no longer work well against modern survey bots, which can use natural language processing (NLP) to understand questions and generate human-like responses. 


How are AI-powered bots threatening research data quality?


Survey bots can now understand questions, identify keywords, and generate responses that mimic human behavior. Using NLP algorithms, they process content meaningfully and provide answers that seem human. 


This capability threatens the integrity of survey data, making it important for researchers to develop more robust methods to differentiate between human participants and bots.


 A July 2023 paper* on data science education and large language models found that ChatGPT could solve 104 out of 116 statistical questions. But, they struggled with figure and image-related questions. This insight inspired the Numerious team to incorporate imagery into our bot-detection strategy.


Using imagery to detect bots


To outsmart the bots, instead of using a predefined set of images that bots could easily be memorized through repeated exposure, we used a Canvas HTML element. This allows us to create dynamic, randomized graphics directly on the web page. 


Advantages include: 

  • real-time generation of graphics and visuals

  • support for multiple languages without needing translated images

  • dynamic screen size adaptation

  • prevention of bot memorization through randomization


We first used a screening question to be accessible to those who might be using screen readers as some assisted technology.


Then, we developed questions using shapes and colors, asking respondents to perform simple math or solve quick puzzles based on the displayed image.


For example, "How many more black stars does Brian have than Stephen?" The names, color combinations, and mathematical operations change dynamically for each respondent.


Canvas HTML example


Initial tests with various large language models (ChatGPT, Lama, Google's Gemini) showed promising results. The AI models could not accurately answer the image-based questions, giving responses that suggested they couldn't process the visual information.


How effective is this approach?


In a recent study we conducted, about 10% of the sample provided wrong responses to two Canvas questions and submitted suspicious open-ended answers to questions that were not visible to the naked eye. The bot-like open-ended responses were either irrelevant to the question or overly verbose and generic, confirming the hypothesis that bots were taking the survey.


This approach was tested with two different panel providers for the same survey. The panel with a higher cost per completion ($160) had only 2% of respondents fail both Canvas questions, while the cheaper panel ($40) had a 13% failure rate. 


In a separate study that included a MaxDiff exercise, respondents who failed the Canvas questions also over-indexed on failing the quality criterion for the MaxDiff experiment (i.e., low root likelihood). This suggested that these respondents appeared to answer the MaxDiff questions randomly, which showed that the Canvas approach is effective in spotting low-quality responses.


If you’re interested in incorporating the Canvas HTML element in your next study, check out our github repo for an example made in Sawtooth Software’s Lighthouse Studio.  


Other recommendations for improving research data quality


In addition to the Canvas HTML element, here are several other ideas you can consider using to improve bot detection and survey data quality.


Add Open Ends with Hidden Text Elements 

Include a standard open-ended question about the survey topic at the end of the survey.  You can also include hidden instructions like “make sure your answer includes the word buffalo”, and track responses that include the hidden keywords.


Research data quality example

Pilot testing

Conduct pilot studies to refine the questionnaire and address any clarity or structural issues before launching the full study.


Questionnaire design

Craft clear, neutral, and unbiased questions. Avoid leading language and identify ways to control for potential biases like question order effects.


Extensive quality assurance

Include additional questions alongside the Canvas-based elements to help distinguish between human and bot responses. This approach is useful for comparing data quality between different sample sources, especially when working with both high-cost and low-cost providers.


Survey bots aren’t slowing down, so neither can researchers 


Researchers must continuously improve bot detection methods to stay ahead of AI capabilities. By combining dynamic, image-based questions with strategic follow-up queries and hidden elements, researchers can gain more confidence in the authenticity of their survey responses. 


Want to learn more about research data quality? Watch our webinar about Defeating Survey Bots


*Reference: What Should Data Science Education Do with Large Language Models?  | 7 Jul 2023 | Xinming Tu , James Zou, Weijie J. Su, Linjun Zhang

University of Washington,  Stanford University,  University of Pennsylvania,  Rutgers University


145 views
bottom of page