Our philosophy is simple…
Recruit participants who are genuinely engaged with your product at the moment it matters most.
As UX researchers, designers, product managers, research ops professionals, or anyone conducting remote or in-person research, you already know that fraud in participant recruitment is a growing issue.
User research is only as good as the people you recruit. Whether you’re sourcing external panel participants, using intercepts, or engaging your own customers, proper vetting and asking the right questions are essential. Yet, despite your method of choice, fraud continues to infiltrate usability tests, focus groups, and customer feedback sessions—whether in the form of fake email addresses, repeat sign-ups, or participants gaming the system for incentives.
So, the question isn’t whether fraud exists in user research—it’s how we adapt traditional fraud detection methods and optimize participant vetting to meet the needs of qualitative research, preventing fraud from entering our ecosystem in the first place. This leads us to three pivotal questions:
How well are we equipped to catch and prevent fraud before it impacts our findings?
What solutions are working for teams today?
What can UX research learn from market research—without repeating its mistakes?
As your trusted partner in research recruitment, we’re committed to helping you find the right solutions at every stage of your strategy to fight fraud without compromising participant trust.
In this piece, we’ll explore the evolution of imposter participants, red flags to watch for, and the solutions that can safeguard your research.
What can market research fraud prevention techniques teach user research?
We don’t need to reinvent the wheel. While user experience research and market research differ in scope and focus, both share the same battle.
Rather than adding to the noise of generic screening tips or social media cross-vetting advice (those are already everywhere), let's dig into concrete patterns and practical frameworks for early-stage fraud prevention. These are lessons we can draw from industries that have optimized participant vetting at scale—specifically market research, which has long grappled with participant fraud.
In fact, according to the 2024 Greenbook GRIT Insights Practice Report, about a third of research professionals and data providers have observed an increase in poor business decisions linked to low-quality samples. According to reports on ResearchGate, platforms like Lucid (now owned by Cint) have seen an increasing number of panel respondents failing basic attention checks, skewing research results. Greenbook estimates that 30–40% of online survey responses are fraudulent, resulting in billions in lost revenue and reputational damage (Fast Company, 2022).
One study even found that imposter participants inflated brand awareness by 287%. Another study analyzed panel participants from five of the 10 largest online survey providers for market research. The findings? 46% of respondents failed quality control checks, submitting incoherent answers, speeding through surveys, or outright faking responses. Fraud corrupts data at scale, which is why market research has already optimized several fraud prevention strategies that offer valuable lessons for us.
To combat this, market research has invested heavily in scalable fraud detection, leveraging automation to eliminate fraudulent responses before they ever reach a study.
Their approach includes:
Behavioral Analytics, Segmentation, and AI identify respondents cycling through identities. These tools combine browser, IP, device, and payment destination data to dedupe multiple identities into the single person controlling them, helping distinguish legitimate survey takers from chronic offenders through hard-core fraudulent activity hunting and automated survey fraud threat assessment.
Device Fingerprinting is enabled for all survey links, providing frontline defense by tracking device attributes like OS, IP, and browser to create unique fingerprints and flag fraud. It identifies sophisticated fraud rings, bots, and cloaked intruders—even those using VPNs, TOR, or ID scrambling. Going beyond simple duplicate detection, it offers risk scoring to assess threat levels and adapt to evolving tactics.
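To make the idea concrete, here's a minimal sketch of fingerprint-based deduplication in Python. Real systems hash far richer signal sets and use proper risk models; the attributes, threshold, and scoring here are illustrative assumptions, not any vendor's implementation.

```python
import hashlib

def device_fingerprint(os_name: str, ip: str, browser: str, screen: str) -> str:
    """Combine device attributes into a stable fingerprint hash."""
    raw = "|".join([os_name.lower(), ip, browser.lower(), screen])
    return hashlib.sha256(raw.encode()).hexdigest()[:16]

seen: dict[str, int] = {}  # fingerprint -> number of sign-ups observed so far

def risk_score(fp: str) -> float:
    """Naive risk score: the more sign-ups from one fingerprint, the riskier."""
    count = seen.get(fp, 0)
    seen[fp] = count + 1
    return min(1.0, count / 5)  # 5+ repeat sign-ups saturate the score

fp = device_fingerprint("macOS", "203.0.113.7", "Chrome 124", "1440x900")
first = risk_score(fp)   # 0.0 on first sighting
repeat = risk_score(fp)  # rises on repeat sign-ups from the same device
```

The point is the shape of the approach, not the specific hash: stable attributes are collapsed into one identifier, and risk accumulates per identifier rather than per email address.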
IP Intelligence & Geolocation Sync is a cost-effective, lightweight survey fraud check that uses IP geolocation data to prevent global research fraud. It cross-references device time zone, language settings, and stated geo/time, flagging mismatches to catch fraudsters providing inconsistent location or time details.
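A toy version of that cross-referencing might look like the following sketch. The inline IP table is a stand-in for a real geolocation service (such as MaxMind), and the field names and flag labels are invented for illustration.

```python
# Illustrative stand-in for a real IP geolocation database.
IP_GEO_DB = {
    "203.0.113.7": {"timezone": "America/New_York", "country": "US"},
    "198.51.100.9": {"timezone": "Asia/Kolkata", "country": "IN"},
}

def geo_mismatch_flags(ip: str, device_timezone: str, stated_country: str) -> list[str]:
    """Flag respondents whose browser-reported settings disagree
    with their IP-derived location."""
    geo = IP_GEO_DB.get(ip)
    if geo is None:
        return ["unknown_ip"]
    flags = []
    if geo["timezone"] != device_timezone:
        flags.append("timezone_mismatch")
    if geo["country"] != stated_country:
        flags.append("country_mismatch")
    return flags

# A US IP paired with an India timezone and a stated UK location raises two flags:
flags = geo_mismatch_flags("203.0.113.7", "Asia/Kolkata", "GB")
```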
Device Reputation Database Checks for high-risk survey restriction stop devices already associated with fraud from entering surveys, which is powerful in mitigating prolific “career respondents.”
Data scrubbers are implemented to correct (or remove) corrupt or inaccurate records from a dataset.
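As a rough illustration, a minimal scrubber might drop speeders, duplicate sign-ups, and straight-liners. The thresholds and record fields below are assumptions made for the example, not a standard schema.

```python
def scrub_responses(responses: list[dict], min_seconds: int = 60) -> list[dict]:
    """Drop records that look corrupt: speeders, duplicate emails,
    and straight-liners who give the same answer to every question."""
    clean, seen_emails = [], set()
    for r in responses:
        if r["seconds_taken"] < min_seconds:
            continue  # speeder: finished implausibly fast
        if r["email"] in seen_emails:
            continue  # duplicate sign-up
        if len(r["answers"]) > 3 and len(set(r["answers"])) == 1:
            continue  # straight-lining: identical answer on every item
        seen_emails.add(r["email"])
        clean.append(r)
    return clean

responses = [
    {"email": "a@x.com", "seconds_taken": 240, "answers": [3, 5, 2, 4, 1]},
    {"email": "b@x.com", "seconds_taken": 12,  "answers": [1, 2, 3, 4, 5]},  # speeder
    {"email": "a@x.com", "seconds_taken": 300, "answers": [2, 4, 3, 5, 1]},  # duplicate
    {"email": "c@x.com", "seconds_taken": 200, "answers": [3, 3, 3, 3, 3]},  # straight-liner
]
kept = scrub_responses(responses)  # only a@x.com's first record survives
```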
As these examples show, with the right safeguards in place, fraud is a challenge you can mitigate, and we can build more reliable user research ecosystems that catch and prevent fraud before it impacts our findings. These solutions work well for large-scale surveys because market research focuses on high-volume data collection, but UX research faces a different challenge: how can we implement them without compromising the integrity of our insights or alienating authentic participants?
Verifying identity vs. quality participant recruitment
Which brings us to the question: does our industry place enough value on quality recruitment?
While fraud prevention techniques like the ones above are critical, it’s equally important that the methods you implement don’t inadvertently create barriers to recruiting the right participants. We must avoid measures that hinder genuine engagement or create unnecessary friction. Ultimately, the goal is to filter out fraud while maintaining trust with authentic participants.
Paul Gooding, Founder and CEO of People for Research, brings over 30 years of experience in participant recruitment and usability testing. He has seen firsthand how fraud evolves and how staying ahead of it requires both technology and human oversight.
Gooding highlights that many researchers still don’t fully recognize the value of quality participant recruitment. It's about verifying identities and ensuring participants genuinely meet the research criteria. There is a clear distinction between market researchers and UX researchers, with the latter placing much more emphasis on the cost of quality recruitment. As Gooding notes in his recent industry insights:
It's vital to have reliable, validated participants for both qual and quant to drive good quality insights. While one industry may prioritize large sample sizes, fraudulent responses undermine confidence in results and decisions across both. As such, participant fraud remains an unfortunate reality of modern research.
What are the most common types of participant fraud that ResOps and UXR deal with?
In general, participant fraud happens when people or bots lie to get money. You know that. But for UXR and ResOps, a lot of fraud happens when people or bots try to qualify for studies they aren’t suited for, again, usually to collect that sweet incentive money. In usability testing or customer feedback studies, this typically shows up in two ways:
Misrepresentation, where participants pretend to be someone they’re not.
Dishonest feedback, where participants provide false or misleading answers.
Fraud in research isn’t new, but its scale and sophistication are growing fast. For many, research is a full-time gig—not just a side income.
The growing scale of fraud in your day to day
In 2022, KNow Research, a qualitative research agency, noticed a concerning 19% increase in fraudulent participants infiltrating virtual studies. To investigate, strategists Julia Isaacs and Shira Glickman recruited past fraudsters for 1:1 webcam interviews. Within minutes, over 100 sign-ups flooded in—far beyond their original list.
Let’s dive into the biggest takeaways.
Key factors fueling the rise are:
Post-Pandemic Shift to Online Research
AI-Powered Deception
Incentive-Driven Misrepresentation
Imposter participants’ key behaviors:
✅ Falsify professional backgrounds to meet eligibility criteria.
✅ Exaggerate personal experiences to qualify for specialized studies.
✅ Ignore study instructions and submit incoherent or rushed responses.
✅ Exploit AI to generate responses that mimic real participants.
✅ Manipulate recruitment processes to appear as multiple different personas.
Common tactics
They actively seek studies through social media, Craigslist, and Reddit.
Age is the most common lie—they know what demographic cutoffs look like.
Some are coached to stretch the truth in screeners to qualify.
IP address manipulation is standard – Fraudsters use VPNs and blacklisted IPs to appear in different locations.
Google Voice numbers protect anonymity – Many use virtual numbers instead of real phone lines.
Gift card incentives are preferred – Some scammers even have U.S.-based accomplices to collect physical incentives on their behalf.
They stick to their script – Even when confronted with inconsistencies, they rarely break character.
The impact of this type of fraud stretches across all research disciplines, from academic studies to industry research, particularly in qualitative studies where participant insights are essential to decision-making.
Types of fraud include:
Fraudsters🕵️: Participants who outright steal incentives.
Identity fakers🎭: Those who misrepresent personal information to qualify for studies. Often, the participant who shows up for the interview does not match their screener response—or their details are wildly different.
Bots🤖 : Automated systems or AI tools used to fill out surveys, screeners, and panels just to collect incentives. These bots can mimic real participants and flood your research with fake data.
Scammers🕵️: Those who attempt to steal incentives without actually engaging in the study itself.
Professional participants🎭: Survey-takers who game the system by using the same IP with multiple email addresses. They manipulate screeners to qualify for studies, often treating research participation as their primary income source (earning $3,000–$5,000 per month).
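The same-IP, multiple-email pattern described above is straightforward to surface. Here is a minimal sketch with a made-up sign-up format and an arbitrary threshold; a production check would also weigh shared ISPs, NATs, and household members who legitimately share an IP.

```python
from collections import defaultdict

def flag_shared_ips(signups: list[tuple[str, str]],
                    max_emails_per_ip: int = 2) -> dict[str, list[str]]:
    """Group sign-ups by IP and flag IPs used with many distinct emails,
    a common signature of 'professional participants'."""
    by_ip: dict[str, set] = defaultdict(set)
    for email, ip in signups:
        by_ip[ip].add(email)
    return {ip: sorted(emails) for ip, emails in by_ip.items()
            if len(emails) > max_emails_per_ip}

signups = [
    ("ann@x.com",  "198.51.100.1"),
    ("pro1@y.com", "203.0.113.5"),
    ("pro2@y.com", "203.0.113.5"),
    ("pro3@y.com", "203.0.113.5"),
]
suspicious = flag_shared_ips(signups)  # one IP with three distinct emails
```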
The fraud funnel concept
Julia Isaacs and Shira Glickman, strategists at KNow Research, set out with a clear goal: to refine internal practices and provide valuable resources to the broader insights industry, helping to protect the integrity of qualitative data. Through their research, they developed the Fraud Funnel concept, which identifies three key stages where fraud occurs:
1️⃣ Initial Screening – Fraudsters manipulate screeners to qualify for studies.
2️⃣ Participant Scheduling – Bad actors get locked into research sessions, making it harder to detect fraud before incentives are issued.
3️⃣ Fieldwork – The fraudster makes it into the study, corrupting insights before being identified—if they’re caught at all.
They found most research teams focus on detecting fraud at the final stage (Fieldwork)—but by that point, the damage is already done.
We believe it’s best to catch bad actors at the earliest stage of the fraud funnel, ideally before they reach the fieldwork phase.
Real-time validation during participant recruitment—where participants are verified at the moment they engage with your study—ensures that fraudulent actors don’t slip through. This is where intercepts and live recruiting come into play, revolutionizing how we detect fraud at the first touchpoint.
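Catching fraud at stage one of the funnel can start with cheap automated checks run the moment a screener is submitted. The checks, field names, and thresholds below are illustrative assumptions, a sketch rather than a prescribed workflow:

```python
def screen_stage_checks(candidate: dict) -> list[str]:
    """Cheap checks applied at the Initial Screening stage,
    before anyone is scheduled. Fields are illustrative."""
    flags = []
    if candidate["screener_seconds"] < 45:
        flags.append("completed_screener_suspiciously_fast")
    if candidate["signup_count_for_device"] > 1:
        flags.append("repeat_signup_from_same_device")
    # All-digit local parts (e.g. "12345@mail.com") often indicate throwaways.
    if candidate["email"].split("@")[0].rstrip("0123456789") == "":
        flags.append("throwaway_looking_email")
    return flags

flags = screen_stage_checks({
    "screener_seconds": 20,
    "signup_count_for_device": 3,
    "email": "98765@mail.com",
})
```

None of these checks is conclusive on its own; the value is in flagging candidates for human review before they reach scheduling, where fraud gets expensive to unwind.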
Intercepts = the proactive approach to fraud detection and prevention
Live recruiting provides flexibility and proactive fraud detection and prevention, making it the ideal tool for recruiting participants from the web for ANY kind of research, at a fraction of the cost of traditional panels or agencies.
Don’t just take our word for it—hear from Magera Moon, Co-founder of Related Works, a research and product strategy firm in New York City. Previously a Senior Product Design Manager at Etsy, Magera has firsthand experience with the power of In-Product Intercepts for User Research Recruitment. She reported that intercepts worked three times better than other recruitment methods, yielding a 7% response rate, more than triple the response rate of their initial sprint.
We’ve found intercepts to be an essential recruitment tool in our research. Not only did we get over three times the response rate, but the quality of the interviews also improved since we were able to catch people within a day of attending their event.

If you want a fraud-proof workflow, run intercepts in randomized short bursts from different parts of your web and apps, all 100% unique to Ethnio. That’s the shortest version. But here’s the 411:
What is it?
Interrupting someone digitally (via pop-up window, chat, banner ad, etc.) or physically (in person) to answer a few initial screening questions, assessing their potential as a research participant and their interest in taking part in a qualitative or quantitative study.
When is it best used?
When recruiting a specific audience for a research study from a designated virtual or physical location where they likely spend time (e.g., users of a website or shoppers at a specific retail store).
What does it entail?
The goal is to see if they qualify to be referred to a more complete recruiting method such as a full screener. Intercept respondents across your product ecosystem: web, iOS/Android. Customize with 30+ targeting variables to collect precise feedback from the right audience at the right moment of the product journey.
App intercepts in iOS and Android
Web intercepts in desktop and mobile viewports
Set timers, limiters, and delays
Learn how our customers, including Carmax and Toyota, use live intercepts to engage participants across their product ecosystem, proactively preventing fraudulent participants from skewing their data.
The real cost of participant fraud
As your research projects scale, so does the challenge of managing fraudulent participants across larger studies or panels. It becomes an operational burden to manually screen and monitor participants, especially as the scope and frequency of studies increase.
Take the team at 55 Minutes as an example. Lynn, an experienced UX researcher, understands that research is not just about collecting data—it’s about capturing genuine human experiences to ensure products truly resonate. During a project interviewing social workers in Singapore, her team encountered a fraudulent participant who initially passed their screening process. Though confident and articulate, this imposter’s fabricated responses could have compromised the entire study had the fraud not been uncovered in time.
Here’s a closer look at what’s at stake when fraud slips through the screening process:
Skewed data
Fraudulent participants (whether they’re fakers or bots) provide inaccurate data. This undermines the entire research process, leading to misleading conclusions, invalid insights, and poor decision-making based on data that’s not reflective of the target audience. In this case, inconsistencies in the imposter’s responses raised immediate concerns. After a deeper analysis, it became clear that much of the data was not reflective of the target user group and had to be discarded. Had the team not caught the fraud in time, these flawed findings could have compromised the study’s validity.
Wasted resources
Recruiting participants takes time, effort, and money. Fraudsters who slip through the cracks force you to waste resources on people who don’t belong in your study. Screening, onboarding, and compensating participants who don’t align with your study criteria delays your research and eats into your budget. In Lynn’s case, after interviewing the imposter, the team found themselves wasting time on follow-up calls and verification procedures.
Cost of incentives
Incentives are an essential part of recruiting participants, but fraudulent behavior—whether stealing incentives or misrepresenting themselves to qualify—leads to increased costs without adding any real value. Once the imposter was identified, Lynn and her team made the decision not to disburse the agreed-upon incentive. However, this led to concerns that the imposter might challenge their decision. To avoid future disputes, they added a clause to their recruitment forms stating that incentives could be withheld if fake information was provided.
Time loss and delays
Lynn’s decision to schedule a follow-up call with the suspected imposter helped prevent a bigger setback. Without that step, the team would have been forced to backtrack, causing a major delay in the process. Even so, the decision cost her time and confidence: “Before the follow-up call, I found myself questioning myself a lot, as I was the only person on the team who felt strongly that this person could be an imposter. Others saw potential explanations for the inconsistencies, which made me feel as though I wasn’t keeping an open mind and that I was being judgmental (the worst flaws a researcher could have).” When fraud is caught late, it means going back to the drawing board, re-screening participants, and sometimes redoing entire sections of the study. This disrupts timelines and slows progress, eating into time that could have been spent gathering valuable insights.
Ethical and legal risks
Misrepresentation of participant data doesn’t only have practical implications—it also introduces serious ethical and legal risks, especially when handling sensitive personal information (PII). Ensuring that your research process remains ethical and compliant is crucial for maintaining trust with stakeholders. For example, if a participant provides false health information, this can trigger compliance issues and necessitate a full audit. The team would have had to halt the study, report the incident to the compliance team, and endure a lengthy review process—all of which delay progress and add unnecessary complexity.
Difficulty scaling
For Lynn and her team, as the study expanded, the difficulties of tracking genuine participants only grew. This operational strain slowed progress and added additional pressure on already limited resources.
Beyond the operational burden, participant fraud can have far-reaching consequences. If misrepresented data influences key business decisions, it risks creating products that don’t truly serve their intended users. Worse, failing to address fraud can erode stakeholder confidence in the research process altogether.
The 55 Minutes UX study highlights the true cost of participant fraud, one that extends beyond financial losses to impact research validity, resource allocation, and project timelines. You can read more from her team on Medium.
Final thoughts
Combatting participant fraud is a top priority at Ethnio. The real challenge isn’t whether fraud exists in user research—it’s how to adapt and optimize participant vetting processes to meet the demands of modern qualitative research.
We recognize the importance of maintaining trust in participant networks, which is why we’ve implemented robust measures to block and remove imposters—across all types of research—at a fraction of the cost of traditional panels or agencies.
The question now is: how will you and your teams evolve your vetting methods to prevent fraud and safeguard the integrity of your research ecosystem?