Conducting social and behavioral research has no doubt benefitted from the normalized presence of technology in our everyday lives. Access to smartphones has never been greater, with 91% of U.S. adults owning one (Pew Research Center, 2025). These devices are vastly more powerful than the computers that landed the Apollo 11 astronauts on the moon, and we use them constantly. In fact, the average American checks their phone 144 times per day and spends a total of roughly 4.5 hours per day on their device (ConsumerAffairs, 2025). With all this technology, and our willingness to use it, survey researchers have never had a more accessible population.
As with any good thing, there are drawbacks. Imposter participants have always existed, even in the dark ages of pencil-and-paper surveys. When incentives are offered, some individuals will misrepresent themselves to receive them. Yet, while technological advances have made research participation easier than ever, they have also made deception easier than ever.
Falsified responses have likely always been a part of the research landscape. Not every respondent is reliable or participates as expected, and researchers accept that reality. Statistical analysis will (hopefully) rectify these anomalies as outliers. But what happens when the outliers cease to be outliers? When fraudulent responses reach a scale large enough to skew the curve, we have a serious problem.
There have been increasing reports of concerted, coordinated efforts to subvert screening questions and enter studies solely to collect incentives or compensation. I have spoken with researchers conducting phone- or teleconference-based surveys characterized by call-center-style background noise, scripted responses, and what has been described as real-time coaching on how to answer questions. Initially, I was reluctant to lend credibility to these accounts. They sounded incredible and were easy to dismiss as the frustrations of easily excitable researchers. However, as similar reports steadily accumulated, it became clear that this issue was not only real but increasingly prevalent.
I have spoken with researchers exploring and implementing increasingly sophisticated screening methodologies designed to "weed out" imposter participants. Screening methods, much like consumer product claims, come with the familiar disclaimer: results may vary. The guidance found in the "Prevent FRaudulent Online STudy participation" (P-FROST) recommendations (Mistry et al., 2024) is a wonderful starting point for increasing screening rigor. However, as screening methods advance, so advance the measures to defeat them.
From an ethical and IRB perspective, screening methods must be evaluated not only against applicable regulatory requirements but also against the risk/benefit ratio for participants. Stated plainly, we cannot allow the well-intentioned pursuit of "better" data to overly burden (or endanger) participants. We are continuously working with researchers to strike this balance and implement study-specific, ethical solutions that protect participants while preserving data integrity.
As always, if you have questions about imposter participants, or any other IRB-related issues, please feel free to contact me at cgillespie@pearlpathways.com.
Pew Research Center. (2025, November 20). Mobile fact sheet. Pew Research Center: Internet & Technology. https://www.pewresearch.org/internet/fact-sheet/mobile/
ConsumerAffairs. (2025, March 20). Cell phone statistics 2026 [2025]. https://www.consumeraffairs.com/cell_phones/cell-phone-statistics.html
Mistry, K., Merrick, S., Cabecinha, M., et al. (2024). Fraudulent participation in online qualitative studies: Practical recommendations on an emerging phenomenon. Qualitative Health Research. https://doi.org/10.1177/10497323241288181