I’ve been avoiding blogging as I finish up some research projects and spend lots of time playing with my son. This is a quick entry about a failed survey experiment.
In academia, like many professions, we are judged on outputs, not inputs. You can work long hours and have very little to show for it. This is most obvious in the publication process, where you start a project and, fingers crossed, take it to the stage of a completed manuscript. Then it is rejected at a journal. I’d guess that most of my published papers get published after rejections at the first two journals. Some never make it. The advice I got in grad school was to grow a thick skin and keep trying.
In some cases, research projects don’t even make it to that stage. I have a research project on agriculture protection that I blogged about here and here. Basically, I find that framing US agriculture subsidies relative to other countries has a massive impact on individual preferences.
As part of another survey project, I decided to field an internet survey in India using Amazon’s Mechanical Turk. A few papers have documented the use of mTurk for US surveys, but very little has been done on the use of mTurk outside of the US. A great resource for these papers is this blog. But the one unpublished paper I found showed that about 40% of the “workers” on mTurk are in India. Unlike the US sample, mTurk workers in India are overwhelmingly male and college educated. Not a big surprise, but this would be a serious challenge to publication with this sample.
A few days ago I fielded a 2-3 minute survey through mTurk at the cost of $0.25 per survey response using Qualtrics to program the survey. Paying 1,000 respondents plus Amazon’s fee came to a total bill of $275 for the survey. With the exception of explaining this all to WashU’s human subjects folks over the course of the month, this is as easy as it gets.
My experimental question framed India’s agriculture policies as either more or less generous than their neighbors’. No need to get into the details. For both groups, almost exactly 90% of respondents supported increasing agriculture protection.
Previous research has suggested that less educated individuals and women tend to be more in favor of trade protection. My survey was over 65% men, 80% of respondents had completed college, and the majority of respondents came from relatively rich regions. My expectation was that a more representative sample would generate at least as high a level of support for agriculture protection. There is almost no variance to explain, and my experimental treatments had literally zero effect.
I think this is intellectually interesting, but a professional roadblock. What do I do with these results? Do I write a paper based on this survey? I’m 100% sure this paper wouldn’t get published. I’m thinking about adding this to my paper on the US survey experiment, but I'm not sure how exactly to fit this into the paper.
But it is an experiment I designed and these are clearly null results. A few people in political science have pitched the idea of registering experiments and I’ve even talked with a few colleagues about getting a journal special issue to commit to publishing work based on the experimental design, not on the results (and thus precommitting to publish, or at least not hold against an author, null results).
This is a pretty minor project, but since I blogged about the US results I thought I would be intellectually honest and present the India results. I also found similar null results in the UK, but probably for different reasons.