As experimental researchers, we try to design studies that prevent or handle confounds. We try to ensure that the groups we are comparing differ only in what we care about. To learn whether coin flipping or die throwing leads to more lying, we randomly assign a single pool of participants to do one task or the other, matching the tasks on everything else that matters (incentives, instructions, whether participants are nuns, etc.). This is Research Methods 101.

But somehow, once the word “meta-analysis” is put in front of us, it’s like everything we learned (or even teach!) in Research Methods 101 temporarily exits our brain, replaced by the mantra that “meta-analysis is the gold standard”. We ignore the fact that coin-flip tasks and die-roll tasks may differ in all sorts of ways: not only in the psychology the tasks evoke in participants, but in whether the studies are run on economics students or nuns, in whether the incentive to lie is small or large, in how dishonesty is coded, etc. And yet we trust analyses that compare them as though they were on equal footing.
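To make the worry concrete, here is a minimal simulation sketch (all numbers hypothetical, not taken from any real meta-analysis): suppose die-roll studies in the literature happen to use larger incentives, and larger incentives happen to reduce lying. A naive meta-analytic comparison of the two task types then “finds” a task effect that is really just the incentive confound.

```python
# Minimal sketch (hypothetical numbers): a confound between task type and
# incentive size masquerades as a task effect in a naive meta-analysis.
import numpy as np

rng = np.random.default_rng(0)
n_studies = 200

# Suppose die-roll studies tend to use larger incentives (the confound).
is_die_roll = rng.random(n_studies) < 0.5
incentive = np.where(is_die_roll,
                     rng.normal(10, 2, n_studies),   # larger stakes
                     rng.normal(2, 1, n_studies))    # smaller stakes

# True data-generating process: lying depends on incentive, NOT on task.
true_task_effect = 0.0
lying_rate = (0.40 - 0.01 * incentive
              + true_task_effect * is_die_roll
              + rng.normal(0, 0.03, n_studies))

# Naive meta-analytic comparison: average lying rate by task type.
naive_diff = lying_rate[is_die_roll].mean() - lying_rate[~is_die_roll].mean()
print(f"Naive 'task effect': {naive_diff:+.3f}")  # ~ -0.08, purely the confound
```

The simulated true task effect is zero, yet the between-task comparison recovers a sizable difference, because task type and incentive size are correlated across studies rather than randomly assigned within them.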
Joe Leif Uri