We tend to use predicted values for missing variables. One of my advisors would recommend doing it a bunch of different ways and hoping they all tell the same story in the end.
I thought we could "trust the science"... :)
reply
Ah yes, the magical "robustness check".
Possibly the exercise most commonly requested by referees, yet the least scientifically grounded one (at least from the point of view of statistical rigor).
For those not in the know, it basically means re-running your analysis under a variety of different assumptions; if the main result still holds, the result is "robust".
I have to say, there's a certain logic to it---but it's strange that econometricians obsess over how to formally calculate the asymptotic variance of an estimator, and then on the other hand ask for these totally hand-wavy exercises like robustness checks.
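To make the exercise concrete, here's a minimal sketch in Python of what a robustness check usually looks like in practice. The simulated data, the `ols_slope` helper, and the particular specifications (adding a control, trimming outliers, subsampling) are all my own illustrative choices, not anyone's actual workflow:

```python
import numpy as np

# Simulated data with a known positive effect of x on y.
rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
z = 0.5 * x + rng.normal(size=n)          # a control correlated with x
y = 2.0 * x + 1.0 * z + rng.normal(size=n)

def ols_slope(yv, regressors):
    """OLS via least squares; return the coefficient on the first regressor."""
    X = np.column_stack([np.ones(len(regressors[0])), *regressors])
    beta, *_ = np.linalg.lstsq(X, yv, rcond=None)
    return beta[1]

keep = np.abs(x) < 2                      # crude outlier trim

# The "robustness check": re-estimate under several different assumptions.
specs = {
    "baseline":        ols_slope(y, [x]),
    "with control z":  ols_slope(y, [x, z]),
    "trim outliers":   ols_slope(y[keep], [x[keep]]),
    "first half only": ols_slope(y[: n // 2], [x[: n // 2]]),
}
for name, b in specs.items():
    print(f"{name:16s} slope on x = {b:+.2f}")
```

If every row reports a similar positive slope, the referee-pleasing conclusion is that the result is "robust"---note there's no formal test here, just an eyeball comparison across specifications.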
reply
In my advisor's defense, she doesn't care that much about econometric nit-picking. Her preference is very much to find highly defensible natural experiments and then do a simple regression analysis.
I actually hadn't even connected this practice to robustness checks in my mind, because it comes up towards the beginning of the process. It always just seemed like her attitude was "Why don't you see if it's a problem before you spend a bunch of time worrying about it?"
reply
Oh yeah I’m not really criticizing robustness checks.
If anything, the people who trust too much in the formal statistics deserve more criticism (imo). And I’m mainly referring to social science here as well.
reply
I remember my office-mate going down a crazy rabbit hole trying to figure out exactly what the right standard error calculations should be for his job market paper. He probably spent a month obsessing over it and never did come to a definitive conclusion.
Probably no surprise, but each of the half dozen different approaches told pretty much the same story.
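That matches my experience: the choice among standard-error estimators often changes the numbers a little but not the story. As a hedged illustration (simulated data and estimator choices are mine, not your office-mate's), here is the classical homoskedastic standard error next to a heteroskedasticity-robust HC0 "sandwich" one, computed by hand:

```python
import numpy as np

# Simulated regression with heteroskedastic errors (variance grows with |x|).
rng = np.random.default_rng(1)
n = 1000
x = rng.normal(size=n)
y = 1.5 * x + rng.normal(size=n) * (1 + np.abs(x))

X = np.column_stack([np.ones(n), x])
beta = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ beta
XtX_inv = np.linalg.inv(X.T @ X)

# Classical SE: assumes a constant error variance sigma^2.
sigma2 = resid @ resid / (n - 2)
se_classical = np.sqrt(sigma2 * XtX_inv[1, 1])

# HC0 robust SE: the "sandwich" bread^-1 @ meat @ bread^-1.
meat = X.T @ (X * resid[:, None] ** 2)
se_robust = np.sqrt((XtX_inv @ meat @ XtX_inv)[1, 1])

print(f"slope = {beta[1]:.2f}")
print(f"classical SE = {se_classical:.3f}, robust SE = {se_robust:.3f}")
```

The two standard errors differ (the robust one is wider here, by construction), but the slope is so precisely estimated that the qualitative conclusion is identical either way---which is the "same story" phenomenon in miniature.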
reply