Saturday, November 16, 2013

How Science (Econometrics?) is Really Done

If you tweet, you may be familiar with #OverlyHonestMethods. If not, this link to Popular Science will set you on the right track. As it says: "In 140 characters or less, the info that didn't get through peer review."

Here are some beauties that may strike a chord with certain applied econometricians (the second one gets a small illustrative simulation after the list):
  • "Our results were non-significant at p > 0.05, but they're humdingers at p > 0.1"
  • "Experiment was repeated until we had three statistically significant similar results and could discard the outliers"
  • "We decided to use Technique Y because it's new and sexy, plus hot and cool. And because we could."
  • "I can't send you the original data because I don't remember what my excel file names mean anymore."
  • "Non-linear regression analysis was performed in Graph Pad Prism because SPSS is a nightmare."
  • "We made a thorough comparison of all post-hoc tests while our statistician wasn't looking."
  • "Our paper lacks post-2010 references as it's taken the co-authors that long to agree on where to submit the final draft."
  • "If you pay close attention to our degrees-of-freedom you will realize we have no idea what test we actually ran."
  • "Additional variables were not considered because everyone involved is tired of working on this paper."
  • "We used jargon instead of plain English to prove that a decade of grad school and postdoc made us smart."

Oh yes!!!!

© 2013, David E. Giles

A Talk With Lars Peter Hansen

Now that some of the commotion and excitement over this year's Economics Nobel Prize has died down a little, an informal chat with co-winner Lars Peter Hansen is definitely in order.

So, a hat-tip to Mark Thoma for alerting me to this interview of Hansen by Jeff Sommer in today's New York Times. Jeff manages to get a comment about efficient markets from his interviewee.

And here's a comment that all students of econometrics should take to heart:

"The thing to remember about models is they’re always approximations and they will always turn out to be wrong at some point. When someone says all the models that economists use are wrong, well, in a sense that’s true. But you need to ask, are the models wrong in ways that are central to the questions, or are they wrong in ways that aren’t so central?
And so part of the task of statistical analysis is to look at models and try to figure out what the gaps are so that people will build better models in the future."

© 2013, David E. Giles