2013-02-14

Reflections on a Free Online Course on Quantitative Methods in Political Science

Last year I watched some videos of Gary King's lectures on Advanced Quantitative Research Methodology (GOV 2001). The course teaches aspiring political scientists how to develop new approaches to research methods, data analysis, and statistical theory. The course material (videos and slides) still seems to be online; a new iteration of the course apparently started at the end of January 2013.

I only watched some of the videos and did not work through the assignments. Nevertheless, I learned a lot, and I am writing this post to reduce my pile of loose leaves (a new year's resolution) and to summarize the takeaways.

Theoretical concepts

In one of the first lessons, the goals of empirical research are partitioned step by step until the concept of counterfactual inference appears, a new term for me. It denotes “using facts you know to learn facts you cannot know, because they do not (yet) exist” and can be further differentiated into prediction, what-if analysis, and causal inference. I liked the stepwise approximation to the concept: summarizing vs. inference, descriptive inference vs. counterfactual inference.

The course presented a likelihood theory of inference. New to me was the likelihood axiom, which states that a likelihood function L(t'|y) must be proportional to the probability of the data given the hypothetical parameter and the “model world”. Proportional here means up to a constant that depends only on the data y, i.e. L(t'|y) = k(y) P(y|t'). Likelihood is a relative measure of uncertainty, relative to the data set y; comparisons of likelihood values across data sets are meaningless. The data affect inferences only through the likelihood function.
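
As a toy illustration of these two points (my own sketch, not part of the course material): for a fixed data set y, comparing the likelihood across candidate parameter values is meaningful, while the constant k(y) drops out of any such comparison, so absolute likelihood values from different data sets tell us nothing. Assuming a simple Poisson model in Python:

    import numpy as np
    from scipy import stats

    # Hypothetical event counts (stand-in for some data set y)
    y = np.array([2, 3, 0, 4, 1, 2, 3])

    def log_likelihood(lam, y):
        """Poisson log-likelihood of the data y at the candidate rate lam."""
        return stats.poisson.logpmf(y, lam).sum()

    # Comparing candidate parameter values for the *same* data is meaningful ...
    for lam in (1.0, 2.0, y.mean()):
        print(f"lambda={lam:.2f}  log L = {log_likelihood(lam, y):.3f}")

    # ... but the absolute level of the (log-)likelihood also contains the
    # data-dependent constant k(y), so values computed on different data sets
    # are not comparable with each other.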

In contrast to likelihood inference, Bayesian inference models a posterior distribution P(t'|y) which incorporates prior information P(t'), i.e. P(t'|y) = P(t') P(y|t') / P(y). To me, the likelihood theory of inference seems more straightforward, as it is not necessary to specify the prior information P(t'). I had heard of the debates between “frequentists” and “Bayesians”, but it was new to me to hear of a third group, the “Likelihoodists”.
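
To see the two approaches side by side on a toy problem (again my own sketch, not from the lectures), assume a Bernoulli model with success probability t': the likelihoodist looks at L(t'|y) = k(y) P(y|t') alone, while the Bayesian multiplies P(y|t') by a prior, here an arbitrary Beta(2, 2), and normalizes to obtain the posterior:

    import numpy as np
    from scipy import stats

    # Hypothetical binary outcomes
    y = np.array([1, 0, 1, 1, 0, 1, 1, 1])
    n, k = len(y), int(y.sum())

    # Discrete grid of candidate parameter values
    theta = np.linspace(0.001, 0.999, 999)

    # Likelihood of the parameter given the data (known only up to k(y))
    likelihood = theta**k * (1 - theta)**(n - k)

    # Bayesian posterior: prior times likelihood, normalized over the grid
    prior = stats.beta.pdf(theta, 2, 2)
    posterior = prior * likelihood
    posterior /= posterior.sum()

    print("Maximum likelihood estimate:", theta[np.argmax(likelihood)])
    print("Posterior mean:             ", np.sum(theta * posterior))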

Modeling

At the beginning of the course, some model specifications with “systematic” and “stochastic” components were introduced. I like this notation; it makes very clear what is going on and where the uncertainty is.
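
For example (my own illustration of the notation, not a slide from the course), a Poisson regression written in this style has the stochastic component Y_i ~ Poisson(\lambda_i) and the systematic component \lambda_i = exp(x_i \beta). A small simulation in that spirit:

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical covariate and coefficients
    n = 1000
    x = rng.normal(size=n)
    beta0, beta1 = 0.5, 0.8

    # Systematic component: how the expected count varies with x
    lam = np.exp(beta0 + beta1 * x)

    # Stochastic component: where the uncertainty enters
    y = rng.poisson(lam)

    print("mean(y) =", y.mean())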

The negative binomial distribution was motivated as a compound of the Poisson and the Gamma distribution (also known as a Gamma mixture): it can be viewed as a Poisson distribution whose rate parameter is itself a random variable, distributed according to a Gamma distribution. With g(y|\lambda) as the density of the Poisson distribution and h(\lambda|\phi, \sigma^2) as the density of the Gamma distribution, the negative binomial density f arises by integrating \lambda out of their joint distribution: f(y|\phi, \sigma^2) = \int_0^\infty g(y|\lambda) h(\lambda|\phi, \sigma^2) d\lambda
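
A quick simulation check of this mixture (my own sketch; the Gamma shape r and scale s below are arbitrary and do not correspond to the \phi, \sigma^2 parameterization used in the course): draw a rate from a Gamma distribution for each observation, then a count from a Poisson with that rate, and compare the resulting frequencies with the negative binomial pmf.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    # Hypothetical Gamma parameters (shape r, scale s) for the Poisson rate
    r, s = 3.0, 2.0

    # Draw a rate for each observation, then a count given that rate
    lam = rng.gamma(shape=r, scale=s, size=100_000)
    y = rng.poisson(lam)

    # The marginal distribution of y is negative binomial with n=r, p=1/(1+s)
    p = 1.0 / (1.0 + s)
    for k in range(5):
        print(f"P(Y={k}): simulated {np.mean(y == k):.4f}, "
              f"negative binomial {stats.nbinom.pmf(k, r, p):.4f}")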

There were many other modeling topics, including missing value imputation and matching as a technique for causal inference. I did not look into them; maybe later/someday.

The assignments moved very quickly to simulation techniques. I did not work through them, but got interested in the subject and will work through some chapters of Ripley's “Stochastic Simulation” book when time permits.

Didactical notes

I was really impressed by the effort Mr. King and his teaching assistants put into teaching the material. Students taking the (non-free) full course prepare replication papers. The assignments involve programming. In the lectures, quizzes are shown; the students vote via Facebook and the result is presented two minutes later. About once per lecture, the professor interrupted his talk and said: “Here is the question; discuss this with your neighbour for five minutes.” Very good idea.
