Math 5062
Spring 2015

For future reference, this page has been updated to reflect the topics actually covered.

Course Details:

Instructor: Todd Kuffner

Lecture: 8:30-10am, Monday and Wednesday, Cupples I, Room 218

Required textbooks:
We also made use of the following references:
Exams: one midterm and one final (also serving as a Ph.D. qualifying exam for those students electing to take it)

Homework: homework assignments were given throughout the semester

In-class presentation: students were required to read a recent paper in a leading statistics journal and give a critical presentation of the methods, making reference to tools learned during Math 5061-5062. Two groups (four students each) were formed, and each group gave a 40-minute slide presentation on one of the following two papers:
Grades: 30% Homework, 25% Midterm, 35% Final, 10% In-class presentation

Topics List (reflecting what was actually covered):
  1. Preliminaries: stochastic orders of magnitude, stochastic convergence, characteristic functions, multivariate normal distribution, Kullback-Leibler divergence, pivots, asymptotically pivotal quantities
  2. Optimal Estimation: asymptotic unbiasedness, consistency, various WLLN and SLLN
  3. Convergence in Distribution: continuous mapping theorem, Helly-Bray theorem, dominated convergence theorem, Skorohod's representation theorem, the joys of Fatou's lemma
  4. Asymptotic normality: Cramér-Wold theorem, Lindeberg and Lyapunov conditions, triangular arrays, Lindeberg-Feller CLT
  5. Refinements: Berry-Esseen theorem, Edgeworth expansions
  6. Toolbox: delta method, variance stabilizing transformations
  7. Applications: asymptotic distributions of sample moments and order statistics
  8. Likelihood theory: MLE consistency, Fisher information, asymptotic normality, MLE with many parameters; profile likelihood; Wald, score and LR tests; Wilks' theorem
  9. Bayesian Parametric Asymptotics: Scheffé's theorem, Bernstein-von Mises theorem, Doob's theorem, differences between Bayes estimates and the MLE
  10. Optimal Hypothesis Testing: consistency for tests, consistency under local alternatives, asymptotic relative efficiency (in the context of testing)
  11. Frequentist Nonparametric Methods: consistency of the nonparametric bootstrap; why and how the nonparametric bootstrap can be used for bias correction and to refine first-order asymptotic distribution theory for asymptotically standard normal pivots (a small illustrative sketch follows this list)
  12. Bayesian Nonparametric Methods: prior construction via random measures, Dirichlet processes, consistency (Schwartz's theorem), posterior rates of contraction
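
The following is a minimal sketch, written for this page rather than taken from the course materials, of the two bootstrap uses mentioned in item 11: bias-correcting a plug-in estimator, and building a bootstrap-t confidence interval from the asymptotically standard normal pivot sqrt(n)(xbar - mu)/s. It is in Python with NumPy; the function names and the toy exponential sample are illustrative assumptions, not course code.

# Illustrative sketch only: nonparametric bootstrap for bias correction
# and for a bootstrap-t (percentile-t) interval based on a studentized pivot.
import numpy as np

rng = np.random.default_rng(0)

def plug_in_variance(x):
    # Plug-in (divisor-n) variance estimator; its bias is of order 1/n.
    return np.mean((x - np.mean(x)) ** 2)

def bootstrap_bias_correction(x, estimator, n_boot=2000):
    # Estimate the bias E*[theta_hat*] - theta_hat by resampling with
    # replacement, then subtract it from the original estimate.
    theta_hat = estimator(x)
    boot = np.array([estimator(rng.choice(x, size=x.size, replace=True))
                     for _ in range(n_boot)])
    bias_hat = boot.mean() - theta_hat
    return theta_hat - bias_hat

def bootstrap_t_interval(x, n_boot=2000, alpha=0.05):
    # Bootstrap-t interval for the mean: studentize within each resample so
    # the resampled pivot mimics sqrt(n)(xbar - mu)/s, which is asymptotically
    # N(0,1); its bootstrap quantiles give a refinement over normal quantiles.
    n = x.size
    xbar, s = x.mean(), x.std(ddof=1)
    t_star = np.empty(n_boot)
    for b in range(n_boot):
        xb = rng.choice(x, size=n, replace=True)
        t_star[b] = np.sqrt(n) * (xb.mean() - xbar) / xb.std(ddof=1)
    lo, hi = np.quantile(t_star, [alpha / 2, 1 - alpha / 2])
    # Quantiles are reversed when the pivot is inverted to get the interval.
    return xbar - hi * s / np.sqrt(n), xbar - lo * s / np.sqrt(n)

if __name__ == "__main__":
    x = rng.exponential(scale=2.0, size=50)  # skewed toy sample (assumption)
    print("plug-in variance:          ", plug_in_variance(x))
    print("bias-corrected variance:   ", bootstrap_bias_correction(x, plug_in_variance))
    print("bootstrap-t 95% CI for mean:", bootstrap_t_interval(x))

The refinement in the second function comes from the fact that the bootstrap distribution of the studentized pivot matches the Edgeworth expansion of the true pivot to higher order than the standard normal approximation does, which is why its quantiles, rather than normal quantiles, are used to form the interval.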