Workshop Description

The Third Workshop on Higher-Order Asymptotics and Post-Selection Inference (WHOA-PSI 3) seeks to build on the success of the first two workshops by presenting the latest developments in post-selection inference, and by discussing how tools from higher-order asymptotics can both elucidate important properties of post-selection inference procedures and suggest new directions that may ultimately yield more accurate small-sample performance. The workshop format is intended to encourage collaboration and lively discussion, and to give a voice to all participants through online discussion forums (a successful experiment from the first two workshops). More specific details will be posted below. Contact: Todd Kuffner, email: kuffner@wustl.edu
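
To make the central issue concrete, here is a minimal simulation sketch in Python (an illustration of the general phenomenon, not any particular speaker's method; the sample sizes and variable names are arbitrary choices). It fits a regression under a global null, selects the predictor most correlated with the response, and checks how often the naive 95% interval for the selected coefficient covers the true value of zero:

    import numpy as np

    # Global null: every true coefficient is zero, so a valid 95% interval
    # for the selected coefficient should cover 0 about 95% of the time.
    rng = np.random.default_rng(0)
    n, p, reps = 50, 10, 2000
    cover = 0
    for _ in range(reps):
        X = rng.standard_normal((n, p))
        y = rng.standard_normal(n)
        j = int(np.argmax(np.abs(X.T @ y)))  # select the predictor most correlated with y
        xj = X[:, j]
        beta_hat = xj @ y / (xj @ xj)        # OLS fit on the selected predictor alone
        resid = y - beta_hat * xj
        se = np.sqrt(resid @ resid / (n - 1) / (xj @ xj))
        cover += (beta_hat - 1.96 * se) <= 0.0 <= (beta_hat + 1.96 * se)
    print(f"naive coverage after selection: {cover / reps:.2f}")

Because the interval ignores the selection step, its realized coverage falls far below the nominal 95% (roughly 60% under these settings); quantifying and correcting such distortions is the subject of this workshop.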

This conference supports the Welcoming Environment Statement of the Association for Women in Mathematics (AWM).

We gratefully acknowledge our sponsors.

Update (September 4, 2018): The schedule and other details can be found below. These links will be updated if there are any changes.


Location and travel instructions: click here
A picture of the room: click here
Parking instructions: click here
Restaurant information: click here
Meeting logistics (no schedule here): click here
Todd's Stuffed Animal Notification System: click here
AllAnswered.com Forum: click here
Titles and abstracts: click here


Organizing Committee

List of Registered Participants

Schedule

Detailed Schedule (pdf)

Speakers

*Confirmed

Registration Details


Click here to register. The registration fee is $320, which covers all breakfasts, lunches, coffee breaks, and the banquet dinner on Saturday, September 8. The registration deadline has been extended to August 31 at 11:59 PM US Central Time. Spaces are limited, and registration may close sooner if capacity is reached, so early registration is recommended. The number of available places is listed at the bottom of the registration page.

Dates, Times, and Location

The workshop runs for three full days. Talks will begin around 8:30 AM on Saturday, September 8, and end by 5:00 PM on Monday, September 10, 2018. The workshop will take place at a conference center on the campus of Washington University in St. Louis, in St. Louis, Missouri, USA.

Availability of Funding for Junior Participants

We have allocated all available NSF funds to registered participants. Unfortunately, no additional financial support is available.

Poster Session

There will be one or two poster sessions. Anyone wishing to present a poster should indicate this on the online registration form. The topic of the poster should be related to the content of the workshop. Please check with the organizers if you are unsure about the suitability of your poster topic. The deadline for poster titles and abstracts is August 14. More details about the poster session will be emailed to poster presenters after registration closes.

Lodging Information

All workshop participants must make their own lodging arrangements. The Knight Center (the on-campus location of the workshop) is holding 66 guest rooms at a special rate of $119/night for each of the four nights of September 7-10. These rooms are available on a first-come, first-served basis, and the deadline to book a room is August 17. Each room has a queen-size bed, and the price includes parking. To reserve your room, please call the Knight Center at +1 314-933-9400 and use the block code 'Mango Statistics' to book one of the rooms being held for workshop participants. Alternatively, nearby hotels include the Clayton Plaza Hotel and the Moonrise Hotel.

Local Information


For those arriving early or staying longer, St. Louis is a lovely place to visit. Besides the iconic Gateway Arch and the nearby Old Courthouse, which houses exhibits on the Dred Scott case, St. Louis has a stunning botanical garden, a high density of good restaurants (BBQ is a specialty), and proximity to several rivers (the Missouri, Mississippi, and Meramec) that are great for float trips. There are many nearby parks and nature reserves that are excellent for hiking, as well as a wolf sanctuary. Mark Twain's boyhood home lies an hour north of the city. Anheuser-Busch is headquartered in St. Louis and offers brewery tours (advance booking required due to popularity). For those unfamiliar with the institution, Washington University in St. Louis is a leading national research university, ranked 23rd in the world in the 2016 Academic Ranking of World Universities. Our statistics presence is concentrated in the Department of Mathematics. You are encouraged to look around this beautiful campus on the western edge of St. Louis, which faces Forest Park, the site of the 1904 World's Fair and home to the Saint Louis Zoo and Saint Louis Art Museum (both free admission and within walking distance of campus).


Potential topics include (but are certainly not limited to) the following. Participants: feel free to send me updates!
Principles and general views of post-selection inference, for example
Benjamini (2010). Simultaneous and selective inference: current successes and future challenges. Biometrical Journal 52, 708-721.
Taylor & Tibshirani (2015). Statistical learning and selective inference. Proceedings of the National Academy of Sciences 112, 7629-7634.
Leeb & Pötscher (2005). Model selection and inference: facts and fiction. Econometric Theory 21, 21-59.

Foundations (general, not necessarily post-selection), for example
Keli Liu & Xiao-Li Meng (2016). There is individualized treatment. Why not individualized inference? Annual Review of Statistics and Its Application 3, 79-111.
Suzanne Thornton & Min-ge Xie (2017). Approximate confidence distribution computing: an effective likelihood-free method with statistical guarantees. arXiv:1705.10347.
Ryan Martin & Chuanhai Liu (2016). Validity and the foundations of statistical inference. arXiv:1607.05051.

Comparisons of naive intervals and post-selection inference, for example
Zhao, Shojaie & Witten (2017). In defense of the indefensible: a very naive approach to high-dimensional inference. arXiv:1705.05543.
Leeb, Pötscher & Ewald (2015). On various confidence intervals post-model-selection. Statistical Science 30, 216-227.

Incorporating resampling and asymptotic refinements into inference procedures relevant for this workshop, for example
Stephen M. S. Lee & Yilei Wu (2017). Resampling-based post-model-selection inference for linear regression models.
Andreas Buja & Werner Stuetzle (2017). Smoothing effects of bagging: von Mises expansions of bagged statistical functionals. arXiv:1612.02528.
Noureddine El Karoui & Elizabeth Purdom (2015). Can we trust the bootstrap in high dimensions? Submitted.
Noureddine El Karoui & Elizabeth Purdom (2016). The bootstrap, covariance matrices, and PCA in moderate and high-dimensions. Submitted.
McCarthy, Zhang, Brown, Berk, Buja, George & Zhao (2017). Calibrated percentile double bootstrap for robust linear regression inference. Statistica Sinica, accepted.
Mayya Zhilova (2016). Non-classical Berry-Esseen inequality and accuracy of the weighted bootstrap. arXiv:1611.02686.
Mayya Zhilova (2015). Simultaneous likelihood-based bootstrap confidence sets for a large number of models. arXiv:1506.05779.
Ian McKeague & Min Qian (2015). An adaptive resampling test for detecting the presence of significant predictors (with discussion). J. Amer. Statist. Assoc. 110, 1422-1433.

Cross-validation, AIC, and post-selection inference and prediction, for example
Jing Lei (2017). Cross-validation with confidence. arXiv:1703.07904.
Ali Charkhi & Gerda Claeskens (2017). Asymptotic post-selection inference for Akaike's information criterion.
Lukas Steinberger & Hannes Leeb (2016). Leave-one-out prediction intervals in linear regression models with many variables. arXiv:1602.05801.
Liang Hong, Todd Kuffner & Ryan Martin (2018). On overfitting and post-selection uncertainty assessments. Biometrika 105(1), 221-224.
Liang Hong, Todd Kuffner & Ryan Martin (2017). On prediction of future insurance claims when the model is uncertain. Submitted.
François Bachoc, Hannes Leeb & Benedikt Pötscher (2017). Valid confidence intervals for post-model-selection predictors. arXiv:1412.4605.
Hannes Leeb (2009). Conditional predictive inference post model selection. Annals of Statistics 37(5B), 2838-2876.
Hannes Leeb (2008). Evaluation and selection of models for out-of-sample prediction when the sample size is small relative to the complexity of the data-generating process. Bernoulli 14(3), 661-690.

Assumption-lean and distribution-free inference, conformal prediction, and robustness, for example (a minimal split-conformal sketch follows this list)
Kuchibhotla, Brown, Buja, George & Zhao (2018). Valid post-selection inference in assumption-lean linear regression. arXiv:1806.04119.
Kuchibhotla, Brown, Buja, George & Zhao (2018). A model-free perspective for linear regression: uniform-in-model bounds for post-selection inference. arXiv:1802.05801.
Buja, Berk, Brown, George, Kuchibhotla & Zhao. Models as approximations II: a general theory of model-robust regression. arXiv:1612.03257.
Anru Zhang, Larry Brown & Tony Cai (2016). Semi-supervised inference: general theory and estimation of means. arXiv:1606.07268.
Lei, G'Sell, Rinaldo, Tibshirani & Wasserman (2017). Distribution-free predictive inference for regression. J. Amer. Statist. Assoc., to appear.
Fan Yang & Rina Foygel Barber (2017). Contraction and uniform convergence of isotonic regression. arXiv:1706.01852.
Azriel, Brown, Sklar, Berk, Buja & Zhao (2016). Semi-supervised linear regression. arXiv:1612.02391.
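
For readers new to conformal prediction (see Lei, G'Sell, Rinaldo, Tibshirani & Wasserman above), here is a minimal split-conformal sketch in Python; the OLS base predictor, the function name, and the synthetic data are illustrative assumptions, not the cited authors' implementation:

    import numpy as np

    def split_conformal_interval(X, y, x_new, alpha=0.1, seed=0):
        # Split-conformal interval around an OLS fit: train on one half of the
        # data, use absolute residuals on the other half as conformity scores.
        rng = np.random.default_rng(seed)
        idx = rng.permutation(len(y))
        train, calib = idx[: len(y) // 2], idx[len(y) // 2 :]
        A = np.column_stack([np.ones(len(train)), X[train]])
        coef, *_ = np.linalg.lstsq(A, y[train], rcond=None)
        A_cal = np.column_stack([np.ones(len(calib)), X[calib]])
        scores = np.sort(np.abs(y[calib] - A_cal @ coef))
        # (1 - alpha) conformal quantile: the ceil((1 - alpha)(m + 1))-th smallest score.
        m = len(scores)
        k = min(int(np.ceil((1 - alpha) * (m + 1))), m)
        q = scores[k - 1]
        pred = np.concatenate(([1.0], np.atleast_1d(x_new))) @ coef
        return pred - q, pred + q

    # Example: the interval has >= 90% marginal coverage for an exchangeable
    # new point, with no assumptions on the error distribution.
    rng = np.random.default_rng(1)
    X = rng.standard_normal((200, 3))
    y = X @ np.array([1.0, 0.0, -2.0]) + rng.standard_normal(200)
    print(split_conformal_interval(X, y, x_new=rng.standard_normal(3)))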

Statistical efficiency and inference in machine learning, for example
Susan Athey & Stefan Wager (2017). Efficient policy learning. arXiv:1702.02896.
Qingyuan Zhao & Trevor Hastie (2017). Causal interpretations of black-box models. J. of Business & Economic Statistics, to appear.

Model-based clustering, cluster-based models, and inference, for example
Bunea, Eisenbach, Ning, Dinicu & Liu (2017). Inference in cluster-based graphical models.
Bunea, Ning & Wegkamp (2017). Overlapping clustering with statistical guarantees. arXiv:1704.06977.

Selective inference (conditional approaches), for example
Azaïs, de Castro & Mourareau (2018). Power of the spacing test for least-angle regression. Bernoulli 24(1), 465-492.
Qingyuan Zhao, Dylan Small & Ashkan Ertefaie (2017). Selective inference for effect modification via the lasso. arXiv:1705.08020.
Yuval Benjamini, Jonathan Taylor & Rafael Irizarry (2016). Selection corrected statistical inference for region detection with high-dimensional throughput assays. bioRxiv preprint.
Hyun, G'Sell & Tibshirani (2016). Exact post-selection inference for changepoint detection and other generalized lasso problems. arXiv:1606.03552.
Taylor & Tibshirani (2016). Post-selection inference for L1-penalized likelihood models. arXiv:1602.07358.
Fithian, Taylor, Tibshirani & Tibshirani (2015+). Selective sequential model selection. arXiv:1512.02565.
Tibshirani, Taylor, Lockhart & Tibshirani (2015+). Exact post-selection inference for sequential regression procedures. J. Amer. Statist. Assoc., to appear.
Lockhart, Taylor, Tibshirani & Tibshirani (2014). A significance test for the lasso. Annals of Statistics 42, 413-468.
Tibshirani, Rinaldo, Tibshirani & Wasserman (2015). Uniform asymptotic inference and the bootstrap after model selection. arXiv:1506.06266.
Tian & Taylor (2015). Asymptotics of selective inference. arXiv:1501.03588.
Lee, Sun, Sun & Taylor (2016). Exact post-selection inference with the lasso. Annals of Statistics, to appear.

Simultaneous inference, false discovery rates (FDR), false coverage-statement rates (FCR), and family-wise error rates (FWER), for example (a minimal Benjamini-Hochberg sketch follows this list)
Pallavi Basu, Tony Cai, Kiranmoy Das & Wenguang Sun (2018). Weighted false discovery rate control in large-scale multiple testing. J. Amer. Statist. Assoc., to appear.
Katsevich & Ramdas (2018). Towards "simultaneous selective inference": post-hoc bounds on the false discovery proportion. arXiv:1803.06790.
Ramdas, Barber, Wainwright & Jordan (2017). A unified treatment of multiple testing with prior knowledge using the p-filter. arXiv:1703.06222.
Lihua Lei, Aaditya Ramdas & Will Fithian (2017). STAR: a general interactive framework for FDR control under structural constraints. arXiv:1710.02776.
Bachoc, Preinerstorfer & Steinberger (2017). Uniformly valid confidence intervals post-model-selection. arXiv:1611.01043.
Berk, Brown, Buja, Zhang & Zhao (2013). Valid post-selection inference. Annals of Statistics 41, 802-837.
Benjamini (2010). Discovering the false discovery rate. J. Roy. Statist. Soc. Ser. B 72, 405-416.
Benjamini & Yekutieli (2005). False discovery rate-adjusted multiple confidence intervals for selected parameters. J. Amer. Statist. Assoc. 100, 71-93.
G'Sell, Wager, Chouldechova & Tibshirani (2015+). Sequential selection procedures and false discovery rate control. J. Roy. Statist. Soc. Ser. B, to appear.
Barber & Candès (2015). Controlling the false discovery rate via knockoffs. Annals of Statistics 43, 2055-2085.
Su, Bogdan & Candès (2016+). False discoveries occur early on the Lasso path. arXiv:1511.01957.
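
As background for the FDR entries above, here is a minimal Python sketch of the classical Benjamini-Hochberg step-up procedure; the function name and the synthetic example are illustrative:

    import numpy as np

    def benjamini_hochberg(pvals, q=0.1):
        # Step-up rule: reject the k smallest p-values, where k is the largest
        # i with p_(i) <= q * i / m; controls FDR at level q for independent
        # (or positively dependent) p-values.
        p = np.asarray(pvals, dtype=float)
        m = p.size
        order = np.argsort(p)
        below = p[order] <= q * np.arange(1, m + 1) / m
        k = int(np.nonzero(below)[0].max()) + 1 if below.any() else 0
        reject = np.zeros(m, dtype=bool)
        reject[order[:k]] = True
        return reject

    # Example: 90 uniform (null) p-values plus 10 very small (signal) p-values.
    rng = np.random.default_rng(2)
    pvals = np.concatenate([rng.uniform(size=90), rng.uniform(0, 1e-3, size=10)])
    print(int(benjamini_hochberg(pvals, q=0.1).sum()), "rejections")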

Principled statistical inference, for example
Todd Kuffner & Alastair Young (2017). Principled statistical inference in data science. Submitted.
Todd Kuffner & Alastair Young (2017). Philosophy of science, principled statistical inference, and data science.

Bayesian post-selection inference, for example
Panigrahi, Taylor & Weinstein (2016). Bayesian post-selection inference in the linear model. arXiv:1605.08824.
Yekutieli (2012). Adjusted Bayesian inference for selected parameters. J. Roy. Statist. Soc. Ser. B 74(3), 515-541.

Bagging and boosting, for example
Bradic (2016). Randomized maximum-contrast selection: subagging for large-scale regression. Electronic J. Statist. 10(1), 121-170.
Li & Bradic (2015). Boosting in the presence of outliers: adaptive classification with non-convex loss functions. arXiv:1510.01064.
Efron (2014). Estimation and accuracy after model selection. J. Amer. Statist. Assoc. 109, 991-1007.
Bühlmann & Yu (2002). Analyzing bagging. Annals of Statistics 30, 927-961.

High-dimensional inference, for example
Po-Ling Loh (2017). Statistical consistency and asymptotic normality for high-dimensional robust M-estimators. Annals of Statistics 45(2), 866-896.
Fan, Shao & Zhou (2015). Are discoveries spurious? Distributions of maximum spurious correlations and their applications. arXiv:1502.04237.
Cai & Guo (2015). Confidence intervals for high-dimensional linear regression: minimax rates and adaptivity. arXiv:1506.05539.
Ning & Liu (2015). A general theory of hypothesis tests and confidence regions for sparse high dimensional models. arXiv:1412.8765.
Ning, Zhao & Liu (2015). A likelihood ratio framework for high dimensional semiparametric regression. arXiv:1412.2295.
Shah & Samworth (2013). Variable selection with error control: another look at stability selection. J. Roy. Statist. Soc. Ser. B 75, 55-80.
Meinshausen & Bühlmann (2010). Stability selection. J. Roy. Statist. Soc. Ser. B 72, 417-473.
van de Geer, Bühlmann, Ritov & Dezeure (2014). On asymptotically optimal confidence regions and tests for high-dimensional models. Annals of Statistics 42, 1166-1202.
Javanmard & Montanari (2015+). Hypothesis testing in high-dimensional regression under the Gaussian random design model: asymptotic theory. IEEE Trans. Inform. Theory, to appear.
Liu & Yu (2013). Asymptotic properties of Lasso+mLS and Lasso+Ridge in sparse high-dimensional linear regression. Electronic J. Statist. 7, 3124-3169.
Zhang & Zhang (2014). Confidence intervals for low-dimensional parameters in high-dimensional linear models. J. Roy. Statist. Soc. Ser. B 76, 217-242.
Belloni, Chernozhukov & Hansen. Inference methods for high-dimensional sparse econometric models. Advances in Economics & Econometrics, Econometric Society World Congress 2010.

Selection and inference for weak signals, for example
Shi & Qu (2016). Weak signal identification and inference in penalized model selection. Annals of Statistics, to appear.
Jeng (2016). Detecting weak signals in high dimensions. J. Multivariate Anal. 147, 234-246.


The workshop will emphasize the aspects of the above topics, and of other post-selection inference procedures, that relate to higher-order asymptotics, including both analytic and resampling-based tools and refinements. Some recent overviews of post-selection inference include:
Chapter 3 of Fithian (2015). Topics in Adaptive Inference. Ph.D. thesis, Stanford University.
Chapter 6 of Hastie, Tibshirani & Wainwright (2015). Statistical Learning with Sparsity: The Lasso and Generalizations. Chapman & Hall.
Chapters 10-11 of Bühlmann & van de Geer (2011). Statistics for High-Dimensional Data. Springer.