Weekly Seminar Series: Laura Liu
Oct 29, 2025
11:30 AM to 12:30 PM
Laura Liu, Associate Professor at the University of Pittsburgh, will present to our graduate students and faculty on October 29 in KTH 334!
Laura Liu’s research interests encompass both micro- and macro-econometrics. Her research is characterized by two primary themes: constructing methods that make better use of newly available granular data for estimation and forecasting, and developing semiparametric methods that balance efficiency and flexibility. She has recently worked on four topics: (1) panel data and forecasting, (2) panel data and heterogeneous effects, (3) structural macroeconomic models with granular data, and (4) large vector autoregressions and networks. Her research has been published in Econometrica, the Journal of Econometrics, Quantitative Economics, the Journal of Business & Economic Statistics, and the Journal of Applied Econometrics. She currently serves as an Associate Editor for the Journal of Applied Econometrics, The Econometrics Journal, and the Journal of Econometric Methods.
Laura will present “Bayesian Double Machine Learning for Causal Inference”.
Abstract
This paper proposes a simple, novel, and fully Bayesian approach for causal inference in partially linear models with high-dimensional control variables. Off-the-shelf machine learning methods can introduce a bias in the causal parameter known as regularization-induced confounding. To address this, we propose a Bayesian Double Machine Learning (BDML) method, which modifies a standard Bayesian multivariate regression model and recovers the causal effect of interest from the reduced-form covariance matrix. Our BDML is related to the burgeoning frequentist literature on DML while addressing its limitations in finite-sample inference. Moreover, the BDML is based on a fully generative probability model in the DML context, adhering to the likelihood principle. We show that in high-dimensional setups the naive estimator implicitly assumes no selection on observables, unlike our BDML. The BDML exhibits lower asymptotic bias and achieves asymptotic normality and semiparametric efficiency, as established by a Bernstein-von Mises theorem, thereby ensuring robustness to misspecification. In simulations, our BDML achieves lower RMSE, better frequentist coverage, and shorter confidence interval width than alternatives from the literature, both Bayesian and frequentist.
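For readers who want a concrete picture ahead of the talk, the sketch below illustrates the frequentist cross-fitted DML estimator for the partially linear model Y = θD + g(X) + ε that the abstract contrasts with BDML: nuisance functions are fit out-of-fold, and θ is recovered by regressing the residualized outcome on the residualized treatment. Everything here is a hypothetical illustration under assumed choices (Lasso nuisance learners, five folds, a simulated data-generating process with true θ = 1); it is not the paper’s BDML method, which is Bayesian and recovers the causal effect from the reduced-form covariance matrix.

```python
# Illustrative sketch only: cross-fitted frequentist DML for the partially
# linear model, as referenced (and improved upon) in the abstract.
# All design choices below are assumptions for demonstration.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n, p, theta = 500, 100, 1.0

# High-dimensional controls X; D depends on X (selection on observables),
# so a naive regularized regression of Y on (D, X) is confounded.
X = rng.standard_normal((n, p))
beta = 1.0 / (1.0 + np.arange(p))           # approximately sparse nuisance coefficients
D = X @ beta + rng.standard_normal(n)       # treatment driven by the controls
Y = theta * D + X @ beta + rng.standard_normal(n)

# Cross-fitting: residualize Y and D on X using out-of-fold Lasso fits,
# then regress the Y-residual on the D-residual to estimate theta.
res_Y, res_D = np.zeros(n), np.zeros(n)
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    res_Y[test] = Y[test] - LassoCV(cv=3).fit(X[train], Y[train]).predict(X[test])
    res_D[test] = D[test] - LassoCV(cv=3).fit(X[train], D[train]).predict(X[test])

theta_hat = (res_D @ res_Y) / (res_D @ res_D)
se = np.sqrt(np.mean((res_Y - theta_hat * res_D) ** 2) / (res_D @ res_D))
print(f"DML estimate: {theta_hat:.3f} (SE {se:.3f}), true theta = {theta}")
```

The naive alternative, a single regularized fit of Y on (D, X), shrinks the control coefficients that would otherwise absorb the confounding in X, which is the regularization-induced confounding the abstract describes and which BDML is designed to avoid.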