You Are What Your Parents Expect: Height and Local Reference Points
Recent estimates are that about 150 million children under five years of age are stunted, with substantial negative consequences for their schooling, cognitive skills, health, and economic productivity. Understanding what determines such growth retardation is therefore important for designing public policies that aim to address it. We build a model of nutritional choices and health with reference-dependent preferences: parents care about the health of their children relative to some reference population. In our empirical model, we use height as the health outcome that parents target. Reference height is an equilibrium object determined by earlier cohorts' parents' nutritional choices in the same village. We exploit the exogenous variation in reference height produced by a protein-supplementation experiment in Guatemala to estimate our model's parameters. We use our model to decompose the impact of the protein intervention on height into price and reference-point effects. We find that changes in reference points account for 65% of the height difference between two-year-old children in experimental and control villages in the sixth annual cohort born after the initiation of the intervention.
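A minimal way to write down this kind of reference-dependent preference structure is sketched below; the functional forms and symbols ($U_i$, $h_i$, $r_v$, $c_i$) are ours for illustration, not the authors' estimated specification.

```latex
% Illustrative sketch only; notation is ours, not the paper's.
% Parents of child i in village v trade off other consumption c_i against the child's
% height h_i, evaluated relative to the local reference point r_v:
U_i = u(c_i) + v\!\left(h_i - r_v\right),
\qquad
r_v = \frac{1}{|\mathcal{C}_v|} \sum_{j \in \mathcal{C}_v} h_j ,
% where \mathcal{C}_v collects children of earlier cohorts in the same village, so r_v is
% an equilibrium object: earlier cohorts' nutritional choices shift later cohorts' reference point.
```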
Dealing with imperfect randomization: Inference for the HighScope Perry Preschool Program
This paper considers the problem of making inferences about the effects of a program on multiple outcomes when the assignment of treatment status is imperfectly randomized. By imperfect randomization we mean that treatment status is reassigned after an initial randomization on the basis of characteristics that may be observed or unobserved by the analyst. We develop a partial identification approach to this problem that makes use of information limiting the extent to which randomization is imperfect to show that it is still possible to make nontrivial inferences about the effects of the program in such settings. We consider a family of null hypotheses in which each null hypothesis specifies that the program has no effect on one of several outcomes of interest. Under weak assumptions, we construct a procedure for testing this family of null hypotheses in a way that controls the familywise error rate - the probability of even one false rejection - in finite samples. We develop our methodology in the context of a reanalysis of the HighScope Perry Preschool program. We find statistically significant effects of the program on a number of different outcomes of interest, including outcomes related to criminal activity for males and females, even after accounting for the imperfectness of the randomization and the multiplicity of null hypotheses.
Econometric Causality: The Central Role of Thought Experiments
This paper examines the econometric causal model and the interpretation of empirical evidence based on thought experiments that was developed by Ragnar Frisch and Trygve Haavelmo. We compare the econometric causal model with two currently popular causal frameworks: the Neyman-Rubin causal model and the Do-Calculus. The Neyman-Rubin causal model is based on the language of potential outcomes and was largely developed by statisticians. Instead of being based on thought experiments, it takes statistical experiments as its foundation. The Do-Calculus, developed by Judea Pearl and co-authors, relies on Directed Acyclic Graphs (DAGs) and is a popular causal framework in computer science and applied mathematics. We make the case that economists who uncritically use these frameworks often discard the substantial benefits of the econometric causal model to the detriment of more informative analyses. We illustrate the versatility and capabilities of the econometric framework using causal models developed in economics.
Assumption-lean falsification tests of rate double-robustness of double-machine-learning estimators
The class of doubly robust (DR) functionals studied by Rotnitzky et al. (2021) is of central importance in economics and biostatistics. It strictly includes both (i) the class of mean-square continuous functionals that can be written as an expectation of an affine functional of a conditional expectation studied by Chernozhukov et al. (2022b) and (ii) the class of functionals studied by Robins et al. (2008). The present state-of-the-art estimators for DR functionals $\psi$ are double-machine-learning (DML) estimators (Chernozhukov et al., 2018). A DML estimator $\hat{\psi}_1$ of $\psi$ depends on estimates $\hat{b}$ and $\hat{p}$ of a pair of nuisance functions $b$ and $p$, and is said to satisfy "rate double-robustness" if the Cauchy–Schwarz upper bound of its bias is $o(n^{-1/2})$. Were it achievable, our scientific goal would have been to construct valid, assumption-lean (i.e. no complexity-reducing assumptions on $b$ or $p$) tests of the validity of a nominal $(1-\alpha)$ Wald confidence interval (CI) centered at $\hat{\psi}_1$. But this would require a test of the bias to be $o(n^{-1/2})$, which can be shown not to exist. We therefore adopt the less ambitious goal of falsifying, when possible, an analyst's justification for her claim that the reported $(1-\alpha)$ Wald CI is valid. In many instances, an analyst justifies her claim by imposing complexity-reducing assumptions on $b$ and $p$ to ensure "rate double-robustness". Here we exhibit valid, assumption-lean tests of $H_0$: "rate double-robustness holds", with non-trivial power against certain alternatives. If $H_0$ is rejected, we will have falsified her justification. However, no assumption-lean test of $H_0$, including ours, can be a consistent test. Thus, the failure of our test to reject $H_0$ is not meaningful evidence in favor of $H_0$.
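For orientation, the "Cauchy–Schwarz upper bound" mentioned above is the usual product-of-rates bound on the DML bias; written in generic notation (ours, not necessarily the paper's), the null hypothesis being tested is:

```latex
% Generic product-rate bound on the bias of a DML estimator \hat{\psi}_1 built from
% nuisance estimates \hat{b}, \hat{p}; C is a constant and \|\cdot\|_2 the L2(P) norm.
\bigl|\mathrm{Bias}(\hat{\psi}_1)\bigr|
  \;\le\; C\,\|\hat{b}-b\|_{2}\,\|\hat{p}-p\|_{2},
\qquad
H_0:\ \|\hat{b}-b\|_{2}\,\|\hat{p}-p\|_{2} = o\!\bigl(n^{-1/2}\bigr)
\quad\text{("rate double-robustness holds").}
```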
Policy evaluation during a pandemic
National and local governments have implemented a large number of policies in response to the Covid-19 pandemic. Evaluating the effects of these policies, both on the number of Covid-19 cases and on other economic outcomes, is a key ingredient for policymakers to determine which policies are most effective as well as the relative costs and benefits of particular policies. In this paper, we consider the relative merits of common identification strategies that exploit variation in the timing of policies across different locations by checking whether the identification strategies are compatible with leading epidemic models in the epidemiology literature. We argue that unconfoundedness-type approaches, which condition on the pre-treatment "state" of the pandemic, are likely to be more useful for evaluating policies than difference-in-differences-type approaches due to the highly nonlinear spread of cases during a pandemic. For difference-in-differences, we further show that a version of this problem persists even when one is interested in the effect of a policy on other economic outcomes, if those outcomes also depend on the number of Covid-19 cases. We propose alternative approaches that are able to circumvent these issues. We apply our proposed approach to study the effect of state-level shelter-in-place orders early in the pandemic.
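The contrast between the two identification strategies can be written compactly; the notation below (binary policy $D$, potential outcomes $Y_t(d)$) is generic rather than the paper's.

```latex
% Unconfoundedness-type approach: condition on the pre-treatment "state" of the pandemic,
\bigl(Y_t(0), Y_t(1)\bigr) \;\perp\; D \;\big|\; \text{state}_{t-1}
\quad \text{(e.g. current and cumulative cases),}
% Difference-in-differences-type approach: parallel trends in untreated outcomes,
E\bigl[Y_t(0) - Y_{t-1}(0) \mid D = 1\bigr] \;=\; E\bigl[Y_t(0) - Y_{t-1}(0) \mid D = 0\bigr].
% The argument in the paper is that the highly nonlinear growth of cases makes the
% parallel-trends restriction implausible even when conditioning on the lagged state is credible.
```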
The spread of COVID-19 in London: Network effects and optimal lockdowns
We generalise a stochastic version of the workhorse SIR (Susceptible-Infectious-Removed) epidemiological model to account for spatial dynamics generated by network interactions. Using the London metropolitan area as a salient case study, we show that commuter network externalities account for about 42% of the propagation of COVID-19. We find that the UK lockdown measure reduced total propagation by 44%, with more than one third of the effect coming from the reduction in network externalities. Counterfactual analyses suggest that: the lockdown was somewhat late, but further delay would have had more extreme consequences; a targeted lockdown of a small number of highly connected geographic regions would have been equally effective, arguably with significantly lower economic costs; targeted lockdowns based on a threshold number of cases are not effective, since they fail to account for network externalities.
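The role of the commuting network can be illustrated with a stylized, deterministic discrete-time SIR system in which the force of infection in each region mixes local and imported prevalence. This is a toy sketch, not the paper's stochastic model; the three-region matrix `W` and all parameter values below are made up for illustration.

```python
import numpy as np

def networked_sir(S, I, R, W, beta=0.3, gamma=0.1, steps=100):
    """Discrete-time SIR across regions coupled by a row-stochastic commuting matrix W."""
    S, I, R = (np.asarray(v, dtype=float) for v in (S, I, R))
    history = []
    for _ in range(steps):
        # Prevalence faced by residents of each region: own cases plus cases met
        # through commuting flows (the "network externality").
        exposure = W @ (I / (S + I + R))
        new_inf = beta * S * exposure
        new_rec = gamma * I
        S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
        history.append((S.copy(), I.copy(), R.copy()))
    return history

# Three hypothetical regions; W[i, j] = share of region i's contacts made in region j.
W = np.array([[0.80, 0.15, 0.05],
              [0.20, 0.70, 0.10],
              [0.10, 0.10, 0.80]])
path = networked_sir(S=[9900, 4950, 2000], I=[100, 50, 0], R=[0, 0, 0], W=W)
```

In this toy setting, a targeted lockdown of a highly connected region would amount to shrinking the off-diagonal entries of the corresponding row and column of `W`, which is the sense in which network externalities, rather than only local contacts, drive propagation.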
Distribution-Invariant Differential Privacy
Differential privacy is becoming a gold standard for protecting the privacy of publicly shared data. It has been widely used in social science, data science, public health, information technology, and the U.S. decennial census. Nevertheless, to guarantee differential privacy, existing methods may unavoidably alter the conclusions of the original data analysis, as privatization often changes the sample distribution. This phenomenon is known as the trade-off between privacy protection and statistical accuracy. In this work, we mitigate this trade-off by developing a distribution-invariant privatization (DIP) method to reconcile high statistical accuracy with strict differential privacy. As a result, any downstream statistical or machine learning task yields essentially the same conclusion as if one had used the original data. Numerically, under the same strictness of privacy protection, DIP achieves superior statistical accuracy in a wide range of simulation studies and real-world benchmarks.
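The trade-off the abstract refers to can be seen in a toy example: adding Laplace noise protects privacy but visibly distorts the sample distribution. The "repair" step at the end is only a didactic device showing what distribution invariance would buy; it is not the DIP algorithm and is not by itself differentially private.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)      # skewed "income-like" data

# Standard Laplace mechanism with a crude global-sensitivity calibration (illustrative only).
epsilon = 1.0
sensitivity = x.max() - x.min()
x_laplace = x + rng.laplace(scale=sensitivity / epsilon, size=x.size)

print("original  mean/var:", x.mean().round(3), x.var().round(3))
print("laplace   mean/var:", x_laplace.mean().round(3), x_laplace.var().round(3))  # variance blows up

# Toy distribution-matching step: push privatized values back through the original
# empirical quantiles so the marginal distribution is preserved (didactic, NOT private).
ranks = x_laplace.argsort().argsort() / (x.size - 1)
x_matched = np.quantile(x, ranks)
print("matched   mean/var:", x_matched.mean().round(3), x_matched.var().round(3))
```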
Dividend suspensions and cash flows during the Covid-19 pandemic: A dynamic econometric model
Firms suspended dividend payments in unprecedented numbers in response to the outbreak of the Covid-19 pandemic. We develop a multivariate dynamic econometric model that allows dividend suspensions to affect the conditional mean, volatility, and jump probability of growth in daily industry-level dividends and demonstrate how the parameters of this model can be estimated using Bayesian Gibbs sampling methods. We find considerable heterogeneity across industries in the dynamics of daily dividend growth and the impact of dividend suspensions.
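One generic way to write a specification in which suspensions shift the conditional mean, volatility, and jump probability of dividend growth is sketched below; the notation ($g_t$, $s_t$, $J_t$) and functional forms are ours, not the paper's model.

```latex
% Generic sketch; s_t is a dividend-suspension indicator, g_t daily industry dividend growth.
g_t = \mu_0 + \mu_1 s_t + \sigma_t \varepsilon_t + J_t \xi_t,
\qquad \varepsilon_t \sim N(0,1),
\log \sigma_t^2 = \alpha_0 + \alpha_1 s_t + \alpha_2 \log \sigma_{t-1}^2,
\qquad
\Pr(J_t = 1 \mid s_t) = \Lambda(\gamma_0 + \gamma_1 s_t),
% with \Lambda a logistic link and \xi_t the jump size. A Gibbs sampler would alternate
% between the latent jump indicators and the blocks of parameters, conditional on the data.
```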
Statistical inference for linear mediation models with high-dimensional mediators and application to studying stock reaction to COVID-19 pandemic
Mediation analysis draws increasing attention in many research areas such as economics, finance and the social sciences. In this paper, we propose new statistical inference procedures for high-dimensional mediation models, in which both the outcome model and the mediator model are linear with high-dimensional mediators. Traditional procedures for mediation analysis cannot be used to make statistical inference for high-dimensional linear mediation models due to the high dimensionality of the mediators. We propose an estimation procedure for the indirect effects of the models via a partially penalized least squares method, and further establish its theoretical properties. We further develop a partially penalized Wald test on the indirect effects, and prove that the proposed test has a $\chi^2$ limiting null distribution. We also propose an $F$-type test for direct effects and show that the proposed test asymptotically follows a $\chi^2$-distribution under the null hypothesis and a noncentral $\chi^2$-distribution under local alternatives. Monte Carlo simulations are conducted to examine the finite sample performance of the proposed tests and compare their performance with existing ones. We further apply the newly proposed statistical inference procedures to study stock reaction to the COVID-19 pandemic via an empirical analysis of the mediation effects of financial metrics that bridge a company's sector and its stock return.
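The linear outcome and mediator models referred to above can be written compactly as follows; the notation is generic and not necessarily the paper's exact parameterization.

```latex
% Exposure X, p-dimensional mediator vector M (p possibly much larger than n), outcome Y.
M = \Gamma X + \varepsilon_M,
\qquad
Y = \beta^{\top} M + \delta X + \varepsilon_Y .
% Direct effect of X on Y: \delta.  Indirect (mediated) effect: \beta^{\top}\Gamma,
% the object targeted by the partially penalized least squares estimator and Wald test.
```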
Information criteria for latent factor models: a study on factor pervasiveness and adaptivity
We provide an extensive study of information criteria for high-dimensional latent factor models under general conditions. Upon carefully analyzing the estimation errors of the principal component analysis method, we establish theoretical results on the estimation accuracy of the latent factor scores, incorporating the impact of possibly weak factor pervasiveness; our analysis does not require all the leading factors to have the same strength. To estimate the number of latent factors, we propose a new penalty specification with a two-fold consideration: i) being adaptive to the strength of the factor pervasiveness, and ii) favoring more parsimonious models. Our theory establishes the validity of the proposed approach under general conditions. Additionally, we construct examples demonstrating that when the factor strength is too weak, scenarios exist in which no information criterion can consistently identify the latent factors. We illustrate the performance of the proposed adaptive information criteria with extensive numerical examples, including simulations and a real data analysis.
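Information criteria of this kind typically take a loss-plus-penalty form; the template below is a generic Bai–Ng-style criterion written to indicate where an adaptive penalty enters, not the paper's exact specification.

```latex
% Generic template: choose the number of factors k by minimizing fit plus penalty.
\mathrm{IC}(k) = \log\!\Bigl( \tfrac{1}{NT} \sum_{i,t} \bigl( x_{it} - \hat\lambda_i^{(k)\top} \hat f_t^{(k)} \bigr)^2 \Bigr) + k\, g(N,T),
\qquad
\hat k = \operatorname*{arg\,min}_{0 \le k \le k_{\max}} \mathrm{IC}(k).
% An adaptive criterion lets the penalty g(\cdot) also reflect the estimated strength of the
% leading factors, penalizing additional factors more heavily when pervasiveness is weak.
```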
Smoothed Quantile Regression with Large-Scale Inference
Quantile regression is a powerful tool for learning the relationship between a response variable and a multivariate predictor while exploring heterogeneous effects. This paper focuses on statistical inference for quantile regression in the "increasing dimension" regime. We provide a comprehensive analysis of a convolution smoothed approach that yields an adequate approximation for both computation and inference in quantile regression. This method, which we refer to as conquer, turns the non-differentiable check function into a twice-differentiable, convex and locally strongly convex surrogate, which admits fast and scalable gradient-based algorithms to perform optimization, and multiplier bootstrap for statistical inference. Theoretically, we establish explicit non-asymptotic bounds on estimation and Bahadur-Kiefer linearization errors, from which we show that the asymptotic normality of the conquer estimator holds under a weaker requirement on dimensionality than needed for conventional quantile regression. The validity of the multiplier bootstrap is also established. Numerical studies confirm conquer as a practical and reliable approach to large-scale inference for quantile regression. Software implementing the methodology is available in the R package conquer.
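A minimal illustration of the smoothing idea is given below: the indicator in the quantile score is replaced by a Gaussian-kernel surrogate, and the resulting smooth loss is minimized by plain gradient descent. This is a didactic sketch, not the R package conquer; the bandwidth heuristic, step size and iteration count are ad hoc choices.

```python
import numpy as np
from scipy.stats import norm

def smoothed_qr(X, y, tau=0.5, h=None, lr=0.5, n_iter=2000):
    """Convolution-smoothed quantile regression with a Gaussian kernel (didactic sketch)."""
    n, p = X.shape
    if h is None:
        h = max(0.05, ((p + np.log(n)) / n) ** 0.4)   # rough bandwidth heuristic (assumption)
    beta = np.zeros(p)
    for _ in range(n_iter):
        resid = y - X @ beta
        # Smoothed quantile score: tau - 1{u < 0} becomes tau - Phi(-u / h).
        score = tau - norm.cdf(-resid / h)
        beta += lr * X.T @ score / n                   # gradient step on the smoothed loss
    return beta

# Toy usage: the true conditional median line is 1 + 2x with heteroskedastic noise.
rng = np.random.default_rng(1)
x1 = rng.uniform(0, 2, 5000)
X = np.column_stack([np.ones(5000), x1])
y = 1.0 + 2.0 * x1 + (0.5 + 0.5 * x1) * rng.standard_normal(5000)
print(smoothed_qr(X, y, tau=0.5))                      # roughly [1.0, 2.0]
```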
Time varying Markov process with partially observed aggregate data: An application to coronavirus
A major difficulty in the analysis of Covid-19 transmission is that many infected individuals are asymptomatic. For this reason, the total counts of infected individuals and of recovered immunized individuals are unknown, especially during the early phase of the epidemic. In this paper, we consider a parametric time-varying Markov process of coronavirus transmission and show how to estimate the model parameters and approximate the unobserved counts from daily data on infected and detected individuals and the total daily death counts. This model-based approach is illustrated in an application to French data, performed on April 6, 2020.
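The structure of such a model can be sketched as a Markov chain on epidemic states with time-varying, parameter-dependent transition probabilities; the state labels and notation below are generic, not the paper's exact specification.

```latex
% p_t: row vector of population shares across states such as susceptible, infected-undetected,
% infected-detected, recovered, deceased; P_t(\theta): one-step transition matrix at date t.
p_t = p_{t-1}\, P_t(\theta),
\qquad
P_t(\theta) = \bigl[\, \Pr(\text{state } j \text{ at } t \mid \text{state } i \text{ at } t-1;\ p_{t-1}, \theta) \,\bigr]_{i,j}.
% Only the aggregates tied to detected infections and deaths are observed each day, so the
% likelihood must integrate over the unobserved counts in the remaining states.
```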
How to go viral: A COVID-19 model with endogenously time-varying parameters
We estimate a panel model with endogenously time-varying parameters for COVID-19 cases and deaths in U.S. states. The functional form for infections incorporates important features of epidemiological models but is flexibly parameterized to capture different trajectories of the pandemic. Daily deaths are modeled as a spike-and-slab regression on lagged cases. Our Bayesian estimation reveals that social distancing and testing have significant effects on the parameters. For example, a 10 percentage point increase in the positive test rate is associated with a 2 percentage point increase in the death rate among reported cases. The model forecasts perform well, even relative to models from epidemiology and statistics.
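The spike-and-slab component can be illustrated with its standard prior form; the lag structure and hyperparameters below are generic, not those estimated in the paper.

```latex
% Deaths in state i at date t regressed on lagged reported cases with spike-and-slab coefficients.
\text{deaths}_{it} = \sum_{k} \beta_k\, \text{cases}_{i,t-k} + \varepsilon_{it},
\qquad
\beta_k \mid w_k \sim (1 - w_k)\,\delta_0 + w_k\, N(0, \tau^2),
\qquad
w_k \sim \mathrm{Bernoulli}(\pi).
% Each lag is either excluded exactly (the point mass "spike" at zero) or given a diffuse
% normal "slab"; posterior inclusion probabilities indicate which lags of cases drive deaths.
```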
Nonparametric comparison of epidemic time trends: The case of COVID-19
The COVID-19 pandemic is one of the most pressing issues at present. A question which is particularly important for governments and policy makers is the following: Does the virus spread in the same way in different countries? Or are there significant differences in the development of the epidemic? In this paper, we devise new inference methods that allow us to detect differences in the development of the COVID-19 epidemic across countries in a statistically rigorous way. In our empirical study, we use the methods to compare the outbreak patterns of the epidemic in a number of European countries.
Tail and Center Rounding of Probabilistic Expectations in the Health and Retirement Study
We study rounding of numerical expectations in the Health and Retirement Study (HRS) between 2002 and 2014. We document that respondent-specific rounding patterns across questions in individual waves are quite stable across waves. We discover a tendency of about half of the respondents to provide more refined responses in the tails of the 0-100 scale than in the center. In contrast, only about five percent of the respondents give more refined responses in the center than in the tails. We find that respondents tend to report the values 25 and 75 more frequently than other values ending in 5. We also find that rounding practices vary somewhat across question domains and respondent characteristics. We propose an inferential approach that assumes stability of response tendencies across questions and waves to infer person-specific rounding in each question domain and scale segment, and that replaces each point response with an interval representing the range of possible values of the true latent belief. Using expectations from the 2016 wave of the HRS, we validate our approach. To demonstrate the consequences of rounding for inference, we compare best-predictor estimates from face-value expectations with those implied by our intervals.
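The interval-replacement step can be illustrated with a simple rule that maps a reported value and an inferred rounding grain to the set of latent beliefs consistent with it. The rule below is a simplified stand-in for the paper's person- and domain-specific inference, not its algorithm.

```python
def rounding_interval(report: float, grain: float) -> tuple[float, float]:
    """Bounds on the latent belief behind a 0-100 report rounded to multiples of `grain`."""
    half = grain / 2
    return max(0.0, report - half), min(100.0, report + half)

# A respondent inferred to round to multiples of 25 who reports 75 could hold any belief
# in roughly [62.5, 87.5]; a respondent who rounds to the nearest integer is pinned down tightly.
print(rounding_interval(75, grain=25))   # (62.5, 87.5)
print(rounding_interval(75, grain=1))    # (74.5, 75.5)
```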
Maternal Subjective Expectations about the Technology of Skill Formation Predict Investments in Children One Year Later
A growing literature reports significant socio-economic gaps in investments in the human capital of young children. Because the returns to these investments may be huge, parenting programs attempt to improve children's environments by increasing parental expectations about the importance of investments for their children's human capital formation. We contribute to this literature by investigating the relevance of maternal subjective expectations (MSE) about the technology of skill formation in predicting investments in the human capital of children. We develop and implement a framework to elicit and analyze MSE data. We launched a longitudinal study with 822 participants, all of whom were women in the second trimester of their first pregnancy at the date of enrollment. In the first wave of the study, during pregnancy, we elicited the women's MSE. In the second wave, approximately one year later, we measured maternal investments using the Home Observation for Measurement of the Environment (HOME) Inventory. The vast majority of study participants believe that a Cobb-Douglas technology of skill formation describes the process of child development accurately. We observe substantial heterogeneity in MSE about the impact of human capital at birth and of investments on child development at age two. Family income explains part of this heterogeneity in MSE: the higher the family income, the higher the MSE about the impact of investment on child development. We find that a one-standard-deviation difference in MSE measured during pregnancy is associated with a difference of 11% of a standard deviation in investments measured when the child is approximately nine months old.
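For concreteness, a Cobb-Douglas technology of skill formation of the kind the participants' beliefs are consistent with can be written as follows; the symbols are generic, and the elasticities are precisely the objects the MSE elicitation is about.

```latex
% h_0: human capital at birth; I: investment; h_2: child development (human capital) at age two.
h_2 = A\, h_0^{\gamma}\, I^{\phi},
\qquad
\log h_2 = \log A + \gamma \log h_0 + \phi \log I .
% Maternal subjective expectations then amount to beliefs about the elasticities \gamma
% (impact of the endowment at birth) and \phi (impact of investments).
```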
Predictive Functional Linear Models with Diverging Number of Semiparametric Single-Index Interactions
When predicting crop yield using both functional and multivariate predictors, prediction performance benefits from including interactions between the two sets of predictors. We assume the interaction depends on a nonparametric, single-index structure of the multivariate predictor and reduce each functional predictor's dimension using functional principal component analysis (FPCA). Allowing the number of FPCA scores to diverge to infinity, we consider a sequence of semiparametric working models with a diverging number of predictors, which are FPCA scores contaminated by estimation errors. We show that the estimator of the parametric component of the model is root-n consistent and asymptotically normal and that the overall prediction error is dominated by the estimation of the nonparametric interaction function, and we justify a cross-validation-based procedure for selecting the tuning parameters.
Bayesian Factor-adjusted Sparse Regression
Many sparse regression methods rest on the assumption that covariates are weakly correlated, which unfortunately does not hold in many economic and financial datasets. To address this challenge, we model the strongly correlated covariates by a factor structure: strong correlations among covariates are explained by common factors, and the remaining variations are interpreted as idiosyncratic components. We then propose a factor-adjusted sparse regression model with both common factors and idiosyncratic components as decorrelated covariates and develop a semi-Bayesian method for it. Rate-optimality of parameter estimation and model selection consistency are established by non-asymptotic analyses. We show on simulated data that the semi-Bayesian method outperforms its Lasso analogue, is insensitive to overestimates of the number of common factors, pays a negligible price when covariates are not correlated, scales well with increasing sample size, dimensionality and sparsity, and converges fast to the equilibrium of the posterior distribution. Numerical results on a real dataset of U.S. bond risk premia and macroeconomic indicators also lend strong support to the proposed method.
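The decomposition described above can be summarized in two lines; the notation is generic and the priors and semi-Bayesian machinery are not reproduced here.

```latex
% Factor structure on the covariates and the induced factor-adjusted sparse regression.
x_{jt} = \lambda_j^{\top} f_t + u_{jt},
\qquad
y_t = f_t^{\top} \gamma + u_t^{\top} \beta + \varepsilon_t .
% The common factors f_t absorb the strong correlation among covariates, the idiosyncratic
% components u_t act as decorrelated covariates, and sparsity is imposed on \beta.
```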
Disentangling Moral Hazard and Adverse Selection in Private Health Insurance
Moral hazard and adverse selection create inefficiencies in private health insurance markets, and understanding the relative importance of each factor is critical for addressing these inefficiencies. We use claims data from a large firm that changed its health insurance plan options to isolate moral hazard from plan selection, estimating a discrete choice model to predict household plan preferences and attrition. Variation in plan preferences identifies the differential causal impact of each health insurance plan on the entire distribution of medical expenditures. Our estimates imply that 53% of the additional medical spending observed in the most generous plan in our data relative to the least generous is due to adverse selection. We find that quantifying adverse selection by using prior medical expenditures overstates the true magnitude of selection due to mean reversion. We also statistically reject that individual health care consumption responds solely to the end-of-year marginal price.
When will the Covid-19 pandemic peak?
We analyze the daily data on the number of new cases and the number of new deaths in 191 countries as reported to the European Centre for Disease Prevention and Control (ECDC). Our benchmark model is a quadratic time trend model applied to the log of new cases for each country. We use our model to predict when the peak of the epidemic will occur in each country, in terms of new cases or new deaths, and the peak level. We also predict how long it will take for the number of new daily cases in each country to fall by an order of magnitude. Finally, we forecast the total number of cases and deaths for each country. We consider two models that link the joint evolution of new cases and new deaths.
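The peak implied by the quadratic trend model has a simple closed form, which is what makes the forecasting exercise transparent; the notation below is generic.

```latex
% Quadratic trend in log new cases c_t for a given country (natural logarithm), with d < 0:
\log c_t = a + b\,t + d\,t^{2} + \varepsilon_t .
% The fitted trend peaks at t^{*} = -b/(2d) with peak level \exp\!\bigl(a - b^{2}/(4d)\bigr),
% and new cases fall by an order of magnitude relative to the peak once |d|\,(t - t^{*})^{2} \ge \ln 10 .
```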