STATISTICS AND COMPUTING

A generalized likelihood-based Bayesian approach for scalable joint regression and covariance selection in high dimensions
Samanta S, Khare K and Michailidis G
The paper addresses joint sparsity selection in the regression coefficient matrix and the error precision (inverse covariance) matrix for high-dimensional multivariate regression models in the Bayesian paradigm. The selected sparsity patterns are crucial for understanding the network of relationships between the predictor and response variables, as well as the conditional relationships among the latter. While Bayesian methods have the advantage of providing natural uncertainty quantification through posterior inclusion probabilities and credible intervals, current Bayesian approaches are either restricted to specific sub-classes of sparsity patterns or not scalable to settings with hundreds of responses and predictors. Bayesian approaches which only focus on estimating the posterior mode are scalable, but do not generate samples from the posterior distribution for uncertainty quantification. Using a bi-convex regression-based generalized likelihood and spike-and-slab priors, we develop an algorithm called Joint Regression Network Selector (JRNS) for joint regression and covariance selection which (a) can accommodate general sparsity patterns, (b) provides posterior samples for uncertainty quantification, and (c) is scalable and orders of magnitude faster than the state-of-the-art Bayesian approaches providing uncertainty quantification. We demonstrate the statistical and computational efficacy of the proposed approach on synthetic data and through the analysis of selected cancer data sets. We also establish high-dimensional posterior consistency for one of the developed algorithms.
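The JRNS sampler itself is not reproduced here, but the way spike-and-slab priors deliver posterior inclusion probabilities can be seen in a minimal single-parameter sketch; all settings below are illustrative assumptions, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: one mean parameter theta with a spike-and-slab prior,
# theta ~ (1 - w) * delta_0 + w * N(0, tau2), observed y_i ~ N(theta, sigma2).
n, sigma2, tau2, w = 50, 1.0, 4.0, 0.5
y = rng.normal(1.0, np.sqrt(sigma2), size=n)

def normal_pdf(x, var):
    return np.exp(-0.5 * x * x / var) / np.sqrt(2 * np.pi * var)

# The sample mean is sufficient, so the Bayes factor only needs its
# marginal density under the slab and under the spike.
ybar = y.mean()
m_slab = normal_pdf(ybar, tau2 + sigma2 / n)
m_spike = normal_pdf(ybar, sigma2 / n)

# Posterior inclusion probability Pr(theta != 0 | y): the quantity that,
# coordinate by coordinate, drives sparsity selection in a Gibbs sampler.
pip = w * m_slab / (w * m_slab + (1 - w) * m_spike)
print(f"posterior inclusion probability: {pip:.3f}")
```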
A joint estimation approach to sparse additive ordinary differential equations
Zhang N, Nanshan M and Cao J
Ordinary differential equations (ODEs) are widely used to characterize the dynamics of complex systems in real applications. In this article, we propose a novel joint estimation approach for generalized sparse additive ODEs where observations are allowed to be non-Gaussian. The new method unifies existing collocation methods by considering the likelihood, ODE fidelity and sparse regularization simultaneously. We design a block coordinate descent algorithm for optimizing the non-convex and non-differentiable objective function. The global convergence of the algorithm is established. A simulation study and two applications demonstrate the superior performance of the proposed method in estimation and in identifying the sparse structure.
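As a rough, hedged illustration of the collocation idea — balancing data fit against ODE fidelity under a sparsity penalty — here is a two-stage sketch (smooth the data, then sparse-regress the derivative on a candidate library). It is not the authors' joint block coordinate descent algorithm, and all settings are assumptions:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(1)

# Logistic growth dx/dt = 2x - 2x^2, observed with noise.
t = np.linspace(0, 3, 80)
x0 = 0.1
x_true = x0 * np.exp(2 * t) / (1 + x0 * (np.exp(2 * t) - 1))
y = x_true + rng.normal(0, 0.01, t.size)

# Stage 1 (collocation flavour): smooth the data, differentiate the smoother.
spl = UnivariateSpline(t, y, s=t.size * 0.01**2)
xhat, dxhat = spl(t), spl.derivative()(t)

# Stage 2: sparse regression of the derivative on candidate additive terms
# via cyclic coordinate descent with soft-thresholding (a lasso solver).
Phi = np.column_stack([np.ones_like(t), xhat, xhat**2, xhat**3])
lam, beta = 0.02, np.zeros(Phi.shape[1])
col_ss = (Phi**2).sum(axis=0)
for _ in range(500):
    for j in range(Phi.shape[1]):
        r = dxhat - Phi @ beta + Phi[:, j] * beta[j]
        z = Phi[:, j] @ r
        beta[j] = np.sign(z) * max(abs(z) - lam * t.size, 0.0) / col_ss[j]
print("coefficients on [1, x, x^2, x^3]:", beta.round(2))
```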
Default risk prediction and feature extraction using a penalized deep neural network
Lin C, Qiao N, Zhang W, Li Y and Ma S
Online peer-to-peer lending platforms provide loans directly from lenders to borrowers without passing through traditional financial institutions. For lenders on these platforms to avoid loss, it is crucial that they accurately assess default risk so that they can make appropriate decisions. In this study, we develop a penalized deep learning model to predict default risk based on survival data. As opposed to simply predicting whether default will occur, we focus on predicting the probability of default over time. Moreover, by adding an additional one-to-one layer in the neural network, we achieve feature selection and estimation simultaneously by incorporating an ℓ1-penalty into the objective function. The minibatch gradient descent algorithm makes it possible to handle massive data. An analysis of real-world loan data and simulations demonstrate the model's competitive practical performance, which suggests favorable potential applications in peer-to-peer lending platforms.
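A hedged sketch of the key architectural idea — a one-to-one (diagonal) input layer trained with an ℓ1 proximal step inside minibatch gradient descent — on a toy classification problem rather than survival data; everything here is an illustrative assumption, not the authors' network:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy classification data: only the first three of ten features matter.
n, p = 2000, 10
X = rng.normal(size=(n, p))
logits = X[:, 0] - 2 * X[:, 1] + 0.5 * X[:, 2]
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(float)

# One-to-one layer: a diagonal weight s_j on each input feature, trained
# with an l1 penalty via a proximal (soft-threshold) step after each
# minibatch gradient update, so unhelpful features are driven exactly to 0.
s = np.ones(p)            # the one-to-one feature-selection layer
w, b = np.zeros(p), 0.0   # downstream logistic layer
lr, lam, batch = 0.1, 0.005, 100
for epoch in range(200):
    idx = rng.permutation(n)
    for start in range(0, n, batch):
        B = idx[start:start + batch]
        Z = X[B] * s                          # one-to-one layer output
        pr = 1 / (1 + np.exp(-(Z @ w + b)))
        g = (pr - y[B]) / B.size              # d(mean log-loss)/d(logit)
        gw, gb = Z.T @ g, g.sum()
        gs = (X[B] * w).T @ g                 # gradient w.r.t. diagonal weights
        w -= lr * gw; b -= lr * gb; s -= lr * gs
        s = np.sign(s) * np.maximum(np.abs(s) - lr * lam, 0.0)  # prox step
print("surviving features:", np.nonzero(s)[0])
```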
Geometry-informed irreversible perturbations for accelerated convergence of Langevin dynamics
Zhang BJ, Marzouk YM and Spiliopoulos K
We introduce a novel geometry-informed irreversible perturbation that accelerates convergence of the Langevin algorithm for Bayesian computation. It is well documented that there exist perturbations to the Langevin dynamics that preserve its invariant measure while accelerating its convergence. Irreversible perturbations and reversible perturbations (such as Riemannian manifold Langevin dynamics (RMLD)) have separately been shown to improve the performance of Langevin samplers. We consider these two perturbations simultaneously by presenting a novel form of irreversible perturbation for RMLD that is informed by the underlying geometry. Through numerical examples, we show that this new irreversible perturbation can improve estimation performance over irreversible perturbations that do not take the geometry into account. Moreover, we demonstrate that irreversible perturbations generally can be implemented in conjunction with the stochastic gradient version of the Langevin algorithm. Lastly, while continuous-time irreversible perturbations cannot impair the performance of a Langevin estimator, the situation can sometimes be more complicated when discretization is considered. To this end, we describe a discrete-time example in which irreversibility increases both the bias and variance of the resulting estimator.
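The basic construction is easy to state concretely: add a skew-symmetric matrix J to the drift of the (Euler-discretized) Langevin algorithm, which leaves the invariant measure unchanged in continuous time. The sketch below shows the constant-J irreversible perturbation the paper builds on, not the geometry-informed RMLD version; the target and step size are assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Target: a correlated bivariate Gaussian.
Sigma = np.array([[1.0, 0.9], [0.9, 1.0]])
P = np.linalg.inv(Sigma)                       # precision matrix
grad_logpi = lambda x: -P @ x

# Irreversibly perturbed Langevin dynamics,
#   dX = (I + J) grad log pi(X) dt + sqrt(2) dW,
# with J skew-symmetric: the invariant measure is preserved in continuous
# time, while the irreversible drift speeds up mixing.
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
D = np.eye(2) + J

h, nsteps = 0.01, 100_000
x, samples = np.zeros(2), np.empty((nsteps, 2))
for k in range(nsteps):                         # Euler-Maruyama discretization
    x = x + h * (D @ grad_logpi(x)) + np.sqrt(2 * h) * rng.normal(size=2)
    samples[k] = x
print("sample covariance:\n", np.cov(samples.T).round(2))
```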
πVAE: a stochastic process prior for Bayesian deep learning with MCMC
Mishra S, Flaxman S, Berah T, Zhu H, Pakkanen M and Bhatt S
Stochastic processes provide a mathematically elegant way to model complex data. In theory, they provide flexible priors over function classes that can encode a wide range of interesting assumptions. However, in practice efficient inference by optimisation or marginalisation is difficult, a problem further exacerbated with big data and high dimensional input spaces. We propose a novel variational autoencoder (VAE) called the prior encoding variational autoencoder (πVAE). πVAE is a new continuous stochastic process. We use πVAE to learn low dimensional embeddings of function classes by combining a trainable feature mapping with a generative model using a VAE. We show that our framework can accurately learn expressive function classes such as Gaussian processes, but also properties of functions such as their integrals. For popular tasks, such as spatial interpolation, πVAE achieves state-of-the-art performance both in terms of accuracy and computational efficiency. Perhaps most usefully, we demonstrate an elegant and scalable means of performing fully Bayesian inference for stochastic processes within probabilistic programming languages such as Stan.
Evaluation of combinatorial optimisation algorithms for c-optimal experimental designs with correlated observations
Watson SI and Pan Y
We show how combinatorial optimisation algorithms can be applied to the problem of identifying c-optimal experimental designs when there may be correlation between and within experimental units and evaluate the performance of relevant algorithms. We assume the data generating process is a generalised linear mixed model and show that the c-optimal design criterion is a monotone supermodular function amenable to a set of simple minimisation algorithms. We evaluate the performance of three relevant algorithms: the local search, the greedy search, and the reverse greedy search. We show that the local and reverse greedy searches provide comparable performance with the worst design outputs having variance greater than the best design, across a range of covariance structures. We show that these algorithms perform as well or better than multiplicative methods that generate weights to place on experimental units. We extend these algorithms to identifying model-robust c-optimal designs.
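As a concrete (and hedged) illustration of one of the evaluated algorithms, the sketch below runs a reverse greedy search for a c-optimal subset under a plain linear model, where the criterion is c'(X'X)^{-1}c; the authors' GLMM covariance structures are not modelled here, and all sizes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(4)

# Candidate experimental units (rows) and a target contrast c; the
# c-optimality criterion is Var(c' beta_hat) = c' (X_S' X_S)^{-1} c.
N, p, m = 200, 4, 30                    # candidates, parameters, design size
X = rng.normal(size=(N, p))
c = np.array([1.0, -1.0, 0.0, 0.0])

def crit(S):
    Xs = X[list(S)]
    return c @ np.linalg.solve(Xs.T @ Xs, c)

# Reverse greedy search: start from all candidate units and repeatedly
# drop the unit whose removal increases the criterion the least.
S = set(range(N))
while len(S) > m:
    j_best = min(S, key=lambda j: crit(S - {j}))
    S.remove(j_best)
print(f"design of size {m}: c-criterion = {crit(S):.4f}")
```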
Eigenfunction martingale estimating functions and filtered data for drift estimation of discretely observed multiscale diffusions
Abdulle A, Pavliotis GA and Zanoni A
We propose a novel method for drift estimation of multiscale diffusion processes when a sequence of discrete observations is given. For the Langevin dynamics in a two-scale potential, our approach relies on the eigenvalues and the eigenfunctions of the homogenized dynamics. Our first estimator is derived from a martingale estimating function of the generator of the homogenized diffusion process. However, the unbiasedness of the estimator depends on the rate with which the observations are sampled. We therefore introduce a second estimator which also relies on filtering the data, and we prove that it is asymptotically unbiased independently of the sampling rate. A series of numerical experiments illustrates the reliability and efficiency of our different estimators.
Parsimonious hidden Markov models for matrix-variate longitudinal data
Tomarchio SD, Punzo A and Maruotti A
Hidden Markov models (HMMs) have been extensively used in the univariate and multivariate literature. However, there has been an increased interest in the analysis of matrix-variate data over the recent years. In this manuscript we introduce HMMs for matrix-variate balanced longitudinal data, by assuming a matrix normal distribution in each hidden state. Such data are arranged in a four-way array. To address possible overparameterization issues, we consider the eigen decomposition of the covariance matrices, leading to a total of 98 HMMs. An expectation-conditional maximization algorithm is discussed for parameter estimation. The proposed models are first investigated on simulated data, in terms of parameter recovery, computational times and model selection. Then, they are fitted to a four-way real data set concerning the unemployment rates of the Italian provinces, evaluated by gender and age classes, over the last 16 years.
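The building block of these HMMs, the matrix normal density, is simple to compute and to check against its vectorized form vec(X) ~ N(vec(M), V ⊗ U); a minimal sketch in which the dimensions and covariance matrices are arbitrary choices:

```python
import numpy as np
from scipy.stats import multivariate_normal

def matrix_normal_logpdf(X, M, U, V):
    """log density of MN(M, U, V) for an r x c matrix X, where U (r x r)
    and V (c x c) are the row and column covariance matrices."""
    r, c = X.shape
    E = X - M
    quad = np.trace(np.linalg.solve(V, E.T) @ np.linalg.solve(U, E))
    _, ldU = np.linalg.slogdet(U)
    _, ldV = np.linalg.slogdet(V)
    return -0.5 * (r * c * np.log(2 * np.pi) + c * ldU + r * ldV + quad)

# Sanity check against the equivalent form vec(X) ~ N(vec(M), kron(V, U)).
rng = np.random.default_rng(5)
r, c = 3, 2
X, M = rng.normal(size=(r, c)), np.zeros((r, c))
U = np.eye(r) + 0.3          # adds 0.3 everywhere; still positive definite
V = np.eye(c) + 0.1
lhs = matrix_normal_logpdf(X, M, U, V)
rhs = multivariate_normal(np.zeros(r * c), np.kron(V, U)).logpdf(X.flatten("F"))
print(round(lhs, 6), round(rhs, 6))
```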
On randomized sketching algorithms and the Tracy-Widom law
Ahfock D, Astle WJ and Richardson S
There is an increasing body of work exploring the integration of random projection into algorithms for numerical linear algebra. The primary motivation is to reduce the overall computational cost of processing large datasets. A suitably chosen random projection can be used to embed the original dataset in a lower-dimensional space such that key properties of the original dataset are retained. These algorithms are often referred to as sketching algorithms, as the projected dataset can be used as a compressed representation of the full dataset. We show that random matrix theory, in particular the Tracy-Widom law, is useful for describing the operating characteristics of sketching algorithms in the tall-data regime, where the sample size n is much greater than the number of variables p. Asymptotic large sample results are of particular interest as this is the regime where sketching is most useful for data compression. In particular, we develop asymptotic approximations for the success rate in generating random subspace embeddings and the convergence probability of iterative sketching algorithms. We test a number of sketching algorithms on real large high-dimensional datasets and find that the asymptotic expressions give accurate predictions of the empirical performance.
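A hedged sketch of sketch-and-solve least squares with a Gaussian projection, including the subspace-embedding check — the extreme singular values of the sketched orthonormal basis — whose fluctuations are what the Tracy-Widom law describes; the sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)

# Tall least-squares problem: n >> p.
n, p, m = 10_000, 10, 500
X = rng.normal(size=(n, p))
y = X @ rng.normal(size=p) + rng.normal(size=n)

# Sketch-and-solve: compress X and y with one Gaussian random projection.
S = rng.normal(size=(m, n)) / np.sqrt(m)
beta_sketch = np.linalg.lstsq(S @ X, S @ y, rcond=None)[0]
beta_full = np.linalg.lstsq(X, y, rcond=None)[0]
rel_err = np.linalg.norm(beta_sketch - beta_full) / np.linalg.norm(beta_full)
print(f"relative error of sketched solution: {rel_err:.3f}")

# Subspace-embedding check: singular values of S @ U (U an orthonormal
# basis of range(X)) should concentrate near 1; the fluctuations of the
# extreme singular values are where the Tracy-Widom law enters.
U = np.linalg.svd(X, full_matrices=False)[0]
sv = np.linalg.svd(S @ U, compute_uv=False)
print(f"extreme singular values of sketched basis: {sv.min():.3f}, {sv.max():.3f}")
```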
On predictive inference for intractable models via approximate Bayesian computation
Järvenpää M and Corander J
Approximate Bayesian computation (ABC) is commonly used for parameter estimation and model comparison for intractable simulator-based statistical models whose likelihood function cannot be evaluated. In this paper we instead investigate the feasibility of ABC as a generic approximate method for predictive inference, in particular, for computing the posterior predictive distribution of future observations or missing data of interest. We consider three complementary ABC approaches for this goal, each based on different assumptions regarding which predictive density of the intractable model can be sampled from. The case where only simulation from the joint density of the observed and future data given the model parameters can be used for inference is given particular attention and it is shown that the ideal summary statistic in this setting is minimal predictive sufficient instead of merely minimal sufficient (in the ordinary sense). An ABC prediction approach that takes advantage of a certain latent variable representation is also investigated. We additionally show how common ABC sampling algorithms can be used in the predictive settings considered. Our main results are first illustrated by using simple time-series models that facilitate analytical treatment, and later by using two common intractable dynamic models.
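The simplest of the settings considered — when one can jointly simulate observed-size data and the future observation — can be sketched with rejection ABC on a toy Gaussian model (where the sample mean happens to be sufficient); this is an illustration of the general recipe, not the authors' algorithms:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy stand-in for an intractable model: y_i ~ N(theta, 1), theta ~ N(0, 4).
n, theta_true = 30, 1.5
y_obs = rng.normal(theta_true, 1, n)

# Rejection ABC for predictive inference: simulate the parameter, an
# observed-size dataset AND a future observation jointly; accept on a
# summary of the observed part; keep the simulated future values.
draws, eps = 200_000, 0.1
theta = rng.normal(0, 2, draws)
y_sim = rng.normal(theta[:, None], 1, (draws, n))
y_future = rng.normal(theta, 1)
accept = np.abs(y_sim.mean(axis=1) - y_obs.mean()) < eps
pred = y_future[accept]
print(f"accepted {accept.sum()} draws; "
      f"predictive mean = {pred.mean():.2f}, sd = {pred.std():.2f}")
```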
Variable selection using a smooth information criterion for distributional regression models
O'Neill M and Burke K
Modern variable selection procedures make use of penalization methods to execute simultaneous model selection and estimation. A popular method is the least absolute shrinkage and selection operator, the use of which requires selecting the value of a tuning parameter. This parameter is typically tuned by minimizing the cross-validation error or Bayesian information criterion, but this can be computationally intensive as it involves fitting an array of different models and selecting the best one. In contrast with this standard approach, we have developed a procedure based on the so-called "smooth IC" (SIC) in which the tuning parameter is automatically selected in one step. We also extend this model selection procedure to the distributional regression framework, which is more flexible than classical regression modelling. Distributional regression, also known as multiparameter regression, introduces flexibility by taking account of the effect of covariates through multiple distributional parameters simultaneously, e.g., mean and variance. These models are useful in the context of normal linear regression when the process under study exhibits heteroscedastic behaviour. Reformulating the distributional regression estimation problem in terms of penalized likelihood enables us to take advantage of the close relationship between model selection criteria and penalization. Utilizing the SIC is computationally advantageous, as it obviates the issue of having to choose multiple tuning parameters.
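To make the one-step idea concrete, here is a hedged sketch that minimizes a BIC-weighted objective with a smooth surrogate for the l0 penalty in a single optimization; the surrogate beta^2/(beta^2 + eps^2) and all settings are assumptions, not the authors' exact SIC:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(8)

# Toy sparse linear model: two active coefficients out of eight.
n, p = 300, 8
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, -1.5, 0, 0, 0, 0, 0, 0])
y = X @ beta_true + rng.normal(0, 1, n)

# BIC-flavoured smooth criterion: -2 profile loglik plus log(n) times a
# smooth surrogate of the l0 norm, optimized in one pass -- no lambda path.
eps = 0.1
def sic(beta):
    rss = np.sum((y - X @ beta) ** 2)
    df_smooth = np.sum(beta**2 / (beta**2 + eps**2))  # ~ number of nonzeros
    return n * np.log(rss / n) + np.log(n) * df_smooth

fit = minimize(sic, x0=np.zeros(p), method="BFGS")
beta_hat = np.where(np.abs(fit.x) < eps, 0.0, fit.x)  # snap tiny values to 0
print("estimate:", beta_hat.round(2))
```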
Phylourny: efficiently calculating elimination tournament win probabilities via phylogenetic methods
Bettisworth B, Jordan AI and Stamatakis A
The prediction of knockout tournaments represents an area of large public interest and active academic as well as industrial research. Here, we show how one can leverage the computational analogy with the phylogenetic likelihood score used in the area of molecular evolution to efficiently calculate, instead of approximating via simulations, the exact per-team tournament win probabilities, given a pairwise win probability matrix between all teams. We implement and make available our method as open-source code and show that it is two orders of magnitude faster than simulations and two or more orders of magnitude faster than calculating the exact per-team win probabilities naïvely, without taking into account the substantial computational savings induced by the tournament tree structure. Furthermore, we showcase novel prediction approaches that now become feasible due to this order of magnitude improvement in calculating tournament win probabilities. We demonstrate how to quantify prediction uncertainty by calculating 100,000 distinct tournament win probabilities for a tournament with 16 teams under slight variations of a reasonable pairwise win probability matrix within one minute on a standard laptop. We also conduct an analogous analysis for a tournament with 64 teams.
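The exact computation exploits a bottom-up recursion over the bracket tree, much like the phylogenetic likelihood: the probability that a team survives round r is its probability of surviving round r-1 times the chance it beats whichever opponent emerges from the adjacent block. A minimal sketch of that dynamic program (the bracket layout and the random win matrix are assumptions, not the Phylourny code):

```python
import numpy as np

rng = np.random.default_rng(9)

def tournament_win_probs(P):
    """Exact win probabilities for a single-elimination bracket.

    P[i, j] is the probability that team i beats team j; teams are laid
    out in bracket order, so in round r team i's possible opponents are
    the adjacent block of 2**(r-1) teams.
    """
    n = P.shape[0]
    q = np.ones(n)                    # q[i] = Pr(team i survives so far)
    for r in range(1, int(np.log2(n)) + 1):
        block = 2 ** r
        q_new = np.empty(n)
        for i in range(n):
            base = (i // block) * block
            half = base + block // 2
            opp = range(base, half) if i >= half else range(half, base + block)
            q_new[i] = q[i] * sum(q[j] * P[i, j] for j in opp)
        q = q_new
    return q

# A random but internally consistent pairwise win matrix for 16 teams.
A = rng.random((16, 16))
P = A / (A + A.T)
w = tournament_win_probs(P)
print("win probabilities sum to", round(w.sum(), 6))
```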
A fast look-up method for Bayesian mean-parameterised Conway-Maxwell-Poisson regression models
Philipson P and Huang A
Count data that are subject to both under- and overdispersion at some hierarchical level cannot be readily accommodated by classic models such as Poisson or negative binomial regression models. The mean-parameterised Conway-Maxwell-Poisson distribution allows for both types of dispersion within the same model, but is doubly intractable with an embedded normalising constant. We propose a look-up method where pre-computing values of the rate parameter dramatically reduces computing times and renders the proposed model a practicable alternative when faced with such bidispersed data. The approach is demonstrated and verified using a simulation study and applied to three datasets: an underdispersed small dataset on takeover bids, a medium dataset on yellow cards issued by referees in the English Premier League prior to and during the Covid-19 pandemic, and a large Test match cricket bowling dataset, the latter two of which each exhibit over- and underdispersion at the individual level.
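The look-up idea can be sketched directly: the mean parameterisation requires solving for the rate parameter that yields a given mean, via the (truncated) normalising series, and pre-computing these roots on a (mu, nu) grid is what avoids repeated root-finding during model fitting. A hedged sketch with the truncation and grid ranges chosen arbitrarily:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import gammaln

def cmp_mean(lam, nu, ymax=500):
    """Mean of CMP(lam, nu), pmf proportional to lam^y / (y!)^nu,
    computed from a truncated, numerically stabilised series."""
    y = np.arange(ymax)
    logw = y * np.log(lam) - nu * gammaln(y + 1)
    w = np.exp(logw - logw.max())
    return (y * w).sum() / w.sum()

def rate_for_mean(mu, nu):
    """Invert the mean mapping: solve cmp_mean(lam, nu) = mu for lam."""
    return brentq(lambda lam: cmp_mean(lam, nu) - mu, 1e-8, 1e4)

# Pre-compute the inverse mapping on a (mu, nu) grid once; model fitting
# then replaces repeated root-finding with cheap table look-ups.
mus = np.linspace(0.5, 10.0, 20)
nus = np.linspace(0.5, 2.0, 7)       # nu < 1: overdispersed; nu > 1: under
table = np.array([[rate_for_mean(mu, nu) for nu in nus] for mu in mus])
i, j = np.argmin(np.abs(mus - 5)), np.argmin(np.abs(nus - 1))
print("lambda at mu=5, nu=1 (Poisson case, expect ~5):", round(table[i, j], 3))
```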
Quantile hidden semi-Markov models for multivariate time series
Merlo L, Maruotti A, Petrella L and Punzo A
This paper develops a quantile hidden semi-Markov regression to jointly estimate multiple quantiles for the analysis of multivariate time series. The approach is based upon the Multivariate Asymmetric Laplace (MAL) distribution, which allows the quantiles of all univariate conditional distributions of a multivariate response to be modeled simultaneously, incorporating the correlation structure among the outcomes. Unobserved serial heterogeneity across observations is modeled by introducing regime-dependent parameters that evolve according to a latent finite-state semi-Markov chain. Exploiting the hierarchical representation of the MAL, inference is carried out using an efficient Expectation-Maximization algorithm based on closed-form updates for all model parameters, without parametric assumptions about the states' sojourn distributions. The validity of the proposed methodology is analyzed both by a simulation study and through the empirical analysis of air pollutant concentrations in a small Italian city.
Automatic search intervals for the smoothing parameter in penalized splines
Li Z and Cao J
The selection of the smoothing parameter is central to the estimation of penalized splines. The best value of the smoothing parameter is often the one that optimizes a smoothness selection criterion, such as the generalized cross-validation error (GCV) or the restricted maximum likelihood (REML). To correctly identify the global optimum rather than being trapped in an undesired local optimum, grid search is recommended for optimization. Unfortunately, the grid search method requires a pre-specified search interval that contains the unknown global optimum, yet no guideline is available for providing this interval. As a result, practitioners have to find it by trial and error. To overcome this difficulty, we develop novel algorithms to automatically find the interval. Our automatic search interval has four advantages. (i) It specifies a smoothing parameter range on which the associated penalized least squares problem is numerically solvable. (ii) It is criterion-independent, so that different criteria, such as GCV and REML, can be explored on the same parameter range. (iii) It is sufficiently wide to contain the global optimum of any criterion, so that, for example, the global minimum of GCV and the global maximum of REML can both be identified. (iv) It is computationally cheap compared with the grid search itself, carrying no extra computational burden in practice. Our method is ready to use through our recently developed R package gps (≥ version 1.1). It may be embedded in more advanced statistical modeling methods that rely on penalized splines.
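For orientation, the sketch below shows the standard grid-search workflow the paper speaks to: a penalized spline fit whose GCV score is evaluated over a hand-picked interval of smoothing parameters — exactly the interval the proposed algorithms would instead determine automatically. The basis, penalty and grid are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(10)

# Noisy signal and a cubic truncated-power spline basis with 18 knots.
n = 200
x = np.sort(rng.random(n))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, n)
knots = np.linspace(0, 1, 20)[1:-1]
B = np.column_stack([np.ones(n), x, x**2, x**3]
                    + [np.maximum(x - k, 0.0) ** 3 for k in knots])
D = np.zeros((B.shape[1],) * 2)
D[4:, 4:] = np.eye(B.shape[1] - 4)      # ridge penalty on knot terms only

def gcv(lam):
    # Hat matrix H(lam) = B (B'B + lam*D)^{-1} B'.
    H = B @ np.linalg.solve(B.T @ B + lam * D, B.T)
    resid = y - H @ y
    return n * (resid @ resid) / (n - np.trace(H)) ** 2

# Grid search on the log scale over a hand-picked interval -- the interval
# that the paper's algorithms would instead supply automatically.
grid = 10.0 ** np.linspace(-6, 4, 51)
scores = [gcv(lam) for lam in grid]
print(f"GCV-optimal lambda: {grid[int(np.argmin(scores))]:.2e}")
```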
On automatic bias reduction for extreme expectile estimation
Girard S, Stupfler G and Usseglio-Carleve A
Expectiles induce a law-invariant risk measure that has recently gained popularity in actuarial and financial risk management applications. Unlike quantiles or the quantile-based Expected Shortfall, the expectile risk measure is coherent and elicitable. The estimation of extreme expectiles in the heavy-tailed framework, which is reasonable for extreme financial or actuarial risk management, is not without difficulties; currently available estimators of extreme expectiles are typically biased and hence may show poor finite-sample performance even in fairly large samples. We focus here on the construction of bias-reduced extreme expectile estimators for heavy-tailed distributions. The rationale for our construction hinges on a careful investigation of the asymptotic proportionality relationship between extreme expectiles and their quantile counterparts, as well as of the extrapolation formula motivated by the heavy-tailed context. We accurately quantify and estimate the bias incurred by the use of these relationships when constructing extreme expectile estimators. This motivates the introduction of classes of bias-reduced estimators whose asymptotic properties are rigorously shown, and whose finite-sample properties are assessed in a simulation study and on three samples of real data from economics, insurance and finance.
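The first-order relationships the paper refines can be sketched in a few lines: a Hill estimate of the tail index, a Weissman-extrapolated quantile, and the asymptotic proportionality xi_tau ~ (1/gamma - 1)^(-gamma) q_tau. This is the plain estimator whose bias the paper sets out to reduce; the simulation settings are assumptions:

```python
import numpy as np

rng = np.random.default_rng(11)

# Heavy-tailed sample: Pareto with extreme value index gamma = 0.4.
n, gamma_true = 5000, 0.4
x = np.sort(rng.pareto(1 / gamma_true, n) + 1.0)

# Hill estimator of gamma from the top k order statistics.
k = 200
gamma_hat = np.mean(np.log(x[-k:]) - np.log(x[-k - 1]))

# Weissman extrapolation to an extreme quantile, then the first-order
# expectile-quantile proportionality xi_tau ~ (1/gamma - 1)^(-gamma) q_tau.
tau = 0.9999
q_tau = x[-k - 1] * (k / (n * (1 - tau))) ** gamma_hat
xi_tau = (1 / gamma_hat - 1) ** (-gamma_hat) * q_tau
print(f"gamma_hat={gamma_hat:.3f}  q(tau)={q_tau:.1f}  expectile={xi_tau:.1f}")
```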
Interpolating log-determinant and trace of the powers of matrix
Ameli S and Shadden SC
We develop heuristic interpolation methods for the functions t ↦ log det(A + tB) and t ↦ trace((A + tB)^p), where the matrices A and B are Hermitian and positive (semi-)definite and p and t are real variables. These functions are featured in many applications in statistics, machine learning, and computational physics. The presented interpolation functions are based on the modification of sharp bounds for these functions. We demonstrate the accuracy and performance of the proposed method with numerical examples, namely, the marginal maximum likelihood estimation for Gaussian process regression and the estimation of the regularization parameter of ridge regression with the generalized cross-validation method.
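The interpolate-then-query idea can be illustrated generically: evaluate t ↦ log det(A + tB) exactly at a few nodes and interpolate elsewhere. The sketch below uses an ad hoc cubic fit in log(1 + t) rather than the authors' bound-based interpolants; the matrices and nodes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(12)

# Hermitian positive definite A and B.
n = 200
Q, R = rng.normal(size=(n, n)), rng.normal(size=(n, n))
A = Q @ Q.T + n * np.eye(n)
B = R @ R.T + n * np.eye(n)

def f(t):
    return np.linalg.slogdet(A + t * B)[1]     # exact log det(A + tB)

# Surrogate: interpolate t -> f(t) through a few exact evaluations
# (a cubic in log(1 + t)), then query it anywhere at negligible cost.
t_nodes = np.array([0.0, 0.1, 1.0, 10.0])
coef = np.polyfit(np.log1p(t_nodes), [f(t) for t in t_nodes], deg=3)
approx = lambda t: np.polyval(coef, np.log1p(t))

for t in (0.05, 0.5, 5.0):
    print(f"t={t:<5} exact={f(t):9.2f}  interpolated={approx(t):9.2f}")
```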
Distributional anchor regression
Kook L, Sick B and Bühlmann P
Prediction models often fail if train and test data do not stem from the same distribution. Out-of-distribution (OOD) generalization to unseen, perturbed test data is a desirable but difficult-to-achieve property for prediction models and in general requires strong assumptions on the data generating process (DGP). In a causally inspired perspective on OOD generalization, the test data arise from a specific class of interventions on exogenous random variables of the DGP, called anchors. Anchor regression models, introduced by Rothenhäusler et al. (J R Stat Soc Ser B 83(2):215-246, 2021. 10.1111/rssb.12398), protect against distributional shifts in the test data by employing causal regularization. However, so far anchor regression has only been used with a squared-error loss which is inapplicable to common responses such as censored continuous or ordinal data. Here, we propose a distributional version of anchor regression which generalizes the method to potentially censored responses with at least an ordered sample space. To this end, we combine a flexible class of parametric transformation models for distributional regression with an appropriate causal regularizer under a more general notion of residuals. In an exemplary application and several simulation scenarios we demonstrate the extent to which OOD generalization is possible.
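For reference, the squared-error anchor regression being generalized has a closed form: it is ordinary least squares after transforming X and y by (I - P_A) + sqrt(gamma) P_A, where P_A projects onto the anchors. A hedged sketch with a made-up confounded data generating process:

```python
import numpy as np

rng = np.random.default_rng(13)

# Confounded DGP with an exogenous anchor A shifting both X and y.
n = 2000
A = rng.normal(size=(n, 1))
H = rng.normal(size=n)                        # hidden confounder
X = 2.0 * A + (H + rng.normal(size=n))[:, None]
y = X[:, 0] + H + A[:, 0] + rng.normal(size=n)

def anchor_fit(X, y, A, gamma):
    """Squared-error anchor regression: OLS after the transformation
    W = (I - P_A) + sqrt(gamma) * P_A, with P_A the projection onto A."""
    P = A @ np.linalg.solve(A.T @ A, A.T)
    Xt = X - P @ X + np.sqrt(gamma) * (P @ X)
    yt = y - P @ y + np.sqrt(gamma) * (P @ y)
    return np.linalg.lstsq(Xt, yt, rcond=None)[0]

for gamma in (1.0, 10.0, 100.0):              # gamma = 1 recovers plain OLS
    print(f"gamma={gamma:>5}: beta = {anchor_fit(X, y, A, gamma).round(3)}")
```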
A phase transition for finding needles in nonlinear haystacks with LASSO artificial neural networks
Ma X, Sardy S, Hengartner N, Bobenko N and Lin YT
To fit sparse linear associations, a LASSO sparsity-inducing penalty with a single hyperparameter provably allows one to recover the important features (needles) with high probability in certain regimes, even if the sample size is smaller than the dimension of the input vector (haystack). More recently, learners known as artificial neural networks (ANNs) have shown great success in many machine learning tasks, in particular in fitting nonlinear associations. A small learning rate, the stochastic gradient descent algorithm and a large training set help to cope with the explosion in the number of parameters present in deep neural networks. Yet few ANN learners have been developed and studied to find needles in nonlinear haystacks. Driven by a single hyperparameter, our ANN learner exhibits, as in the sparse linear case, a phase transition in the probability of retrieving the needles, which we do not observe with other ANN learners. To select our penalty parameter, we generalize the universal threshold of Donoho and Johnstone (Biometrika 81(3):425-455, 1994), which is a better rule than conservative (too many false detections) and expensive cross-validation. In the spirit of simulated annealing, we propose a warm-start sparsity-inducing algorithm to solve the high-dimensional, non-convex and non-differentiable optimization problem. We perform simulated and real data Monte Carlo experiments to quantify the effectiveness of our approach.
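The universal threshold being generalized is easy to state in the Gaussian sequence model it was designed for: lambda = sigma * sqrt(2 log p), applied via soft-thresholding. A sketch of that baseline (the ANN generalization is not attempted here; needle sizes and dimensions are assumptions):

```python
import numpy as np

rng = np.random.default_rng(14)

# Gaussian sequence model: ten needles of size 6 in a haystack of 10,000.
p, sigma = 10_000, 1.0
theta = np.zeros(p)
theta[:10] = 6.0
y = theta + rng.normal(0, sigma, p)

# Universal threshold lambda = sigma * sqrt(2 log p): with high probability
# pure-noise coordinates fall below it, so soft-thresholding returns zeros
# there while retaining (shrunken) needles.
lam = sigma * np.sqrt(2 * np.log(p))
theta_hat = np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)
found = np.flatnonzero(theta_hat)
print(f"threshold {lam:.2f}; {found.size} coordinates selected:", found[:15])
```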
A numerically stable algorithm for integrating Bayesian models using Markov melding
Manderson AA and Goudie RJB
When statistical analyses consider multiple data sources, Markov melding provides a method for combining the source-specific Bayesian models. Markov melding joins together submodels that have a common quantity. One challenge is that the prior for this quantity can be implicit, and its prior density must be estimated. We show that error in this density estimate makes the two-stage Markov chain Monte Carlo sampler employed by Markov melding unstable and unreliable. We propose a robust two-stage algorithm that estimates the required prior marginal self-density ratios using weighted samples, dramatically improving accuracy in the tails of the distribution. The stabilised version of the algorithm is pragmatic and provides reliable inference. We demonstrate our approach using an evidence synthesis for inferring HIV prevalence, and an evidence synthesis of A/H1N1 influenza.
A Bayesian multilevel model for populations of networks using exponential-family random graphs
Lehmann B and White S
The collection of data on populations of networks is becoming increasingly common, where each data point can be seen as a realisation of a network-valued random variable. Moreover, each data point may be accompanied by some additional covariate information and one may be interested in assessing the effect of these covariates on network structure within the population. A canonical example is that of brain networks: a typical neuroimaging study collects one or more brain scans across multiple individuals, each of which can be modelled as a network with nodes corresponding to distinct brain regions and edges corresponding to structural or functional connections between these regions. Most statistical network models, however, were originally proposed to describe a single underlying relational structure, although recent years have seen a drive to extend these models to populations of networks. Here, we describe a model in which the outcome of interest is a network-valued random variable whose distribution is given by an exponential random graph model. To perform inference, we implement an exchange-within-Gibbs MCMC algorithm that generates samples from the doubly-intractable posterior. To illustrate this approach, we use it to assess population-level variations in networks derived from fMRI scans, enabling the inference of age- and intelligence-related differences in the topological structure of the brain's functional connectivity.
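The exchange algorithm at the heart of the exchange-within-Gibbs sampler can be sketched on any exponential-family model where exact simulation is possible, since the intractable normalising constants cancel in the acceptance ratio. The toy model below (an exponential distribution treated as if its normaliser were unknown) is purely illustrative and much simpler than an ERGM:

```python
import numpy as np

rng = np.random.default_rng(15)

# Toy exponential family f(x | theta) = exp(theta * x) / Z(theta) on x > 0
# (an exponential distribution with rate -theta, theta < 0), treated as if
# Z(theta) were unknown but exact simulation were available.
x_obs = rng.exponential(scale=1 / 2.0, size=50)        # true theta = -2

def exchange_mcmc(x_obs, iters=20_000, step=0.3):
    s_obs, theta, chain = x_obs.sum(), -1.0, []
    for _ in range(iters):
        prop = theta + step * rng.normal()
        if prop < 0:                                   # flat prior on theta < 0
            # Auxiliary dataset simulated at the proposed parameter; the
            # intractable Z's cancel in the resulting acceptance ratio.
            x_aux = rng.exponential(scale=-1 / prop, size=x_obs.size)
            log_acc = (prop - theta) * s_obs + (theta - prop) * x_aux.sum()
            if np.log(rng.random()) < log_acc:
                theta = prop
        chain.append(theta)
    return np.array(chain)

chain = exchange_mcmc(x_obs)
print("posterior mean of theta:", round(chain[5000:].mean(), 2))
```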