Sparse Machine Learning in Banach Spaces
The aim of this expository paper is to explain, to graduate students and beginning researchers in the fields of mathematics, statistics, and engineering, the fundamental concept of sparse machine learning in Banach spaces. In particular, we use binary classification as an example to explain the essence of learning in a reproducing kernel Hilbert space and of sparse learning in a reproducing kernel Banach space (RKBS). We then utilize the Banach space to illustrate the basic concepts of the RKBS in an elementary yet rigorous fashion. This paper reviews existing results from the author's perspective to reflect the state of the art of the field of sparse learning, and includes new theoretical observations on the RKBS. Several open problems critical to the theory of the RKBS are also discussed at the end of the paper.
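As a toy illustration of the sparsity phenomenon the abstract refers to (not the paper's RKBS machinery), the sketch below fits an l1-regularized kernel regression model by proximal gradient descent (ISTA). The Gaussian kernel, the data, and all parameters are invented for the example; the point is only that the l1 penalty can drive coefficients exactly to zero, which is what makes a learned kernel expansion sparse.

```python
import math

# Toy sketch: l1-regularized kernel regression via ISTA.  All choices
# (kernel, data, lam) are illustrative assumptions, not from the paper.

def gaussian_kernel(x, y, gamma=2.0):
    return math.exp(-gamma * (x - y) ** 2)

def ista_kernel_lasso(xs, ys, lam=0.1, iters=3000):
    """Minimize 0.5*||K c - y||^2 + lam*||c||_1 over coefficients c."""
    n = len(xs)
    K = [[gaussian_kernel(xs[i], xs[j]) for j in range(n)] for i in range(n)]
    # safe step size: 1/lambda_max(K)^2, bounded via the max row sum
    step = 1.0 / max(sum(row) for row in K) ** 2
    c = [0.0] * n
    for _ in range(iters):
        Kc = [sum(K[i][j] * c[j] for j in range(n)) for i in range(n)]
        # gradient of the smooth part: K^T (K c - y); K is symmetric here
        g = [sum(K[j][i] * (Kc[j] - ys[j]) for j in range(n)) for i in range(n)]
        # soft thresholding: the proximal operator of the l1 norm,
        # which sets small coefficients exactly to zero
        c = [math.copysign(max(abs(ci - step * gi) - step * lam, 0.0),
                           ci - step * gi)
             for ci, gi in zip(c, g)]
    return c

xs = [i / 4.0 for i in range(8)]
ys = [math.sin(x) for x in xs]
c = ista_kernel_lasso(xs, ys)
```

ISTA with this step size decreases the penalized objective monotonically; with a larger `lam`, more coefficients end up exactly at zero.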
Comparison of reduced models for blood flow using Runge-Kutta discontinuous Galerkin methods
One-dimensional blood flow models take the general form of nonlinear hyperbolic systems but differ in their formulation. One class of models considers the physically conserved quantities of mass and momentum, while another class describes mass and velocity. Further, the averaging process employed in the model derivation requires the specification of the axial velocity profile; this choice differentiates models within each class. Discrepancies among differing models have yet to be investigated. In this paper, we comment on some theoretical differences among models and systematically compare them for physiologically relevant vessel parameters, network topology, and boundary data. In particular, the effect of the velocity profile is investigated in the cases of both smooth and discontinuous solutions, and a recommendation for a physiological model is provided. The models are discretized by a class of Runge-Kutta discontinuous Galerkin methods.
Simultaneous optical flow and source estimation: Space-time discretization and preconditioning
We consider the simultaneous estimation of an optical flow field and an illumination source term in a movie sequence. The particular optical flow equation is obtained by assuming that the image intensity is a conserved quantity up to possible sources and sinks which represent varying illumination. We formulate this problem as an energy minimization problem and propose a space-time simultaneous discretization for the optimality system in saddle-point form. We investigate a preconditioning strategy that renders the discrete system well-conditioned uniformly in the discretization resolution. Numerical experiments complement the theory.
Finite element solution of nonlinear eddy current problems with periodic excitation and its industrial applications
An efficient finite element method that takes into account the nonlinearity of the magnetic materials when analyzing three-dimensional eddy current problems is presented in this paper. The problem is formulated in terms of vector and scalar potentials approximated by edge- and node-based finite element basis functions. The application of Galerkin techniques leads to a large, nonlinear system of ordinary differential equations in the time domain. The excitations are assumed to be time-periodic, and only the steady-state periodic solution is of interest. This is represented either in the frequency domain as a finite Fourier series or in the time domain as a set of discrete time values within one period for each finite element degree of freedom. The former approach is the (continuous) harmonic balance method; in the latter, the discrete Fourier transform will be shown to lead to a discrete harmonic balance method. Due to the nonlinearity, all harmonics, both continuous and discrete, are coupled to each other. The harmonics would be decoupled if the problem were linear; therefore, a special nonlinear iteration technique, the fixed-point method, is used to linearize the equations by selecting a time-independent permeability distribution, the so-called fixed-point permeability, in each nonlinear iteration step. This leads to uncoupled harmonics within these steps. As industrial applications, analyses of large power transformers are presented. The first example is the computation of the electromagnetic field of a single-phase transformer in the time domain, with the results compared to those obtained by traditional time-stepping techniques. In the second application, an advanced model of the same transformer is analyzed in the frequency domain by the harmonic balance method, with the effect of the presence of higher harmonics on the losses investigated. Finally, a third example tackles the case of direct current (DC) bias in the coils of a single-phase transformer.
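The fixed-point linearization idea can be sketched on a toy scalar "material law" h = nu(b)*b with nu(b) = 1 + b^2; the constant nu_fp and all numbers below are invented for illustration (the paper works with 3D edge/nodal finite element systems). Because nu_fp is time-independent, every periodic sample, and hence every harmonic, is updated independently within an iteration step, which is the decoupling the abstract describes.

```python
import math

# Toy sketch of fixed-point linearization for a periodic nonlinear
# problem.  nu(b) = 1 + b^2 and nu_fp = 2.5 are illustrative assumptions.

def nu(b):
    return 1.0 + b * b

def fixed_point_solve(h_samples, nu_fp=2.5, iters=200):
    """Solve h = nu(b)*b at each periodic time sample using one fixed,
    time-independent nu_fp, so the samples (harmonics) stay decoupled."""
    b = [0.0] * len(h_samples)
    for _ in range(iters):
        # each sample is updated independently: no coupling within a step
        b = [bk + (hk - nu(bk) * bk) / nu_fp
             for bk, hk in zip(b, h_samples)]
    return b

n = 16
h = [math.cos(2 * math.pi * k / n) for k in range(n)]
b = fixed_point_solve(h)
res = max(abs(hk - nu(bk) * bk) for bk, hk in zip(b, h))
```

With nu_fp chosen at least half the maximum slope of nu(b)*b over the relevant range, the iteration is a contraction and the residual `res` tends to zero geometrically.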
A study of different modeling choices for simulating platelets within the immersed boundary method
The Immersed Boundary (IB) method is a widely used numerical methodology for the simulation of fluid-structure interaction problems. The IB method utilizes an Eulerian discretization for the fluid equations of motion while maintaining a Lagrangian representation of structural objects. Operators are defined for transmitting information (forces and velocities) between these two representations. Most IB simulations represent their structures with piecewise linear approximations and utilize Hookean spring models to approximate structural forces. Our specific motivation is the modeling of platelets in hemodynamic flows. In this paper, we study two alternative representations - radial basis functions (RBFs) and Fourier-based (trigonometric polynomials and spherical harmonics) representations - for the modeling of platelets in two and three dimensions within the IB framework, and compare our results with the traditional piecewise linear approximation methodology. For different representative shapes, we examine the geometric modeling errors (position and normal vectors), force computation errors, and computational cost, and provide an engineering trade-off strategy for when and why one might elect to employ these different representations.
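The transmission operators mentioned above (spreading Lagrangian forces to the grid and interpolating grid velocities back) are conventionally built from a regularized delta function. The 1D sketch below uses Peskin's standard 4-point kernel; the grid size and point position are arbitrary choices for the example, independent of the paper's platelet models.

```python
import math

# Minimal 1D sketch of the IB spread/interpolate operators using
# Peskin's 4-point regularized delta function.

def phi(r):
    """Peskin 4-point discrete delta kernel (support |r| < 2)."""
    r = abs(r)
    if r < 1.0:
        return (3.0 - 2.0 * r + math.sqrt(1.0 + 4.0 * r - 4.0 * r * r)) / 8.0
    if r < 2.0:
        return (5.0 - 2.0 * r - math.sqrt(-7.0 + 12.0 * r - 4.0 * r * r)) / 8.0
    return 0.0

def spread(X, F, n, h):
    """Spread a Lagrangian point force F at position X onto an n-cell grid."""
    return [F * phi((j * h - X) / h) / h for j in range(n)]

def interpolate(u, X, h):
    """Interpolate a grid field u to the Lagrangian point X."""
    return sum(u[j] * phi((j * h - X) / h) for j in range(len(u)))

n, h = 64, 1.0 / 64
f = spread(X=0.37, F=1.0, n=n, h=h)
total = sum(fj * h for fj in f)   # discrete integral of the spread force
```

The kernel's partition-of-unity property guarantees that the discrete integral of the spread force equals the original point force, and that a constant grid field interpolates back unchanged.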
Estimator reduction and convergence of adaptive BEM
A posteriori error estimation and related adaptive mesh-refining algorithms have proven to be powerful tools in modern scientific computing. In contrast to the situation for adaptive finite element methods, however, the convergence of adaptive boundary element schemes is widely open. We propose a relaxed notion of convergence for adaptive boundary element schemes. Instead of asking for convergence of the error to zero, we only aim to prove estimator convergence, in the sense that the adaptive algorithm drives the underlying error estimator to zero. We observe that certain error estimators satisfy an estimator reduction property, which is sufficient for estimator convergence. The elementary analysis is based only on Dörfler marking and inverse estimates, not on reliability and efficiency of the error estimator at hand. In particular, our approach gives a first mathematical justification for the proposed steering of anisotropic mesh refinement, which is mandatory for optimal convergence behavior in 3D boundary element computations.
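The Dörfler marking step underlying the analysis can be sketched directly: given squared local error indicators, mark a smallest set of elements that carries a fixed fraction theta of the total estimator. The indicator values below are invented for illustration.

```python
# Sketch of Doerfler (bulk) marking: choose a minimal set M with
#   sum_{T in M} eta_T^2 >= theta * sum_T eta_T^2.

def doerfler_mark(indicators, theta=0.5):
    """Return indices of a minimal-cardinality marked set (greedy:
    take elements in decreasing order of their indicator)."""
    order = sorted(range(len(indicators)), key=lambda i: -indicators[i])
    total = sum(indicators)
    marked, acc = [], 0.0
    for i in order:
        marked.append(i)
        acc += indicators[i]
        if acc >= theta * total:
            break
    return marked

eta2 = [0.4, 0.05, 0.3, 0.1, 0.15]   # squared local error indicators (toy)
m = doerfler_mark(eta2, theta=0.6)
```

Greedily taking the largest indicators first yields a marked set of minimal cardinality, which is the usual realization of the marking criterion in adaptive codes.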
Convergence of adaptive BEM for some mixed boundary value problem
For a boundary integral formulation of the 2D Laplace equation with mixed boundary conditions, we consider an adaptive Galerkin BEM based on an [Formula: see text]-type error estimator. We include the resolution of the Dirichlet, Neumann, and volume data in the adaptive algorithm. In particular, an implementation of the developed algorithm only has to deal with discrete integral operators. We prove that the proposed adaptive scheme leads to a sequence of discrete solutions for which the corresponding error estimators tend to zero. Under a saturation assumption for the non-perturbed problem, which is observed empirically, the sequence of discrete solutions thus converges to the exact solution in the energy norm.
Accuracy and run-time comparison for different potential approaches and iterative solvers in finite element method based EEG source analysis
Accuracy and run time play an important role in medical diagnostics and research as well as in the field of neuroscience. In electroencephalography (EEG) source reconstruction, a current distribution in the human brain is reconstructed noninvasively from measured potentials at the head surface (the EEG inverse problem). Numerical modeling techniques are used to simulate head surface potentials for dipolar current sources in the human cortex, the so-called EEG forward problem. In this paper, the efficiency of algebraic multigrid (AMG), incomplete Cholesky (IC), and Jacobi preconditioners for the conjugate gradient (CG) method is compared for iteratively solving the finite element (FE) method based EEG forward problem. The interplay of the three solvers with a full subtraction approach and two direct potential approaches, the Venant and the partial integration method, for the treatment of the dipole singularity is examined. The examination is performed in a four-compartment sphere model with an anisotropic skull layer, where quasi-analytical solutions allow for an exact quantification of computational speed versus numerical error. Specifically tuned constrained Delaunay tetrahedralization (CDT) FE meshes lead to high accuracies for both the full subtraction and the direct potential approaches. The best accuracies are achieved by the full subtraction approach if the homogeneity condition is fulfilled. It is shown that the AMG-CG achieves an order of magnitude higher computational speed than CG with the standard preconditioners, with a gain factor that increases with decreasing mesh size. Our results should broaden the application of accurate and fast high-resolution FE volume conductor modeling in routine source analysis.
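Of the three preconditioners compared, the Jacobi (diagonal) one is simple enough to sketch. The snippet below runs preconditioned CG on a small invented SPD system, not an actual FE discretization of the EEG forward problem; it only shows where the preconditioner enters the iteration.

```python
# Sketch of CG with a Jacobi (diagonal) preconditioner on a toy SPD system.

def pcg(A, b, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradients with M = diag(A)."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                                     # residual b - A x, x = 0
    minv = [1.0 / A[i][i] for i in range(n)]     # Jacobi preconditioner M^-1
    z = [minv[i] * r[i] for i in range(n)]       # preconditioned residual
    p = z[:]
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if max(abs(ri) for ri in r) < tol:
            break
        z = [minv[i] * r[i] for i in range(n)]
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        beta = rz_new / rz
        rz = rz_new
        p = [zi + beta * pi for zi, pi in zip(z, p)]
    return x

# small symmetric positive definite test system (invented)
A = [[4.0, -1.0, 0.0],
     [-1.0, 4.0, -1.0],
     [0.0, -1.0, 4.0]]
b = [1.0, 2.0, 3.0]
x = pcg(A, b)
```

Swapping `minv` for an IC or AMG application is exactly where the stronger preconditioners of the paper would plug in; the surrounding iteration is unchanged.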