Abstract: It is well known that the quadratic-cost optimal transportation problem is formally equivalent to the Monge-Ampere equation, a fully nonlinear elliptic PDE. Instead of a traditional boundary condition, the PDE is equipped with a global constraint on the solution gradient, which constrains the transport of mass. Recently, several numerical methods have been proposed for this problem, but no convergence proofs are available. Viscosity solutions have become a powerful tool for analyzing methods for fully nonlinear elliptic equations. However, existing convergence frameworks for viscosity solutions are not valid for this problem. We introduce an alternative PDE that couples the usual Monge-Ampere equation to a Hamilton-Jacobi equation that restricts the transportation of mass. Using this reformulation, we develop a framework for
proving convergence of a large class of approximation schemes for the optimal transport problem. We describe several examples of convergent schemes, as well as possible extensions to more general optimal transportation problems.
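For context (standard form, notation ours rather than the talk's), the quadratic-cost transport of a density f on a set X onto a density g on a set Y corresponds formally to the Monge-Ampere equation for a convex potential u,

```latex
\det D^2 u(x) = \frac{f(x)}{g(\nabla u(x))}, \qquad x \in X,
\qquad \nabla u(X) \subseteq \overline{Y},
```

where the gradient constraint on the right replaces a traditional boundary condition and is exactly the global restriction on the transport of mass referred to above.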
Abstract: Optimization problems constrained by deterministic steady-state partial differential equations (PDEs) are computationally challenging. This is even more so if the constraints are deterministic unsteady PDEs since one would then need to solve a system of PDEs coupled globally in time and space, and time-stepping methods quickly reach their limitations due to the enormous demand for storage. Yet more challenging are problems constrained by unsteady PDEs involving (countably many) parametric or uncertain inputs. A viable solution approach to optimization problems with stochastic constraints employs the spectral stochastic Galerkin finite element method (SGFEM). However, the SGFEM often leads to the so-called curse of dimensionality, in the sense that it results in prohibitively high-dimensional linear systems with tensor product structure. Moreover, a typical model for an optimal control problem with stochastic inputs (OCPS) will usually be used for the quantification of the statistics of the system response, a task that could in turn result in additional enormous computational expense. In this talk, we consider two prototypical model OCPS and discretize them with SGFEM. We exploit the underlying mathematical structure of the discretized systems at the heart of the optimization routine to derive and analyze low-rank iterative solvers and robust block-diagonal preconditioners for solving the resulting stochastic Galerkin systems. The developed solvers are efficient in the reduction of temporal and storage requirements of the high-dimensional linear systems. Finally, we illustrate the effectiveness of our solvers with numerical experiments.
Abstract: What is a random function? What is noise? The standard answers are nonsmooth, defined pointwise via the Wiener process and Brownian motion. In the Chebfun project, we have found it more natural to work with smooth random functions defined by finite Fourier series with random coefficients. The length of the series is determined by a wavelength parameter lambda. Integrals give smooth random walks, which approach Brownian paths as lambda shrinks to 0, and smooth random ODEs, which approach stochastic differential equations of the Stratonovich variety. Numerical explorations become very easy in this framework. There are plenty of conceptual challenges in this subject, starting with the fact that white noise has infinite amplitude and infinite energy, a paradox that goes back in two different ways to Einstein in 1905.
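The flavor of the idea can be sketched in a few lines of NumPy (this is an illustration in our own notation and normalization, not Chebfun's actual construction): a periodic smooth random function on [0, 1] is a finite Fourier series with independent Gaussian coefficients, truncated at wavenumbers of order 1/lambda.

```python
import numpy as np

def smooth_random_function(lam, n_points=1000, rng=None):
    """Sample a periodic 'smooth random function' on [0, 1]:
    a finite Fourier series with i.i.d. Gaussian coefficients,
    truncated at wavenumber ~ 1/lam (illustrative normalization)."""
    rng = np.random.default_rng(rng)
    m = int(np.ceil(1.0 / lam))          # highest retained wavenumber
    x = np.linspace(0.0, 1.0, n_points)
    f = np.zeros_like(x)
    for k in range(1, m + 1):
        a, b = rng.standard_normal(2)    # random Fourier coefficients
        f += a * np.cos(2 * np.pi * k * x) + b * np.sin(2 * np.pi * k * x)
    return x, f / np.sqrt(m)             # scale so pointwise variance is O(1)

x, f = smooth_random_function(lam=0.01, rng=0)
```

Its indefinite integral is then a smooth random walk, which approaches a Brownian path as lam shrinks to 0.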
Abstract: The hypercube is the standard domain for computation in higher dimensions. We describe two respects in which the anisotropy of this domain has practical consequences. The first is a matter well known to experts: the importance of axis-alignment in low-rank compression of multivariate functions. Rotating a function by a few degrees in two or more dimensions may change its numerical rank completely. The second is new. The standard notion of degree of a multivariate polynomial, total degree, is isotropic -- invariant under rotation. The hypercube, however, is highly anisotropic. We present a theorem showing that as a consequence, the convergence rate of multivariate polynomial approximations in a hypercube is determined not by the total degree but by the Euclidean degree, defined in terms of not the 1-norm but the 2-norm of the vector of exponents.
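Spelled out for concreteness (our notation): for a monomial with exponent vector k, the two notions of degree are

```latex
\deg_{\mathrm{total}}(x_1^{k_1} \cdots x_s^{k_s}) = \|k\|_1 = k_1 + \cdots + k_s,
\qquad
\deg_{\mathrm{Euclid}}(x_1^{k_1} \cdots x_s^{k_s}) = \|k\|_2 = \bigl(k_1^2 + \cdots + k_s^2\bigr)^{1/2},
```

and the degree of a polynomial is the maximum over its monomials. For example, in two dimensions the monomial x^3 y^3 has total degree 6 but Euclidean degree only 3*sqrt(2), about 4.24.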
Abstract: The Fire Dynamics Simulator (FDS) is a simulation software package developed at the Fire Research Division of the National Institute of Standards and Technology. Over the years it has become one of the industry's preferred tools for the simulation of fire scenarios in the design of fire protection systems for buildings and civil structures, in forensic studies, and in wildland fires, among others. At its core, FDS is a Large Eddy Simulation (LES) solver for the low Mach approximation of thermally driven buoyant flows, employing standard discretization schemes on structured meshes. Traditionally, grid-conforming "lego block" geometries are added to describe obstacles within the simulation domain. This talk will discuss our work implementing within FDS the capability to simulate fire scenarios around more complex geometries, defined by surface triangulations that don't conform to the fluid grid. Model equations, numerical discretization, and implementation in a parallel computing setting using continuous integration will be discussed. Ongoing verification and validation work will be presented. Directions and challenges in this particular development, as well as in other FDS simulation areas, will be considered.
Abstract: A stellarator is a magnetic field configuration for confining charged particles, and stellarators such as the new W7-X experiment have successfully confined plasmas at temperatures exceeding those of the sun's core, relevant for fusion energy. Stellarator design and modeling present numerous computational science challenges. For instance, numerical shape optimization is critical for designing the shaping of the magnetic field and the electromagnetic coils that produce it. Shape optimization concepts can also inform the tolerances to which stellarators must be built. Furthermore, modeling of the plasma confined in a stellarator requires the numerical solution of high-dimensional advection-dominated kinetic equations. In this talk, several such topics will be presented in which computational advances could contribute significantly to future stellarator experiments.
Abstract: Four years ago, an asteroid with a 20 meter diameter exploded in the atmosphere over Chelyabinsk, Russia, causing injury and damage 20 kilometers away but no deaths. We are studying the question of what would occur if such an airburst happened over the ocean. Would the blast wave generate a tsunami that could threaten coastal cities far away?
We begin with several simulations of tsunami propagation from asteroid-generated airbursts under a range of conditions. We use the open source software package GeoClaw, which has been successful in modeling earthquake-generated tsunamis. GeoClaw uses a basic model of ocean waves called the shallow water equations (SWE). We then present a simplified one dimensional model problem with an explicit solution in closed form to understand some of the unexpected results.
The SWE model however may not be accurate enough for airburst-generated tsunamis, which have shorter length and time scales than earthquake-generated waves. We extend our model problem to the linearized Euler equations of fluid mechanics to explore the effects of wave dispersion and water compressibility. We end with a discussion of suitable models for airburst-generated tsunamis, and speculate as to appropriate tools to study the more serious case of an asteroid that impacts the water.
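For reference (standard textbook form, not taken verbatim from the talk), the one-dimensional shallow water equations for water depth h(x, t) and depth-averaged velocity u(x, t) read

```latex
h_t + (hu)_x = 0, \qquad
(hu)_t + \Bigl(hu^2 + \tfrac{1}{2} g h^2\Bigr)_x = 0,
```

where g is the gravitational acceleration. Dispersion and compressibility, which the linearized Euler equations capture, are absent from this model.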
Abstract: We present different numerical techniques for solving electromagnetic scattering problems of perfect electric conductors and piecewise homogeneous dielectrics using integral equation methods. In the case of penetrable objects, we use a new constraint-free vector Helmholtz-like partial differential equation that is equivalent to the Maxwell equations. In the case of perfect electric conductors, we have developed a new formulation in terms of well-posed boundary value problems for the vector and scalar potential. In both cases, this approach leads naturally to well-conditioned Fredholm integral equations.
Abstract: Interface Control Domain Decomposition (ICDD) is a method designed to address partial differential equations (PDEs) in computational regions split into overlapping subdomains. The "interface controls" are unknown functions used as Dirichlet boundary data on the subdomain interfaces; they are obtained by solving an optimal control problem with boundary observation. When the ICDD method is applied to classical (homogeneous) elliptic equations, it can be regarded as (yet) another domain decomposition method to solve elliptic problems. What makes it interesting, however, is its grid-independent convergence rate, its robustness with respect to possible variations of the operator coefficients, and the possibility of using non-matching grids and non-conforming discretizations inside different subdomains.
ICDD methods become especially attractive when applied to solve heterogeneous PDEs (like those occurring in multi-physics problems). A notable example is provided by the coupling between (Navier-)Stokes and Darcy equations, with applications to surface-subsurface flows, or to the coupling of blood flow in large arteries with the fluid flow in the arterial wall. In this case, the minimization problem on the interface control variables that is enforced by the ICDD method can in principle ensure the correct matching between the two "different physics" without requiring the a priori determination of the transmission conditions at their interface.
Abstract: Surface finite element methods are commonly used to approximate solutions to the Laplace-Beltrami problem posed on surfaces. These methods generally employ two levels of approximation. One is the Galerkin approximation typical of finite element methods, and the other is a polynomial approximation of the surface on which the PDE is posed. Errors due to the latter approximation are called geometric errors. Geometric errors generally exhibit a consistent order of convergence with respect to the polynomial degree of the approximation in a wide variety of situations. In this talk we will focus on two recently discovered situations in which the geometric error has more subtle behavior. These are eigenvalue problems and a posteriori error estimation. In both of these cases, the particular way in which the discrete surface is constructed or represented in codes affects the order of the geometric error. Time permitting, we may also briefly discuss geometric errors in problems involving vector Laplace-type surface operators, such as the surface Stokes equations.
Abstract: There is a growing availability of multiprecision arithmetic: floating point arithmetic in multiple, possibly arbitrary, precisions. Applications demand both low precision (deep learning and climate modelling) and high precision (long-term simulations and the solution of very ill-conditioned problems). We discuss
- Half-precision arithmetic: its characteristics, availability, attractions, pitfalls, and rounding error analysis implications.
- Quadruple precision arithmetic: the need for it in applications, its cost, and how to exploit it.
As an example of the use of multiple precisions we discuss iterative refinement for solving linear systems. We explain the benefits of combining three different precisions of arithmetic (say, half, single, and double) and show how a new form of preconditioned iterative refinement can be used to solve very ill-conditioned sparse linear systems to high accuracy.
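The flavor of three-precision iterative refinement can be sketched as follows. This is a toy NumPy version of the general idea, not the preconditioned scheme of the talk: the half-precision factorization is simulated by rounding A to float16, corrections are applied in double, and residuals are computed in double precision.

```python
import numpy as np

def ir3(A, b, iters=10):
    """Toy three-precision iterative refinement:
    'factorize' (here: solve against) a half-precision copy of A,
    accumulate the solution in double, compute residuals in double."""
    A16 = A.astype(np.float16).astype(np.float32)   # simulated half-precision factor
    x = np.linalg.solve(A16, b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x                                   # residual in double precision
        d = np.linalg.solve(A16, r.astype(np.float32))  # cheap low-precision correction
        x = x + d.astype(np.float64)
    return x

rng = np.random.default_rng(0)
A = np.eye(5) + 0.1 * rng.standard_normal((5, 5))   # well-conditioned test matrix
x_true = rng.standard_normal(5)
b = A @ x_true
x = ir3(A, b)
```

For a well-conditioned matrix the correction loop recovers full double-precision accuracy even though every solve uses only the half-precision copy of A; the talk's point is that preconditioning extends this behavior to very ill-conditioned systems.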
Abstract: We discuss a distributed Lagrange multiplier formulation of the Finite Element Immersed Boundary Method for the numerical approximation of the interaction between fluids and solids. The discretization of the problem leads to a mixed problem for which a rigorous stability analysis is provided. Optimal convergence estimates are proved for the finite element space discretization. The model, originally introduced for the coupling of incompressible fluids and solids, can be extended to include the simulation of compressible structures.
Abstract: Trefftz methods can be used for the numerical discretization of problems of increasing complexity. In this presentation, we will consider angular approximations of transport equations (linear Boltzmann equations with stiff coefficients) and Friedrichs systems with stiff relaxation, often encountered in neutron propagation and transfer equations.
It appears that Trefftz methods provide a natural way to define numerical methods with genuine balance between the differential part and the relaxation part. These methods are naturally written in the context of Discontinuous Galerkin (DG) methods, and so are called Trefftz Discontinuous Galerkin (TDG) methods.
I will report on the construction of the method, numerical estimates of convergence (h-convergence), and its excellent behavior for boundary layers. In certain cases, the numerical accuracy of TDG simply outperforms that of more traditional polynomial-based DG.
Abstract: Inverse problems are pervasive in cyberinfrastructure research, especially in scientific discovery and decision-making for complex natural, engineered, and societal systems. They are perhaps the most popular mathematical approach for enabling predictive scientific simulations that integrate observational/experimental data, simulations, and/or models. For inverse problems that serve as a basis for design, control, discovery, and decision-making, their solutions must be equipped with a degree of confidence. In other words, we have to quantify the uncertainty in the solution due to observational noise, discretization errors, model inadequacy, etc.
We choose to quantify the uncertainty in the inverse solution using the Bayesian framework. This approach is appealing since it can incorporate most, if not all, uncertainties in a systematic manner. Unfortunately, inverse/UQ problems for practical complex systems pose three challenges simultaneously: the large-scale forward problem, the high-dimensional parameter space, and big data. This talk presents multifaceted, computationally efficient methods to tackle these challenges simultaneously. Rigorous theoretical results and extensive numerical results for various applications, including geophysical inversion and inverse electromagnetic wave scattering, will be presented.
Abstract: In this presentation, we propose and investigate a new approach to the numerical solution of diffusion problems in heterogeneous media with high-contrast inclusions. Instead of the classical variational formulation of the problem, we consider an equivalent weak saddle-point formulation with a set of small parameters. For the asymptotic expansion of the solution we derive a simple convergence condition and the underlying error estimates. We also investigate the problem in the framework of the P1 finite element method. The theoretical conclusions are confirmed by numerical results with a large number of regularly shaped inclusions.
Abstract: The Virtual Element Method (VEM) is a very recent technology, introduced in [Beirao da Veiga, Brezzi, Cangiani, Manzini, Marini, Russo, 2013, M3AS] for the discretization of partial differential equations, that has enjoyed considerable success in recent years. The VEM can be interpreted as a generalization of the Finite Element Method that allows the use of general polygonal and polyhedral meshes, while keeping the same coding complexity and allowing for arbitrary degree of accuracy. In addition to the possibility of handling general polytopal meshes, the flexibility of the above construction yields other interesting properties with respect to more standard Galerkin methods. For instance, the VEM makes it easy to build discrete spaces of arbitrary C^k regularity, or to satisfy exactly the divergence-free constraint for incompressible fluids.
The present talk is an introduction to the VEM, aiming to show the main ideas of the method. For simplicity we consider a simple elliptic model problem, but set ourselves in the more involved 3D setting. In the first part we introduce the adopted Virtual Element space and the associated degrees of freedom, first by addressing the faces of the polyhedra (i.e., polygons) and then building the space in the full volumes. We then describe the construction of the discrete bilinear form and the ensuing discretization of the problem. Furthermore, we show a set of theoretical and numerical results. In the final part, we will take a glance at a couple of more involved problems.
Abstract: Building on a recently proposed framework for the study of hp-adaptive discretizations of elliptic boundary-value problems with given forcing term, we investigate the hp-adaptive approximation of eigenvalue problems. We design an iterative procedure for computing a selected eigenvalue and its eigenspace, which at each step generates a near-optimal hp-mesh for the current level of accuracy; this mesh is then refined sufficiently to produce a new, enhanced approximation of the eigenspace. We identify conditions on the initial mesh and the operator coefficients under which the procedure yields approximations that converge at a geometric rate independent of any discretization parameter, while the number of degrees of freedom grows comparably to the optimal number for the attained accuracy. The error reduction is based on a Doerfler-type marking that uses a computable, p-robust equilibrated-flux estimator. For this estimator, we establish a saturation result which implies a contraction property for the eigenfunction error in the energy norm.
4176 Campus Drive - William E. Kirwan Hall
College Park, MD 20742-4015
P: 301.405.5047 | F: 301.314.0827