Where: Kirwan Hall 3206

Speaker: Brittany Froese Hamfeldt (New Jersey Institute of Technology) - https://web.njit.edu/~bdfroese/

Abstract: It is well known that the quadratic-cost optimal transportation problem is formally equivalent to the Monge-Ampère equation, a fully nonlinear elliptic PDE. Instead of a traditional boundary condition, the PDE is equipped with a global constraint on the solution gradient, which constrains the transport of mass. Recently, several numerical methods have been proposed for this problem, but no convergence proofs are available. Viscosity solutions have become a powerful tool for analyzing methods for fully nonlinear elliptic equations. However, existing convergence frameworks for viscosity solutions are not valid for this problem. We introduce an alternative PDE that couples the usual Monge-Ampère equation to a Hamilton-Jacobi equation that restricts the transportation of mass. Using this reformulation, we develop a framework for proving convergence of a large class of approximation schemes for the optimal transport problem. We describe several examples of convergent schemes, as well as possible extensions to more general optimal transportation problems.
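For readers unfamiliar with the equivalence mentioned in the abstract, the standard formal statement (a sketch in our notation, not taken from the talk) is: if u is a convex potential whose gradient map transports a density f on a domain X to a density g on a domain Y, the change-of-variables formula gives

```latex
\det D^2 u(x) \;=\; \frac{f(x)}{g(\nabla u(x))}, \qquad x \in X,
```

together with the second-type boundary condition \(\nabla u(X) \subseteq Y\), which is the global gradient constraint on the transport of mass referred to above.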

Where: Kirwan Hall 3206

Speaker: Akwum Onwunta (Computer Science Department, University of Maryland, College Park)

Abstract: Optimization problems constrained by deterministic steady-state partial differential equations (PDEs) are computationally challenging. This is even more so if the constraints are deterministic unsteady PDEs since one would then need to solve a system of PDEs coupled globally in time and space, and time-stepping methods quickly reach their limitations due to the enormous demand for storage. Yet more challenging are problems constrained by unsteady PDEs involving (countably many) parametric or uncertain inputs. A viable solution approach to optimization problems with stochastic constraints employs the spectral stochastic Galerkin finite element method (SGFEM). However, the SGFEM often leads to the so-called curse of dimensionality, in the sense that it results in prohibitively high-dimensional linear systems with tensor product structure. Moreover, a typical model for an optimal control problem with stochastic inputs (OCPS) will usually be used for the quantification of the statistics of the system response – a task that could in turn result in additional enormous computational expense. In this talk, we consider two prototypical model OCPS and discretize them with SGFEM. We exploit the underlying mathematical structure of the discretized systems at the heart of the optimization routine to derive and analyze low-rank iterative solvers and robust block-diagonal preconditioners for solving the resulting stochastic Galerkin systems. The developed solvers are efficient in the reduction of temporal and storage requirements of the high-dimensional linear systems. Finally, we illustrate the effectiveness of our solvers with numerical experiments.
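As a toy illustration of the tensor-product structure such stochastic Galerkin discretizations produce (our own minimal example, not the speaker's solvers), a Kronecker-structured system (G ⊗ K)x = b can be solved through its small factors, without ever forming the large matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
n_s, n_x = 4, 6                       # stochastic / spatial dimensions (tiny demo)
G = rng.standard_normal((n_s, n_s)); G = G @ G.T + n_s * np.eye(n_s)  # SPD factor
K = rng.standard_normal((n_x, n_x)); K = K @ K.T + n_x * np.eye(n_x)  # SPD factor
B = rng.standard_normal((n_x, n_s))   # right-hand side in matricized form

# Naive approach: form the (n_s*n_x) x (n_s*n_x) Kronecker matrix explicitly.
A = np.kron(G, K)
x_naive = np.linalg.solve(A, B.flatten(order="F"))

# Structure-exploiting approach: with column-major vec,
# (G ⊗ K) vec(X) = vec(B)  is equivalent to the matrix equation  K X G^T = B,
# which costs only small dense solves.
C = np.linalg.solve(K, B)             # C = K^{-1} B
X = np.linalg.solve(G, C.T).T         # X = C G^{-T}
```

The same vec/matricization identity underlies low-rank Krylov methods: iterates are stored as low-rank factors of X rather than long vectors, which is what reduces the storage burden described in the abstract.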

Where: Kirwan Hall 3206

Speaker: Lloyd N. Trefethen (Oxford University Mathematical Institute) - https://people.maths.ox.ac.uk/trefethen/

Abstract: What is a random function? What is noise? The standard answers are nonsmooth, defined pointwise via the Wiener process and Brownian motion. In the Chebfun project, we have found it more natural to work with smooth random functions defined by finite Fourier series with random coefficients. The length of the series is determined by a wavelength parameter lambda. Integrals give smooth random walks, which approach Brownian paths as lambda shrinks to 0, and smooth random ODEs, which approach stochastic DEs of the Stratonovich variety. Numerical explorations become very easy in this framework. There are plenty of conceptual challenges in this subject, starting with the fact that white noise has infinite amplitude and infinite energy, a paradox that goes back in two different ways to Einstein in 1905.
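A minimal sketch of this construction (assuming a plausible normalization; Chebfun's `randnfun` may scale differently): a real trigonometric polynomial on [0,1] with i.i.d. Gaussian coefficients and highest mode set by the wavelength parameter lambda, whose antiderivative is a smooth random walk.

```python
import numpy as np

rng = np.random.default_rng(1)

def smooth_random_function(lam, n_grid=1000):
    """Random real trig polynomial on [0,1] with minimal wavelength ~ lam.

    The 1/sqrt(2m+1) scaling (one reasonable choice, not necessarily
    Chebfun's exact normalization) keeps Var f(x) independent of lam.
    """
    m = int(round(1.0 / lam))            # highest retained Fourier mode
    x = np.linspace(0.0, 1.0, n_grid)
    a = rng.standard_normal(m + 1)       # cosine coefficients (incl. constant)
    b = rng.standard_normal(m + 1)       # sine coefficients
    f = np.zeros_like(x)
    for k in range(m + 1):
        f += a[k] * np.cos(2 * np.pi * k * x) + b[k] * np.sin(2 * np.pi * k * x)
    return x, f / np.sqrt(2 * m + 1)

# A smooth random walk: the antiderivative of f, here via the trapezoid rule.
x, f = smooth_random_function(lam=0.05)
walk = np.concatenate([[0.0], np.cumsum((f[1:] + f[:-1]) / 2 * np.diff(x))])
```

As lambda shrinks, such walks look increasingly like Brownian paths, which is the limiting behavior described in the abstract.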

Where: Kirwan Hall 3206

Speaker: Lloyd N. Trefethen (Oxford University Mathematical Institute) - https://people.maths.ox.ac.uk/trefethen/

Abstract: The hypercube is the standard domain for computation in higher dimensions. We describe two respects in which the anisotropy of this domain has practical consequences. The first is a matter well known to experts: the importance of axis-alignment in low-rank compression of multivariate functions. Rotating a function by a few degrees in two or more dimensions may change its numerical rank completely. The second is new. The standard notion of degree of a multivariate polynomial, total degree, is isotropic -- invariant under rotation. The hypercube, however, is highly anisotropic. We present a theorem showing that as a consequence, the convergence rate of multivariate polynomial approximations in a hypercube is determined not by the total degree but by the Euclidean degree, defined in terms of the 2-norm, rather than the 1-norm, of the vector of exponents.
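The distinction can be stated concretely. For a monomial with exponent vector k, the total degree is the 1-norm of k and the Euclidean degree is its 2-norm (a direct transcription of the definitions in the abstract):

```python
import numpy as np

def total_degree(exponents):
    """Total degree of a monomial: the 1-norm of its exponent vector."""
    return sum(exponents)

def euclidean_degree(exponents):
    """Euclidean degree: the 2-norm of the exponent vector."""
    return float(np.sqrt(sum(k * k for k in exponents)))

# In 2D, the monomial x^4 * y^4 has total degree 8 but Euclidean degree
# sqrt(32) ~ 5.66, so on a square it behaves like a polynomial of much
# lower degree than the total degree suggests.
print(total_degree((4, 4)))        # 8
print(euclidean_degree((4, 4)))
```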

Where: Kirwan Hall 3206

Speaker: Marcos Vanella (George Washington University and National Institute of Standards and Technology) - https://www.nist.gov/people/marcos-vanella

Abstract: The Fire Dynamics Simulator (FDS) is simulation software developed at the Fire Research Division of the National Institute of Standards and Technology. Over the years it has become one of the industry's preferred tools for simulating fire scenarios in the design of fire protection systems for buildings and civil structures, in forensic studies, and in wildland fires, among other applications. At its core, FDS is a Large Eddy Simulation (LES) solver for the low-Mach approximation of thermally driven buoyant flows, employing standard discretization schemes on structured meshes. Traditionally, grid-conforming “lego block” geometries are added to describe obstacles within the simulation domain. This talk will discuss our work implementing in FDS the capability to simulate fire scenarios around more complex geometries, defined by surface triangulations that do not conform to the fluid grid. Model equations, numerical discretization, and implementation in a parallel computing setting using continuous integration will be discussed. Ongoing verification and validation work will be presented. Directions and challenges in this particular development, as well as in other FDS simulation areas, will be considered.

Where: Kirwan Hall 3206

Speaker: Matt Landreman (IREAP - University of Maryland) - https://terpconnect.umd.edu/~mattland/

Where: Kirwan Hall 3206

Speaker: Alfio Quarteroni (Politecnico di Milano, Milan, Italy and EPFL, Lausanne, Switzerland) - https://cmcs.epfl.ch/people/quarteroni

Abstract: Interface Control Domain Decomposition (ICDD) is a method designed to address partial differential equations (PDEs) in computational regions split into overlapping subdomains. The “interface controls” are unknown functions used as Dirichlet boundary data on the subdomain interfaces; they are obtained by solving an optimal control problem with boundary observation. When the ICDD method is applied to classical (homogeneous) elliptic equations, it can be regarded as (yet) another domain decomposition method for solving elliptic problems. What makes it interesting, however, are its grid-independent convergence rate, its robustness with respect to possible variations of the operator coefficients, and the possibility of using non-matching grids and non-conforming discretizations inside different subdomains.

ICDD methods become especially attractive when applied to heterogeneous PDEs (like those occurring in multi-physics problems). A notable example is provided by the coupling between the (Navier-)Stokes and Darcy equations, with application to surface-subsurface flows, or by the coupling of blood flow in large arteries with the fluid flow in the arterial wall. In this case, the minimization problem set on the interface control variables, which is enforced by the ICDD method, can in principle ensure the correct matching between the two “different physics” without requiring the a priori determination of the transmission conditions at their interface.
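In schematic form (our notation, not necessarily the authors'): with overlapping subdomains Ω₁, Ω₂, interfaces Γ₁, Γ₂, and subdomain operators Lᵢ, the interface controls λ₁, λ₂ act as Dirichlet data and are chosen to minimize the mismatch of the subdomain solutions observed on the interfaces:

```latex
\min_{\lambda_1,\lambda_2}\;
  \tfrac12\,\|u_1 - u_2\|^2_{L^2(\Gamma_1)}
+ \tfrac12\,\|u_1 - u_2\|^2_{L^2(\Gamma_2)}
\qquad \text{s.t.}\quad
\begin{cases}
  L_i u_i = f & \text{in } \Omega_i,\\
  u_i = \lambda_i & \text{on } \Gamma_i,\\
  u_i = g & \text{on } \partial\Omega_i \setminus \Gamma_i,
\end{cases}
\quad i = 1,2.
```

In the heterogeneous case the two operators L₁, L₂ differ (e.g. Stokes and Darcy), and the minimization takes the place of explicit transmission conditions.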

Where: Kirwan Hall 3206

Speaker: Nicholas J. Higham (School of Mathematics, The University of Manchester) - http://www.maths.manchester.ac.uk/~higham/

Abstract: There is a growing availability of multiprecision arithmetic: floating point arithmetic in multiple, possibly arbitrary, precisions. Demand in applications includes both low precision (deep learning and climate modelling) and high precision (long-term simulations and solving very ill-conditioned problems). We discuss

- Half-precision arithmetic: its characteristics, availability, attractions, pitfalls, and rounding error analysis implications.

- Quadruple precision arithmetic: the need for it in applications, its cost, and how to exploit it.

As an example of the use of multiple precisions we discuss iterative refinement for solving linear systems. We explain the benefits of combining three different precisions of arithmetic (say, half, single, and double) and show how a new form of preconditioned iterative refinement can be used to solve very ill-conditioned sparse linear systems to high accuracy.
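The idea can be sketched in a few lines. Below is a two-precision simplification (float32 for the factorization, float64 for residuals and updates, since NumPy's LAPACK routines do not run in half precision), illustrating the structure of iterative refinement rather than the preconditioned variant discussed in the talk:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix
b = rng.standard_normal(n)

# "Factorize" (here: solve) in low precision -- cheap but inaccurate.
A_low = A.astype(np.float32)
x = np.linalg.solve(A_low, b.astype(np.float32)).astype(np.float64)

for _ in range(5):
    r = b - A @ x                                  # residual in high precision
    d = np.linalg.solve(A_low, r.astype(np.float32)).astype(np.float64)
    x = x + d                                      # correction in high precision

residual = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
```

Each pass reuses the cheap low-precision factorization; accuracy comes from computing the residual in the higher precision, so the corrected solution converges to (roughly) double-precision accuracy.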

Where: Kirwan Hall 3206

Speaker: Daniele Boffi (Dipartimento di Matematica, University of Pavia) - https://www-dimat.unipv.it/boffi/

Abstract: We discuss a distributed Lagrange multiplier formulation of the Finite Element Immersed Boundary Method for the numerical approximation of the interaction between fluids and solids. The discretization of the problem leads to a mixed problem for which a rigorous stability analysis is provided. Optimal convergence estimates are proved for the finite element space discretization. The model, originally introduced for the coupling of incompressible fluids and solids, can be extended to include the simulation of compressible structures.
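Schematically (in our notation, not the authors' exact formulation), the distributed Lagrange multiplier λ weakly enforces the kinematic condition that the immersed solid, with position X(s, t), moves with the fluid velocity u:

```latex
\frac{\partial X}{\partial t}(s,t) = u\bigl(X(s,t),\,t\bigr)
\qquad\leadsto\qquad
\Bigl\langle \mu,\; \frac{\partial X}{\partial t} - u(X(\cdot,t),\,t) \Bigr\rangle = 0
\quad \text{for all admissible } \mu,
```

with λ also appearing in the fluid momentum equation as the force the solid exerts on the fluid. Treating the multiplier as an unknown is what gives the saddle-point (mixed) structure whose stability the abstract refers to.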