Aziz Lectures on Differential Equations and their Numerical Analysis

The Aziz Lectures were established by a generous gift from Prof. A. Kadir Aziz. The purpose of the lectures is to bring distinguished mathematicians to the University of Maryland, College Park, to give survey lectures on differential equations, their numerical analysis, and related areas.

Kadir Aziz, who died on March 25, 2016 at the age of 92, received his Ph.D. from the University of Maryland, College Park in 1957. He was on the faculty of Georgetown University from 1956 to 1967 and joined the faculty of the University of Maryland, Baltimore County (UMBC) in 1967, where he was Professor Emeritus of Mathematics and Statistics. Throughout his career Kadir Aziz was an active and highly respected member of the Numerical Analysis community at Maryland.

CV of Kadir Aziz

The Aziz lecture is given at 3:15pm in the Math Colloquium Room (MTH 3206).

Usually the speaker gives a related talk in the Applied Math Colloquium on the previous day at 3:30pm.

Aziz Lectures 2024

Both the Applied Math Colloquium (September 3 at 3:30pm) and the Aziz Lecture (September 4 at 3:15pm) will be broadcast via Zoom.

September 4, 2024

Multilevel approximation of Gaussian random fields
Christoph Schwab
ETH Zurich
Switzerland

Centered Gaussian random fields (GRFs) indexed by compacta, such as compact orientable manifolds M, are determined by their covariance operators. We consider the numerical analysis of sample-wise, compressive multi-level wavelet-Galerkin approximations of centered GRFs given as variational solutions to coloring operator equations driven by spatial white noise, whose pseudodifferential covariance operator is elliptic, self-adjoint, positive, and belongs to the Hörmander class. For pathwise approximations with p parameters, tapered covariance or precision matrices have O(p) nonzero entries, can be optimally diagonally preconditioned, and allow O(p) path simulation, covariance estimation and kriging of GRFs.

Joint work with Helmut Harbrecht (Uni Basel), Kristin Kirchner (TU Delft), and Lukas Herrmann (RICAM, Linz).

Aziz Lectures 2023

April 12, 2023 - 3:15pm

Some Mathematical Aspects of Deep Learning and Stochastic Gradient Descent
Lexing Ying
Stanford University

This talk concerns several mathematical aspects of deep learning and stochastic gradient descent. The first aspect is why deep neural networks trained with stochastic gradient descent often generalize. We will make a connection between the generalization and the stochastic stability of the stochastic gradient descent dynamics. The second aspect is to understand the training process of stochastic gradient descent. Here, we use simple mathematical examples to explain several key empirical observations, including the edge of stability, the exploration of flat minima, and learning rate decay. Based on joint work with Chao Ma.
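
As a point of reference for the stability discussion (an illustration added here, not material from the talk): already for plain gradient descent on the one-dimensional quadratic f(x) = a x^2 / 2, the update x ← (1 − ηa)x contracts exactly when the learning rate η stays below 2/a, the classical threshold against which "edge of stability" behavior is measured. A minimal sketch:

```python
def gradient_descent(a, lr, x0=1.0, steps=50):
    """Gradient descent on f(x) = 0.5 * a * x**2.
    The update x <- x - lr * a * x = (1 - lr * a) * x contracts
    exactly when |1 - lr * a| < 1, i.e. when lr < 2 / a."""
    x = x0
    for _ in range(steps):
        x -= lr * a * x
    return x

a = 10.0                        # curvature ("sharpness") of the loss
for lr in (0.05, 0.19, 0.21):   # well below, just below, and above the threshold 2/a = 0.2
    print(f"lr = {lr:.2f}: |x_50| = {abs(gradient_descent(a, lr)):.3e}")
```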

Aziz Lectures 2019

May 1, 2019

Operator Preconditioning
Ralf Hiptmair
ETH Zurich
Switzerland

This text is printed on a background of locally refined finite element meshes. For the fast iterative solution of finite element models on such meshes, preconditioning is indispensable. This is where operator preconditioning enters the stage: it offers a general all-purpose recipe for constructing preconditioners for discrete linear operators that have arisen from a Galerkin approach, in particular from finite element methods and boundary element methods. The key idea is to employ matching Galerkin discretizations of operators with complementary mapping properties. If these can be found, the resulting preconditioners will be robust with respect to the choice of the bases for trial and test spaces. As a consequence, in a finite element setting, they will still perform well even for high-resolution models.

Aziz Lectures 2018

September 12, 2018

Smooth random functions and smooth random ODEs
Lloyd N. Trefethen
Oxford University
United Kingdom

What is a random function? What is noise? The standard answers are nonsmooth, defined pointwise via the Wiener process and Brownian motion. In the Chebfun project, we have found it more natural to work with smooth random functions defined by finite Fourier series with random coefficients. The length of the series is determined by a wavelength parameter lambda. Integrals give smooth random walks, which approach Brownian paths as lambda shrinks to 0, and smooth random ODEs, which approach stochastic DEs of the Stratonovich variety. Numerical explorations become very easy in this framework. There are plenty of conceptual challenges in this subject, starting with the fact that white noise has infinite amplitude and infinite energy, a paradox that goes back in two different ways to Einstein in 1905.
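
The construction is easy to reproduce outside Chebfun. The sketch below is a minimal illustration (not the Chebfun implementation, and with an arbitrary normalization) of a smooth random function on [0, 1]: a finite Fourier series with independent standard normal coefficients, truncated so that the shortest retained wavelength is roughly lambda.

```python
import numpy as np

def smooth_random_function(lam, npts=2000, seed=None):
    """A smooth random function on [0, 1]: a real finite Fourier series with
    independent N(0, 1) coefficients, truncated so that the shortest wavelength
    present is roughly lam. The 1/sqrt(m + 1) scaling is one simple
    normalization choice, not necessarily Chebfun's."""
    rng = np.random.default_rng(seed)
    m = max(1, int(np.ceil(1.0 / lam)))      # highest retained wavenumber
    x = np.linspace(0.0, 1.0, npts)
    k = np.arange(m + 1)
    a = rng.standard_normal(m + 1)           # cosine coefficients
    b = rng.standard_normal(m + 1)           # sine coefficients
    phases = 2 * np.pi * np.outer(k, x)
    f = (a @ np.cos(phases) + b @ np.sin(phases)) / np.sqrt(m + 1)
    return x, f

x, f = smooth_random_function(lam=0.05, seed=0)
print(f.shape, round(f.std(), 3))   # a smooth path; its integral gives a smooth random walk
```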

May 4, 2018

Stochastic Nonlinear Schrödinger Equations
Arnaud Debussche
École Normale Supérieure de Rennes 

The nonlinear Schrödinger equation is a prototype model for the propagation of waves in dispersive media. It arises in several models, and noise appears naturally; it may represent the noise due to amplifiers or random dispersion in the fiber. In this talk I will present some aspects of well-posedness and of the influence of noise on blow-up phenomena for the stochastic nonlinear Schrödinger equation.

February 7, 2018

Mathematical theory and computational approaches for modern materials science
Claude Le Bris
Ecole des Ponts and Inria

The talk, intended for a general audience, will survey some challenging mathematical and numerical problems in contemporary materials science. Questions such as the passage from the microscale to the macroscale, the insertion of uncertainties, defects and heterogeneities in the models, will be examined. We will discuss the interesting issues raised for mathematical analysis (theory of partial differential equations, stochastic processes, homogenization theory) and for numerical analysis (finite element methods, discrete to continuum, Monte Carlo methods, etc.).

Aziz Lectures 2016

December 9, 2016

Modeling traffic flow on a network of roads
Prof. Alberto Bressan
Department of Mathematics
Penn State University

The talk will present various PDE models of traffic flow on a network of roads. These comprise a set of conservation laws, determining the density of traffic on each road, together with suitable boundary conditions describing the dynamics at intersections. While conservation laws determine the evolution of traffic from given initial data, actual traffic patterns are best studied from the point of view of optimal decision problems, where each driver chooses the departure time and the route taken to reach their destination. Given a cost functional depending on the departure and arrival times, a relevant mathematical problem is to determine (i) global optima, minimizing the sum of all costs to all drivers, and (ii) Nash equilibria, where no driver can lower his own cost by changing departure time or route to destination. Several results and open problems will be discussed.
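
On a single road, the simplest conservation-law model of the kind described is the Lighthill–Whitham–Richards equation rho_t + (rho(1 − rho))_x = 0 for the normalized density rho. The sketch below is an illustration added here (not material from the talk): it advances that equation with a first-order Godunov scheme, whose interface flux has a closed form because the flux is concave.

```python
import numpy as np

def flux(rho):
    """LWR flux with normalized speed and jam density: f(rho) = rho * (1 - rho)."""
    return rho * (1.0 - rho)

def godunov_flux(rl, rr, sigma=0.5):
    """Exact Godunov interface flux for the concave LWR flux (maximum at rho = sigma)."""
    fl, fr = flux(rl), flux(rr)
    return np.where(rl <= rr, np.minimum(fl, fr),
                    np.where((rr <= sigma) & (sigma <= rl), flux(sigma),
                             np.maximum(fl, fr)))

def lwr_godunov(rho0, dx, dt, nsteps):
    """March rho_t + f(rho)_x = 0 with a first-order Godunov scheme
    and outflow (copy) boundary conditions."""
    rho = rho0.copy()
    for _ in range(nsteps):
        ext = np.concatenate(([rho[0]], rho, [rho[-1]]))   # ghost cells
        F = godunov_flux(ext[:-1], ext[1:])                # fluxes at all interfaces
        rho = rho - dt / dx * (F[1:] - F[:-1])
    return rho

# Example: a platoon of dense traffic (rho = 0.9) inside light traffic (rho = 0.1).
x = np.linspace(0.0, 1.0, 200)
rho0 = np.where((x > 0.3) & (x < 0.5), 0.9, 0.1)
rho = lwr_godunov(rho0, dx=x[1] - x[0], dt=0.4 * (x[1] - x[0]), nsteps=100)
print(round(rho.min(), 3), round(rho.max(), 3))
```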

May 4, 2016

Quantum Dots and Dislocations: Dynamics of Materials
Prof. Irene Fonseca
Department of Mathematical Sciences
Carnegie Mellon University

The formation and assembly patterns of quantum dots have a significant impact on the optoelectronic properties of semiconductors. We will address short time existence for a surface diffusion evolution equation with curvature regularization in the context of epitaxially strained three-dimensional films. Further, the nucleation of misfit dislocations will be analyzed.

This is joint work with Nicola Fusco, Giovanni Leoni and Massimiliano Morini.

Aziz Lectures 2015

November 18, 2015

Tensor Sparsity - a Regularity Notion for High Dimensional PDEs
Prof. Wolfgang Dahmen
Institut für Geometrie und Praktische Mathematik
RWTH Aachen University (Germany)

The numerical solution of PDEs in a spatially high-dimensional regime (such as the electronic Schrödinger or Fokker-Planck equations) is severely hampered by the “curse of dimensionality”: the computational cost required for achieving a desired target accuracy increases exponentially with respect to the spatial dimension.

We explore a possible remedy by exploiting a typically hidden sparsity of the solution to such problems with respect to a problem dependent basis or dictionary. Here sparsity means that relatively few terms from such a dictionary suffice to realize a given target accuracy. Specifically, sparsity with respect to dictionaries comprised of separable functions – rank-one tensors – would significantly mitigate the curse of dimensionality. The main result establishes such tensor-sparsity for elliptic problems over product domains when the data are tensor-sparse, which can be viewed as a structural regularity theorem.
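
In two space dimensions, sparsity with respect to rank-one tensors simply means approximating f(x, y) by a short sum of separable terms u_k(x) v_k(y). The following sketch (an illustration of the notion, not of the methods in the talk) computes such an approximation for a sampled smooth function via a truncated SVD and reports how quickly the error decays with the number of separable terms.

```python
import numpy as np

# Sample a smooth bivariate function on a grid. For smooth functions the sampled
# matrix typically has rapidly decaying singular values, i.e. it is well
# approximated by a short sum of rank-one (separable) terms.
n = 200
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
F = 1.0 / (1.0 + X + Y**2)                 # an arbitrary smooth example target

U, s, Vt = np.linalg.svd(F, full_matrices=False)
for r in (1, 2, 4, 8):
    Fr = (U[:, :r] * s[:r]) @ Vt[:r, :]    # best rank-r approximation: r separable terms
    err = np.linalg.norm(F - Fr) / np.linalg.norm(F)
    print(f"{r} separable terms: relative error {err:.2e}")
```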

April 15, 2015

Waves in random media: the story of the phase
Prof. Lenya Ryzhik
Department of Mathematics, Stanford University

The macroscopic description of wave propagation in random media typically focuses on the scattering of the wave intensity, while the phase is discarded as a uselessly random object. At the same time, most of the beauty in wave scattering comes from the phase correlations. I will describe some of the miracles, as well as some limit theorems for the wave phase.

May 6, 2015

Mathematical challenges in kinetic models of dilute polymers: analysis, approximation and computation
Prof. Endre Süli
Mathematical Institute, University of Oxford
United Kingdom

We survey recent analytical and computational results for coupled macroscopic-microscopic bead-spring chain models that arise from the kinetic theory of dilute solutions of polymeric fluids with noninteracting polymer chains, involving the unsteady Navier–Stokes system in a bounded domain and a high-dimensional Fokker–Planck equation. The Fokker–Planck equation emerges from a system of (Itô) stochastic differential equations, which models the evolution of a vectorial stochastic process comprising the centre-of-mass position vector and the orientation (or configuration) vectors of the polymer chain. We discuss the existence of large-data global-in-time weak solutions to the coupled Navier–Stokes–Fokker–Planck system. The Fokker–Planck equation involved in the model is a high-dimensional partial differential equation, whose numerical approximation is a formidable computational challenge, complicated by the fact that for practically relevant spring potentials, such as finitely extensible nonlinear elastic potentials, the drift coefficient in the Fokker–Planck equation is unbounded.

Aziz Lectures 2014

November 12, 2014

The interplay between geometric modeling and simulation of partial differential equations
Prof. Annalisa Buffa
Istituto di Matematica Applicata e Tecnologie Informatiche "E. Magenes"
Pavia, Italy

Computer-based simulation of partial differential equations involves approximation of the unknown fields and a description of geometrical entities such as the computational domain and the properties of the media. There are a variety of situations: in some cases this description is very complex, in others the governing equations are very difficult to discretize. Starting with a historical perspective, I will describe the recent efforts to improve the interplay between the mathematical approaches characterizing these two aspects of the problem.

Aziz Lectures 2013

November 8, 2013

Universality and chaos in clustering and branching processes
Prof. Robert Pego
Carnegie Mellon University

Scaling limits of Smoluchowski's coagulation equation are related to probability theory in numerous remarkable ways. For example, such an equation governs the merging of ancestral trees in critical branching processes, as observed by Bertoin and Le Gall. A simple explanation of this relies on how Bernstein functions relate to a weak topology for Lévy triples. From the same theory, we find the existence of 'universal' branching mechanisms which generate complicated dynamics that contain arbitrary renormalized limits. I also plan to describe a remarkable application of Bernstein function theory to a coagulation-fragmentation model introduced in fisheries science to explain animal group size.

April 2-3, 2013

Maximum Norm Stability and Error Estimates for Stokes and Navier-Stokes Problems
Prof. Vivette Girault
Université Pierre et Marie Curie, Paris, France

Energy norm stability estimates for the finite element discretization of the Stokes problem follow easily from the variational formulation, provided the discrete pressure and velocity satisfy a uniform inf-sup condition. But deriving uniform stability estimates in L∞ is much more complex, because variational formulations do not lend themselves to maximum norms. I shall present here the main ideas of a proof that relies on weighted L2 estimates for regularized Green's functions associated with the Stokes problem and on a weighted inf-sup condition. The domain is a convex polygon or polyhedron. The triangulation is shape-regular and quasi-uniform. The finite element spaces satisfy a super-approximation property, which is shown to be valid for most commonly used stable finite element spaces. Extending this result to error estimates and to the solution of the steady incompressible Navier-Stokes problem is straightforward.

Aziz Lectures 2012

February 22, 2012

Semismooth Newton Methods: Theory, Numerics and Applications
Prof. Michael Hintermüller
Department of Mathematics
Humboldt-Universität, Berlin, Germany

Many mathematical models of processes or problems in the engineering sciences, mathematical imaging, biomedical sciences or mathematical finance rely on non-smooth structures, either directly through non-differentiable energy functionals, through (quasi)variational inequality formulations, or through the presence of inequality constraints in the pertinent energy minimization tasks. Based on reformulations of the above problem classes as non-smooth operator equations, this talk discusses a generalized Newton framework in function space. For this purpose the concept of semismoothness in function space is addressed. Relying on this concept, locally superlinear convergence of the associated semismooth Newton iteration is established, and its mesh-independent convergence behavior upon discretization is shown. In the second part of the talk, the efficiency and wide applicability of the semismooth Newton framework are highlighted by considering constrained optimal control problems for fluid flow, contact problems with or without adhesion forces, phase separation phenomena relying on non-smooth homogeneous free energy densities, and restoration tasks in mathematical image processing.
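
As a concrete finite-dimensional illustration (added here; it is not code from the talk), consider a discretized obstacle problem. A semismooth Newton step applied to the nonsmooth complementarity equation lam − max(0, lam + c(psi − u)) = 0 yields the primal-dual active set iteration sketched below, which typically terminates after a handful of active-set updates.

```python
import numpy as np

def obstacle_pdas(n=200, c=1.0, maxit=50):
    """Primal-dual active set (= semismooth Newton) iteration for a 1D obstacle problem
        A u = f + lam,  u >= psi,  lam >= 0,  lam * (u - psi) = 0,
    where A is the finite-difference Laplacian on (0, 1) with homogeneous Dirichlet data,
    f a constant downward load and psi a flat obstacle (all choices are illustrative)."""
    h = 1.0 / (n + 1)
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    f = -10.0 * np.ones(n)
    psi = -0.6 * np.ones(n)

    u = np.zeros(n)
    lam = np.zeros(n)
    active = np.zeros(n, dtype=bool)
    for k in range(maxit):
        new_active = lam + c * (psi - u) > 0          # semismooth Newton active-set update
        if k > 0 and np.array_equal(new_active, active):
            break                                     # active set settled: converged
        active, inactive = new_active, ~new_active
        u = np.empty(n)
        u[active] = psi[active]                       # constraint active: u = psi there
        u[inactive] = np.linalg.solve(                # reduced linear system on inactive set
            A[np.ix_(inactive, inactive)],
            f[inactive] - A[np.ix_(inactive, active)] @ psi[active])
        lam = A @ u - f                               # multiplier update
        lam[inactive] = 0.0
    return u, lam, k

u, lam, its = obstacle_pdas()
print(f"active-set updates: {its}, contact nodes: {int((lam > 1e-8).sum())}")
```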

Aziz Lectures 2011

December 2, 2011

Optimal and Practical Algebraic Solvers for Discretized PDEs
Prof. Jinchao Xu
Center for Computational Mathematics and Applications
Penn State University

An overview of fast solution techniques (such as multi-grid, two-grid, one-grid and nil-grid methods) will be given in this talk on solving large scale systems of equations that arise from the discretization of partial differential equations (such as Poisson, elasticity, Stokes, Navier-Stokes, Maxwell, MHD, and black-oil models). Mathematical optimality, practical applicability and parallel (CPU/GPU) scalability will be addressed for these algorithms and applications.
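
As a minimal concrete instance of the two-grid idea (an illustrative sketch added here, not material from the talk): for the 1D Poisson problem, damped-Jacobi smoothing combined with a coarse-grid correction already gives convergence rates that do not deteriorate as the mesh is refined; replacing the exact coarse solve by recursion turns the cycle into multigrid.

```python
import numpy as np

def poisson_matrix(n):
    """Standard finite-difference discretization of -u'' on (0, 1),
    homogeneous Dirichlet boundary conditions, n interior points."""
    h = 1.0 / (n + 1)
    return (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

def two_grid_cycle(A, b, u, nu=3, omega=2.0 / 3.0):
    """One two-grid cycle: damped-Jacobi smoothing, full-weighting restriction of the
    residual, exact coarse solve, linear-interpolation correction, post-smoothing."""
    n = len(b)
    D = np.diag(A)
    for _ in range(nu):                                   # pre-smoothing
        u = u + omega * (b - A @ u) / D
    r = b - A @ u
    nc = (n - 1) // 2                                     # coarse points = odd fine indices
    rc = 0.25 * (r[0:-2:2] + 2.0 * r[1:-1:2] + r[2::2])   # full-weighting restriction
    ec = np.linalg.solve(poisson_matrix(nc), rc)          # exact coarse-grid correction
    ecp = np.concatenate(([0.0], ec, [0.0]))              # pad with boundary zeros
    e = np.zeros(n)
    e[1::2] = ec                                          # inject at coarse points
    e[0::2] = 0.5 * (ecp[:-1] + ecp[1:])                  # linear interpolation in between
    u = u + e
    for _ in range(nu):                                   # post-smoothing
        u = u + omega * (b - A @ u) / D
    return u

n = 255                                                   # so that (n - 1) / 2 is an integer
A, b, u = poisson_matrix(n), np.ones(n), np.zeros(n)
for it in range(8):
    u = two_grid_cycle(A, b, u)
    print(f"cycle {it}: residual norm {np.linalg.norm(b - A @ u):.2e}")
```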

February 11, 2011

Complex Fluids
Prof. Peter Constantin
Department of Mathematics
University of Chicago

The talk will be about some of the models used to describe fluids with particulate matter suspended in them. Some of these models are very complicated. After a bit of history and a review of known results, I will try to point out some open problems, isolate some of the mathematical difficulties, and illustrate some of the phenomena on simpler didactic models.

Aziz Lectures 2010

November 12, 2010

Discontinuous Galerkin Finite Element Methods for High Order Nonlinear Partial Differential Equations
Prof. Chi-Wang Shu
Brown University

Discontinuous Galerkin (DG) finite element methods were first designed to solve hyperbolic conservation laws, utilizing successful high resolution finite difference and finite volume techniques such as approximate Riemann solvers and nonlinear limiters. More recently, DG methods have been generalized to solve convection-dominated convection-diffusion equations (e.g. high Reynolds number Navier-Stokes equations), convection-dispersion equations (e.g. the KdV equations) and other high order nonlinear wave or diffusion equations. In this talk we will first give an introduction to the DG method, emphasizing several key ingredients that made the method popular, and then move on to introduce a class of DG methods for solving high order PDEs, termed local DG (LDG) methods. We will highlight the key ingredient in the design of LDG schemes, namely the careful choice of numerical fluxes, and emphasize the stability of the fully nonlinear DG approximations. Numerical examples will be shown to demonstrate the performance of the DG methods.

March 5, 2010

A Taste of Compressed Sensing
Prof. Ronald DeVore
Texas A&M University

Compressed Sensing is a new paradigm in signal and image processing. It seeks to faithfully capture a signal/image with the fewest possible measurements. Rather than model a signal as a bandlimited function or an image as a pixel array, it models both of these as a sparse vector in some representation system. This model fits real-world signals and images well. For example, images are well approximated by a sparse wavelet decomposition. Given this model, how should we design a sensor to capture the signal with the fewest measurements? We shall introduce ways of measuring the effectiveness of compressed sensing algorithms and then show which of these are optimal.
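
A minimal sketch of the recovery step (an illustration of the paradigm, not of the specific algorithms analyzed in the talk): a k-sparse vector is reconstructed from m ≪ p random Gaussian measurements by l1 minimization (basis pursuit), posed as a linear program; with enough measurements the recovery is typically exact up to solver tolerance.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
p, m, k = 200, 60, 8                            # ambient dimension, measurements, sparsity

x_true = np.zeros(p)
support = rng.choice(p, size=k, replace=False)
x_true[support] = rng.standard_normal(k)

A = rng.standard_normal((m, p)) / np.sqrt(m)    # random Gaussian measurement matrix
b = A @ x_true                                  # m << p linear measurements

# Basis pursuit: minimize ||x||_1 subject to A x = b, written as a linear program
# over the split x = x_plus - x_minus with x_plus, x_minus >= 0.
c = np.ones(2 * p)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * p), method="highs")
x_rec = res.x[:p] - res.x[p:]

print("recovery error:", np.linalg.norm(x_rec - x_true))
```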

Aziz Lectures 2009

October 12, 2009

Isogeometric Analysis
Prof. Thomas J. R. Hughes
Institute for Computational Engineering and Sciences 
University of Texas at Austin

Geometry is the foundation of analysis, yet modern methods of computational geometry have until recently had very little impact on computational mechanics. The reason may be that Finite Element Analysis (FEA), as we know it today, was developed in the 1950's and 1960's, before the advent and widespread use of Computer Aided Design (CAD) programs, which occurred in the 1970's and 1980's. Many difficulties encountered with FEA emanate from its approximate, polynomial-based geometry, such as mesh generation, mesh refinement, sliding contact, flows about aerodynamic shapes, buckling of thin shells, etc., and its disconnect from CAD. It would seem that it is time to look at more powerful descriptions of geometry to provide a new basis for computational mechanics.

The purpose of this talk is to describe the new generation of computational mechanics procedures based on modern developments in computational geometry. The emphasis will be on Isogeometric Analysis in which basis functions generated from NURBS (Non-Uniform Rational B-Splines) and T-Splines are employed to construct an exact geometric model. For purposes of analysis, the basis is refined and/or its order elevated without changing the geometry or its parameterization. Analogues of finite element h- and p-refinement schemes are presented and a new, more efficient, higher-order concept, k-refinement, is described. Refinements are easily implemented and exact geometry is maintained at all levels without the necessity of subsequent communication with a CAD description.

In the context of structural mechanics, it is established that the basis functions are complete with respect to affine transformations, meaning that all rigid body motions and constant strain states are exactly represented. Standard patch tests are likewise satisfied. Numerical examples exhibit optimal rates of convergence for linear elasticity problems and convergence to thin elastic shell solutions. Extraordinary accuracy is noted for k-refinement in structural vibrations and wave propagation calculations. Surprising robustness is also noted in fluid and non-linear solid mechanics problems. It is argued that Isogeometric Analysis is a viable alternative to standard, polynomial-based finite element analysis and possesses many advantages. In particular, k-refinement seems to offer a unique combination of attributes, that is, robustness and accuracy, not possessed by classical p-methods, and is applicable to models requiring smoother basis functions, such as thin bending elements, and strain-gradient and various phase-field theories.

A modelling paradigm for patient-specific simulation of cardiovascular fluid-structure interaction is reviewed, and a précis of the status of current mathematical understanding is presented.
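
The basis functions referred to above are B-splines and their rational counterparts (NURBS). As a small point of reference (an illustrative sketch, not production isogeometric-analysis code), the Cox–de Boor recursion below evaluates all B-spline basis functions of a given degree on a knot vector; on an open knot vector they form a nonnegative partition of unity and interpolate the ends of the parameter interval.

```python
import numpy as np

def bspline_basis(knots, degree, t):
    """Evaluate all B-spline basis functions of degree `degree` at the points `t`
    with the Cox-de Boor recursion (0/0 terms are treated as 0). `knots` must be
    nondecreasing; repeating the end knots degree+1 times (an "open" knot vector)
    makes the basis interpolatory at the two ends."""
    knots = np.asarray(knots, dtype=float)
    t = np.atleast_1d(np.asarray(t, dtype=float))
    m = len(knots) - 1
    # Degree 0: indicator of each knot span; the last nonempty span is closed on the
    # right so that the right endpoint of the parameter interval is covered.
    N = np.zeros((m, len(t)))
    for i in range(m):
        in_span = (knots[i] <= t) & (t < knots[i + 1])
        at_end = (knots[i] < knots[i + 1]) & (knots[i + 1] == knots[-1]) & (t == knots[-1])
        N[i] = (in_span | at_end).astype(float)
    # Cox-de Boor recursion up to the requested degree.
    for p in range(1, degree + 1):
        N_new = np.zeros((m - p, len(t)))
        for i in range(m - p):
            left = right = 0.0
            if knots[i + p] > knots[i]:
                left = (t - knots[i]) / (knots[i + p] - knots[i]) * N[i]
            if knots[i + p + 1] > knots[i + 1]:
                right = (knots[i + p + 1] - t) / (knots[i + p + 1] - knots[i + 1]) * N[i + 1]
            N_new[i] = left + right
        N = N_new
    return N   # shape: (len(knots) - degree - 1, len(t))

# Quadratic B-splines on an open knot vector; the columns sum to 1 (partition of unity).
knots = [0, 0, 0, 1, 2, 3, 4, 4, 4]
B = bspline_basis(knots, degree=2, t=np.linspace(0.0, 4.0, 9))
print(B.shape, np.allclose(B.sum(axis=0), 1.0))
```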

May 1, 2009

The Fast Multipole Method and its Applications
Leslie Greengard
Courant Institute of Mathematical Sciences, New York University

In this lecture, we will describe the analytic and computational foundations of fast multipole methods (FMMs), as well as some of their applications. They are most easily understood, perhaps, in the case of particle simulations, where they reduce the cost of computing all pairwise interactions in a system of N particles from O(N²) to O(N) or O(N log N) operations. FMMs are equally useful, however, in solving partial differential equations by first recasting them as integral equations. We will present examples from electromagnetics, elasticity, and fluid mechanics.
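
The analytic heart of the method is the far-field (multipole) expansion: for sources y_j clustered around a center c and a well-separated target x, 1/(x − y_j) = Σ_{k≥0} (y_j − c)^k / (x − c)^{k+1}, so a handful of moments summarizes the effect of all N sources. The sketch below (a single truncated expansion in 1D, added for illustration; a full FMM organizes such expansions in a tree with translation operators) shows how quickly the error decays.

```python
import numpy as np

rng = np.random.default_rng(1)
n_src, p = 1000, 10                       # number of sources, expansion order

# Sources clustered in [-0.5, 0.5] around the center c = 0; targets far to the right.
y = rng.uniform(-0.5, 0.5, n_src)
q = rng.standard_normal(n_src)            # source strengths
targets = np.array([5.0, 10.0, 50.0])

# Multipole moments M_k = sum_j q_j * (y_j - c)^k, computed once for all targets.
c = 0.0
M = np.array([np.sum(q * (y - c) ** k) for k in range(p)])

for x in targets:
    direct = np.sum(q / (x - y))                              # O(N) work per target
    multipole = np.sum(M / (x - c) ** np.arange(1, p + 1))    # O(p) work per target
    print(f"x = {x:5.1f}: direct {direct:+.8f}, multipole {multipole:+.8f}, "
          f"error {abs(direct - multipole):.1e}")
```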

Aziz Lectures 2008

November 14, 2008

Topology optimization of structures
Prof. Gregoire Allaire
Ecole Polytechnique

The typical problem of structural optimization is to find the "best" structure, which is, at the same time, of minimal weight and of maximum strength, or which performs a desired deformation. In this context I will present the combination of the classical shape derivative and the level-set method for front propagation. This approach has been implemented in two and three space dimensions for models of linear or non-linear elasticity and for various objective functions and perimeter constraints. It has also been coupled with the bubble, or topological gradient, method, which is designed to introduce new holes in the optimization process. Since the level set method is known to easily handle boundary propagation with topological changes, the resulting numerical algorithm is very efficient for topology optimization. It can escape from local minima in a given topological class of shapes, and the resulting optimal design is largely independent of the initial guess.

March 28, 2008

New materials from mathematics: real and imagined
Prof. Richard D. James
University of Minnesota

In this talk I will give two examples where mathematics played an important role in the discovery of new materials, and a third example where mathematics suggests a systematic way of searching for broad classes of yet undiscovered materials.

Aziz Lectures 2007

Nov. 16, 2007

Adaptive Approximation by Greedy Algorithms
Prof. Albert Cohen
Laboratoire Jacques-Louis Lions, Université Pierre et Marie Curie, 
Paris, France

This talk will discuss computational algorithms that deal with the following general task: given a function f and a dictionary of functions D in a Hilbert space, extract a linear combination of N functions of D which best approximates f. We shall review the convergence properties of existing algorithms. This work is motivated by applications as varied as data compression, adaptive numerical simulation of PDEs, and statistical learning theory.
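
A standard representative of this class of algorithms is orthogonal matching pursuit; the sketch below (an illustration, not necessarily one of the variants analyzed in the talk) greedily picks the dictionary element most correlated with the current residual and re-fits f by least squares on the elements selected so far.

```python
import numpy as np

def orthogonal_matching_pursuit(D, f, n_terms):
    """Greedily select n_terms columns of the dictionary D (columns normalized)
    to approximate the target vector f; returns the selected column indices and
    the coefficients of the least-squares fit on the selected columns."""
    residual = f.copy()
    selected = []
    for _ in range(n_terms):
        correlations = np.abs(D.T @ residual)
        correlations[selected] = -np.inf                 # do not pick a column twice
        selected.append(int(np.argmax(correlations)))
        coef, *_ = np.linalg.lstsq(D[:, selected], f, rcond=None)   # re-project on the span
        residual = f - D[:, selected] @ coef
    return selected, coef

rng = np.random.default_rng(0)
m, n = 100, 400
D = rng.standard_normal((m, n))
D /= np.linalg.norm(D, axis=0)                           # normalize dictionary columns
f = 2.0 * D[:, 7] - 1.5 * D[:, 42] + 0.5 * D[:, 300]     # a 3-term target
idx, coef = orthogonal_matching_pursuit(D, f, n_terms=3)
print(sorted(idx), np.round(coef, 3))
```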

May 4, 2007

Compressive Sampling
Prof. Emmanuel J. Candes
California Institute of Technology

One of the central tenets of signal processing is the Shannon/Nyquist sampling theory: the number of samples needed to reconstruct a signal without error is dictated by its bandwidth, the length of the shortest interval which contains the support of the spectrum of the signal under study. Very recently, an alternative sampling or sensing theory has emerged which goes against this conventional wisdom. This theory allows the faithful recovery of signals and images from what appear to be highly incomplete sets of data, i.e. from far fewer data bits than traditional methods use. Underlying this methodology is a concrete protocol for sensing and compressing data simultaneously.

This talk will present the key mathematical ideas underlying this new sampling or sensing theory, and will survey some of the most important results. We will argue that this is a robust mathematical theory; not only is it possible to recover signals accurately from just an incomplete set of measurements, but it is also possible to do so when the measurements are unreliable and corrupted by noise. We will see that the reconstruction algorithms are very concrete, stable (in the sense that they degrade smoothly as the noise level increases) and practical; in fact, they only involve solving very simple convex optimization programs.

An interesting aspect of this theory is that it has bearing on several fields in the applied sciences and engineering, such as statistics, information theory, coding theory, theoretical computer science, and others as well. If time allows, we will try to explain these connections via a few selected examples.

Aziz Lectures 2006

December 1, 2006

Imaging in random media
Prof. George C. Papanicolaou
Mathematics Department 
Stanford University

I will present an overview of some recently developed methods for imaging with array and distributed sensors when the environment between the objects to be imaged and the sensors is complex and only partially known to the imager. This brings in modeling and analysis in random media, and the need for statistical algorithms, which increase the computational complexity of imaging; the imaging is done by backpropagating local correlations rather than traces (interferometry). I will illustrate the theory with applications from non-destructive testing and from other areas.

April 21, 2006

String integration of some MHD equations
Prof. Yann Brenier
Laboratoire Alexandre Dieudonné 
Université de Nice-Sophia-Antipolis, France

We first review the link between strings and some magnetohydrodynamics equations. Typical examples are the Born-Infeld system, the Chaplygin gas equations and the shallow water MHD model. They arise in physics at very different scales, from subatomic to cosmological. These models can be exactly integrated in one space dimension by solving the 1D wave equation and using the d'Alembert formula. We show how an elementary "string integrator" can be used to solve these MHD equations through dimensional splitting. A good control of the energy conservation is needed due to the repeated use of Lagrangian-to-Eulerian grid projections. Numerical simulations in one and two dimensions will be shown.
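
For reference (a standard formula, not quoted from the abstract): for the 1D wave equation u_tt = c^2 u_xx with initial data u(x,0) = u_0(x) and u_t(x,0) = v_0(x), the d'Alembert formula reads

u(x,t) = \frac{1}{2}\left[ u_0(x-ct) + u_0(x+ct) \right] + \frac{1}{2c} \int_{x-ct}^{x+ct} v_0(s)\, ds,

and each one-dimensional step of the splitting evaluates this expression exactly.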

February 3, 2006

Multiscale Analysis in Micromagnetics
Prof. Felix Otto
Institute for Applied Mathematics
University of Bonn, Germany

From the point of view of mathematics, micromagnetics is an ideal playground for a pattern-forming system in materials science: there are abundant experiments on a wealth of visually attractive phenomena and there is a well-accepted continuum model.

In this talk, I will focus on two specific experimental patterns for thin-film ferromagnetic elements. One pattern is a ground state, the other is a metastable state. The starting point for our analysis is the micromagnetic model, which has three length scales and thus many parameter regimes. For both patterns, we identify the appropriate parameter regime and rigorously derive a reduced model via Γ-convergence. We numerically simulate the reduced model and compare to experimental data.

This is joint work with A. DeSimone, R. V. Kohn, and S. Müller for the first part and with R. Cantero-Alvarez and J. Steiner for the second part.

Aziz Lectures 2005

December 9, 2005

Multiscale Modeling in Biosciences: Ion Transport through Membranes
Prof. Willi Jäger

Institute for Applied Mathematics
University of Heidelberg, Germany

Aziz Lectures 2004

November 19, 2004

Electromagnetic imaging for small inhomogeneities
Prof. Michael Vogelius
Department of Mathematics, Rutgers University

May 7, 2004

Mathematical models for cell motion
Prof. Benoît Perthame
École Normale Supérieure, Paris

Aziz Lectures 2003

November 14, 2003

Multiscale Modeling and Computation of Flow in Heterogeneous Media
Prof. Tom Hou
Caltech

March 7, 2003

Mathematical and Numerical Modeling of the Cardiovascular System
Prof. Alfio Quarteroni
Politecnico di Milano, Milan, Italy, and 
EPFL, Lausanne, Switzerland

Aziz Lectures 2002

Dec. 6, 2002

The regularity of minimizers in elasticity
Prof. John Ball
Department of Mathematics, Oxford

May 3, 2002

Multigrid: From Fourier to Gauss
Prof. Randolph E. Bank
Department of Mathematics, University of California at San Diego

Aziz Lectures 2001

Nov. 16, 2001

Mathematical Problems in Meteorology and Oceanography
Prof. Roger Temam
Institute for Scientific Computing and Applied Mathematics, Indiana University

April 23, 2001

Recent Approaches in the Treatment of Subgrid Scales
Prof. Franco Brezzi
Istituto di Analisi Numerica del CNR and Dipartimento di Matematica, Universita di Pavia, Italy

Aziz Lectures 2000

March 15, 2000

Time Stepping in Parabolic Problems - Approximation of Analytic Semigroups
Prof. Vidar Thomée
Dept. of Mathematics, Chalmers University of Technology and Göteborg University 

Aziz Lectures 1999

December 10, 1999

Colliding Black Holes and Gravity Waves: A New Computational Challenge
Prof. Douglas N. Arnold
Dept. of Mathematics, Pennsylvania State University

September 24, 1999

A Priori and A Posteriori Error Estimates in Finite Element Approximation
Prof. Lars B. Wahlbin
Dept. of Mathematics, Cornell University

February 19, 1999

Mathematical Problems Related to the Reliability of Finite Element Analysis in Practice: When Can We Trust the Computational Results for Engineering Decisions
Prof. Ivo Babuska
University of Texas, Austin, Emeritus Professor at University of Maryland