Dr. Kadir Aziz, who had been a Professor Emeritus at the Department of Mathematics and Statistics at UMBC since 1989, passed away on March 25, 2016 in Chevy Chase, Maryland. He was 92 years old.
Kadir was born in Afghanistan in 1923. He grew up and received his early education in Paris, where his father was the Afghan ambassador, and later in Washington, DC, where he obtained a bachelor's degree from Wilson Teachers College (now merged into the University of the District of Columbia) in 1952, and a master's degree from George Washington University in 1954.
Thereafter he entered the doctoral program in mathematics at the University of Maryland. His doctoral dissertation in 1958, titled "On Higher Order Boundary Value Problems for Hyperbolic Partial Differential Equations in Two and Three Variables" was written under the guidance of Dr. Joaquin Diaz.
Upon receiving the PhD degree, he obtained a faculty position at Georgetown University, where he quickly rose through the ranks, and in 1963, only five years after his PhD, was appointed a full Professor of Mathematics. In 1966 he assumed the duties of the Department Chairman there.
A year later, in 1967, Kadir moved to the nascent UMBC campus as one of the original senior faculty members of its College of Sciences. At the same time, he was appointed Adjunct Research Professor at the Institute for Physical Science and Technology at UMCP. Kadir was a major force in setting up the foundations of what has now become a thriving Department of Mathematics and Statistics at UMBC.
Kadir's research focused on the numerical analysis of partial differential equations. He was one of the pioneers of what became known as the Finite Element Method (FEM). This method rapidly became one of the most powerful and indispensable tools for treating numerical problems of engineering, physics, and other sciences.
It is remarkable that the very first international conference on the mathematical theory of the FEM was held at UMBC in 1972, when UMBC was only six years old. The proceedings of that conference, a book edited by Kadir and containing a groundbreaking monograph written by him and Ivo Babuska, went on to become a standard reference on the subject and an inspiration for the future development of the field.
During his years at Georgetown and UMBC, Kadir's research was supported by grants from the National Science Foundation, Office of Naval Research, Air Force Office of Scientific Research, Department of Energy, and the Naval Surface Weapons Center. Kadir supervised the dissertations of 14 doctoral students at Georgetown, UMCP, and UMBC.
In 1999, Kadir donated funds to establish what is known as the Aziz Lecture Series -- initially organized at UMBC, and later at UMCP. The purpose of the series is to provide a forum for expository lectures by experts in the field on the numerical solutions of differential equations. One or two Aziz Lectures have been delivered each year since the establishment of this continuing series.
Kadir was well-known for his joie de vivre -- he loved good wine, good food and good conversation. He will be missed for his heartiness, generosity, and sense of humor.
The Aziz Lectures were established by a generous gift from Prof. A. Kadir Aziz. The purpose of the lectures is to bring distinguished mathematicians to the University of Maryland, College Park, to give survey lectures on differential equations, their numerical analysis, and related areas.
Kadir Aziz, who died on March 25, 2016 at the age of 92, received his Ph.D. from the University of Maryland, College Park in 1957. He was on the faculty of Georgetown University from 1956 to 1967 and joined the faculty of the University of Maryland, Baltimore County (UMBC) in 1967, where he was Professor Emeritus of Mathematics and Statistics. Throughout his career Kadir Aziz was an active and highly respected member of the Numerical Analysis community at Maryland.
The Aziz Lecture is given at 3:00 p.m. in the Math Colloquium Room (MTH 3206).
Usually the speaker gives a related talk in the Applied Math Colloquium on the previous day (Thursday at 3:30pm).
Modeling traffic flow on a network of roads
Prof. Alberto Bressan
Department of Mathematics
Penn State University
The talk will present various PDE models of traffic flow on a network of roads. These comprise a set of conservation laws, determining the density of traffic on each road, together with suitable boundary conditions describing the dynamics at intersections. While conservation laws determine the evolution of traffic from given initial data, actual traffic patterns are best studied from the point of view of optimal decision problems, where each driver chooses the departure time and the route taken to reach their destination. Given a cost functional depending on the departure and arrival times, a relevant mathematical problem is to determine (i) global optima, minimizing the total cost over all drivers, and (ii) Nash equilibria, where no driver can lower his own cost by changing departure time or route to destination. Several results and open problems will be discussed.
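As a concrete illustration (ours, not taken from the talk), the simplest single-road model of this type is the Lighthill-Whitham-Richards conservation law with flux f(ρ) = ρ(1 − ρ), which can be discretized by a first-order Godunov scheme; the network models of the talk add junction conditions on top of this building block. All names below are illustrative.

```python
# Sketch: Godunov scheme for the scalar LWR traffic model
#   rho_t + f(rho)_x = 0,  f(rho) = rho*(1 - rho),
# on a periodic grid (a ring road). Density rho is in [0, 1].

def flux(rho):
    return rho * (1.0 - rho)

def godunov_flux(rl, rr):
    """Exact Riemann flux for the concave flux f(rho) = rho*(1 - rho)."""
    if rl <= rr:
        return min(flux(rl), flux(rr))
    if rr <= 0.5 <= rl:          # sonic point lies inside the fan
        return flux(0.5)
    return max(flux(rl), flux(rr))

def step(rho, lam):
    """One conservative update rho_i -= lam*(F_i - F_{i-1}), periodic BC."""
    n = len(rho)
    F = [godunov_flux(rho[i], rho[(i + 1) % n]) for i in range(n)]
    return [rho[i] - lam * (F[i] - F[i - 1]) for i in range(n)]

# A congested patch on an otherwise light ring road.
n = 100
rho = [0.8 if 40 <= i < 60 else 0.2 for i in range(n)]
mass0 = sum(rho)
for _ in range(200):
    rho = step(rho, 0.5)   # CFL: |f'(rho)| <= 1, so dt/dx = 0.5 is safe
```

The scheme conserves the total number of cars exactly (the fluxes telescope) and, being monotone under the CFL condition, keeps the density within physical bounds.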
Quantum Dots and Dislocations: Dynamics of Materials
Prof. Irene Fonseca
Department of Mathematical Sciences
Carnegie Mellon University
The formation and assembly patterns of quantum dots have a significant impact on the optoelectronic properties of semiconductors. We will address short time existence for a surface diffusion evolution equation with curvature regularization in the context of epitaxially strained three-dimensional films. Further, the nucleation of misfit dislocations will be analyzed.
This is joint work with Nicola Fusco, Giovanni Leoni and Massimiliano Morini.
Tensor Sparsity - a Regularity Notion for High Dimensional PDEs
Prof. Wolfgang Dahmen
Institut für Geometrie und Praktische Mathematik
RWTH Aachen University (Germany)
The numerical solution of PDEs in a spatially high-dimensional regime (such as the electronic Schrödinger or Fokker-Planck equations) is severely hampered by the “curse of dimensionality”: the computational cost required for achieving a desired target accuracy increases exponentially with respect to the spatial dimension.
We explore a possible remedy by exploiting a typically hidden sparsity of the solution to such problems with respect to a problem dependent basis or dictionary. Here sparsity means that relatively few terms from such a dictionary suffice to realize a given target accuracy. Specifically, sparsity with respect to dictionaries comprised of separable functions – rank-one tensors – would significantly mitigate the curse of dimensionality. The main result establishes such tensor-sparsity for elliptic problems over product domains when the data are tensor-sparse, which can be viewed as a structural regularity theorem.
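To make the notion concrete (symbols below are ours, not the speaker's), a function of d variables is tensor-sparse of rank r when it is well approximated by a short sum of rank-one, i.e. separable, terms:

```latex
u(x_1,\dots,x_d) \;\approx\; \sum_{k=1}^{r} u_k^{(1)}(x_1)\, u_k^{(2)}(x_2) \cdots u_k^{(d)}(x_d),
```

so that storing r·d univariate factors replaces a tensor-product grid whose size grows exponentially in d.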
Waves in random media: the story of the phase
Prof. Lenya Ryzhik
Department of Mathematics, Stanford University
The macroscopic description of wave propagation in random media typically focuses on the scattering of the wave intensity, while the phase is discarded as a uselessly random object. At the same time, most of the beauty in wave scattering comes from the phase correlations. I will describe some of the miracles, as well as some limit theorems for the wave phase.
Mathematical challenges in kinetic models of dilute polymers: analysis, approximation and computation
Prof. Endre Süli
Mathematical Institute, University of Oxford
United Kingdom
We survey recent analytical and computational results for coupled macroscopic-microscopic bead-spring chain models that arise from the kinetic theory of dilute solutions of polymeric fluids with noninteracting polymer chains, involving the unsteady Navier–Stokes system in a bounded domain and a high-dimensional Fokker–Planck equation. The Fokker–Planck equation emerges from a system of (Itô) stochastic differential equations, which models the evolution of a vectorial stochastic process comprising the centre-of-mass position vector and the orientation (or configuration) vectors of the polymer chain. We discuss the existence of large-data global-in-time weak solutions to the coupled Navier–Stokes–Fokker–Planck system. The Fokker–Planck equation involved in the model is a high-dimensional partial differential equation, whose numerical approximation is a formidable computational challenge, complicated by the fact that for practically relevant spring potentials, such as finitely extensible nonlinear elastic potentials, the drift coefficient in the Fokker–Planck equation is unbounded.
The interplay between geometric modeling and simulation of partial differential equations
Prof. Annalisa Buffa
Istituto di Matematica Applicata e Tecnologie Informatiche "E. Magenes"
Pavia, Italy
Computer-based simulation of partial differential equations involves approximation of the unknown fields and a description of geometrical entities such as the computational domain and the properties of the media. There are a variety of situations: in some cases this description is very complex, in others the governing equations are very difficult to discretize. Starting with a historical perspective, I will describe the recent efforts to improve the interplay between the mathematical approaches characterizing these two aspects of the problem.
Universality and chaos in clustering and branching processes
Prof. Robert Pego
Carnegie Mellon University
Scaling limits of Smoluchowski's coagulation equation are related to probability theory in numerous remarkable ways. For example, such an equation governs the merging of ancestral trees in critical branching processes, as observed by Bertoin and Le Gall. A simple explanation of this relies on how Bernstein functions relate to a weak topology for Lévy triples. From the same theory, we find the existence of 'universal' branching mechanisms which generate complicated dynamics that contain arbitrary renormalized limits. I also plan to describe a remarkable application of Bernstein function theory to a coagulation-fragmentation model introduced in fisheries science to explain animal group size.
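For readers unfamiliar with the model, the continuous Smoluchowski coagulation equation reads, in standard notation (our choice of symbols):

```latex
\partial_t n(x,t) \;=\; \frac{1}{2}\int_0^x K(y,\,x-y)\, n(y,t)\, n(x-y,t)\,\mathrm{d}y
\;-\; n(x,t)\int_0^\infty K(x,y)\, n(y,t)\,\mathrm{d}y,
```

where n(x,t) is the density of clusters of size x and K is a symmetric coagulation kernel; the first term creates clusters of size x by merging smaller ones, and the second removes clusters of size x as they merge further.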
Maximum Norm Stability and Error Estimates for Stokes and Navier-Stokes Problems
Prof. Vivette Girault
Université Pierre et Marie Curie, Paris, France
Energy norm stability estimates for the finite element discretization of the Stokes problem follow easily from the variational formulation provided the discrete pressure and velocity satisfy a uniform inf-sup condition. But deriving uniform stability estimates in L^{∞} is much more complex because variational formulations do not lend themselves to maximum norms. I shall present here the main ideas of a proof that relies on weighted L^{2} estimates for regularized Green's functions associated with the Stokes problem and on a weighted inf-sup condition. The domain is a convex polygon or polyhedron. The triangulation is shape-regular and quasi-uniform. The finite element spaces satisfy a super-approximation property, which is shown to be valid for most commonly used stable finite-element spaces. Extending this result to error estimates and to the solution of the steady incompressible Navier-Stokes problem is straightforward.
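For reference, the uniform inf-sup condition mentioned above is usually written as follows (standard notation, not specific to this talk): there exists a constant β > 0, independent of the mesh size, such that

```latex
\inf_{q_h \in Q_h \setminus \{0\}} \;\; \sup_{v_h \in V_h \setminus \{0\}}
\frac{(q_h,\ \nabla\!\cdot v_h)}{\|v_h\|_{H^1}\,\|q_h\|_{L^2}} \;\ge\; \beta,
```

where V_h and Q_h are the discrete velocity and pressure spaces.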
Semismooth Newton Methods: Theory, Numerics and Applications
Prof. Michael Hintermüller
Department of Mathematics
Humboldt-Universität, Berlin, Germany
Many mathematical models of processes in the engineering sciences, mathematical imaging, the biomedical sciences, or mathematical finance rely on non-smooth structures, arising either directly from non-differentiable energy models, from (quasi)variational inequality formulations, or from inequality constraints in the pertinent energy minimization tasks. Based on reformulations of these problem classes as non-smooth operator equations, this talk discusses a generalized Newton framework in function space. For this purpose the concept of semismoothness in function space is addressed. Relying on this concept, locally superlinear convergence of the associated semismooth Newton iteration is established, and its mesh-independent convergence behavior upon discretization is shown. In the second part of the talk, the efficiency and wide applicability of the semismooth Newton framework are highlighted by considering constrained optimal control problems for fluid flow, contact problems with or without adhesion forces, phase separation phenomena relying on non-smooth homogeneous free energy densities, and restoration tasks in mathematical image processing.
Optimal and Practical Algebraic Solvers for Discretized PDEs
Prof. Jinchao Xu
Center for Computational Mathematics and Applications
Penn State University
An overview of fast solution techniques (such as multi-grid, two-grid, one-grid and nil-grid methods) will be given in this talk on solving large scale systems of equations that arise from the discretization of partial differential equations (such as Poisson, elasticity, Stokes, Navier-Stokes, Maxwell, MHD, and black-oil models). Mathematical optimality, practical applicability and parallel (CPU/GPU) scalability will be addressed for these algorithms and applications.
Complex Fluids
Prof. Peter Constantin
Department of Mathematics
University of Chicago
The talk will be about some of the models used to describe fluids with particulate matter suspended in them. Some of these models are very complicated. After a bit of history and a review of known results, I will try to point out some open problems, isolate some of the mathematical difficulties, and illustrate some of the phenomena on simpler didactic models.
Discontinuous Galerkin Finite Element Methods for High Order Nonlinear Partial Differential Equations
Prof. Chi-Wang Shu
Brown University
Discontinuous Galerkin (DG) finite element methods were first designed to solve hyperbolic conservation laws utilizing successful high resolution finite difference and finite volume schemes such as approximate Riemann solvers and nonlinear limiters. More recently the DG methods have been generalized to solve convection dominated convection-diffusion equations (e.g. high Reynolds number Navier-Stokes equations), convection-dispersion (e.g. KdV equations) and other high order nonlinear wave equations or diffusion equations. In this talk we will first give an introduction to the DG method, emphasizing several key ingredients which made the method popular, and then we will move on to introduce a class of DG methods for solving high order PDEs, termed local DG (LDG) methods. We will highlight the important ingredient of the design of LDG schemes, namely the adequate choice of numerical fluxes, and emphasize the stability of the fully nonlinear DG approximations. Numerical examples will be shown to demonstrate the performance of the DG methods.
A Taste of Compressed Sensing
Prof. Ronald DeVore
Texas A&M University
Compressed Sensing is a new paradigm in signal and image processing. It seeks to faithfully capture a signal or image with the fewest number of measurements. Rather than model a signal as a bandlimited function or an image as a pixel array, it models both as sparse vectors in some representation system. This model fits real-world signals and images well; for example, images are well approximated by a sparse wavelet decomposition. Given this model, how should we design a sensor to capture the signal with the fewest number of measurements? We shall introduce ways of measuring the effectiveness of compressed sensing algorithms and then show which of these are optimal.
Isogeometric Analysis
Prof. Thomas J. R. Hughes
Institute for Computational Engineering and Sciences
University of Texas at Austin
Geometry is the foundation of analysis, yet modern methods of computational geometry have until recently had very little impact on computational mechanics. The reason may be that Finite Element Analysis (FEA), as we know it today, was developed in the 1950's and 1960's, before the advent and widespread use of Computer Aided Design (CAD) programs, which occurred in the 1970's and 1980's. Many difficulties encountered with FEA, such as mesh generation, mesh refinement, sliding contact, flows about aerodynamic shapes, and buckling of thin shells, emanate from its approximate, polynomial-based geometry and its disconnect from CAD. It would seem that it is time to look at more powerful descriptions of geometry to provide a new basis for computational mechanics.
The purpose of this talk is to describe the new generation of computational mechanics procedures based on modern developments in computational geometry. The emphasis will be on Isogeometric Analysis in which basis functions generated from NURBS (Non-Uniform Rational B-Splines) and T-Splines are employed to construct an exact geometric model. For purposes of analysis, the basis is refined and/or its order elevated without changing the geometry or its parameterization. Analogues of finite element h- and p-refinement schemes are presented and a new, more efficient, higher-order concept, k-refinement, is described. Refinements are easily implemented and exact geometry is maintained at all levels without the necessity of subsequent communication with a CAD description.
In the context of structural mechanics, it is established that the basis functions are complete with respect to affine transformations, meaning that all rigid body motions and constant strain states are exactly represented. Standard patch tests are likewise satisfied. Numerical examples exhibit optimal rates of convergence for linear elasticity problems and convergence to thin elastic shell solutions. Extraordinary accuracy is noted for k-refinement in structural vibrations and wave propagation calculations. Surprising robustness is also noted in fluid and non-linear solid mechanics problems. It is argued that Isogeometric Analysis is a viable alternative to standard, polynomial-based finite element analysis and possesses many advantages. In particular, k-refinement seems to offer a unique combination of attributes, namely robustness and accuracy, not possessed by classical p-methods, and is applicable to models requiring smoother basis functions, such as thin bending elements and strain-gradient and various phase-field theories.
A modelling paradigm for patient-specific simulation of cardiovascular fluid-structure interaction is reviewed, and a précis of the status of current mathematical understanding is presented.
The Fast Multipole Method and its Applications
Leslie Greengard
Courant Institute of Mathematical Sciences, New York University
In this lecture, we will describe the analytic and computational foundations of fast multipole methods (FMMs), as well as some of their applications. They are most easily understood, perhaps, in the case of particle simulations, where they reduce the cost of computing all pairwise interactions in a system of N particles from O(N2) to O(N) or O(N log N) operations. FMMs are equally useful, however, in solving partial differential equations by first recasting them as integral equations. We will present examples from electromagnetics, elasticity, and fluid mechanics.
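To convey the core idea behind the O(N²) to O(N) reduction (a sketch of ours, not the lecture's material), a well-separated cluster of sources can be replaced by its lowest-order multipole: its total charge placed at the charge-weighted centroid. The target then sees one interaction instead of many, with an error that decays as the cluster becomes better separated.

```python
# Sketch of the far-field idea behind fast multipole methods (FMM), in 1D
# with 1/r interactions. Full FMM builds a tree of such cluster expansions
# (to higher multipole order) to reach O(N) or O(N log N) overall cost.

def direct_potential(target, sources):
    """Exact sum of q/|r| potentials at `target` from point charges."""
    return sum(q / abs(target - x) for x, q in sources)

def monopole_potential(target, sources):
    """Approximate the cluster by its total charge at the charge-weighted
    centroid (valid here since all charges are positive)."""
    total_charge = sum(q for _, q in sources)
    centroid = sum(x * q for x, q in sources) / total_charge
    return total_charge / abs(target - centroid)

# A well-separated configuration: sources near x = 0, target at x = 100.
sources = [(-0.5, 1.0), (0.2, 2.0), (0.4, 1.5)]
target = 100.0

exact = direct_potential(target, sources)
approx = monopole_potential(target, sources)
rel_err = abs(exact - approx) / exact
# The relative error scales with (cluster radius / distance), tiny here.
```

Choosing the charge-weighted centroid makes the dipole term of the expansion vanish, which is why even this crudest truncation is already accurate for well-separated clusters.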
Topology optimization of structures
Prof. Gregoire Allaire
Ecole Polytechnique
The typical problem of structural optimization is to find the "best" structure which is, at the same time, of minimal weight and of maximum strength or which performs a desired deformation. In this context I will present the combination of the classical shape derivative and of the level-set methods for front propagation. This approach has been implemented in two and three space dimensions for models of linear or non-linear elasticity and for various objective functions and constraints on the perimeter. It has also been coupled with the bubble or topological gradient method which is designed for introducing new holes in the optimization process. Since the level set method is known to easily handle boundary propagation with topological changes, the resulting numerical algorithm is very efficient for topology optimization. It can escape from local minima in a given topological class of shapes and the resulting optimal design is largely independent of the initial guess.
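In the standard level-set formulation alluded to above (notation ours), the shape Ω(t) is the region where a function φ is negative, and its boundary, the zero level set, is propagated by a Hamilton-Jacobi equation:

```latex
\Omega(t) = \{\, x \;:\; \varphi(x,t) < 0 \,\}, \qquad
\partial_t \varphi + V\, |\nabla \varphi| = 0,
```

where the normal velocity V is chosen from the shape derivative of the objective so that each step is a descent direction; topological changes (holes merging or disappearing) are handled automatically by the level-set representation.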
New materials from mathematics: real and imagined
Prof. Richard D. James
University of Minnesota
In this talk I will give two examples where mathematics played an important role for the discovery of new materials, and a third example where mathematics suggests a systematic way of searching for broad classes of yet undiscovered materials.
Adaptive Approximation by Greedy Algorithms
Prof. Albert Cohen
Laboratoire Jacques-Louis Lions, Université Pierre et Marie Curie,
Paris, France
This talk will discuss computational algorithms that deal with the following general task: given a function f and a dictionary of functions D in a Hilbert space, extract a linear combination of N functions of D which best approximates f. We shall review the convergence properties of existing algorithms. This work is motivated by applications as varied as data compression, adaptive numerical simulation of PDEs, and statistical learning theory.
Compressive Sampling
Prof. Emmanuel J. Candes
California Institute of Technology
One of the central tenets of signal processing is the Shannon/Nyquist sampling theory: the number of samples needed to reconstruct a signal without error is dictated by its bandwidth, the length of the shortest interval which contains the support of the spectrum of the signal under study. Very recently, an alternative sampling or sensing theory has emerged which goes against this conventional wisdom. This theory allows the faithful recovery of signals and images from what appear to be highly incomplete sets of data, i.e. from far fewer data bits than traditional methods use. Underlying this methodology is a concrete protocol for sensing and compressing data simultaneously.
This talk will present the key mathematical ideas underlying this new sampling or sensing theory, and will survey some of the most important results. We will argue that this is a robust mathematical theory; not only is it possible to recover signals accurately from just an incomplete set of measurements, but it is also possible to do so when the measurements are unreliable and corrupted by noise. We will see that the reconstruction algorithms are very concrete, stable (in the sense that they degrade smoothly as the noise level increases) and practical; in fact, they only involve solving very simple convex optimization programs.
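The convex programs alluded to here are typically of basis-pursuit type (standard formulation, notation ours): given noisy measurements y = Ax₀ + z with noise level ε, one solves

```latex
\min_{x} \; \|x\|_{1} \quad \text{subject to} \quad \|Ax - y\|_{2} \le \varepsilon,
```

which reduces to exact basis pursuit (Ax = y) when ε = 0; under suitable conditions on the measurement matrix A, the minimizer recovers the sparse signal x₀ stably, with error degrading gracefully in ε.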
An interesting aspect of this theory is that it has bearing on some fields in the applied sciences and engineering such as statistics, information theory, coding theory, theoretical computer science, and others as well. If time allows, we will try to explain these connections via a few selected examples.
Imaging in random media
Prof. George C. Papanicolaou
Mathematics Department
Stanford University
I will present an overview of some recently developed methods for imaging with array and distributed sensors when the environment between the objects to be imaged and the sensors is complex and only partially known to the imager. This brings in modeling and analysis in random media, and the need for statistical algorithms that increase the computational complexity of imaging, which is done by backpropagating local correlations rather than traces (interferometry). I will illustrate the theory with applications from non-destructive testing and from other areas.
String integration of some MHD equations
Prof. Yann Brenier
Laboratoire Alexandre Dieudonné
Université de Nice-Sophia-Antipolis, France
We first review the link between strings and some magnetohydrodynamics (MHD) equations. Typical examples are the Born-Infeld system, the Chaplygin gas equations, and the shallow water MHD model. They arise in physics at very different scales, from the subatomic to the cosmological. These models can be exactly integrated in one space dimension by solving the 1D wave equation and using the d'Alembert formula. We show how an elementary "string integrator" can be used to solve these MHD equations through dimensional splitting. Good control of the energy conservation is needed due to the repeated use of Lagrangian-to-Eulerian grid projections. Numerical simulations in 1 and 2 dimensions will be shown.
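The d'Alembert formula invoked here is the classical closed-form solution of the 1D wave equation u_tt = c²u_xx with initial data u(x,0) = f(x) and u_t(x,0) = g(x):

```latex
u(x,t) \;=\; \frac{f(x+ct) + f(x-ct)}{2} \;+\; \frac{1}{2c}\int_{x-ct}^{x+ct} g(s)\,\mathrm{d}s,
```

which is what makes an exact "string integrator" possible in each one-dimensional sweep of the dimensional splitting.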
Multiscale Analysis in Micromagnetics
Prof. Felix Otto
Institute for Applied Mathematics
University of Bonn, Germany
From the point of view of mathematics, micromagnetics is an ideal playground for a pattern-forming system in materials science: there are abundant experiments on a wealth of visually attractive phenomena, and there is a well-accepted continuum model.
In this talk, I will focus on two specific experimental patterns for thin-film ferromagnetic elements. One pattern is a ground state, the other a metastable state. The starting point for our analysis is the micromagnetic model, which has three length scales and thus many parameter regimes. For both patterns, we identify the appropriate parameter regime and rigorously derive a reduced model via Γ-convergence. We numerically simulate the reduced model and compare with experimental data.
This is joint work with A. DeSimone, R. V. Kohn, and S. Müller for the first part and with R. Cantero-Alvarez and J. Steiner for the second part.
Multiscale Modeling in Biosciences: Ion Transport through Membranes
Prof. Willi Jäger
Institute for Applied Mathematics
University of Heidelberg, Germany
Electromagnetic imaging for small inhomogeneities
Prof. Michael Vogelius
Department of Mathematics, Rutgers University
Mathematical models for cell motion
Prof. Benoît Perthame
École Normale Supérieure, Paris
Multiscale Modeling and Computation of Flow in Heterogeneous Media
Prof. Tom Hou
Caltech
Mathematical and Numerical Modeling of the Cardiovascular System
Prof. Alfio Quarteroni
Politecnico di Milano, Milan, Italy, and
EPFL, Lausanne, Switzerland
The regularity of minimizers in elasticity
Prof. John Ball
Department of Mathematics, Oxford
Multigrid: From Fourier to Gauss
Prof. Randolph E. Bank
Department of Mathematics, University of California at San Diego
Mathematical Problems in Meteorology and Oceanography
Prof. Roger Temam
Institute for Scientific Computing and Applied Mathematics, Indiana University
Recent Approaches in the Treatment of Subgrid Scales
Prof. Franco Brezzi
Istituto di Analisi Numerica del CNR and Dipartimento di Matematica, Universita di Pavia, Italy
Time Stepping in Parabolic Problems - Approximation of Analytic Semigroups
Prof. Vidar Thomée
Dept. of Mathematics, Chalmers University of Technology and Göteborg University
Colliding Black Holes and Gravity Waves: A New Computational Challenge
Prof. Douglas N. Arnold
Dept. of Mathematics, Pennsylvania State University
A Priori and A Posteriori Error Estimates in Finite Element Approximation
Prof. Lars B. Wahlbin
Dept. of Mathematics, Cornell University
Mathematical Problems Related to the Reliability of Finite Element Analysis in Practice: When Can We Trust the Computational Results for Engineering Decisions
Prof. Ivo Babuska
University of Texas, Austin, Emeritus Professor at University of Maryland
Avron Douglis (1918-1995) received an AB degree in economics from the University of Chicago in 1938. After working as an economist for three years and serving in World War II, he began graduate studies in mathematics at New York University. He received his doctorate in 1949 under the direction of Richard Courant. He held a one-year post-doctoral appointment at the California Institute of Technology, and then returned to New York University as an assistant and then associate professor. In 1956 he accepted an appointment as associate professor at the University of Maryland, where he remained for the rest of his career, except for visiting appointments at the Universities of Minnesota, Oxford, and Newcastle upon Tyne. He was promoted to full professor in 1958 and became emeritus in 1988.
Avron Douglis's research, noted for its depth, precision, and richness, covered the entire range of the theory of partial differential equations: linear and nonlinear; elliptic, parabolic, and hyperbolic. The famous papers he had written with S. Agmon and L. Nirenberg are among the most frequently cited in all of mathematics.
The Avron Douglis Library is housed in the department.
The Avron Douglis Lectures were established by the family and friends of Avron Douglis to honor his memory. Each academic year it brings to Maryland a distinguished expert to speak on a subject related to partial differential equations.
The lectures are held at 3:00 p.m. in room 3206 in the Department of Mathematics, unless noted otherwise below.
Yann Brenier
École Polytechnique
The usual heat equation is not suitable to preserve the topology of divergence-free vector fields, because it destroys their integral line structure. On the contrary, in the fluid mechanics literature, one can find examples of topology-preserving diffusion equations for divergence-free vector fields. They are very degenerate, since they admit all stationary solutions to the Euler equations of incompressible fluids as equilibrium points. For them, we provide a suitable concept of "dissipative solutions", which shares common features with both P.-L. Lions' dissipative solutions to the Euler equations and the concept of "curves of maximal slope", à la De Giorgi, recently used by Gigli and collaborators to study the scalar heat equation in very general metric spaces. We show that the initial value problem admits global "dissipative" solutions (at least in two space dimensions) and that they are unique whenever they are smooth.
Sergiu Klainerman
Princeton University
The rigidity conjecture states that all regular, stationary solutions of the Einstein field equations in vacuum are isometric to the Kerr solution. The simple motivation behind this conjecture is that one expects, due to gravitational radiation, that general, dynamic solutions of the Einstein field equations settle down, asymptotically, into a stationary regime. A well-known result of Carter, Robinson, and Hawking has settled the conjecture in the class of real analytic spacetimes. The assumption of real analyticity is however very problematic; there is simply no physical or mathematical justification for it. During the last five years I have developed, in collaboration with A. Ionescu and S. Alexakis, a strategy to dispense with it. In my lecture I will describe these results and concentrate on some recent results obtained in collaboration with A. Ionescu.
Andrew Majda
Courant Institute of Mathematical Sciences -- New York University
An important emerging scientific issue in many practical problems, ranging from climate and weather prediction to biological science, involves the real-time filtering and prediction, through partial observations, of noisy turbulent signals for complex dynamical systems with many degrees of freedom, as well as the statistical accuracy of various strategies to cope with the "curse of dimensions". The speaker and his collaborators, Harlim (North Carolina State University), Gershgorin (CIMS postdoc), and Grote (University of Basel), have developed a systematic applied mathematics perspective on all of these issues. One part of these ideas blends classical stability analysis for PDEs and their finite difference approximations, suitable versions of Kalman filtering, and stochastic models from turbulence theory to deal with the large model errors in realistic systems. Many new mathematical phenomena occur. Another aspect involves the development of test suites of statistically exactly solvable models and new NEKF algorithms for filtering and prediction for slow-fast systems, moist convection, and turbulent tracers. Here a stringent suite of test models for filtering and stochastic parameter estimation is developed, based on NEKF algorithms, in order to systematically correct both multiplicative and additive bias in an imperfect model. As briefly described in the talk, there is significantly increased filtering and predictive skill through the NEKF stochastic parameter estimation algorithms, provided that these are guided by mathematical theory. The recent paper by Majda et al. (Discrete and Cont. Dyn. Systems, 2010, Vol. 2, 441-486) as well as a forthcoming introductory graduate text by Majda and Harlim (Cambridge U. Press) provide an overview of this research.
Carlos E. Kenig
University of Chicago
In this lecture we will describe a method (which I call the concentration-compactness/rigidity theorem method) which Frank Merle and I have developed to study global well-posedness and scattering for critical non-linear dispersive and wave equations. Such problems are natural extensions of non-linear elliptic problems which were studied earlier, for instance in the context of the Yamabe problem and of harmonic maps. We will illustrate the method with some concrete examples and also mention other applications of these ideas.
Joseph B. Keller
Stanford University
Walter Strauss
Brown University
Robert V. Kohn
Courant Institute of Mathematical Sciences, New York University
Cathleen Synge Morawetz
Courant Institute of Mathematical Sciences, New York University
Constantine Dafermos
Brown University, Division of Applied Mathematics
Haim Brezis
Université de Paris VI, Institut Universitaire de France, and Rutgers University
Vladimir Sverak
University of Minnesota
Tai-Ping Liu
Academia Sinica, Taiwan & Stanford University
Lawrence C. Evans
University of California, Berkeley
Luis Caffarelli
University of Texas, Austin
Hans Weinberger
University of Minnesota
Peter Lax
Courant Institute
Louis Nirenberg
Courant Institute