<?xml version="1.0" encoding="UTF-8" ?>
	<rss version="2.0">
		<channel><title>Numerical Analysis</title><link>http://www-math.umd.edu/research/seminars.html</link><description></description><item>
	<title>Numerical Analysis for Operator Learning in SciML</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Tue, 03 Sep 2024 15:30:00 EDT</pubDate>
	<description><![CDATA[When: Tue, September 3, 2024 - 3:30pm<br />Where: Kirwan Hall 3206<br />Speaker: Christoph Schwab (ETH Zurich) - https://math.ethz.ch/research/applied-mathematics-numerical-analysis-scientific-computing/christoph-schwab.html<br />
Abstract: Deep Learning (DL) based methodologies are currently transforming computational methods in engineering and the sciences. At their core is the high expressivity of deep feedforward neural network (NN) architectures. NN expressivities match established numerical approximation architectures (splines, wave-, shear- and ridgelets, p- and hp-FEM, MS-FEM, plane waves) used in computational science and engineering. They allow numerical &quot;Operator Surrogates&quot;, i.e., finite-parametric NN or polynomial surrogates for nonlinear maps between function spaces. We present approaches to the numerical analysis of such surrogates, delivering sufficient conditions for worst-case expression rates and deterministic training of Gaussian random fields.<br />
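As a toy illustration of such a finite-parametric surrogate (a generic random-feature sketch, not one of the constructions analyzed in the talk), one can freeze random ReLU hidden weights and fit only the output layer by least squares:

```python
import numpy as np

# A minimal finite-parametric surrogate: a one-hidden-layer ReLU "network"
# with frozen random hidden weights; only the output layer is fitted by
# least squares to emulate a nonlinear map (here sin(pi x)).
rng = np.random.default_rng(0)
m = 100                                    # number of hidden neurons
W = rng.standard_normal(m)
b = rng.uniform(-1, 1, m)                  # biases spread over the domain

def features(x):
    # ReLU(w_j * x + b_j) for each neuron j; shape (len(x), m)
    return np.maximum(np.outer(x, W) + b, 0.0)

x_train = np.linspace(-1, 1, 200)
y_train = np.sin(np.pi * x_train)
c, *_ = np.linalg.lstsq(features(x_train), y_train, rcond=None)

x_test = np.linspace(-1, 1, 101)
err = np.max(np.abs(features(x_test) @ c - np.sin(np.pi * x_test)))
```

With 100 random ReLU features the piecewise-linear surrogate already reproduces the smooth target to a few digits, a small-scale instance of the expressivity statement above.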
<br />
Joint work with Carlo Marcati (Uni Pavia) and Jakob Zech (Uni Heidelberg).<br />]]></description>
</item>

<item>
	<title>Aziz Lecture: Multilevel approximation of Gaussian random fields</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Wed, 04 Sep 2024 15:15:00 EDT</pubDate>
	<description><![CDATA[When: Wed, September 4, 2024 - 3:15pm<br />Where: Kirwan Hall 3206<br />Speaker: Christoph Schwab (ETH Zurich) - https://math.ethz.ch/research/applied-mathematics-numerical-analysis-scientific-computing/christoph-schwab.html<br />
Abstract: Centered Gaussian random fields (GRFs) indexed by compacta, such as compact orientable manifolds M, are determined by their covariance operators. We consider the numerical analysis of sample-wise, compressive multi-level wavelet-Galerkin approximations of centered GRFs given as variational solutions to coloring operator equations driven by spatial white noise, with the pseudodifferential covariance operator being elliptic, self-adjoint and positive, from the Hörmander class.<br />
<br />
For pathwise approximations with p parameters, tapered covariance or precision matrices have O(p) nonzero entries, can be optimally diagonally preconditioned, and allow O(p) path simulation, covariance estimation and kriging of GRFs.<br />
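The O(p) claim can be illustrated with a generic Gauss-Markov toy example (a tridiagonal precision matrix, assumed here only for illustration, not the wavelet-Galerkin construction of the talk):

```python
import numpy as np

# O(p)-sparse precision: a tridiagonal (Markov-type) precision matrix Q has
# 3p-2 nonzeros, its Cholesky factor stays banded (no fill-in), and one
# sample of x ~ N(0, Q^{-1}) reduces to a banded triangular solve.
p = 500
Q = (2.01 * np.eye(p)
     - np.diag(np.ones(p - 1), 1)
     - np.diag(np.ones(p - 1), -1))        # positive definite, tridiagonal

L = np.linalg.cholesky(Q)                  # lower bidiagonal factor
rng = np.random.default_rng(4)
z = rng.standard_normal(p)
x = np.linalg.solve(L.T, z)                # L^T x = z  =>  Cov(x) = Q^{-1}

nnz = int(np.count_nonzero(Q))             # 3p - 2 nonzero entries
```

Dense linear algebra is used above for brevity; in practice the banded structure makes both factorization and sampling O(p).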
<br />
Joint work with Helmut Harbrecht (Uni Basel).<br />]]></description>
</item>

<item>
	<title>Runge-Kutta methods are stable</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Tue, 10 Sep 2024 15:30:00 EDT</pubDate>
	<description><![CDATA[When: Tue, September 10, 2024 - 3:30pm<br />Where: Kirwan Hall 3206<br />Speaker: Eitan Tadmor (University of Maryland) - https://www.math.umd.edu/~tadmor/<br />
Abstract: Runge-Kutta (RK) methods are a widely used class of discrete methods for the numerical integration of systems of ordinary differential equations (ODEs). In particular, these methods are routinely used to integrate the increasingly large systems of ODEs encountered in various applications. But the standard stability arguments for RK methods fail to cover arbitrarily large systems. We explain the failure of different approaches, offer a new stability theory, and demonstrate a few examples.<br />]]></description>
</item>

<item>
	<title>Weights and applications in numerics</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Tue, 17 Sep 2024 15:30:00 EDT</pubDate>
	<description><![CDATA[When: Tue, September 17, 2024 - 3:30pm<br />Where: https://umd.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=0f98cef1-69eb-46de-8c5b-b1ee0158fd27<br />Speaker: Abner Salgado (University of Tennessee, Knoxville) - https://sites.google.com/utk.edu/abnersg/<br />
Abstract: The use of weights and weighted norm inequalities has a rich history in harmonic analysis and in the study of regularity properties of solutions of partial differential equations (PDE). Starting from classical results, we will present an overview of the application of some of these ideas to the numerical analysis of PDEs. Our main attention will be on some recent results concerning the use of weights in fractional diffusion, problems with singular data, and some degenerate/singular PDE problems. Although these seem like disparate and unrelated applications, it is remarkable that the only structural assumption on the weight is that it belongs to a so-called Muckenhoupt $A_p$ class, which has been thoroughly studied in harmonic analysis since the 1970&#039;s.<br />]]></description>
</item>

<item>
	<title>Multilevel diffusion: Infinite dimensional score-based diffusion models</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Tue, 24 Sep 2024 15:30:00 EDT</pubDate>
	<description><![CDATA[When: Tue, September 24, 2024 - 3:30pm<br />Where: https://umd.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=434417b2-66a7-4fa5-bef1-b1f5015ed100<br />Speaker: Nicole Tianjiao Yang (Emory University) - https://nicoletyang.github.io/<br />
Abstract: Score-based diffusion models (SBDM) have recently emerged as state-of-the-art approaches for image generation. We develop SBDMs in the infinite-dimensional setting, that is, we model the training data as functions supported on a rectangular domain. Besides the quest for generating images at ever higher resolution, our primary motivation is to create a well-posed infinite-dimensional learning problem so that we can discretize it consistently at multiple resolution levels. We demonstrate how to overcome shortcomings of current SBDM approaches in the infinite-dimensional setting by ensuring the well-posedness of forward and reverse processes and derive the convergence of the approximation of multilevel training. We implement an infinite-dimensional SBDM approach and illustrate that approximating the score function with an operator network is beneficial for multilevel training.<br />]]></description>
</item>

<item>
	<title>Numerical schemes for solving the Cahn-Hilliard equation and other energy based systems</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Tue, 01 Oct 2024 15:30:00 EDT</pubDate>
	<description><![CDATA[When: Tue, October 1, 2024 - 3:30pm<br />Where: Kirwan Hall 3206<br />Speaker: Giordano Tierra (University of North Texas) - https://www.math.unt.edu/~gt0141/<br />
Abstract: The study of interfacial dynamics has become a key component in understanding the behavior of a great variety of systems in scientific, engineering and industrial applications. A very effective approach for representing interface problems is the diffuse interface/phase field approach, which describes the interfaces by layers of small thickness whose structure is determined by a balance of molecular forces, in such a way that the tendencies for mixing and de-mixing compete through a non-local mixing energy. This approach uses a phase-field function that takes distinct values in the pure phases (for instance 0 in one phase and 1 in the other) and varies smoothly in the interfacial regions. In particular, the Cahn-Hilliard equation was originally introduced to model the thermodynamic forces driving phase separation, arriving at a system with a gradient flow structure, that is, when there are no external forces applied to the system, the total free energy of the mixture is not increasing in time.<br />
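A minimal sketch of the gradient-flow property (a standard linearly implicit Fourier scheme in 1D, assumed here purely for illustration):

```python
import numpy as np

# 1D Cahn-Hilliard u_t = (u^3 - u - eps^2 u_xx)_xx on a periodic domain,
# advanced by a linearly implicit Fourier scheme: the stiff eps^2 u_xxxx
# term is implicit, the nonlinearity explicit. The discrete free energy
# should be non-increasing along the flow.
N, eps, dt = 128, 0.05, 1e-4
dx = 2 * np.pi / N
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)    # integer wavenumbers

def energy(u):
    ux = np.fft.ifft(1j * k * np.fft.fft(u)).real
    return float(np.sum(0.25 * (u**2 - 1)**2 + 0.5 * eps**2 * ux**2) * dx)

def step(u):
    uh = np.fft.fft(u)
    fh = np.fft.fft(u**3 - u)              # local part of the chemical potential
    uh = (uh - dt * k**2 * fh) / (1 + dt * eps**2 * k**4)
    return np.fft.ifft(uh).real

rng = np.random.default_rng(0)
u = 0.05 * rng.standard_normal(N)          # small random perturbation of u = 0
E0 = energy(u)
for _ in range(200):
    u = step(u)
E1 = energy(u)
```

The k = 0 mode is untouched by the update, so mass is conserved exactly, and checking `E1 <= E0` is precisely the discrete analogue of the energy dissipation discussed above.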
<br />
In this presentation I will talk about the Cahn-Hilliard model and the main ideas behind the derivation of numerical schemes to approximate it, showing the main advantages and disadvantages of each approach. The key point is to try to preserve the properties of the original model at the discrete level, while the numerical schemes are efficient in time. After that I will discuss how the ideas considered for designing numerical schemes to approximate phase-fields models can be extended to other energy based applications, like liquid crystals or mixture of fluids.<br />]]></description>
</item>

<item>
	<title>Nonlocal Attention Operator: Towards an Interpretable Foundation Model for Physical Systems</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Tue, 08 Oct 2024 15:30:00 EDT</pubDate>
	<description><![CDATA[When: Tue, October 8, 2024 - 3:30pm<br />Where: https://umd.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=6eb7ec88-a299-4d1d-b3da-b203015f7839<br />Speaker: Yue Yu (Lehigh University) - https://www.lehigh.edu/~yuy214/<br />
Abstract: While foundation models have gained considerable attention in core AI fields such as natural language processing (NLP) and computer vision (CV), their application to learning complex responses of physical systems from experimental measurements remains underexplored. In physical systems, learning problems are often characterized as discovering operators that map between function spaces, using only a few samples of corresponding function pairs. For instance, in the automated discovery of heterogeneous material models, the foundation model must be capable of identifying the mapping between applied loading fields and the resulting displacement fields, while also inferring the underlying microstructure that governs this mapping. While the former task can be seen as a PDE forward problem, the latter task frequently constitutes a severely ill-posed PDE inverse problem. In this talk, we will consider the learning of heterogeneous material responses as an exemplar problem to explore the development of a foundation model for physical systems. Specifically, we show that the attention mechanism is mathematically equivalent to a double integral operator, enabling nonlocal interactions among spatial tokens through a data-dependent kernel that characterizes the inverse mapping from data to the hidden microstructure/parameter field of the underlying operator. Consequently, the attention mechanism captures global prior information from training data generated by multiple systems (i.e., specimens with different microstructures) and suggests an exploratory space in the form of a nonlinear kernel map. Based on this theoretical analysis, we introduce a novel neural operator architecture, the Nonlocal Attention Operator (NAO).
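The attention-as-kernel identity can be checked in a few lines of NumPy (a toy single-head example; all weights here are random placeholders):

```python
import numpy as np

# Attention as a data-dependent kernel: the output at token i is a discrete
# double-sum "integral" sum_j kappa(i, j; data) v_j, where kappa is the
# row-softmax of query-key inner products.
rng = np.random.default_rng(1)
n, d = 6, 4                                # tokens (spatial points), channel width
X = rng.standard_normal((n, d))            # token features
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))

Q, K, V = X @ Wq, X @ Wk, X @ Wv
S = Q @ K.T / np.sqrt(d)
kappa = np.exp(S) / np.exp(S).sum(axis=1, keepdims=True)   # softmax kernel rows

out_attn = kappa @ V                       # standard attention output
# the same output written explicitly as a nonlocal kernel sum:
out_kernel = np.stack([sum(kappa[i, j] * V[j] for j in range(n))
                       for i in range(n)])
```

The two outputs coincide, which is the discrete form of the double-integral-operator viewpoint in the abstract.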
By leveraging the attention mechanism, NAO can address ill-posedness and rank deficiency in inverse PDE problems by encoding regularization and enhancing generalizability. To demonstrate the applicability of NAO to material modeling problems, we apply it to the development of a foundation constitutive law across multiple materials, showcasing its generalizability to unseen data resolutions and system states. Furthermore, we investigate the potential of NAO in microstructure discovery and multiscale crack propagation problems. Our work not only suggests a novel neural operator architecture for learning an interpretable foundation model of physical systems, but also offers a new perspective towards understanding the attention mechanism.<br />]]></description>
</item>

<item>
	<title>Stochastic-Gradient-based Algorithms for Solving Nonconvex Constrained Optimization Problems</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Tue, 15 Oct 2024 15:30:00 EDT</pubDate>
	<description><![CDATA[When: Tue, October 15, 2024 - 3:30pm<br />Where: https://umd.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=926453eb-5a94-4eab-a948-b20a01565457<br />Speaker: Frank E. Curtis (Industrial and Systems Engineering, Lehigh University) - https://coral.ise.lehigh.edu/frankecurtis/<br />
Abstract:  I will present recent work by my research group on the design and analysis of stochastic-gradient-based algorithms for solving nonconvex constrained optimization problems, which may arise, for example, in informed machine learning.  I will focus in particular on algorithmic strategies that have consistently been shown to exhibit the best practical performance in the deterministic setting, rather than focus on regularization-based methods that are popular for theoretical analyses, but often fail to yield satisfactory results.  Our algorithms possess solid theoretical convergence guarantees and preliminary experiments motivate continued study.<br />]]></description>
</item>

<item>
	<title>Multiphysics problems related to brain clearance, sleep and dementia</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Tue, 22 Oct 2024 15:30:00 EDT</pubDate>
	<description><![CDATA[When: Tue, October 22, 2024 - 3:30pm<br />Where: Online<br />Speaker: Kent Mardal (University of Oslo) - https://kent-and.github.io/<br />
Abstract: Recent theories suggest that a fundamental reason for sleep is simply the clearance of metabolic waste produced during the activities of the day. Furthermore, lack of clearance and accumulation of waste are linked to dementias such as Alzheimer&#039;s and Parkinson&#039;s diseases. <br />
Hence, solute transfer and fluid dynamics are currently hot research topics in neuroscience, where the aim is to understand the basic physiology of the brain. <br />
<br />
In this talk we will present multi-physics problems and numerical schemes that target these applications. We will start with basic applications of neuroscience and discuss corresponding multi-physics problems. In particular, this will lead us to problems  involving Stokes, Biot and fractional solvers at the brain-fluid interface.<br />]]></description>
</item>

<item>
	<title>AdaBB: A Parameter-Free Gradient Method for Convex Optimization</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Tue, 29 Oct 2024 15:30:00 EDT</pubDate>
	<description><![CDATA[When: Tue, October 29, 2024 - 3:30pm<br />Where: https://umd.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=c0d1241c-6522-4b1a-8bc3-b21801583843<br />Speaker: Shiqian Ma (Rice University) - https://sqma.rice.edu/<br />
Abstract: We propose AdaBB, an adaptive gradient method based on the Barzilai-Borwein stepsize. The algorithm is line-search-free and parameter-free, and essentially provides a convergent variant of the Barzilai-Borwein method for general unconstrained convex optimization. We analyze the ergodic convergence of the objective function value and the convergence of the iterates for solving general unconstrained convex optimization. Compared with existing works along this line of research, our algorithm gives the best lower bounds on the stepsize and the average of the stepsizes. Moreover, we present an extension of the proposed algorithm for solving composite optimization where the objective function is the summation of a smooth function and a nonsmooth function. Our numerical results also demonstrate the very promising potential of the proposed algorithms on some representative examples.<br />]]></description>
</item>

<item>
	<title>Macroscopic Dynamics for Chemical Reactions: Large deviation and Wasserstein diffusion approximation</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Tue, 05 Nov 2024 15:30:00 EST</pubDate>
	<description><![CDATA[When: Tue, November 5, 2024 - 3:30pm<br />Where: https://umd.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=5a47c268-2e27-4756-83e5-b21f016ce455<br />Speaker: Yuan Gao (Purdue University) - https://yuangaogao.github.io/<br />
Abstract: At a mesoscopic scale, the molecular number of each species in biochemical reactions can be modeled by the random time-changed Poisson processes. To characterize the macroscopic behaviors in the large volume limit, the law of large numbers (LLN) in path space determines a mean-field limit nonlinear ODE. At the same time, the WKB expansion yields a Hamilton-Jacobi equation (HJE) and the corresponding Lagrangian gives the good rate function in the large deviation principle (LDP). Rigorous proof can be done by recasting Varadhan&#039;s discrete nonlinear semigroup as a monotone scheme which approximates the limiting first-order HJE. The convergence of Varadhan&#039;s discrete nonlinear semigroup (the monotone scheme) to the continuous Lax-Oleinik semigroup yields LDP for the chemical reaction process at any single time. Consequently, the macroscopic mean-field limit reaction rate equation (LLN) is recovered. Moreover, the LDP for invariant measures can be used to construct the global energy landscape, which enables the dissipative-conservative decomposition for the reaction rate equation. For the diffusion approximation in the reversible (gradient flow) case, we also propose a canonical construction of diffusion on probability simplex based on discrete Wasserstein metric. For two species case, this Wasserstein diffusion approximation is equivalently converted to the 1D Wright-Fisher diffusion.<br />]]></description>
</item>

<item>
	<title>Learning a robust shape parameter for radial basis functions approximation with continual learning </title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Tue, 12 Nov 2024 07:45:00 EST</pubDate>
	<description><![CDATA[When: Tue, November 12, 2024 - 7:45am<br />Where: https://umd.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=686113c1-afc6-41d1-9234-b226016b1838<br />Speaker: Maria Han Vega (Ohio State University) - https://hanveiga.com/<br />
Abstract: Radial basis functions (RBFs) play an important role in function interpolation, being well suited to arbitrary sets of interpolation nodes. The accuracy of the interpolation depends on a parameter called the shape parameter. Although there are many approaches in the literature on how to choose it, finding the optimal shape parameter value in general remains a challenge. In this talk, I will present a novel approach to determine the shape parameter in RBFs. <br />
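The trade-off the shape parameter controls can be seen in a minimal Gaussian-RBF example (an illustration of the conditioning issue, not the speaker's method):

```python
import numpy as np

# Gaussian RBF interpolation on 1D nodes: nearly flat basis functions
# (small epsilon) are accurate but yield an ill-conditioned interpolation
# matrix; sharply peaked ones (large epsilon) are well conditioned but
# lose accuracy.
def rbf_matrix(nodes, epsilon):
    r = np.abs(nodes[:, None] - nodes[None, :])   # pairwise distances
    return np.exp(-(epsilon * r) ** 2)            # Gaussian kernel matrix

nodes = np.linspace(0.0, 1.0, 15)
cond_flat = np.linalg.cond(rbf_matrix(nodes, 0.5))    # nearly flat RBFs
cond_peaked = np.linalg.cond(rbf_matrix(nodes, 20.0)) # sharply peaked RBFs
```

The orders-of-magnitude gap between the two condition numbers is exactly why bounding the condition number while choosing the shape parameter, as in the talk, is a natural optimization target.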
We construct an optimisation problem to obtain a shape parameter that leads to an interpolation matrix with bounded condition number. We introduce a data-driven method that learns the map between sets of interpolation nodes and a suitable shape parameter. We propose a fall-back procedure to enforce a strict upper bound on the condition number of the interpolation matrix, as well as a continual learning strategy that improves the performance of the predictor by learning from previously run simulations. This methodology is assessed in a series of numerical tests on interpolation tasks and in an RBF-based finite difference (RBF-FD) method, in one and two space dimensions.<br />]]></description>
</item>

<item>
	<title>Canceled</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Tue, 19 Nov 2024 15:30:00 EST</pubDate>
	<description><![CDATA[When: Tue, November 19, 2024 - 3:30pm<br />Where: Kirwan Hall 3206<br />Canceled<br />]]></description>
</item>

<item>
	<title>Transport information geometric computations</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Tue, 03 Dec 2024 15:30:00 EST</pubDate>
	<description><![CDATA[When: Tue, December 3, 2024 - 3:30pm<br />Where: https://umd.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=73c2db92-b497-442d-8408-b23b01759321<br />Speaker: Wuchen Li (University of South Carolina) - https://people.math.sc.edu/wuchen/<br />
Abstract: We provide a numerical analysis and computation of neural network projected schemes for approximating one-dimensional Wasserstein gradient flows. We approximate the Lagrangian mapping functions of gradient flows by the class of two-layer neural network functions with ReLU (rectified linear unit) activation functions. The numerical scheme is based on a projected gradient method, namely the Wasserstein natural gradient, where the projection is constructed from the $L^2$ mapping spaces onto the neural network parameterized mapping space. We establish theoretical guarantees for the performance of the neural projected dynamics. We derive a closed-form update for the scheme with well-posedness and explicit consistency guarantee for a particular choice of network structure. General truncation error analysis is also established on the basis of the projective nature of the dynamics. Numerical examples, including gradient drift Fokker-Planck equations, porous medium equations, and Keller-Segel models, verify the accuracy and effectiveness of the proposed neural projected algorithm. This is based on a joint work with Xinzhe Zuo (UCLA), Jiaxi Zhao (NUS), Shu Liu (UCLA), and Stanley Osher (UCLA).<br />]]></description>
</item>

<item>
	<title>Quantum Eigenvalue (Phase) Estimation: From Quantum Data to Classical Signal Processing</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Thu, 12 Dec 2024 14:00:00 EST</pubDate>
	<description><![CDATA[When: Thu, December 12, 2024 - 2:00pm<br />Where: Kirwan Hall 3206<br />Speaker: Zhiyan Ding (University of California, Berkeley) - https://math.berkeley.edu/~zding.m/<br />
Abstract: Quantum eigenvalue (phase) estimation is one of the most important quantum primitives. While numerous quantum algorithms have been proposed to tackle this problem, they often demand substantial quantum resources, making them impractical for early fault-tolerant quantum computers. The talk will begin with a quantum oracle that transforms the quantum eigenvalue estimation problem into a classical signal processing problem. I will then introduce a simple classical subroutine for solving this problem, which surprisingly achieves state-of-the-art complexity results. Additionally, I will review the performance of traditional classical algorithms for this problem and share new results gained from our study. No prior knowledge of quantum computing is needed for this talk.<br />]]></description>
</item>

<item>
	<title>Some progress on low rank methods for time dependent equations</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Tue, 04 Feb 2025 15:30:00 EST</pubDate>
	<description><![CDATA[When: Tue, February 4, 2025 - 3:30pm<br />Where: https://umd.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=403cf24f-05f9-466b-85aa-b27a01681c5d<br />Speaker: Yingda Cheng (Virginia Tech) - https://yingdacheng.github.io/<br />
Abstract: In this talk, I will present some of our recent work on low rank methods for time-dependent differential equations. In the first part, we focus on the Lindblad master equation arising from the modeling of open quantum systems. A defining feature of the Lindblad equation is the completely positive and trace-preserving (CPTP) property of the solution. We present a high order accurate low rank method with the CPTP property. In the second part, we focus on stiff matrix differential equations, and present a new preconditioner based on dynamic low rank approximation for the low rank GMRES scheme. This is joint work with Shixu Meng (VT) and Daniel Appelo (VT).<br />]]></description>
</item>

<item>
	<title>Optimal Sampling in Least-Squares Methods</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Tue, 11 Feb 2025 15:30:00 EST</pubDate>
	<description><![CDATA[When: Tue, February 11, 2025 - 3:30pm<br />Where: https://umd.zoom.us/j/97661035379?pwd=eW4xR2xFL3paQ3VCTXd6bjNXNlJNUT09 <br />Speaker: Albert Cohen (Sorbonne Université) - https://www.ljll.fr/cohen/<br />
Abstract: Recovering an unknown function from point samples is a ubiquitous task in various application settings: non-parametric regression, machine learning, reduced modeling, response surfaces in computer or physical experiments, data assimilation and inverse problems. In this lecture we discuss the context where the user is allowed to select the measurement points (sometimes referred to as active learning). This allows us to define a notion of optimal sampling point distribution when the approximation is searched in an arbitrary but fixed linear space of finite dimension and computed by weighted least squares. Here optimal means that the approximation is comparable to the best possible in this space, while the sampling budget only slightly exceeds the dimension. We present simple randomized strategies that provably generate optimal samples, and discuss several ongoing developments.<br />]]></description>
</item>

<item>
	<title>Dynamic Generative AI for Uncertainty Quantification</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Tue, 25 Feb 2025 15:30:00 EST</pubDate>
	<description><![CDATA[When: Tue, February 25, 2025 - 3:30pm<br />Where: https://umd.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=dcea5240-c0b4-4f77-8136-b28f01651d49<br />Speaker: Feng Bao (Florida State University) - https://www.math.fsu.edu/~bao/<br />
Abstract: Generative machine learning models, including variational auto-encoders (VAE), normalizing flows (NF), generative adversarial networks (GANs), and diffusion models, have dramatically improved the quality and realism of generated content, whether it&#039;s images, text, or audio. In science and engineering, generative models can be used as powerful tools for probability density estimation or high-dimensional sampling, capabilities that are critical in uncertainty quantification (UQ), e.g., Bayesian inference for parameter estimation. Studies on generative models for image/audio synthesis focus on improving the quality of individual samples, which often makes the generative models complicated and difficult to train. On the other hand, UQ tasks usually focus on accurate approximation of statistics of interest without worrying about the quality of any individual sample, so direct application of existing generative models to UQ tasks may lead to inaccurate approximation or an unstable training process. To alleviate these challenges, we developed several new generative diffusion models for various UQ tasks, including a score-based nonlinear filter for recursive Bayesian inference and a training-free ensemble score filter for tracking high-dimensional stochastic dynamical systems. We will demonstrate the effectiveness of these methods in various UQ tasks, including tracking high-dimensional Lorenz 96 systems and data assimilation for multiple geophysical models.<br />]]></description>
</item>

<item>
	<title>Computation of origami-inspired structures and mechanical metamaterials</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Tue, 11 Mar 2025 15:30:00 EDT</pubDate>
	<description><![CDATA[When: Tue, March 11, 2025 - 3:30pm<br />Where: https://umd.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=8796cc66-27e8-42b9-8b36-b29d015a0c76<br />Speaker: Frederic Marazzato (University of Arizona) - https://sites.google.com/view/marazzaf/home<br />
Abstract: Origami folds have found a wide range of applications in engineering, for instance as solar panels for satellites or in producing inexpensive mechanical metamaterials. This talk will first focus on the direct problem of computing the deformation of periodic origami surfaces. A homogenization process for origami folds, proposed in [Nassar et al, 2017] and then extended in [Xu, Tobasco and Plucinsky, 2023], is first discussed. The talk will then focus on the PDEs describing the Miura fold, which is a classical origami fold. We study existence and uniqueness of solutions and then propose a finite element method to approximate them. In the second part, we will focus on the inverse problem of computing an optimal fold set approximating a given target surface. The folding of a thin elastic sheet is modeled as a two-dimensional nonlinear Kirchhoff plate with an isometry constraint. We formulate the problem as a minimization in the set of special functions of bounded variation and prove the existence of minimizers. Then, we use a phase-field damage model and a discontinuous finite element method to approximate the minimizers.<br />
We subsequently prove that this approximation $\Gamma$-converges to the sharp interface model. Finally, some numerical examples are presented.<br />]]></description>
</item>

<item>
	<title>TBA</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Tue, 25 Mar 2025 15:30:00 EDT</pubDate>
	<description><![CDATA[When: Tue, March 25, 2025 - 3:30pm<br />Where: Kirwan Hall 3206<br />Speaker: TBA (TBA) - TBA<br />
Abstract: TBA<br />]]></description>
</item>

<item>
	<title>Discontinuous Galerkin methods for Maxwell’s equations</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Tue, 01 Apr 2025 15:30:00 EDT</pubDate>
	<description><![CDATA[When: Tue, April 1, 2025 - 3:30pm<br />Where: https://umd.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=33f49515-d74a-4435-95bb-b2b2015a30d4<br />Speaker: Peter Monk (University of Delaware) - https://sites.udel.edu/monk/<br />
Abstract: Maxwell&#039;s equations govern the propagation of light and electromagnetic radiation. Computing the electromagnetic field is important in designing communications and sensing devices. To understand the challenges in approximating the solution of these equations I will consider a simple model scattering problem. Starting with the classical curl conforming edge element space, I will explain how standard software solves the Maxwell system. Then I will move on to the evolution of discontinuous Galerkin methods, concentrating on Hybridizable Discontinuous Galerkin (HDG) methods. Finally, I will discuss Trefftz discontinuous Galerkin methods where simple exact solutions of the Maxwell system are used element by element to approximate a general solution. This method has proved successful in solving large problems such as scattering by an aircraft but still faces issues that need to be addressed. Several numerical examples will illustrate the performance of the method.<br />]]></description>
</item>

<item>
	<title>On the Local Linear Convergence of ADMM for Solving SDPs under Strict Complementarity</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Tue, 08 Apr 2025 15:30:00 EDT</pubDate>
	<description><![CDATA[When: Tue, April 8, 2025 - 3:30pm<br />Where: https://umd.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=bf9b52ab-5228-4a5c-a284-b2b9016002e1<br />Speaker: Heng Yang (Harvard University) - https://hankyang.seas.harvard.edu/<br />
Abstract: We study the local linear convergence behavior of the Alternating Direction Method of Multipliers (ADMM) when applied to Semidefinite Programming (SDP). While ADMM is widely perceived as slow and only capable of achieving medium-accuracy solutions, due to both its sublinear worst-case complexity and empirical evidence of slow convergence, we challenge this conventional view. Specifically, we establish a new sufficient condition for local linear convergence: as long as the converged primal-dual solution satisfies strict complementarity, ADMM achieves local linear convergence, regardless of nondegeneracy conditions. Our proof relies on a direct local linearization of the ADMM update operator and a refined error bound for projection onto the positive semidefinite cone. This new bound improves upon prior results and highlights the anisotropic nature of projection residuals.<br />
We support our theoretical findings with extensive numerical experiments, demonstrating that ADMM exhibits local linear convergence across a broad class of SDP instances, including those where nondegeneracy fails. Additionally, we identify problem instances where ADMM performs poorly and trace these difficulties to near-violations of strict complementarity, a phenomenon that mirrors recent observations in linear programming. Finally, our experiments reveal intriguing connections between local linear convergence and rank identification.<br />
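The projection onto the positive semidefinite cone at the heart of the analysis is, in its generic form, an eigenvalue clipping (a standard sketch, not the paper's refined error bound):

```python
import numpy as np

# Euclidean projection of a symmetric matrix onto the PSD cone:
# keep the nonnegative part of the spectrum.
def project_psd(S):
    S = (S + S.T) / 2                      # symmetrize defensively
    w, V = np.linalg.eigh(S)
    return V @ np.diag(np.maximum(w, 0.0)) @ V.T

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 5))
A = (A + A.T) / 2                          # a random symmetric test matrix
P = project_psd(A)
```

The projection residual A - P retains only the nonpositive part of the spectrum, which is the object whose anisotropic behavior the refined bound above quantifies.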
Joint work with Shucheng Kang (Harvard) and Xin Jiang (Cornell).<br />]]></description>
</item>

<item>
	<title>Data Driven Modeling for Scientific Discovery and Digital Twins</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Tue, 15 Apr 2025 15:30:00 EDT</pubDate>
	<description><![CDATA[When: Tue, April 15, 2025 - 3:30pm<br />Where: https://umd.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=1ae71f02-8959-4058-ba58-b2c001597272<br />Speaker: Dongbin Xiu (The Ohio State University) - https://sites.google.com/view/dongbin-xiu<br />
Abstract: We present a data-driven modeling framework for scientific discovery, termed Flow Map Learning (FML). This framework enables the construction of accurate predictive models for complex systems that are not amenable to traditional modeling approaches. By leveraging measurement data and the expressiveness of deep neural networks (DNNs), FML facilitates long-term system modeling and prediction even when governing equations are unavailable. <br />
<br />
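As a minimal illustration of the flow-map idea (a toy in which a linear least-squares fit stands in for the DNN; this is not the speaker's FML framework):

```python
import numpy as np

# Toy flow-map learning: recover the one-step evolution map of an
# unknown discrete dynamical system x_{k+1} = A x_k from trajectory
# data alone, then use the learned map for long-term prediction.
rng = np.random.default_rng(1)
theta = 0.1
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])    # hidden dynamics

X = rng.standard_normal((2, 200))                  # observed states
Y = A @ X                                          # their successors

A_hat = Y @ X.T @ np.linalg.inv(X @ X.T)           # learned flow map

x = np.array([1.0, 0.0])
for _ in range(50):                                # long-term rollout
    x = A_hat @ x
```

With noiseless data the learned map reproduces the hidden dynamics; in FML a deep network plays the role of `A_hat` for systems with no known governing equations.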
FML is particularly powerful in the context of Digital Twins, an emerging concept in digital transformation. With sufficient offline learning, FML enables the construction of simulation models for key quantities of interest (QoIs) in complex Digital Twins, even when direct mathematical modeling of the QoI is infeasible. During the online execution of a Digital Twin, the learned FML model can simulate and control the QoI without reverting to the computationally intensive Digital Twin itself.<br />
<br />
As a result, FML serves as an enabling methodology for real-time control and optimization of the physical twin, significantly enhancing the efficiency and practicality of Digital Twin applications.<br />]]></description>
</item>

<item>
	<title>𝐻² conforming virtual element discretization of nondivergence form elliptic equations</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Thu, 17 Apr 2025 15:30:00 EDT</pubDate>
	<description><![CDATA[When: Thu, April 17, 2025 - 3:30pm<br />Where: https://umd.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=de71a275-f7f9-4400-998b-b2c70168c62a<br />Speaker: Guillaume Bonnet (Université Paris-Dauphine) - https://www.ceremade.dauphine.fr/~bonnet/<br />
Abstract: The numerical discretization of elliptic equations in nondivergence form is notoriously challenging, due to the lack of a notion of weak solutions based on variational principles. In many cases, there is still a well-posed variational formulation for such equations, which has the particularity of being posed in 𝐻² and therefore leads to a strong solution. Galerkin discretizations based on this formulation have been studied in the literature. Since 𝐻² conforming finite elements tend to be considered impractical, most of these discretizations are of discontinuous Galerkin type. On the other hand, it has been observed that the virtual element method provides a practical way to build 𝐻² conforming discretizations of variational problems. In this talk, I will describe a virtual element discretization of equations in nondivergence form. I will mainly present results we obtained in the setting of a simple linear model problem. In particular, I will show how the 𝐻² conformity of the method allows for a particularly simple well-posedness and error analysis. I will then briefly discuss the extension to equations with lower-order terms and with Hamilton-Jacobi-Bellman type nonlinearities, and present some numerical results.<br />
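To fix ideas about the problem class (not the virtual element method of the talk), a one-dimensional nondivergence-form equation a(x)u″ = f can be discretized with centered finite differences; a self-contained toy with a manufactured solution:

```python
import numpy as np

# Toy 1D nondivergence-form problem a(x) u'' = f on (0,1), u(0)=u(1)=0,
# with manufactured exact solution u(x) = sin(pi x).
n = 100
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
a = 1.0 + x                                  # nonconstant coefficient
u_exact = np.sin(np.pi * x)
f = -a * np.pi**2 * np.sin(np.pi * x)        # manufactured right-hand side

# assemble a(x_i) * (u_{i-1} - 2 u_i + u_{i+1}) / h^2 = f_i at interior nodes
main = -2.0 * a[1:-1] / h**2
off = a[1:-1] / h**2
M = np.diag(main) + np.diag(off[1:], -1) + np.diag(off[:-1], 1)
u = np.zeros(n + 1)
u[1:-1] = np.linalg.solve(M, f[1:-1])
err = np.max(np.abs(u - u_exact))            # second-order accuracy
```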
<br />
This is joint work with Andrea Cangiani, Andreas Dedner, and Ricardo Nochetto.<br />]]></description>
</item>

<item>
	<title>Quantum signal processing and nonlinear Fourier analysis: a dialogue</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Tue, 22 Apr 2025 15:30:00 EDT</pubDate>
	<description><![CDATA[When: Tue, April 22, 2025 - 3:30pm<br />Where: https://umd.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=8c974a95-1ae5-45df-95bc-b2c7016768ee<br />Speaker: Lin Lin (University of California, Berkeley) - https://math.berkeley.edu/~linlin/<br />
Abstract: Quantum Singular Value Transformation (QSVT) is one of the most important developments in quantum algorithms in the past decade. At the heart of QSVT is an innovative polynomial representation called Quantum Signal Processing (QSP), which can encode a target polynomial of definite parity using the product of a sequence of parameterized SU(2) matrices. Given a target polynomial, the corresponding parameters are called phase factors. In the past few years, there has been significant progress in designing and analyzing algorithms for finding phase factors, which can be viewed as a highly nonlinear optimization problem. In this talk, we argue that nonlinear Fourier analysis (NLFA) provides a natural framework for understanding QSP, as first observed by Thiele et al. Based on NLFA, we develop a Riemann–Hilbert–Weiss (RHW) algorithm to evaluate phase factors. To the best of our knowledge, this is the first provably numerically stable algorithm for almost all functions that admit a QSP representation. We will also discuss the impact of QSP on NLFA, which may lead to surprising progress in algorithms for inverse nonlinear Fourier transforms.<br />]]></description>
</item>

<item>
	<title>Martingale deep learning for very high-dimensional quasi-linear partial differential equations and stochastic optimal controls</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Tue, 29 Apr 2025 15:30:00 EDT</pubDate>
	<description><![CDATA[When: Tue, April 29, 2025 - 3:30pm<br />Where: Kirwan Hall 3206<br />Speaker: Wei Cai (Southern Methodist University) - https://people.smu.edu/cai/<br />
Abstract: In this talk, we will present a highly parallel and derivative-free martingale neural network method, based on Varadhan’s martingale formulation of PDEs, to solve Hamilton-Jacobi-Bellman (HJB) equations arising from stochastic optimal control problems (SOCPs), as well as general quasilinear parabolic partial differential equations (PDEs).<br />
<br />
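The martingale viewpoint can be illustrated in its simplest instance: u(t,x) = x² − t solves the backward heat equation ∂ₜu + ½∂ₓₓu = 0, so B<sub>t</sub>² − t is a martingale for Brownian motion B<sub>t</sub> and has constant zero expectation. A Monte Carlo sanity check (an editorial toy, not the speaker's method):

```python
import numpy as np

# For Brownian motion B_t, the process B_t**2 - t is a martingale:
# u(t,x) = x**2 - t solves u_t + 0.5*u_xx = 0.  Check that
# E[B_t**2 - t] stays near zero along the whole time grid.
rng = np.random.default_rng(2)
n_paths, n_steps, T = 50_000, 100, 1.0
dt = T / n_steps
B = np.cumsum(np.sqrt(dt) * rng.standard_normal((n_paths, n_steps)), axis=1)
t = dt * np.arange(1, n_steps + 1)
drift = np.abs((B**2 - t).mean(axis=0))   # should be near zero for all t
```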
In both cases, the PDEs are reformulated into a martingale problem so that the loss functions require neither the gradient nor the Hessian of the PDE solution and can be computed in parallel in both the time and spatial domains. Moreover, the martingale conditions for the PDEs are enforced using a Galerkin method realized with adversarial learning techniques, eliminating the need for direct computation of the conditional expectations associated with the martingale property. For SOCPs, a derivative-free implementation of the maximum principle for optimal controls is also introduced. The numerical results demonstrate the effectiveness and efficiency of the proposed method, which is capable of solving HJB and quasilinear parabolic PDEs accurately and efficiently in dimensions as high as 10,000.<br />]]></description>
</item>

<item>
	<title>Applied Math Colloquium: The Mean-Field Ensemble Kalman Filter</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Tue, 06 May 2025 15:30:00 EDT</pubDate>
	<description><![CDATA[When: Tue, May 6, 2025 - 3:30pm<br />Where: Kirwan Hall 3206<br />Speaker: Andrew Stuart (California Institute of Technology) - https://www.eas.caltech.edu/people/astuart<br />
Abstract: Ensemble Kalman filters (EnKF) constitute a methodology for approximating aspects of the filtering distribution in partially observed and noisy dynamical systems. They are widely adopted in the geophysical sciences, underpinning weather forecasting for example, and are starting to be used throughout the sciences and engineering; furthermore, they have been adapted as a general-purpose tool for parametric inference. The strength of these methods stems from their ability to operate using complex models as a black box, together with their natural adaptation to high performance computers. In this talk we introduce a mean-field formulation of the EnKF, demonstrate its use in the development of a theory for the error incurred by the EnKF, and show how the same formulation may be used, in conjunction with machine learning, to develop improved filters.<br />
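For orientation, one perturbed-observation EnKF analysis step can be sketched as follows (a standard textbook form, not the mean-field formulation of the talk; the name `enkf_update` is illustrative):

```python
import numpy as np

def enkf_update(ens, y, H, R, rng):
    """One perturbed-observation EnKF analysis step.
    ens: (d, N) state ensemble, y: observed data,
    H: linear observation operator, R: observation covariance."""
    X = ens - ens.mean(axis=1, keepdims=True)       # state anomalies
    HX = H @ ens
    Hc = HX - HX.mean(axis=1, keepdims=True)        # observation anomalies
    N = ens.shape[1]
    C_xy = X @ Hc.T / (N - 1)                       # cross covariance
    C_yy = Hc @ Hc.T / (N - 1) + R                  # innovation covariance
    K = np.linalg.solve(C_yy.T, C_xy.T).T           # Kalman gain C_xy C_yy^{-1}
    Yp = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, N).T
    return ens + K @ (Yp - HX)                      # updated ensemble

rng = np.random.default_rng(3)
prior = rng.standard_normal((2, 5000))              # N(0, I) prior ensemble
H = np.array([[1.0, 0.0]])                          # observe first coordinate
R = np.array([[0.01]])
post = enkf_update(prior, np.array([2.0]), H, R, rng)
```

Note that the dynamical model enters only through the forecast ensemble, which is the black-box property highlighted in the abstract.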
<br />
The mean-field formulation is based on joint work with Edoardo Calvello (Caltech) and Sebastian Reich (Potsdam).<br />
<br />
The error analysis is based on joint papers with Edoardo Calvello (Caltech), Jose Carrillo (Oxford), Franca Hoffmann (Caltech), Pierre Monmarche (Sorbonne, Paris) and Urbain Vaes (Ecole des Ponts, Paris).<br />
<br />
The use of machine learning is based on work with Eviatar Bach (Reading), Ricardo Baptista (Caltech), Edoardo Calvello (Caltech) and Bohan Chen (Caltech).<br />]]></description>
</item>

<item>
	<title>Aziz Lecture:  Allowing Image And Text Data To Communicate</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Wed, 07 May 2025 15:15:00 EDT</pubDate>
	<description><![CDATA[When: Wed, May 7, 2025 - 3:15pm<br />Where: Kirwan Hall 3206<br />Speaker: Andrew Stuart (California Institute of Technology) - https://www.eas.caltech.edu/people/astuart<br />
Abstract: A fundamental problem in artificial intelligence is the question of how to simultaneously deploy data from different sources such as audio, image, text and video; such data is known as multimodal. In this talk I will focus on the canonical problem of aligning image and text data, and describe some of the mathematical ideas underlying the challenge of allowing them to communicate. I will describe the encoding of text and image in Euclidean spaces and describe contrastive learning methods to identify and learn embeddings which align these two modalities; I will also describe the attention mechanism, a form of nonlinear correlation in vector-valued sequences. Attention turns out to be useful beyond this specific context, and I will show how it may be used to design and learn maps between Banach spaces or between spaces of probability measures.<br />]]></description>
</item>

<item>
	<title>A convergent algorithm for mean curvature flow of surfaces with Dirichlet boundary conditions</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Tue, 13 May 2025 15:30:00 EDT</pubDate>
	<description><![CDATA[When: Tue, May 13, 2025 - 3:30pm<br />Where: Kirwan Hall 3206<br />Speaker: Pedro Morin (Universidad Nacional del Litoral and CONICET) - https://www.fiq.unl.edu.ar/depto-mate/pmorin/<br />
Abstract: I will present a convergent algorithm for the computation of mean curvature flow of surfaces with fixed boundaries.<br />
Our analysis hinges upon the one recently developed by Kovács, Li, Lubich, and collaborators for closed surfaces, which in turn uses Huisken&#039;s evolution equations for the mean curvature and the normal vector.<br />
We extend their ideas to surfaces with boundaries by formulating appropriate boundary conditions for both the mean curvature and the normal vector. These boundary treatments are essential for the well-posedness of the discretization and for proving convergence.<br />
To effectively handle the boundary conditions for the normal vector, we introduce a nonlinear Ritz projection into the analysis. We prove that this projection is well-posed and achieves optimal approximation orders. As a result, we derive optimal 𝐻¹ error estimates for the surface position, velocity, mean curvature, and normal vector.<br />
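A useful sanity check for any mean curvature flow discretization is the one-dimensional analogue, curve shortening of a circle, whose radius obeys the exact law r′(t) = −1/r; a minimal time-stepping sketch (illustrative only, unrelated to the surface algorithm of the talk):

```python
import numpy as np

# Curve-shortening flow of a circle: the radius satisfies r'(t) = -1/r,
# with exact solution r(t) = sqrt(r0**2 - 2*t).  Forward Euler stepping:
r0, T, n = 1.0, 0.3, 10_000
dt = T / n
r = r0
for _ in range(n):
    r -= dt / r                       # explicit Euler step of r' = -1/r
exact = np.sqrt(r0**2 - 2.0 * T)      # reference value at time T
```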
Towards the end I will present some numerical experiments which illustrate the behavior of the convergent algorithm.<br />
This is joint work with Bárbara S. Ivaniszyn and M. Sebastián Pauletti.<br />]]></description>
</item>


	</channel>
</rss>