Sayas Numerics Seminar Archives for Fall 2021 to Spring 2022
Nonconforming and Discontinuous Methods in the Numerical Approximation of Nonsmooth Variational Problems
When: Tue, September 7, 2021 - 3:30pm
Where: ONLINE sayasseminar.math.umd.edu
Speaker: Soeren Bartels (University of Freiburg, Germany) -
Abstract: Nonconforming and discontinuous finite elements are attractive for discretising variational problems with limited regularity properties, in particular, when discontinuities may occur. The possible lack of differentiability of related functionals prohibits the use of classical arguments to derive error estimates. As an alternative we make use of discrete and continuous convex duality relations and of quasi-interpolation operators with suitable projection properties. For total variation regularised minimisation problems a quasi-optimal error estimate is derived which is not available for standard finite element methods. Our results use and extend recent ideas by Chambolle and Pock.
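To illustrate the problem class the abstract refers to, here is a minimal sketch of the Chambolle–Pock primal-dual iteration applied to the simplest total variation regularised problem, the one-dimensional ROF denoising model. This is an illustration of the model problem only, not the nonconforming finite element method of the talk; the function name and step-size choices below are our own assumptions.

```python
import numpy as np

def tv_denoise_1d(f, lam, n_iter=500):
    """Chambolle-Pock primal-dual iteration for the 1D ROF model
    min_u 0.5*||u - f||^2 + lam*||Du||_1, D = forward differences."""
    u = f.copy()
    u_bar = u.copy()
    p = np.zeros(f.size - 1)     # dual variable, one entry per difference
    tau = sigma = 0.5            # tau*sigma*||D||^2 <= 1 since ||D||^2 <= 4
    for _ in range(n_iter):
        # dual ascent step, then projection onto the box {|p_i| <= lam}
        p = np.clip(p + sigma * np.diff(u_bar), -lam, lam)
        # D^T p, the (negative divergence) adjoint of the forward difference
        dtp = np.concatenate(([-p[0]], p[:-1] - p[1:], [p[-1]]))
        # primal descent step followed by the prox of 0.5*||. - f||^2
        u_new = (u - tau * dtp + tau * f) / (1 + tau)
        u_bar = 2 * u_new - u    # over-relaxation
        u = u_new
    return u
```

Because the fidelity term is strongly convex, the primal prox step has the closed form used above; the dual projection is where the nonsmooth total variation term enters.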
Hybrid analytic-numerical compact models for radiation induced photocurrent effects
When: Tue, November 2, 2021 - 3:30pm
Where: ONLINE sayasseminar.math.umd.edu
Speaker: Pavel Bochev (Sandia National Laboratories) -
Abstract: Compact semiconductor device models are essential for efficiently designing and analyzing large circuits. However, traditional compact model development requires a large amount of manual effort and can span many years. Moreover, inclusion of new physics such as radiation induced photocurrent effects into an existing model is not trivial and may require redevelopment from scratch. Data-driven approaches have the potential to automate and significantly speed up the development of compact models. In this talk we focus on the demonstration of this approach for the development of a hybrid numerical-analytical compact photocurrent model.
Compact photocurrent models are generally formulated by separating the total photocurrent into prompt and delayed components. The former is treated by invoking the depletion approximation, which reduces the Drift-Diffusion Equations in the depletion region to a Poisson equation that can be solved analytically. The delayed component is handled with the charge balance assumption, under which the excess carrier dynamics can be modeled by the Ambipolar Diffusion Equation (ADE). However, the ADE is a nonlinear, time-dependent PDE that cannot be solved analytically. Compact analytic models apply further physical approximations and assumptions that render the ADE solvable in closed form but may reduce the model's accuracy.
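As a generic numerical illustration of the delayed-component model (not Sandia's compact model), a linearized ADE of the form dn/dt = D*d2n/dx2 - n/tau + G(t) can be advanced with an explicit finite-difference scheme. All names, boundary conditions, and parameter values below are illustrative assumptions.

```python
import numpy as np

def solve_ade(n0, D, tau_life, G, dx, dt, n_steps):
    """Explicit finite-difference solve of a linearized ambipolar diffusion
    equation  dn/dt = D*d2n/dx2 - n/tau_life + G(t),
    with zero-Dirichlet boundaries (excess carriers swept out at contacts)."""
    assert D * dt / dx**2 <= 0.5, "explicit scheme stability limit"
    n = n0.copy()
    for k in range(n_steps):
        lap = np.zeros_like(n)
        # centered second difference on interior points
        lap[1:-1] = (n[2:] - 2 * n[1:-1] + n[:-2]) / dx**2
        # forward-Euler update: diffusion, recombination, generation
        n = n + dt * (D * lap - n / tau_life + G(k * dt))
        n[0] = n[-1] = 0.0
    return n
```

A data-driven compact model in the spirit of the talk would replace this grid solve with a cheap surrogate fitted to such synthetic (or measured) solutions.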
In this talk we present a hybrid analytic-numerical approach to replace analytic solutions of the governing equations by numerical ones obtained from synthetic and/or experimental data by using a hierarchy of data-driven and machine-learning approaches. This obviates the need for additional approximations and yields a hierarchy of accurate and computationally efficient compact photocurrent models. We demonstrate these models by comparing their predictions with those of state-of-the-art analytic models using synthetic data and photocurrent measurements obtained at the Little Mountain Test Facility at Hill AFB, Utah.
We will also briefly review Xyce PyMi, a new Python interface that enables execution of data-driven compact device models from Sandia's massively parallel production circuit simulator Xyce.
This is joint work with J. Hanson, B. Paskaleva, E. Keiter, C. Hembree, P. Kuberry.
Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC., a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525. This work describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the paper do not necessarily represent the views of the U.S. Department of Energy or the United States Government.
Some Nonsmooth Function Classes and Their Optimization
When: Tue, November 16, 2021 - 3:30pm
Where: ONLINE sayasseminar.math.umd.edu
Speaker: Jong-Shi Pang (University of Southern California) -
Abstract: Optimization problems with coupled nonsmoothness and nonconvexity are pervasive in statistical learning and many engineering areas. They are very challenging to analyze and solve. In particular, since the computation of their minimizers, both local and global, is generally intractable, one should settle for computable solutions with guaranteed properties and practical significance. In the case when these problems arise from empirical risk minimization in statistical estimation, inferences should be applied to the computed solutions to bridge the gap between statistical analysis and computational results.
This talk gives an overview of several nonsmooth function classes and their connections and sketches an iterative surrogation-based algorithm for the minimization of one particular class of non-Clarke regular composite optimization problems. We will also very briefly touch on the general surrogation approach supplemented by exact penalization to handle challenging constraints.
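To give a flavour of the surrogation (majorization-minimization) idea on a coupled nonsmooth, nonconvex problem, consider capped-l1 penalized least squares. This is a toy sketch, not the algorithm of the talk: the DC decomposition min(|x|, theta) = |x| - max(|x| - theta, 0) lets each outer step linearize the concave part, leaving a convex l1 subproblem solved here by ISTA. All names and parameters are our own.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def capped_l1_mm(A, b, lam, theta, n_outer=20, n_inner=200):
    """Surrogation sketch for  min_x 0.5*||Ax - b||^2 + lam*sum_i min(|x_i|, theta).
    Write min(|x|,theta) = |x| - max(|x|-theta, 0); the second term is convex,
    so linearizing its negative yields a convex majorizing surrogate."""
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth gradient
    for _ in range(n_outer):
        # subgradient of h(x) = max(|x|-theta, 0): sign(x_i) where |x_i| > theta
        g = lam * np.sign(x) * (np.abs(x) > theta)
        # inner ISTA on the surrogate: 0.5*||Ax-b||^2 - g.x + lam*||x||_1
        for _ in range(n_inner):
            grad = A.T @ (A @ x - b) - g
            x = soft_threshold(x - grad / L, lam / L)
    return x
```

Each surrogate upper-bounds the true objective and touches it at the current iterate, so the outer objective values decrease monotonically, which is the guaranteed-property flavour of "computable solutions" mentioned above.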
This talk is based on the monograph titled “Modern Nonconvex Nondifferentiable Optimization,” written jointly with Ying Cui at the University of Minnesota, to be published in mid-November 2021.
Optimality in Learning
When: Tue, December 7, 2021 - 3:30pm
Where: ONLINE sayasseminar.math.umd.edu
Speaker: Ronald DeVore (Texas A&M University) -
Abstract: Learning an unknown function $f$ from data observations arises in a myriad of application domains. This talk will present the mathematical view of learning. It has two main ingredients: (i) a model class assumption which summarizes the totality of information we have about $f$ in addition to the data observations, and (ii) a metric which describes how we measure performance of a learning algorithm.
We first mathematically describe optimal recovery, which is the best possible performance for a learning algorithm. While optimal recovery provides the ideal benchmark against which to evaluate the performance of a numerical learning algorithm, it does not, in and of itself, provide a numerical recipe for learning. We then turn to the construction of discrete optimization problems whose solution provides a provably near-optimal solution to the learning problem. We compare these discretizations with what is typically done in machine learning and, in particular, explain some of the reasons why machine learning prefers over-parameterized neural networks for numerically resolving the corresponding discrete optimization problems. The main obstacle in machine learning is to produce a numerical algorithm that solves the correct discretization problem and to prove its convergence to a minimizer. We close with a few remarks on this bottleneck.