Abstract: Deep Learning and Artificial Intelligence have attracted enormous attention recently. The race to design and manufacture "brain-like" computers is on, and several companies have produced various such chips. Yet the current state of affairs is very unsatisfactory and ad hoc. We describe a mathematical framework we have developed that provides a hierarchical architecture for learning and cognition. The architecture combines a wavelet preprocessor, a group-invariant feature extractor, and a hierarchical (layered) learning algorithm. There are two feedback loops: one from the learning output back to the feature extractor, and one all the way back to the wavelet preprocessor. We show that the scheme can incorporate all typical metric distances but also non-metric dissimilarity measures such as Bregman divergences. The learning module incorporates two universal learning algorithms in their hierarchical tree-structured form, both due to Kohonen: Learning Vector Quantization (LVQ) for supervised learning and the Self-Organizing Map (SOM) for unsupervised learning. We demonstrate the superior performance of the resulting algorithms and architecture on a variety of practical problems, including speaker and sound identification; simultaneous determination of sound direction of arrival, speaker, and vowel ID; and face recognition. We demonstrate how the underlying mathematics can be used to provide systematic models for the design, analysis, and evaluation of deep neural networks. We describe current work and plans on mixed-signal (digital and analog) micro-electronic implementations that mimic architectural abstractions of the cortex of higher-level animals and humans, for sound and vision perception and cognition. The resulting architecture is non-von Neumann (i.e., computing and memory are not separated in the hardware) and neuromorphic. We call the resulting chip class "Cortex-on-a-Chip."
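As a concrete reference point for readers unfamiliar with Kohonen's algorithms, the following is a minimal sketch of the classical (flat, non-hierarchical) SOM update rule with Euclidean distance. The grid size, decay schedules, and parameter names are illustrative assumptions; the talk's architecture uses a hierarchical tree-structured variant and more general dissimilarity measures.

```python
import numpy as np

def som_train(data, grid_w=4, grid_h=4, epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    """Train a small 2-D self-organizing map on the row vectors in `data`."""
    rng = np.random.default_rng(seed)
    d = data.shape[1]
    weights = rng.normal(size=(grid_h, grid_w, d))
    # grid coordinates of each map unit, used by the neighborhood function
    coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w),
                                  indexing="ij"), axis=-1)
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            # linearly decay the learning rate and neighborhood width
            frac = step / n_steps
            lr = lr0 * (1 - frac)
            sigma = sigma0 * (1 - frac) + 0.5
            # best-matching unit: closest weight vector in Euclidean distance
            dists = np.linalg.norm(weights - x, axis=2)
            bmu = np.unravel_index(np.argmin(dists), dists.shape)
            # Gaussian neighborhood on the map grid, centered at the BMU
            grid_d2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
            h = np.exp(-grid_d2 / (2 * sigma ** 2))[..., None]
            weights += lr * h * (x - weights)
            step += 1
    return weights
```

Replacing the Euclidean distance in the BMU search with another dissimilarity (e.g., a Bregman divergence) is the kind of generalization the abstract refers to.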
Abstract: A common problem in atomistic materials modelling is to determine properties of
crystalline defects, such as structure, energetics, and mobility, from which
meso-scopic material properties or coarse-grained models can be derived (e.g.,
Kinetic Monte-Carlo, Discrete Dislocation Dynamics, Griffith-type fracture
laws). In this talk I will focus on one of the most basic tasks, computing the
equilibrium configuration of a crystalline defect, but will also comment on
free energy and transition rate computations.
A wide range of numerical strategies, including the classical supercell method
(periodic boundary conditions) or flexible boundary conditions (discrete BEM),
but also more recent developments such as atomistic/continuum and QM/MM hybrid
schemes, can be interpreted as Galerkin discretisations with variational crimes,
for an infinite-dimensional nonlinear variational problem. This point of view is
effective in studying the structure of exact solutions, identifying approximation
parameters, deriving rigorous error bounds, and optimising and constructing novel
schemes with a superior error/cost ratio.
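As a toy illustration of the truncate-and-relax idea behind the supercell and clamped-boundary methods mentioned above, the sketch below relaxes a 1-D harmonic chain containing one modified "defect" bond, with the outermost atoms frozen at their ideal lattice positions. The model, parameters, and plain gradient-descent relaxation are illustrative assumptions, far simpler than the atomistic models of the talk.

```python
import numpy as np

def relax_defect_chain(n=30, k=1.0, k_defect=3.0, a=1.0, steps=20000, lr=0.1):
    """Relax a 1-D harmonic chain with one stiff, shortened 'defect' bond.

    The infinite crystal is truncated to n atoms; the end atoms are clamped
    at ideal lattice sites (a crude boundary condition), and the interior is
    driven to equilibrium by gradient descent on the elastic energy.
    """
    x = a * np.arange(n, dtype=float)      # ideal lattice positions
    ks = np.full(n - 1, k)
    ks[n // 2] = k_defect                  # stiffer bond models the defect
    ls = np.full(n - 1, a)
    ls[n // 2] = 0.8 * a                   # shorter natural length: a misfit
    for _ in range(steps):
        stretch = np.diff(x) - ls          # bond elongations
        f_bond = ks * stretch              # tension in each bond
        force = np.zeros(n)
        force[:-1] += f_bond               # bond pulls its left atom right
        force[1:] -= f_bond                # and its right atom left
        force[0] = force[-1] = 0.0         # clamped boundary conditions
        x += lr * force                    # gradient descent step
    return x
```

In equilibrium the bond tensions equalize, producing a displacement field that decays away from the defect; how the truncation error of such boundary conditions scales is exactly the kind of question the variational framework addresses.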
Time permitting, I will also discuss how this framework can be used to analyse
model errors in interatomic potentials, and how this can feed back into the
development of new interatomic potentials via machine-learning techniques.
Abstract: Inferring the laws of interaction of particles and agents in complex dynamical systems from observational data is a fundamental challenge in a wide variety of disciplines. We start from data consisting of trajectories of interacting agents, which is in many cases abundant, and propose a non-parametric statistical learning approach to extract the governing laws of interaction. We demonstrate the effectiveness of our learning approach both by providing theoretical guarantees and by testing the approach on a variety of prototypical systems in various disciplines, with homogeneous and heterogeneous agent systems, ranging from fundamental physical interactions between particles to system-level interactions such as social influence on people's opinions, predator-prey dynamics, flocking and swarming, and cell dynamics.
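A minimal sketch of this kind of nonparametric estimation, assuming a first-order model dx_i/dt = (1/N) Σ_j φ(|x_j − x_i|)(x_j − x_i) and a piecewise-constant hypothesis space for the radial kernel φ; the function name and binning scheme are illustrative, not the estimator of the talk.

```python
import numpy as np

def fit_interaction_kernel(X, V, n_bins=10):
    """Least-squares fit of a radial interaction kernel from snapshots.

    X, V: arrays of shape (T, N, d) with positions and velocities of N agents
    at T times. phi is approximated as piecewise constant on n_bins bins.
    Returns the bin edges and the fitted value of phi on each bin.
    """
    T, N, d = X.shape
    r_max = max(np.linalg.norm(X[t][None, :, :] - X[t][:, None, :], axis=2).max()
                for t in range(T)) + 1e-9
    edges = np.linspace(0.0, r_max, n_bins + 1)
    A_rows, b_rows = [], []
    for t in range(T):
        diff = X[t][None, :, :] - X[t][:, None, :]   # diff[i, j] = x_j - x_i
        r = np.linalg.norm(diff, axis=2)
        bins = np.clip(np.digitize(r, edges) - 1, 0, n_bins - 1)
        for i in range(N):
            # each observed velocity is linear in the unknown bin values of phi
            M = np.zeros((d, n_bins))
            for j in range(N):
                if j != i:
                    M[:, bins[i, j]] += diff[i, j] / N
            A_rows.append(M)
            b_rows.append(V[t, i])
    A = np.vstack(A_rows)
    b = np.concatenate(b_rows)
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return edges, coef
```

The key observation is that, for a fixed basis, the unknown kernel enters the dynamics linearly, so the inference reduces to a least-squares problem; the theoretical guarantees mentioned above concern how such estimators converge as trajectory data accumulates.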
Jen Rezeppa has 10+ years of experience in Silicon Valley working for and leading teams at top companies, including Apple, GoPro during its IPO, and presently Tesla. She earned a BS in Mathematics, and over the course of her career her UMCP degree has led to opportunities in teaching, reliability engineering, and most recently, demand planning and channel planning. Let's learn about Jen's career progression and her thoughts on leadership for students and new graduates as they explore how a math degree can lay the foundation for their professional goals.
Abstract: The talented students of Math299m - "Visualization Through Mathematica" will be presenting their final projects where they use Mathematica to model and investigate a diverse array of topics, including fractional derivatives, gravitation, seismology, machine learning, musical harmonies, financial models and more.
Abstract: Control of multi-agent systems has applications to many different domains
(including traffic, biology, and others) and can be addressed at many different scales. At the microscopic scale, sparsity is a desired property for applicability, while passing to mean-field limits poses mathematical challenges, as controls may become singular.
We first show some recent results at the microscopic scale and on rigorously passing to the limit. Then we introduce a new concept of differential equations for measures, which appears to be a promising framework for dealing with these problems, and, finally, show applications to traffic.
Abstract: Obtaining predictive dynamical equations from data lies at the heart of science and engineering modeling, and is the linchpin of our technology. In mathematical modeling one typically progresses from observations of the world (and some serious thinking!) first to equations for a model, and then to the analysis of the model to make predictions.
Good mathematical models give good predictions (and inaccurate ones do not), but the computational tools for analyzing them are the same: algorithms that are typically based on closed-form equations.
While the skeleton of the process remains the same, today we witness the development of mathematical techniques that operate directly on observations (data) and appear to circumvent the serious thinking that goes into selecting variables and parameters and deriving accurate equations. The process then may appear to the user a little like making predictions by "looking in a crystal ball." Yet the "serious thinking" is still there, and it uses the same (and some new) mathematics: it goes into building algorithms that jump directly from data to the analysis of the model (which is now not available in closed form) so as to make predictions. Our work here presents a couple of efforts that illustrate this "new" path from data to predictions. It really is the same old path, but it is travelled by new means.
Abstract: Suppose the eigenvalue distributions of two matrices M_1 and M_2 are known. What is the eigenvalue distribution of the sum M_1+M_2? This problem has a rich pure-mathematics history dating back to H. Weyl (1912), with many applications in various fields. Free probability theory (FPT) answers this question under certain conditions, which often involve some degree of randomness (disorder). We will describe FPT and show examples of its power for approximating physical quantities such as the density of states of the Anderson model, quantum spin chains, and gapped vs. gapless phases of some Floquet systems. These physical quantities are often hard to compute exactly. Nevertheless, using FPT and other ideas from random matrix theory, excellent approximations can be obtained. Beyond the applications presented, we believe the techniques will find new applications in fresh contexts.
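A quick numerical illustration of the setting: conjugating one matrix by a Haar-random orthogonal matrix makes the two matrices asymptotically freely independent, so the spectrum of the randomly rotated sum approximates the free additive convolution of the two eigenvalue distributions. This generic experiment (function name ours) is a sketch, not the specific computations of the talk.

```python
import numpy as np

def free_sum_spectrum(a_eigs, b_eigs, seed=0):
    """Sample the spectrum of A + Q B Q^T for a Haar-random orthogonal Q.

    For large matrices this approximates the free additive convolution of
    the eigenvalue distributions of A and B.
    """
    rng = np.random.default_rng(seed)
    n = len(a_eigs)
    # Haar-distributed orthogonal matrix via QR of a Gaussian matrix,
    # with the standard sign correction on the columns
    G = rng.normal(size=(n, n))
    Q, R = np.linalg.qr(G)
    Q = Q * np.sign(np.diag(R))
    M = np.diag(a_eigs) + Q @ np.diag(b_eigs) @ Q.T
    return np.linalg.eigvalsh(M)
```

For example, with both spectra equal to ±1 in equal proportion, the free convolution is the arcsine law on [-2, 2]: the sampled eigenvalues stay in that interval and their second moment approaches 2, the sum of the two variances.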
Abstract: We discuss basic notions of quantum entanglement relevant for the study of quantum matter. Entanglement is simultaneously a blessing for quantum computation and a curse for the classical simulation of matter. In recent years, there has been a surge of activity in proposing exactly solvable quantum spin chains with surprisingly high amounts of ground-state entanglement entropy, beyond what one expects from critical systems describable by conformal field theories (i.e., super-logarithmic violations of the area law). We will introduce entanglement and discuss these models. We prove that the ground-state entanglement entropy scales as \sqrt{n}, and in some cases is even extensive (i.e., ~n), despite the underlying Hamiltonian being (1) local, (2) having a unique ground state, and (3) translationally invariant in the bulk. These models have rich connections with combinatorics, random walks, and the universality of Brownian excursions. Lastly, we develop techniques that enable proving the gap of these models. As a consequence, the gap scaling of 1/n^c with c>1 that we prove rules out the possibility of these models having a relativistic conformal field theory description. Time permitting, we will discuss more recent developments in this direction.
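For readers new to these quantities, here is a minimal sketch of the standard definition of the von Neumann entanglement entropy of a bipartite pure state, computed from the Schmidt (singular) values; this is the generic textbook computation, not the specific spin-chain models of the talk.

```python
import numpy as np

def entanglement_entropy(psi, dim_a):
    """Von Neumann entanglement entropy of a pure state on A x B.

    psi: normalized state vector; subsystem A has dimension dim_a.
    Reshaping psi into a dim_a x dim_b matrix, its singular values are the
    Schmidt coefficients, whose squares are the reduced density matrix's
    eigenvalues.
    """
    m = psi.reshape(dim_a, -1)
    s = np.linalg.svd(m, compute_uv=False)   # Schmidt coefficients
    p = s ** 2
    p = p[p > 1e-12]                          # drop numerical zeros
    return float(-(p * np.log(p)).sum())
```

A product state gives entropy 0, while the singlet (|01> - |10>)/sqrt(2) gives ln 2; the models in the talk exhibit entropies growing like sqrt(n) or n with the chain length.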
Speaker: Jonathan Christopher Mattingly
Abstract: In October 2017, I found myself testifying for hours in a Federal court. I had not been arrested. Rather, I was attempting to quantify gerrymandering using mathematical analysis. I was intrigued by the surprising results of the 2012 election, wondering if these results were really surprising. The analysis hinged on probing the geopolitical structure of North Carolina using a Markov Chain Monte Carlo algorithm. In this talk, I will describe the mathematical ideas involved in our analysis. The talk will be accessible and, hopefully, interesting to all, including undergraduates. In fact, this project began as a sequence of undergraduate research projects, which undergraduates continue to be involved with to this day.
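The core computational tool here is Markov Chain Monte Carlo. Below is a minimal generic Metropolis sampler over a discrete state space, as a sketch of the machinery; real redistricting chains propose local changes to district boundaries over a vastly larger space of maps, with carefully designed target distributions.

```python
import numpy as np

def metropolis_sample(log_weight, propose, x0, n_steps, seed=0):
    """Generic Metropolis sampler over an arbitrary discrete state space.

    log_weight(x): log of the (unnormalized) target probability of state x.
    propose(x, rng): symmetric proposal returning a neighboring state.
    Returns the list of visited states.
    """
    rng = np.random.default_rng(seed)
    x, lw = x0, log_weight(x0)
    samples = []
    for _ in range(n_steps):
        y = propose(x, rng)
        lwy = log_weight(y)
        # accept with probability min(1, w_y / w_x)
        if np.log(rng.uniform()) < lwy - lw:
            x, lw = y, lwy
        samples.append(x)
    return samples
```

In the gerrymandering analysis, an ensemble of maps drawn this way serves as a baseline distribution against which an enacted map's election outcomes can be judged as typical or as an outlier.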
When: Thu, April 11, 2019 - 4:00pm Where: John S. Toll Physics Building, 4150 Campus Dr, College Park, MD 20740, USA Room 1412
Abstract: Linear algebra methods play a central role in modern methods for large-scale network analysis. The same
approach underlies many of these methods. First, one tells a story that associates the network with a matrix,
either as the generator of a linear time-invariant dynamical process on the graph or as a quadratic form used
to measure some quantity of interest. Then, one uses the eigenvalues and eigenvectors of the matrix to
reason about the properties of the dynamical system or quadratic form, and from there to understand the
network. We describe some of the most well-known spectral network analysis methods for tasks such as
bisection and partitioning, clustering and community detection, and ranking and centrality. These methods
largely depend only on a few eigenvalues and eigenvectors, but we will also describe some methods that
require a more global perspective, including methods that we have developed for local spectral clustering
and for graph analysis via spectral densities.
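As a concrete instance of the well-known methods mentioned above, here is a minimal sketch of classical spectral bisection using the Fiedler vector (the eigenvector of the second-smallest eigenvalue of the combinatorial Laplacian); dense matrices are used for simplicity.

```python
import numpy as np

def spectral_bisection(adj):
    """Partition a graph into two sets using the Fiedler vector.

    adj: symmetric adjacency matrix (dense numpy array).
    Returns a boolean array labeling the two sides of the cut.
    """
    deg = adj.sum(axis=1)
    L = np.diag(deg) - adj                 # combinatorial graph Laplacian
    vals, vecs = np.linalg.eigh(L)         # eigenvalues in ascending order
    fiedler = vecs[:, 1]                   # second-smallest eigenpair
    return fiedler >= np.median(fiedler)   # median split for a balanced cut
```

On a graph made of two dense clusters joined by a few edges, the sign pattern of the Fiedler vector recovers the clusters; methods for clustering, community detection, and ranking elaborate on this same eigenvector machinery.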
4176 Campus Drive - William E. Kirwan Hall
College Park, MD 20742-4015
P: 301.405.5047 | F: 301.314.0827