Abstract: What is a random function? What is noise? The standard answers are nonsmooth, defined pointwise via the Wiener process and Brownian motion. In the Chebfun project, we have found it more natural to work with smooth random functions defined by finite Fourier series with random coefficients. The length of the series is determined by a wavelength parameter lambda. Integrals give smooth random walks, which approach Brownian paths as lambda shrinks to 0, and smooth random ODEs, which approach stochastic DEs of the Stratonovich variety. Numerical explorations become very easy in this framework. There are plenty of conceptual challenges in this subject, starting with the fact that white noise has infinite amplitude and infinite energy, a paradox that goes back in two different ways to Einstein in 1905.
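The construction is easy to sketch numerically. The following is a minimal illustration, not the exact Chebfun definition (normalization conventions differ in detail), and `smooth_random_function` is a hypothetical helper name:

```python
import numpy as np

def smooth_random_function(lam, L=1.0, n=2000, seed=None):
    """Sample a smooth random function on [0, L]: a finite Fourier
    series with independent N(0, 1) coefficients, truncated at
    wavenumber m ~ L/lam. (A sketch; normalization conventions vary.)"""
    rng = np.random.default_rng(seed)
    x = np.linspace(0.0, L, n)
    m = int(L / lam)
    f = np.zeros(n)
    for k in range(1, m + 1):
        a, b = rng.standard_normal(2)
        f += a * np.cos(2 * np.pi * k * x / L) + b * np.sin(2 * np.pi * k * x / L)
    return x, f / np.sqrt(m)  # scale so the pointwise variance stays O(1)

x, f = smooth_random_function(lam=0.01, seed=42)
walk = np.cumsum(f) * (x[1] - x[0])  # its integral: a smooth random walk
```

As lam shrinks, realizations of `walk` look more and more like Brownian paths, which is the limit described in the abstract.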
Abstract: I will explain the construction and main properties of Dirac operators for representations of various Hecke-type algebras (e.g., Lusztig's graded Hecke algebra for p-adic groups, Drinfeld's Hecke algebras, rational Cherednik algebras). The approach is motivated by the classical Dirac operator which acts on sections of spinor bundles over Riemannian symmetric spaces, and by its algebraic version for Harish-Chandra modules of real reductive groups. The algebraic Dirac theory developed for these Hecke algebras turns out to lead to interesting applications: e.g., a Springer parameterization of projective representations of finite Weyl groups (in terms of the geometry of the nilpotent cone of complex semisimple Lie algebras), spectral gaps for unitary representations of reductive p-adic groups, connections between the Calogero-Moser space and Kazhdan-Lusztig double cells. I will present some of these applications in the talk.
Four years ago, an asteroid with a 20 meter diameter exploded in the
atmosphere over Chelyabinsk, Russia, causing injury and damage 20
kilometers away but no deaths. We are studying the question of what
would occur if such an airburst happened over the ocean. Would the
blast wave generate a tsunami that could threaten coastal cities far away?
We begin with several simulations of tsunami propagation from
asteroid-generated airbursts under a range of conditions. We use the
open source software package GeoClaw, which has been successful in
modeling earthquake-generated tsunamis. GeoClaw uses a basic model of
ocean waves called the shallow water equations (SWE). We then present a
simplified one dimensional model problem with an explicit solution in
closed form to understand
some of the unexpected results.
The SWE model, however, may not be accurate enough for airburst-generated
tsunamis, which have shorter length and time scales than
earthquake-generated waves. We extend our model problem to the
linearized Euler equations of fluid mechanics to explore the effects of
wave dispersion and water compressibility. We end with a discussion of
suitable models for airburst-generated tsunamis, and speculate as to
appropriate tools to study the more serious case of an asteroid that
impacts the water.
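The dispersive effects in question can be quantified with the classical linear (Airy) water-wave dispersion relation, of which the SWE wave speed is the long-wave limit. A small sketch with illustrative depth and wavenumbers:

```python
import numpy as np

g, h = 9.81, 4000.0           # gravity (m/s^2) and an illustrative ocean depth (m)
k = np.logspace(-6, -2, 200)  # wavenumbers (1/m)

c_swe = np.full_like(k, np.sqrt(g * h))   # SWE: every wavelength travels at sqrt(g h)
c_airy = np.sqrt(g * np.tanh(k * h) / k)  # linear (Airy) theory: dispersive

# Short waves travel slower than the SWE speed; at the short length
# scales of an airburst source this gap can matter.
```

This toy comparison ignores compressibility, which is a further correction beyond the incompressible Airy theory.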
Abstract: A common problem of atomistic materials modelling is to determine properties of crystalline defects, such as structure, energetics, and mobility, from which
meso-scopic material properties or coarse-grained models can be derived (e.g.,
Kinetic Monte-Carlo, Discrete Dislocation Dynamics, Griffith-type fracture
laws). In this talk I will focus on one of the most basic tasks, computing the
equilibrium configuration of a crystalline defect, but will also comment on
free energy and transition rate computations.
A wide range of numerical strategies, including the classical supercell method
(periodic boundary conditions) or flexible boundary conditions (discrete BEM),
but also more recent developments such as atomistic/continuum and QM/MM hybrid schemes, can be interpreted as Galerkin discretisations, with variational crimes, of an infinite-dimensional nonlinear variational problem. This point of view is effective for studying the structure of exact solutions, identifying approximation parameters, deriving rigorous error bounds, and optimising and constructing novel schemes with a superior error/cost ratio.
Time permitting I will also discuss how this framework can be used to analyse
model errors in interatomic potentials and how this can feed back into the
developing of new interatomic potentials by machine learning techniques.
Abstract: A natural question in smooth dynamics is to measure the
escape time of orbits from the neighborhood of invariant sets such as
fixed points or invariant submanifolds.
KAM theory asserts that a quasi-integrable real analytic Hamiltonian
system has in general a large measure set of invariant tori on which
the dynamics is quasi-periodic.
We show that these invariant tori are usually doubly exponentially
stable, and not more than doubly exponentially stable. Double
exponential stability refers to the fact that a point starting at
distance r from the invariant torus remains within distance 2r during
a time that is doubly exponential in some power of 1/r. Similar
results are obtained for general elliptic equilibria of Hamiltonian systems.
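In symbols (with the invariant torus written as \(\mathcal{T}\) and constants \(c, \alpha > 0\); notation assumed for illustration), double exponential stability reads:

```latex
d\bigl(x(0), \mathcal{T}\bigr) \le r
\;\Longrightarrow\;
d\bigl(x(t), \mathcal{T}\bigr) \le 2r
\quad \text{for all } |t| \le \exp\!\bigl(\exp\bigl(c\, r^{-\alpha}\bigr)\bigr).
```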
Abstract: We will discuss models for vehicular traffic flow on networks. The models
include both the Lighthill-Whitham-Richards (LWR) model and Follow-the-Leader (FtL) models.
The emphasis will be on the Braess paradox in which adding a road to a traffic network
can make travel times worse for all drivers.
In addition, we will present a novel proof of how FtL models approximate the LWR model
in the case of heavy traffic.
If time permits, we will discuss a novel model for multi-lane traffic.
The work is joint with N.H. Risebro (Oslo) and R. Colombo (Brescia).
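A Follow-the-Leader model of the kind mentioned above can be sketched in a few lines; the velocity law and parameters here are illustrative Greenshields-type choices, not necessarily those of the talk:

```python
import numpy as np

def ftl_step(x, dt, ell=5.0, vmax=30.0, leader_speed=30.0):
    """One forward-Euler step of a Follow-the-Leader model: each car's
    speed depends on its headway through a Greenshields-type law
    v(rho) = vmax * (1 - rho), with local density rho = ell / headway."""
    headway = np.diff(x)                    # gap to the car ahead
    rho = np.clip(ell / headway, 0.0, 1.0)  # density in [0, 1]
    v = vmax * (1.0 - rho)
    dx = np.append(v, leader_speed)         # the leading car drives freely
    return x + dt * dx

# 50 cars of length 5 m, initially spaced 6 m apart behind a free leader
x = np.arange(50) * 6.0
for _ in range(1000):
    x = ftl_step(x, dt=0.05)
```

Sending the number of cars to infinity while shrinking the car length is the heavy-traffic limit in which such models approximate LWR.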
Abstract: Density fitting considers the low-rank approximation of pair products of eigenfunctions of Hamiltonian operators. It is a very useful tool with many applications in electronic structure theory. In this talk, we will discuss upper-bound estimates for the numerical rank of the pair products of eigenfunctions. We will also introduce the interpolative separable density fitting (ISDF) algorithm, which reduces the computational scaling of the low-rank approximation and can be used for efficient algorithms for electronic structure calculations. This is based on joint work with Stefan Steinerberger, Kyle Thicke, and Lexing Ying.
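The low numerical rank of pair products can be seen in a toy example: for the first N sine eigenfunctions of the Laplacian on an interval (a stand-in for the Hamiltonian eigenfunctions of the talk), product-to-sum identities force the N^2 pair products to span at most 2N + 1 frequencies:

```python
import numpy as np

# Toy eigenfunctions: the first N Laplacian eigenfunctions on [0, 1]
n_grid, N = 400, 20
x = np.linspace(0.0, 1.0, n_grid)
phi = np.array([np.sqrt(2.0) * np.sin(np.pi * (k + 1) * x) for k in range(N)])

# All N^2 pair products phi_i * phi_j, one per row
pairs = np.array([phi[i] * phi[j] for i in range(N) for j in range(N)])

s = np.linalg.svd(pairs, compute_uv=False)
rank = int(np.sum(s > 1e-10 * s[0]))  # numerical rank: at most 2N + 1, far below N^2
```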
Abstract: I have been teaching a flipped class (applied linear
algebra), for which I have made a set of videos, which the students
watch before class. In class we have a general discussion: I answer
questions about the video, go over homework problems, etc. I have also
used this method to make videos for other, conventionally taught
classes on days that I am absent.
In this talk I will give a brief report on my experience flipping
classes, and then demonstrate how I make the videos, using an
interactive pen/tablet display and video capture software.
Speaker: Dr. Scott Armstrong
Abstract: I will discuss the large-scale asymptotics of solutions of linear elliptic equations with random coefficients. It is well-known that solutions converge (in the limit of scale separation) to those of a deterministic equation, a kind of law of large numbers result called "homogenization". In recent years obtaining quantitative information about this convergence has attracted a lot of attention. I will give an overview of one such approach to the topic based on variational methods, elliptic regularity, and "renormalization-group" arguments.
When: Wed, November 28, 2018 - 3:15pm Where: 3206 William E. Kirwan Hall
Abstract: Minimal surfaces are critical points of the area functional on the space of surfaces. Thus, it is natural to try to construct them via Morse theory. However, there is a serious issue when carrying this out, namely the occurrence of "multiplicity." I will explain this issue and recent joint work with C. Mantoulidis ruling this out for generic metrics.
Abstract: Estimating eigenvectors and principal subspaces is of fundamental importance for numerous problems in statistics, data science, and network analysis, including covariance matrix estimation, principal component analysis, and community detection. For each of these problems, this talk will present recent foundational results that quantify the local (e.g. entrywise) behavior of sample eigenvectors within the context of a unified signal-plus-noise matrix framework. Topics of discussion will include statistical consistency, asymptotic normality, matrix decompositions, Procrustes analysis, and real-data spectral graph clustering applications in connectomics.
Abstract: Modern statistical analysis often requires an integration of statistical thinking and algorithmic thinking. In many problems, statistically sound estimation procedures (e.g., the MLE) may be difficult to compute, at least in the naive form. This challenge calls for a new look into simple statistical methods such as the spectral methods (including PCA), as well as an examination of optimization algorithms from the statistical lens.
In this talk, I will sample two typical modern statistical problems: one addresses network type data (community detection), and the other involves pairwise comparison data (phase synchronization). I will show that in high dimensions, spectral methods exhibit a very interesting new phenomenon in entrywise behavior, which leads to new theoretical insights and has practical relevance. Also, for a complex nonconvex problem, I will show how algorithmic analysis can benefit from classical statistical ideas.
This talk features joint work with (alphabetically) Emmanuel Abbe, Nicolas Boumal, Jianqing Fan, and Kaizheng Wang.
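A minimal instance of the spectral method for community detection: in a two-block stochastic block model, the entrywise signs of the second adjacency eigenvector recover the blocks. Toy parameters only; this is not the talk's analysis:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, q = 400, 0.10, 0.02        # nodes, within-block and between-block edge probs
z = np.repeat([0, 1], n // 2)    # ground-truth communities

P = np.where(z[:, None] == z[None, :], p, q)
A = (rng.random((n, n)) < P).astype(float)
A = np.triu(A, 1)
A = A + A.T                      # symmetric adjacency, no self-loops

vals, vecs = np.linalg.eigh(A)   # eigenvalues in ascending order
u2 = vecs[:, -2]                 # eigenvector of the second-largest eigenvalue
labels = (u2 > 0).astype(int)    # its entrywise signs split the blocks

acc = max(np.mean(labels == z), np.mean(labels != z))  # accuracy up to label flip
```

The entrywise behavior of `u2`, rather than just its l2 accuracy, is what the talk's theory quantifies.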
Abstract: Non-Euclidean data that are indexed with a scalar predictor such as time are increasingly encountered in data applications, while statistical methodology and theory for such random objects are not well developed yet. To address the need for new methodology in this area, we develop a total variation regularization technique for nonparametric Frechet regression, which refers to a regression setting where a response residing in a generic metric space is paired with a scalar predictor and the target is a conditional Frechet mean. We show that the resulting estimator is representable by a piece-wise constant function and investigate the convergence rate of the proposed estimator for data objects that reside in Hadamard spaces. The method can also be applied to the problem of estimating multiple change-points in a sequence of non-Euclidean data. This is illustrated via the application to modeling the dynamics of brain networks and the study of evolving mortality distributions endowed with the Wasserstein distance.
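With notation assumed for illustration, the total-variation-regularized Frechet regression objective over a metric space \((\Omega, d)\) with responses \(Y_1, \dots, Y_n\) can be written as

```latex
\widehat{m} \;=\; \operatorname*{arg\,min}_{m_1, \dots, m_n \in \Omega}
\;\sum_{i=1}^{n} d^{2}\bigl(Y_i, m_i\bigr)
\;+\; \lambda \sum_{i=1}^{n-1} d\bigl(m_i, m_{i+1}\bigr),
```

where the total-variation penalty forces the minimizer to be piecewise constant in \(i\), which is what makes multiple change-point estimation possible; in the Euclidean case this reduces to the fused lasso.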
Abstract: A multi-relational network (MRN) is a network with multiple types of edges. The analysis of MRNs, especially link prediction, has a wide range of applications such as building recommender systems, predicting protein-protein interactions, and automatic question answering. Because the MRNs encountered in these applications are often very large, computationally efficient models are needed to synthesize information from multiple types of edges. In this talk, we will present a latent variable model for MRNs and discuss its statistical properties. We will also describe some methods to overcome the computational challenges of the model and a weighted negative sampling method to further improve the computational efficiency. The performance of the method will be demonstrated through a knowledge graph completion example.
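One widely used family of latent variable models for MRNs scores a candidate edge bilinearly in node embeddings, with one weight vector per edge type. The sketch below (DistMult-style) only fixes ideas; the talk's model and its negative-sampling scheme may differ in detail:

```python
import numpy as np

rng = np.random.default_rng(1)
n_nodes, n_rel, d = 100, 3, 16

U = rng.normal(size=(n_nodes, d))  # one latent vector per node
R = rng.normal(size=(n_rel, d))    # one weight vector per edge type

def score(i, r, j):
    """Bilinear score for an edge of type r between nodes i and j;
    higher scores mean the link is more likely."""
    return float(np.sum(U[i] * R[r] * U[j]))

def link_prob(i, r, j):
    return 1.0 / (1.0 + np.exp(-score(i, r, j)))  # logistic link
```

Training such a model on every non-edge is infeasible for large MRNs, which is where negative sampling schemes come in.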
Abstract: Given a vertex of interest in a network G, vertex nomination seeks to find the corresponding vertex of interest (if it exists) in a second network G', thereby ranking the vertices in G' according to their likelihood of correspondence. The vertex nomination problem and related information retrieval tasks have attracted much attention in the machine learning literature, with numerous applications in social and biological networks. However, the current framework has often been confined to a comparatively small class of network models, and the concept of statistically consistent vertex nomination schemes has barely been explored. In this talk, we extend the vertex nomination problem to a very general random graph model; drawing inspiration from the essentials of pattern recognition, we provide key definitions of Bayes optimality and consistency in our extended vertex nomination framework, including a derivation of the Bayes optimal vertex nomination scheme. In addition, we prove that no universally consistent vertex nomination schemes exist, and we explore practical ramifications of the lack of universal consistency in the context of robust vertex nomination in the presence of adversarial node behavior.
Abstract: Model calibration or data inversion involves using experimental or field data to estimate the unknown parameters of a mathematical model. This task is complicated by the discrepancy between the model and reality, and by possible bias in field data. The model discrepancy is often modeled by a Gaussian stochastic process (GaSP), but it was observed in many studies that the calibrated mathematical model can be far from the reality. Here we show that modeling the discrepancy function via a GaSP often leads to an inconsistent estimation of the calibration parameters even if one has an infinite number of repeated experiments and an infinite number of observations in each experiment. In this work, we develop the scaled Gaussian stochastic process (S-GaSP), a new stochastic process to model the discrepancy function in calibration. We establish the explicit connection between the GaSP and S-GaSP through the orthogonal series representation. We show the predictive mean estimator in the S-GaSP calibration model converges to the reality at the same rate as the one by the GaSP model, and the calibrated mathematical model in the S-GaSP calibration converges to the one that minimizes the L2 loss between the reality and mathematical model, whereas the GaSP calibration model does not have this property. The scientific goal of this work is to use multiple interferometric synthetic-aperture radar (InSAR) interferograms to calibrate a geophysical model for Kilauea Volcano, Hawaii. Analysis of both simulated and real data confirms that our approach is better than other approaches in prediction and calibration. Both the GaSP and S-GaSP calibration models are implemented in the "RobustCalibration" R Package on CRAN.
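The GaSP discrepancy formulation that the work starts from can be sketched as follows, with a toy model and toy "reality" and a fixed candidate value of the calibration parameter (the S-GaSP replaces the prior on the discrepancy; everything here is illustrative):

```python
import numpy as np

def rbf_kernel(x1, x2, ell=0.2, sigma=1.0):
    """Squared-exponential covariance for the discrepancy GaSP."""
    d = x1[:, None] - x2[None, :]
    return sigma**2 * np.exp(-0.5 * (d / ell) ** 2)

def model(x, theta):         # an (imperfect) mathematical model
    return theta * x

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 30)
y = 1.3 * x + 0.3 * np.sin(4.0 * x) + 0.05 * rng.standard_normal(30)  # field data

theta = 1.3                  # a candidate value of the calibration parameter
resid = y - model(x, theta)  # discrepancy plus noise

# GaSP posterior mean of the discrepancy delta(x) at the data sites
K = rbf_kernel(x, x)
delta_hat = K @ np.linalg.solve(K + 0.05**2 * np.eye(30), resid)
```

In full calibration, theta itself is estimated jointly with the discrepancy, and it is that joint estimation whose consistency the GaSP and S-GaSP formulations treat differently.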