Norbert Wiener Center Archives for Fall 2021 to Spring 2022


Exponential bases for partitions of intervals

When: Mon, September 27, 2021 - 2:00pm
Where: Online
Speaker: David Walnut (GMU) -
Abstract: For a partition of $[0,1]$ into intervals $I_1,\dots,I_n$ we prove the existence of a partition of $\mathbb{Z}$ into $\Lambda_1,\dots,\Lambda_n$ such that the complex exponential functions with frequencies in $\Lambda_k$ form a Riesz basis for $L^2(I_k)$, and furthermore, that for any $J\subseteq\{1,2,\dots,n\}$, the exponential functions with frequencies in $\bigcup_{j\in J}\Lambda_j$ form a Riesz basis for $L^2(I)$ for any interval $I$ with length $|I|=\sum_{j\in J}|I_j|$. The construction extends to infinite partitions of $[0,1]$, but with size limitations on the subsets $J\subseteq\mathbb{Z}$. The construction utilizes an interesting assortment of tools from analysis, probability, and number theory.
This is joint work with Shauna Revay (GMU and Novetta), and Goetz Pfander (Catholic University of Eichstaett-Ingolstadt).
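
As a quick numerical illustration of the simplest case of this theorem (our sketch, not part of the abstract): for the partition $I_1=[0,1/2]$, $I_2=[1/2,1]$ one may take $\Lambda_1=2\mathbb{Z}$ and $\Lambda_2=2\mathbb{Z}+1$, whose union is all of $\mathbb{Z}$. The Riesz bounds on each interval can then be estimated from the extreme eigenvalues of a truncated Gram matrix:

    # Illustration only: estimate Riesz bounds of {e^{2 pi i lambda x} : lambda in Lambda}
    # on L^2([a,b]) from the eigenvalues of the truncated Gram matrix
    #     G[j,k] = integral_a^b e^{2 pi i (lambda_j - lambda_k) x} dx.
    import numpy as np

    def gram(freqs, a, b):
        d = np.subtract.outer(freqs, freqs).astype(float)   # lambda_j - lambda_k
        with np.errstate(divide='ignore', invalid='ignore'):
            G = (np.exp(2j*np.pi*d*b) - np.exp(2j*np.pi*d*a)) / (2j*np.pi*d)
        G[np.isclose(d, 0.0)] = b - a                       # diagonal entries
        return G

    N = 30
    even = 2 * np.arange(-N, N + 1)     # Lambda_1 = 2Z (truncated)
    odd = even + 1                      # Lambda_2 = 2Z + 1 (truncated)
    union = np.sort(np.concatenate([even, odd]))

    for name, freqs, (a, b) in [("Lambda_1 on I_1", even, (0.0, 0.5)),
                                ("Lambda_2 on I_2", odd, (0.5, 1.0)),
                                ("union on [0,1]", union, (0.0, 1.0))]:
        ev = np.linalg.eigvalsh(gram(freqs, a, b))
        print(f"{name}: Riesz bounds ~ [{ev.min():.3f}, {ev.max():.3f}]")

In this special case each family is in fact orthogonal, so the printed lower and upper bounds coincide; the content of the theorem is that a compatible choice of the $\Lambda_k$ exists for every partition.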

Zoom Link for September 27th Webinar:

https://umd.zoom.us/j/91097809301?pwd=Q1hXWXpkYlhDWGFHb1BLWnNGV2lmQT09

Meeting ID: 910 9780 9301

Meeting Passcode: FFT927

Empirical Bayesian inference using joint sparsity

When: Mon, October 4, 2021 - 2:00pm
Where: Online
Speaker: Anne Gelb (Dartmouth College) -
Abstract:

Meeting ID: 984 1143 6340

Meeting Passcode: 104593

Deep Approximation via Deep Learning

When: Mon, October 11, 2021 - 12:00pm
Where: Online
Speaker: Prof. Zuowei Shen (National University of Singapore) -
Abstract: The primary task of many applications is approximating/estimating a function through samples drawn from a probability distribution on the input space. Deep approximation approximates a function by compositions of many layers of simple functions, which can be viewed as a series of nested feature extractors. The key idea of a deep learning network is to convert the layers of compositions into layers of tunable parameters that are adjusted through a learning process so as to achieve a good approximation of the input data. In this talk, we shall discuss the mathematical theory behind this new approach and the approximation rate of deep networks; we will also discuss how this new approach differs from classical approximation theory, and how this new theory can be used to understand and design deep learning networks.
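
For a concrete instance of the power of composition (our illustration, not from the abstract), Yarotsky's classical construction approximates $x^2$ on $[0,1]$ by iterating a single hat function $h$, itself realizable by a one-layer ReLU network, so each additional layer of composition reduces the error by a factor of four:

    # Illustration only: x^2 ~ x - sum_{k=1..m} h^{(k)}(x)/4^k on [0,1], where
    # h(x) = 2*min(x, 1-x) = 2*relu(x) - 4*relu(x - 1/2) is a tiny ReLU layer
    # and h^{(k)} denotes h composed with itself k times.
    import numpy as np

    def h(x):
        return 2 * np.minimum(x, 1 - x)     # hat function on [0, 1]

    x = np.linspace(0.0, 1.0, 1001)
    approx, hk = x.copy(), x.copy()
    for k in range(1, 11):                  # 10 compositions = depth-10 network
        hk = h(hk)                          # h composed k times
        approx -= hk / 4**k
        print(f"depth {k:2d}: max error = {np.abs(x**2 - approx).max():.2e}")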

Zoom Link for October 11th Webinar:

https://umd.zoom.us/j/99381978785?pwd=V3ozeEpMVTI2TDZwcFVKNk9FcktnQT09

Meeting ID: 993 8197 8785

Meeting Passcode: 513626

Wilson Statistics: Derivation, Generalization, and Applications to Cryo-EM

When: Mon, October 18, 2021 - 2:00pm
Where: Online
Speaker: Amit Singer (Princeton) -
Abstract: The power spectrum of proteins at high frequencies is remarkably well described by the flat Wilson statistics. Wilson statistics therefore plays a significant role in X-ray crystallography and, more recently, in cryo-EM. Specifically, modern computational methods for three-dimensional map sharpening and atomic modeling of macromolecules by single particle cryo-EM are based on Wilson statistics. In this talk we use certain results about the decay rate of the Fourier transform to provide the first rigorous mathematical derivation of Wilson statistics. The derivation pinpoints the regime of validity of Wilson statistics in terms of the size of the macromolecule. Moreover, the analysis naturally leads to generalizations of the statistics to covariance and higher order spectra. These in turn provide a theoretical foundation for assumptions underlying the widespread Bayesian inference framework for three-dimensional refinement and for explaining the limitations of autocorrelation based methods in cryo-EM.
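
A toy simulation of the flat-spectrum phenomenon (our sketch; the talk's derivation is rigorous and far more general): for $N$ unit point "atoms" at random positions $x_j$, the structure factor $F(k)=\sum_j e^{-2\pi i k\cdot x_j}$ has expected power $N^2$ at the origin but decorrelates to the flat value $N$ at high frequency:

    # Illustration only: mean |F(k)|^2 over random directions at several radii.
    import numpy as np

    rng = np.random.default_rng(0)
    N = 500
    X = rng.uniform(-0.5, 0.5, size=(N, 3))          # atom positions in a box

    for radius in [0.0, 1.0, 5.0, 20.0, 80.0]:
        k = rng.normal(size=(2000, 3))               # random directions,
        k *= radius / np.maximum(np.linalg.norm(k, axis=1, keepdims=True), 1e-12)
        F = np.exp(-2j*np.pi * k @ X.T).sum(axis=1)  # F(k) for each sample
        print(f"|k| = {radius:5.1f}: mean |F|^2 = {np.mean(np.abs(F)**2):9.1f}"
              f"  (N = {N}, N^2 = {N*N})")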

Meeting ID: 936 8202 8028
Meeting Passcode: 783856

Expanding Quantum Field Theory Using Affine Quantization

When: Mon, October 25, 2021 - 2:00pm
Where: Online
Speaker: John Klauder (University of Florida) -
Abstract: Quantum field theory conventionally uses canonical quantization (CQ), which often fails, e.g., for $\varphi^4_4$. Affine quantization (AQ) - which will be introduced - can solve a variety of problems that CQ cannot. AQ can even be used to solve certain models regarded as nonrenormalizable. The specific procedures of AQ lead to a novel Fourier transformation that illustrates how AQ can make a generous contribution to quantum field theory.

Meeting ID: 952 9031 5793
Meeting Passcode: 582647

Climbing the Diagonal Clifford Hierarchy

When: Mon, November 8, 2021 - 2:00pm
Where: Online
Speaker: Prof. Robert Calderbank (Duke U.) -
Abstract: Quantum computers are moving out of physics labs and becoming generally programmable. In this talk, we start from quantum algorithms like magic state distillation and Shor factoring that make essential use of diagonal logical gates. The difficulty of reliably implementing these gates in some quantum error correcting code (QECC) is measured by their level in the Clifford hierarchy, a mathematical framework defined by Gottesman and Chuang when introducing the teleportation model of quantum computation. We describe a method of working backwards from a target diagonal logical gate at some level in the Clifford hierarchy to a quantum error correcting code (a CSS code) in which the target logical gate can be implemented reliably.

This talk describes joint work with my graduate students Jingzhen Hu and Qingzhong Liang.
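
To make the notion of "level" concrete (a single-qubit illustration of ours, not taken from the talk): $C_1$ is the Pauli group, and $U$ belongs to $C_k$ when $UPU^\dagger \in C_{k-1}$ for the Pauli generators $P = X, Z$. The diagonal gates $Z$, $S=Z^{1/2}$, $T=Z^{1/4}$ then sit at levels 1, 2, 3, each square root climbing one rung:

    # Illustration only: recursively classify the Clifford-hierarchy level
    # of single-qubit gates, up to global phase.
    import numpy as np

    I = np.eye(2, dtype=complex)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.diag([1, -1]).astype(complex)
    Y = 1j * X @ Z
    PAULIS = [I, X, Y, Z]

    def is_pauli(U):
        for P in PAULIS:
            c = np.trace(P.conj().T @ U) / 2    # best phase c with U ~ c P
            if np.isclose(abs(c), 1) and np.allclose(U, c * P):
                return True
        return False

    def in_level(U, k):
        if k == 1:
            return is_pauli(U)
        return all(in_level(U @ P @ U.conj().T, k - 1) for P in (X, Z))

    def level(U, kmax=5):
        return next(k for k in range(1, kmax + 1) if in_level(U, k))

    S = np.diag([1, 1j]); T = np.diag([1, np.exp(1j*np.pi/4)])
    for name, U in [("Z", Z), ("S", S), ("T", T)]:
        print(f"{name} sits at level {level(U)} of the Clifford hierarchy")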

Meeting ID: 937 3374 5978
Meeting Passcode: FFT

Design of Unbiased, Adaptive and Robust AI Systems

When: Mon, November 22, 2021 - 2:00pm
Where: Online
Speaker: Rama Chellappa (JHU) -
Abstract: Over the last decade, algorithms and systems based on deep learning and other data-driven methods have contributed to the reemergence of Artificial Intelligence-based systems with applications in national security, defense, medicine, intelligent transportation, and many other domains. However, another AI winter may be lurking around the corner if challenges due to bias, domain shift, and lack of robustness to adversarial attacks are not considered when designing AI systems. In this talk, I will present our approach to mitigating bias and to designing AI systems that are robust to domain shift and a variety of adversarial attacks.

Zoom Link for November 22nd Webinar:

https://umd.zoom.us/j/93733745978?pwd=UWVSOG5lblVvNThPS2RoVFVSWjRGZz09

Meeting ID: 937 3374 5978
Meeting Passcode: FFT

Complex Networks Workshop: Analysis, Numerics, and Applications

When: Fri, February 18, 2022 - 9:30am
Where: Kirwan Hall 3206
Speaker: Assorted (Various) - https://www.norbertwiener.umd.edu/Events/2022/ComplexNetworksWorkshop.html
Abstract: See schedule at: https://www.norbertwiener.umd.edu/Events/2022/ComplexNetworksWorkshop.html

Complex Networks Workshop: Analysis, Numerics, and Applications

When: Sat, February 19, 2022 - 9:30am
Where: Kirwan Hall 3206
Speaker: Assorted (Various) - https://www.norbertwiener.umd.edu/Events/2022/ComplexNetworksWorkshop.html
Abstract: Full schedule at: https://www.norbertwiener.umd.edu/Events/2022/ComplexNetworksWorkshop.html

A Spline Perspective of Deep Learning

When: Mon, February 28, 2022 - 2:00pm
Where: Online
Speaker: Richard Baraniuk (Rice University) -
Abstract: We study the geometry of deep learning through the lens of approximation theory via spline functions and operators. Our key result is that a large class of deep networks (DNs) can be written as a composition of continuous piecewise affine (CPA) splines, which provide a powerful portal through which to interpret and analyze their inner workings. We explore links to the classical theory of optimal classification via matched filters, the effects of data memorization, vector quantization (VQ), and K-means clustering, which open up new geometric avenues for studying how DNs organize signals in a hierarchical and multiscale fashion.
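
As a hands-on illustration of the CPA view (our sketch, not from the talk), the spline partition of a small one-dimensional ReLU network can be read off directly from its piecewise-constant derivative:

    # Illustration only: locate the knots (region boundaries) of a random
    # depth-3 ReLU network R -> R by detecting jumps in its slope.
    import numpy as np

    rng = np.random.default_rng(1)
    W1, b1 = rng.normal(size=(8, 1)), rng.normal(size=(8,))
    W2, b2 = rng.normal(size=(4, 8)), rng.normal(size=(4,))
    w3, b3 = rng.normal(size=(4,)), rng.normal()

    def net(x):
        h1 = np.maximum(W1 @ x[None, :] + b1[:, None], 0)
        h2 = np.maximum(W2 @ h1 + b2[:, None], 0)
        return w3 @ h2 + b3

    x = np.linspace(-4, 4, 200001)
    y = net(x)
    slopes = np.diff(y) / np.diff(x)              # piecewise constant for a CPA map
    jumps = x[1:-1][np.abs(np.diff(slopes)) > 1e-6]
    knots = jumps[np.insert(np.diff(jumps) > 1e-3, 0, True)]  # merge grid-adjacent flags
    print(f"network is affine on {len(knots)+1} pieces; knots near {np.round(knots, 2)}")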

Zoom Link for February 28th Webinar:
https://umd.zoom.us/j/93733745978?pwd=UWVSOG5lblVvNThPS2RoVFVSWjRGZz09

Meeting ID: 937 3374 5978
Passcode: FFT

A Harnack inequality for certain degenerate/singular elliptic PDEs shaped by convex functions

When: Mon, March 7, 2022 - 2:00pm
Where: Online
Speaker: Diego Maldonado (Kansas State University) -
Abstract: Certain convex functions on $R^n$ produce quasi-distances and Borel measures on $R^n$. Furthermore, their Hessians can shape degenerate/singular elliptic operators. We will describe geometric and measure-theoretic conditions on convex functions that allow one to prove a Harnack inequality for nonnegative solutions to their associated elliptic operators.
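
As one concrete instance (our illustration; the abstract does not fix notation), in the Monge-Ampere framework a smooth, strictly convex $\varphi$ induces the quasi-distance and Borel measure
$$\delta_\varphi(x,y)=\varphi(y)-\varphi(x)-\nabla\varphi(x)\cdot(y-x), \qquad \mu_\varphi = \det D^2\varphi\,dx,$$
while its Hessian shapes the linearized operator $L_\varphi u = \mathrm{trace}\big((D^2\varphi)^{-1}D^2u\big)$. Taking $\varphi(x)=|x|^2/2$ gives $\delta_\varphi(x,y)=|x-y|^2/2$, Lebesgue measure, and $L_\varphi=\Delta$, so the classical Harnack inequality for harmonic functions is the model case.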

Zoom Link for March 7th Webinar:

https://umd.zoom.us/j/93733745978?pwd=UWVSOG5lblVvNThPS2RoVFVSWjRGZz09

Meeting ID: 937 3374 5978
Passcode: FFT

Passive Source Localization

When: Mon, March 14, 2022 - 2:00pm
Where: Online
Speaker: Margaret Cheney (Colorado State University) -
Abstract: This talk introduces the problem of localizing electromagnetic sources from measurements of their radiated fields at two moving sensors. Two approaches are discussed: the first is based on measuring quantities known as “time differences of arrival” and “frequency differences of arrival”. This approach leads to some interesting geometrical problems. The second is a synthetic-aperture approach that also involves some unsolved problems.
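
A toy instance of the first approach (our sketch, not from the talk): each time-difference measurement constrains the source to one branch of a hyperbola whose foci are the two sensor positions, and measurements collected along the moving-sensor tracks can be intersected by nonlinear least squares:

    # Illustration only: TDOA localization of a static source from two
    # moving sensors via scipy's nonlinear least squares.
    import numpy as np
    from scipy.optimize import least_squares

    c = 3e8                                  # propagation speed (m/s)
    src = np.array([4000.0, 2500.0])         # unknown source (ground truth)
    t = np.linspace(0, 10, 20)
    s1 = np.stack([200.0 * t, np.full_like(t, 1000.0)], axis=1)    # sensor tracks
    s2 = np.stack([-150.0 * t, np.full_like(t, -800.0)], axis=1)

    def tdoa(x):                             # range differences / c along the tracks
        return (np.linalg.norm(x - s1, axis=1) - np.linalg.norm(x - s2, axis=1)) / c

    meas = tdoa(src) + 1e-9 * np.random.default_rng(2).normal(size=t.size)
    fit = least_squares(lambda x: tdoa(x) - meas, x0=np.array([1000.0, 1000.0]))
    print("estimated source:", fit.x, " true:", src)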


Zoom Link for March 14th Webinar:

https://umd.zoom.us/j/93733745978?pwd=UWVSOG5lblVvNThPS2RoVFVSWjRGZz09

Meeting ID: 937 3374 5978
Passcode: FFT

Online nonnegative matrix factorization and applications

When: Mon, March 28, 2022 - 2:00pm
Where: Online
Speaker: Deanna Needell (UCLA) -
Abstract: Online Matrix Factorization (OMF) is a fundamental tool for dictionary learning problems, giving an approximate representation of complex data sets in terms of a reduced number of extracted features. Convergence guarantees for most of the OMF algorithms in the literature assume independence between data matrices, and the case of dependent data streams remains largely unexplored. In this talk, we present results showing that a non-convex generalization of the well-known OMF algorithm for i.i.d. data converges almost surely to the set of critical points of the expected loss function, even when the data matrices are functions of some underlying Markov chain satisfying a mild mixing condition. As the main application, by combining online nonnegative matrix factorization and a recent MCMC algorithm for sampling motifs from networks, we propose a novel framework of Network Dictionary Learning that extracts "network dictionary patches" from a given network in an online manner, encoding the main features of the network. We demonstrate this technique on real-world data and discuss recent extensions and variations.
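
A compact sketch of the online update pattern (our illustration in the spirit of classical OMF, not the authors' code): nonnegative codes are fit per minibatch with the dictionary frozen, and the dictionary is then refreshed from running sufficient statistics, so no past data needs to be stored:

    # Illustration only: online nonnegative matrix factorization with
    # multiplicative updates and aggregated statistics A = sum H H^T, B = sum X H^T.
    import numpy as np

    rng = np.random.default_rng(3)
    d, r = 50, 5
    W = np.abs(rng.normal(size=(d, r)))              # dictionary, d x r
    W_true = np.abs(rng.normal(size=(d, r)))         # ground-truth features

    def encode(X, W, iters=50):
        H = np.abs(rng.normal(size=(W.shape[1], X.shape[1])))
        for _ in range(iters):                       # Lee-Seung updates for H
            H *= (W.T @ X) / (W.T @ W @ H + 1e-10)
        return H

    A, B = np.zeros((r, r)), np.zeros((d, r))
    for _ in range(200):                             # stream of minibatches
        X = W_true @ np.abs(rng.normal(size=(r, 8)))
        H = encode(X, W)
        A += H @ H.T; B += X @ H.T                   # aggregate statistics
        W *= B / (W @ A + 1e-10)                     # dictionary update, stays >= 0
    err = np.linalg.norm(X - W @ encode(X, W)) / np.linalg.norm(X)
    print(f"relative reconstruction error on the last minibatch: {err:.3f}")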

Zoom Link for March 28th Webinar:

https://umd.zoom.us/j/93733745978?pwd=UWVSOG5lblVvNThPS2RoVFVSWjRGZz09

Meeting ID: 937 3374 5978
Passcode: FFT

Data Representation Learning from a Single Pass of the Data

When: Mon, April 4, 2022 - 2:00pm
Where: Online
Speaker: Prof. Alexander Cloninger (UCSD) -
Abstract: In many applications, constructing kernel matrices or pairwise distance matrices can be prohibitively expensive. This can be due to the expense of storing data in a streaming setting, or the expense of accessing multiple large data sets to compute some statistical distance between them. In this talk, I highlight several settings in which we can compute representations of data points (or entire point clouds) on a single pass. This includes Linearized Optimal Transport for computing Wasserstein distances between distributions and performing supervised learning, boosted kernel regression on streaming data, and streaming quantization of translation invariant kernel feature spaces.
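
A one-dimensional sketch of the single-pass idea behind Linearized Optimal Transport (our illustration, not from the talk): with a uniform reference measure, each data set embeds in one pass as its empirical quantile function, and Euclidean distances between embeddings reproduce Wasserstein-2 distances exactly in 1-D, with no pairwise transport solves afterwards:

    # Illustration only: LOT embeddings in 1-D via quantile functions.
    import numpy as np

    rng = np.random.default_rng(4)
    grid = (np.arange(1000) + 0.5) / 1000       # quantile levels of the reference

    def lot_embed(samples):
        return np.quantile(samples, grid)       # one pass per data set

    a = rng.normal(0.0, 1.0, size=5000)
    b = rng.normal(2.0, 1.5, size=5000)
    w2_lot = np.sqrt(np.mean((lot_embed(a) - lot_embed(b))**2))
    # closed form for 1-D Gaussians: W2^2 = (m1 - m2)^2 + (s1 - s2)^2
    print(f"LOT estimate: {w2_lot:.3f}   exact Gaussian W2: {np.hypot(2.0, 0.5):.3f}")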

Zoom Link for April 4th Webinar:

https://umd.zoom.us/j/93733745978?pwd=UWVSOG5lblVvNThPS2RoVFVSWjRGZz09

Meeting ID: 937 3374 5978
Passcode: FFT