Norbert Wiener Center Archives for Fall 2018 to Spring 2019

Organizational Meeting

When: Tue, September 4, 2018 - 2:00pm
Where: Kirwan Hall 3206

Harmonic analysis in combinatorics: a case study

When: Tue, September 11, 2018 - 2:00pm
Where: Kirwan Hall 3206
Speaker: Dong Dong (UMD) -
The Roth theorem, which concerns the existence of three-term arithmetic progressions in certain sets, is a central topic in combinatorics. It also attracts researchers from different fields such as number theory, ergodic theory, analysis, and even computer science. In this talk, we will look at the Roth theorem from the harmonic analysis point of view. Surprisingly, harmonic analysis connects multilinear operators to very deep algebraic geometry. Although a few branches of mathematics are involved, this talk will be accessible to second-year graduate students and above.
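The Fourier-analytic viewpoint on three-term progressions is concrete even in a toy computation: over Z_N, the number T of triples (a, b, c) in A^3 with a + c = 2b (mod N) satisfies N*T = sum_r fhat(r)^2 fhat(-2r), where fhat is the discrete Fourier transform of the indicator of A. A small numpy sketch of this identity (illustrative only, not from the talk):

```python
import numpy as np

def count_3aps(A, N):
    """Count triples (a, b, c) in A^3 with a + c = 2b (mod N), i.e. three-term
    arithmetic progressions (including trivial ones with a = b = c), via the
    Fourier identity  N * T = sum_r fhat(r)^2 * fhat(-2r)."""
    f = np.zeros(N)
    f[list(A)] = 1.0
    fhat = np.fft.fft(f)   # fhat(r) = sum_x f(x) exp(-2 pi i r x / N)
    T = (fhat ** 2 * fhat[(-2 * np.arange(N)) % N]).sum().real / N
    return round(T)

# brute-force cross-check
A, N = {0, 1, 3, 7, 12}, 17
brute = sum((a + c) % N == (2 * b) % N for a in A for b in A for c in A)
assert count_3aps(A, N) == brute
```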

Constructions of bent functions using a family of permutations and Reed-Muller type codes

When: Tue, September 18, 2018 - 2:00pm
Where: EGR 2116
Speaker: Costas Karanikas (Aristotle University of Thessaloniki) -
Abstract: From a pair of permutations of the first n integers we obtain a family of permutations on 2^n objects. This family provides new bent functions, i.e., Boolean sequences of length 2^(2n) whose Walsh transform takes values in {2^n, -2^n}. The left half of a bent function determines a near-bent function, i.e., a Boolean sequence of length 2^n (n odd) with Walsh spectrum in {0, 2^n, -2^n}. We relate the support of near-bent functions to Reed-Muller type codes, and use this to construct bent functions of higher degree from RM type codes and bent functions of lower degree. We also discuss several ways of constructing bent functions and modify well-known constructions such as Dillon's H class and the Maiorana-McFarland method.
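The defining property — a Boolean function of 2n variables whose Walsh-Hadamard transform is flat at +-2^n — can be checked directly. A minimal sketch (illustrative, not from the talk) verifying the classical inner-product bent function x0x1 XOR x2x3:

```python
def walsh_spectrum(f):
    """Fast Walsh-Hadamard transform of the sign sequence (-1)^f(x)."""
    s = [(-1) ** b for b in f]
    h = 1
    while h < len(s):
        for i in range(0, len(s), 2 * h):
            for j in range(i, i + h):
                x, y = s[j], s[j + h]
                s[j], s[j + h] = x + y, x - y
        h *= 2
    return s

# inner-product function f(x) = x0 x1 XOR x2 x3 on 4 variables (2n = 4, n = 2)
f = [((v & 1) & (v >> 1 & 1)) ^ ((v >> 2 & 1) & (v >> 3 & 1)) for v in range(16)]
W = walsh_spectrum(f)
assert all(abs(w) == 4 for w in W)   # flat spectrum +-2^n with n = 2: f is bent
```

The same check applied to the left half of a bent sequence exhibits the {0, +-2^n} near-bent spectrum described in the abstract.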

A sharp Schrodinger maximal estimate in R^2

When: Tue, September 25, 2018 - 2:00pm
Where: Kirwan Hall 3206
Speaker: Xiumin Du (UMD)
Abstract: We consider Carleson's pointwise convergence problem for Schrodinger solutions. It is shown that the solution to the free Schrodinger equation converges to its initial data almost everywhere, provided that the initial data is in the Sobolev space H^s(R^n) with s > n/(2(n+1)) (joint with Larry Guth and Xiaochun Li in the case n = 2, and joint with Ruixiang Zhang in the case n >= 3). This is sharp up to the endpoint, due to a counterexample by Bourgain. The pointwise convergence problem can be approached via estimates of the Schrodinger maximal function, which have a similar flavor to Fourier restriction/extension estimates. In this talk, we'll focus on the case n = 2 and see how the polynomial partitioning method and the decoupling theorem play a role in such estimates.

Mathematical approaches to understand and alter swallowing and gait functions in humans

When: Tue, October 23, 2018 - 2:00pm
Where: Kirwan Hall 1308
Speaker: Ervin Sejdic (PITT) -
Abstract: The human body comprises several physiological systems that carry out specific functions necessary for daily living. Traumatic injuries, diseases, and aging negatively impact human functions, which can cause a decreased quality of life and many other socio-economic and medical issues. Accurate models of human functions are needed to propose interventions and treatments that can restore deteriorated human functions. Our research therefore aims to develop novel mathematical approaches that can accurately assess changes in swallowing and gait functions by focusing on dynamical interactions between musculoskeletal and other physiological systems. In this talk, I will present some of our recent contributions dealing with both the mathematical and clinical aspects of our work. Lastly, I will also present our future research goals and our strategy for achieving them.

Dr. Ervin Sejdić received B.E.Sc. and Ph.D. degrees in electrical engineering from the University of
Western Ontario, London, Ontario, Canada in 2002 and 2008, respectively. From 2008 to 2010, he was a
postdoctoral fellow at the University of Toronto with a cross-appointment at Bloorview Kids Rehab,
Canada’s largest children’s rehabilitation teaching hospital. From 2010 until 2011, he was a research
fellow at Harvard Medical School with a cross-appointment at Beth Israel Deaconess Medical Center.
From his earliest exposure to research, he has been eager to contribute to the advancement of scientific
knowledge through carefully executed experiments and ground-breaking published work. This has
resulted in co-authoring over 130 journal publications. In February 2016, President Obama named Dr.
Sejdić as a recipient of the Presidential Early Career Award for Scientists and Engineers, “…the highest
honor bestowed by the United States Government on science and engineering professionals in the early
stages of their independent research careers.” In 2017, Dr. Sejdić was awarded the National Science
Foundation CAREER Award. In 2018, he was awarded the Chancellor’s Distinguished Research Award at
the University of Pittsburgh. Dr. Sejdić’s passion for discovery and innovation drives his constant
endeavors to connect advances in engineering to society’s most challenging problems. Hence, his
research interests include biomedical signal processing, gait analysis, swallowing difficulties, advanced
information systems in medicine, rehabilitation engineering, assistive technologies and anticipatory
medical devices.

Oscillations of Fourier series, Quantitative Sturm-Liouville Theory and Applications

When: Tue, November 6, 2018 - 2:00pm
Where: Kirwan Hall 3206
Speaker: Stefan Steinerberger (Yale) -
Abstract: The function f(x) = a*sin(12*x) + b*sin(28*x) always has between 24 and 56 roots (unless a = b = 0). This follows from a classical theorem of Sturm (1836) that had been forgotten and was recently rediscovered by Berard & Helffer. I will tell the (quite fascinating) story behind it, give a simple proof, and discuss quantitative refinements, newly emerging connections to elliptic PDEs, and the beginnings of a Sturm-Liouville theory in higher dimensions.
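Sturm's bound is easy to probe numerically by counting sign changes of f over one period [0, 2pi). A quick sketch (illustrative, not part of the talk; the grid is offset slightly so no sample lands exactly on a root):

```python
import numpy as np

def num_roots(a, b, N=200_000):
    """Count sign changes of f(x) = a sin(12x) + b sin(28x) around one period."""
    x = 1e-4 + 2 * np.pi * np.arange(N) / N
    v = a * np.sin(12 * x) + b * np.sin(28 * x)
    # circular pairs (v_i, v_{i+1}) cover the full period exactly once
    return int(np.sum(v * np.roll(v, -1) < 0))

assert num_roots(1, 0) == 24          # sin(12x) alone: 24 roots per period
assert num_roots(0, 1) == 56          # sin(28x) alone: 56 roots per period
assert 24 <= num_roots(1, 1) <= 56    # Sturm's bound for the mixture
```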

Frames and some algebraic forays

When: Tue, November 13, 2018 - 2:00pm
Where: Kirwan Hall 3206
Speaker: Prof. John Benedetto (UMD) -

Training 10k-layer CNNs with mean field theory and dynamical isometry [joint w/ RIT on Deep Learning]

When: Fri, November 16, 2018 - 12:00pm
Where: EGR 0108
Speaker: Lechao Xiao (Google Brain) -
Abstract: In recent years, state-of-the-art methods in computer vision have utilized increasingly deep convolutional neural network architectures (CNNs), with some of the most successful models employing hundreds or even thousands of layers. A variety of pathologies such as vanishing/exploding gradients make training such deep networks challenging. While residual connections and batch normalization do enable training at these depths, it has remained unclear whether such specialized architecture designs are truly necessary to train deep CNNs. In this talk, we demonstrate that it is possible to train vanilla CNNs with ten thousand layers or more simply by using an appropriate initialization scheme. We derive this initialization scheme theoretically by developing a mean field theory for signal propagation and by characterizing the conditions for dynamical isometry, the equilibration of singular values of the input-output Jacobian matrix. These conditions require that the convolution operator be an orthogonal transformation in the sense that it is norm-preserving. We present an algorithm for generating such random initial orthogonal convolution kernels and demonstrate empirically that they enable efficient training of extremely deep architectures.
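The norm-preservation requirement behind dynamical isometry is easy to illustrate in the dense-layer analogue of the construction: initialize the weights as a random orthogonal matrix, so every singular value of the layer's Jacobian is 1. A minimal numpy sketch (the talk's algorithm for orthogonal convolution kernels is more involved):

```python
import numpy as np

def orthogonal_init(n, rng):
    """Haar-random orthogonal matrix via QR of a Gaussian matrix."""
    q, r = np.linalg.qr(rng.standard_normal((n, n)))
    return q * np.sign(np.diag(r))   # sign fix makes the distribution Haar

rng = np.random.default_rng(0)
W = orthogonal_init(64, rng)
x = rng.standard_normal(64)

# norm preservation: ||Wx|| = ||x||, and all singular values equal 1
assert np.isclose(np.linalg.norm(W @ x), np.linalg.norm(x))
assert np.allclose(np.linalg.svd(W, compute_uv=False), 1.0)
```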

Inference of interaction laws in systems of agents from trajectory data

When: Tue, November 20, 2018 - 2:00pm
Where: Kirwan Hall 3206
Speaker: Sui Tang (JHU) -

Abstract: Inferring the laws of interaction of agents in complex dynamical systems from observational data is a fundamental challenge in a wide variety of disciplines. We propose a non-parametric statistical learning approach to estimate the governing laws of distance-based interactions, with no reference to or assumption about their analytical form, from data consisting of trajectories of interacting agents. We demonstrate the effectiveness of our learning approach both by providing theoretical guarantees and by testing the approach on a variety of prototypical systems in various disciplines. These systems include homogeneous and heterogeneous agent systems, ranging from particle systems in fundamental physics to agent-based systems modeling opinion dynamics under social influence, prey-predator dynamics, flocking and swarming, and phototaxis in cell dynamics. This talk is based on joint work with Fei Lu, Mauro Maggioni and Ming Zhong.

Some topics in sparse optimization

When: Tue, November 27, 2018 - 2:00pm
Where: Kirwan Hall 3206
Speaker: Ahmad Mousavi (UMBC) -

Abstract: In this presentation, we first discuss the unfavorable performance (specifically, a quantitative characterization of the solutions) of \ell_p-recovery with p > 1 in reconstructing sparse vectors. Next, we present solution uniqueness conditions for several important problems that involve convex piecewise affine functions. By leveraging their max-formulation and tools from convex analysis, we develop necessary and sufficient uniqueness conditions based on dual variables via simple yet unifying approaches. Finally, we discuss uniform recovery under a coordinate-projection admissible constraint set via a generalization of the orthogonal matching pursuit algorithm and its convergence condition.
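The baseline orthogonal matching pursuit that the talk's algorithm generalizes fits in a few lines (a generic textbook version, not the speaker's variant):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick the column most correlated
    with the residual, then re-fit by least squares on the chosen support."""
    support, resid = [], y.astype(float)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ resid)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        resid = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# recover a 3-sparse vector from 60 random measurements
rng = np.random.default_rng(1)
A = rng.standard_normal((60, 100))
A /= np.linalg.norm(A, axis=0)                 # unit-norm columns
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [3.0, -2.0, 1.5]
x_hat = omp(A, A @ x_true, k=3)
assert np.allclose(x_hat, x_true, atol=1e-8)
```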

The smallest eigenvalues of Hamming, Johnson and other graphs

When: Tue, February 12, 2019 - 2:00pm
Where: Kirwan Hall 3206
Speaker: Sebastian Cioaba (University of Delaware) -
Abstract: The smallest eigenvalue of a graph is closely related to other graph
parameters such as the independence number, the chromatic number or the
max-cut. In this talk, I will describe the well-known connections between
the smallest eigenvalue and the max-cut of a graph that have motivated
various researchers such as Karloff, Alon, Sudakov, Van Dam, and Sotirov to
investigate the smallest eigenvalue of Hamming and Johnson graphs. The
eigenvalues of the Hamming graphs are given by the Kravchuk (Krawtchouk) polynomials and the eigenvalues of the Johnson graphs are described by the Eberlein polynomials.
I will describe our proofs of a conjecture by Van Dam and Sotirov on the smallest
eigenvalue of (distance-j) Hamming graphs and a conjecture by Karloff on the
smallest eigenvalue of (distance-j) Johnson graphs and mention some open
problems. This is joint work with Andries Brouwer, Ferdinand Ihringer and
Matt McGinnis.
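The Kravchuk description of the Hamming spectrum can be verified directly in a small binary case: the eigenvalues of the distance-j graph of H(d, 2) are K_j(i) for i = 0, ..., d, with multiplicity C(d, i). A numerical check for d = 4 (illustrative only):

```python
import numpy as np
from math import comb

def kravchuk(j, i, d):
    """Binary Kravchuk polynomial K_j(i) = sum_t (-1)^t C(i,t) C(d-i, j-t)."""
    return sum((-1) ** t * comb(i, t) * comb(d - i, j - t) for t in range(j + 1))

d = 4
for j in (1, 2):
    # adjacency matrix of the distance-j Hamming graph H(4, 2)
    A = np.array([[1.0 if bin(u ^ v).count("1") == j else 0.0
                   for v in range(2 ** d)] for u in range(2 ** d)])
    predicted = sorted(kravchuk(j, i, d)
                       for i in range(d + 1) for _ in range(comb(d, i)))
    assert np.allclose(np.sort(np.linalg.eigvalsh(A)), predicted)
```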

February Fourier Talks 2019 (Day 1)

When: Thu, February 21, 2019 - 9:00am
Where: Kirwan Hall 3206
Abstract: Schedule and abstracts are available on the February Fourier Talks website.

February Fourier Talks 2019 (Day 2)

When: Fri, February 22, 2019 - 9:00am
Where: Kirwan Hall 3206
Abstract: Schedule and abstracts are available on the February Fourier Talks website.

Principled Multi-Person Pose Estimation using Implicit Column Generation and Nested Benders Decomposition

When: Tue, February 26, 2019 - 2:00pm
Where: Kirwan Hall 3206
Speaker: Julian Yarkony (Verisk Analytics) -
Abstract: We present a novel approach for multi-person pose estimation (MPPE) using implicit column generation and nested Benders decomposition. We formulate MPPE as a set packing problem over the set of person hypotheses (poses) in an image, where the set of poses is the power set of the detections of body parts in the image. We model the quality of a pose as a function of its members, as described by a tree-structured deformable part model.

Since we cannot enumerate the set of poses we attack inference using implicit column generation where the pricing problem is structured as a dynamic program and dual optimal inequalities are easily computed. We exploit structure in the dynamic program to permit efficient inference using nested Benders decomposition. We demonstrate the effectiveness of our approach on the MPII human pose annotation benchmark data set.

Frame theory and a global approach to the exterior calculus

When: Tue, March 5, 2019 - 2:00pm
Where: Kirwan Hall 3206
Speaker: Tyrus Berry (George Mason University) -

Abstract: Multivariable calculus studies vector fields and associated operators such as the gradient and divergence in Euclidean space. This generalizes to smooth Riemannian manifolds as the exterior calculus. Recently there has been interest in defining discrete analogs of the exterior calculus on simplicial complexes. In this talk we go even further and present a generalization of the exterior calculus to graphs (without the extra structure of a complex). To achieve this, the exterior calculus on smooth manifolds is first reformulated entirely in terms of the eigenvalues and eigenfunctions of the Laplacian operator. The key to this global approach to manifolds is representing objects (functions, vector fields, operators, etc.) in a frame (dependent spanning set) instead of a basis. We call this reformulation the Spectral Exterior Calculus (SEC). The primary goal of the talk is to explain why frame theory is the natural setting for analysis on manifolds and then introduce the SEC. We then transfer this formulation to a graph using the eigenvalues and eigenvectors of the graph Laplacian. In numerical experiments we show that coarse-grained topological features of a graph are reflected in the SEC, in direct analogy to classical results in differential geometry.
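The simplest instance of "coarse topology in the spectrum" already appears at degree zero: the multiplicity of the zero eigenvalue of the graph Laplacian equals the number of connected components, mirroring the dimension of the space of harmonic 0-forms on a manifold. A toy check (standard spectral graph theory, not the SEC itself):

```python
import numpy as np

def laplacian(edges, n):
    """Combinatorial graph Laplacian L = D - A of an undirected graph."""
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1; L[v, v] += 1
        L[u, v] -= 1; L[v, u] -= 1
    return L

# two disjoint triangles: 2 connected components
edges = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3)]
eigs = np.linalg.eigvalsh(laplacian(edges, 6))
assert np.sum(np.abs(eigs) < 1e-9) == 2   # dim ker L = number of components
```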

Sampling and Tomography in Euclidean and non-Euclidean Spaces

When: Tue, March 12, 2019 - 2:00pm
Where: Kirwan Hall 3206
Speaker: Stephen Casey (American University) -
Abstract: We discuss harmonic analysis in the settings of both Euclidean and non-Euclidean spaces, and then focus on two specific problems using this analysis – sampling theory and network tomography. These show both the importance of non-Euclidean spaces and some of the challenges one encounters when working in non-Euclidean geometry. Starting with an overview
of surfaces, we demonstrate the importance of hyperbolic space in general surface theory, and then develop harmonic analysis in general settings, looking at the Fourier-Helgason transform and its inversion. We then focus on sampling and tomography.

Sampling theory is a fundamental area of study in harmonic analysis and signal and image processing. We connect sampling theory with the geometry of the signal and its domain. It is relatively easy to demonstrate this connection in Euclidean spaces, but one quickly gets into open problems when the underlying space is not Euclidean. We discuss how to extend
this connection to hyperbolic geometry and general surfaces, outlining an Erlangen-type program for sampling theory.

The second problem we discuss is network tomography. We demonstrate a way to create a system that detects viruses as early as possible, working simply from the geometry or structure of the network itself. Our analysis looks at weighted graphs and how the weights change due to an increase in traffic. The analysis is developed by applying the tools of harmonic analysis in hyperbolic space.

Stochastic processes on graphs: learning representations and applications

When: Tue, April 9, 2019 - 2:00pm
Where: Kirwan Hall 3206
Speaker: Addison Bohannon (University of Maryland, College Park) -

Latent factor models

When: Tue, April 16, 2019 - 2:00pm
Where: Kirwan Hall 3206
Speaker: David Bindel (Cornell University) -
Abstract: Approximate low-rank factorizations pervade matrix data analysis, and are often interpreted in terms of latent factor models. After discussing the ubiquitous singular value decomposition (aka PCA), we turn to factorizations such as the interpolative decomposition and the CUR factorization that offer advantages in terms of interpretability and ease of computation. We then discuss constrained approximate factorizations, particularly non-negative matrix factorizations and topic models, which are often particularly useful for decomposing data into sparse parts. Unfortunately, these decompositions may be very expensive to compute, at least in principle. But in many practical applications one can make a separability assumption that allows for relatively inexpensive algorithms. In particular, we show how the separability assumption enables efficient linear-algebra-based algorithms for topic modeling, and how linear algebraic preprocessing can be used to “clean up” the data and improve the quality of the resulting topics.
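Under separability (every factor appears as a pure "anchor" column of the data matrix), the successive projection algorithm is one such inexpensive linear-algebra routine: repeatedly pick the column of largest norm and project it out. A hand-checkable sketch (a generic illustration; the talk's algorithms may differ):

```python
import numpy as np

def spa(X, r):
    """Successive projection algorithm for separable NMF: returns indices of
    r columns of X that (under separability) are the anchor columns."""
    R = X.astype(float).copy()
    picks = []
    for _ in range(r):
        j = int(np.argmax((R ** 2).sum(axis=0)))
        picks.append(j)
        u = R[:, j] / np.linalg.norm(R[:, j])
        R -= np.outer(u, u @ R)     # project out the chosen direction
    return picks

# columns 0-2 of X = W H are the anchors; columns 3-4 are convex mixtures
W = np.array([[3.0, 0, 0], [0, 2.0, 0], [0, 0, 1.0]])
H = np.array([[1, 0, 0, 1/3, 0.5],
              [0, 1, 0, 1/3, 0.5],
              [0, 0, 1, 1/3, 0.0]])
assert sorted(spa(W @ H, 3)) == [0, 1, 2]
```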

Scalable kernel methods

When: Tue, April 23, 2019 - 2:00pm
Where: Kirwan Hall 3206
Speaker: David Bindel (Cornell University) -
Abstract: Kernel methods are used throughout statistical modeling, data science, and approximation theory. Depending on the community, they may be introduced in many different ways: through dot products of feature
maps, through data-adapted basis functions in an interpolation space, through the natural structure of a
reproducing kernel Hilbert space, or through the covariance structure of a Gaussian process. We describe
these various interpretations and their relation to each other, and then turn to the key computational bottleneck for all kernel methods: the solution of linear systems and the computation of (log) determinants for
dense matrices whose size scales with the number of examples. Recent developments in linear algebra make
it increasingly feasible to solve these problems efficiently even with millions of data points. We discuss some
of these techniques, including rank-structured factorization, structured kernel interpolation, and stochastic
estimators for determinants and their derivatives. We also give a perspective on some open problems and
on approaches to addressing the constant challenge posed by the curse of dimensionality.
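One of the stochastic estimators alluded to, Hutchinson's trace estimator, fits in a few lines: for Rademacher probes z, E[z^T K z] = tr(K), so averaging probe quadratic forms estimates the trace without forming diagonal entries explicitly (the same idea, applied to log K, underlies stochastic log-determinant estimates). A minimal sketch (generic, not the talk's full method):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((10, 2))
# squared-exponential (RBF) kernel matrix; unit diagonal, so trace(K) = 10
K = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))

Z = rng.choice([-1.0, 1.0], size=(10, 20_000))   # Rademacher probes
est = np.mean(np.sum(Z * (K @ Z), axis=0))       # average of z^T K z
assert abs(est - np.trace(K)) < 1.0              # unbiased estimator of trace(K)
```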

Spectral methods in data network analysis

When: Tue, April 30, 2019 - 2:00pm
Where: Kirwan Hall 3206
Speaker: David Bindel (Cornell University) -
Abstract: Linear algebra methods play a central role in modern methods for large-scale network analysis. The same
approach underlies many of these methods. First, one tells a story that associates the network with a matrix,
either as the generator of a linear time-invariant dynamical process on the graph or as a quadratic form used
to measure some quantity of interest. Then, one uses the eigenvalues and eigenvectors of the matrix to
reason about the properties of the dynamical system or quadratic form, and from there to understand the
network. We describe some of the most well-known spectral network analysis methods for tasks such as
bisection and partitioning, clustering and community detection, and ranking and centrality. These methods
largely depend only on a few eigenvalues and eigenvectors, but we will also describe some methods that
require a more global perspective, including methods that we have developed for local spectral clustering
and for graph analysis via spectral densities.
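Spectral bisection, the first method mentioned, can be sketched directly: split the vertices by the sign of the Fiedler vector, the eigenvector of the second-smallest Laplacian eigenvalue. A toy example with two cliques joined by a single bridge edge (illustrative):

```python
import numpy as np

# two 5-cliques (vertices 0-4 and 5-9) joined by the single edge (4, 5)
n = 10
A = np.zeros((n, n))
for grp in (range(0, 5), range(5, 10)):
    for u in grp:
        for v in grp:
            if u != v:
                A[u, v] = 1.0
A[4, 5] = A[5, 4] = 1.0

L = np.diag(A.sum(axis=1)) - A      # combinatorial Laplacian L = D - A
w, V = np.linalg.eigh(L)
fiedler = V[:, 1]                   # eigenvector of the 2nd-smallest eigenvalue

side = fiedler > 0                  # sign split = spectral bisection
assert len(set(side[:5])) == 1 and len(set(side[5:])) == 1
assert side[0] != side[5]           # the two cliques land on opposite sides
```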

Super-resolution, subspace methods, and minimum singular value of non-harmonic Fourier matrices

When: Tue, May 7, 2019 - 2:00pm
Where: Kirwan Hall 3206
Speaker: Weilin Li (New York University) -

Abstract: This talk is concerned with the inverse problem of recovering a discrete measure on the torus consisting of S atoms, given M consecutive noisy Fourier coefficients. Super-resolution is sensitive to noise when the distance between two atoms is less than 1/M. We connect this problem to the minimum singular value of non-harmonic Fourier matrices. New results for the latter are presented, and as consequences, we derive results regarding the information theoretic limit of super-resolution and the resolution limit of subspace methods (namely, MUSIC and ESPRIT). These results rigorously establish the super-resolution phenomena of these algorithms that were empirically discovered long ago, and numerical results indicate that our bounds are sharp or nearly sharp. Interesting connections to trigonometric interpolation and uncertainty principles are also presented. Joint work with John Benedetto, Albert Fannjiang, Sinan Gunturk, and Wenjing Liao.
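The MUSIC step analyzed in the talk can be sketched end to end: form a Hankel matrix from the Fourier samples, split signal and noise subspaces by SVD, and locate frequencies where the steering vectors are (nearly) orthogonal to the noise subspace. A noiseless toy version (illustrative; all parameters are made up, not from the talk):

```python
import numpy as np

M, S = 64, 2                               # M Fourier samples, S atoms
freqs = np.array([0.20, 0.23])             # atom locations on the torus [0, 1)
amps = np.array([1.0, 0.8])
m = np.arange(M)
y = np.exp(2j * np.pi * np.outer(m, freqs)) @ amps    # noiseless samples

Lh = 32                                    # Hankel window length
H = np.array([y[i:i + Lh] for i in range(M - Lh + 1)]).T
U = np.linalg.svd(H)[0]
noise = U[:, S:]                           # noise subspace, orthogonal to signal

grid = np.linspace(0, 1, 2000, endpoint=False)
steer = np.exp(2j * np.pi * np.outer(np.arange(Lh), grid))
resid = np.linalg.norm(noise.conj().T @ steer, axis=0)   # ~0 near true freqs

found = []                                 # greedy peak picking with separation
for idx in np.argsort(resid):
    f = grid[idx]
    if all(min(abs(f - g), 1 - abs(f - g)) > 0.01 for g in found):
        found.append(f)
    if len(found) == S:
        break
assert all(min(abs(f - g) for g in found) < 1e-3 for f in freqs)
```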