Abstract: This talk will provide an overview of recent developments in Fourier restriction theory, which is the study of exponential sums over restricted frequency sets with geometric structure, typically arising in PDE or number theory. Decoupling inequalities measure the square root cancellation behavior of these exponential sums. I will highlight recent work which uses the latest tools developed in decoupling theory to prove much more delicate sharp square function estimates for frequencies lying on the cone in R^3 (Guth-Wang-Zhang) and on the moment curve (t, t^2, ..., t^n) in all dimensions (Guth-Maldague).
Abstract: This talk showcases the speaker’s recent results in the field of Optimal Recovery, viewed as a trustworthy Learning Theory focusing on the worst case. At the core of several results presented here is a scenario, resolved in the global and the local settings, where the model set is the intersection of two hyperellipsoids. This has implications in optimal recovery from deterministically inaccurate data and in optimal recovery under a multifidelity-inspired model. In both situations, the theory becomes richer when considering the optimal estimation of linear functionals. This particular case also comes with additional results in the presence of randomly inaccurate data.
Abstract: How can we tell whether a given rational function F=P/Q is continuous on R^n? If the polynomials P and Q have common zeros, the question is subtle. It’s a special case of a problem on the existence of solutions of systems of linear equations for unknown C^m functions. That problem has unexpected connections to a problem on extension of functions, posed by Whitney in 1934. Whitney’s problem in turn is related to manifold learning.
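The subtlety when P and Q share zeros already appears in two variables. The following standard example (not taken from the abstract) contrasts two rational functions with a common zero of numerator and denominator at the origin, one of which extends continuously and one of which does not:

```latex
% Both P and Q vanish at the origin in each case:
F(x,y) = \frac{x^3}{x^2+y^2}, \qquad G(x,y) = \frac{xy}{x^2+y^2}.
% F extends continuously to R^2 with F(0,0) = 0, since
% |F(x,y)| \le (x^2+y^2)^{1/2} \to 0 as (x,y) \to (0,0).
% G has no continuous extension: G = 1/2 along the line y = x,
% while G = 0 along the axis y = 0.
```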
This talk explains the connections and sketches some relevant ideas. A follow-on talk in a few weeks will cover Whitney’s problem in greater depth.
Joint work with Garving (Kevin) Luli and János Kollár.
Abstract: There has been extensive study of diagonalization of matrices. Diagonalization can be viewed as using a similarity transform to concentrate the magnitude of all entries within as small a subset of entries as possible. In this talk we present results on what can be viewed as reversing this process, namely spreading out the magnitudes of the entries as uniformly as possible.