Abstract: In this meeting, we will (1) provide an introductory lecture on the Clifford hierarchy, and (2) give participants an opportunity to learn about each other.
Abstract: We will recall some features of the local Langlands program, as well as recent work in reformulating it in a categorical framework. We will discuss partial calculations of the Fargues–Scholze L-parameters associated to tame supercuspidal representations of reductive p-adic groups, by chaining together some instances of "modular functoriality". This is joint work with Tony Feng.
Abstract: Deep neural networks are the state of the art for many signal-processing tasks, including image denoising. However, applying these models to real-world scientific data presents substantial challenges. In this talk, we discuss these challenges and introduce a series of simulation-based, unsupervised, and semi-supervised strategies designed to overcome them. We demonstrate that these approaches perform effectively on real electron microscopy data, revealing previously unobserved atomic-level dynamics in catalytic nanoparticles.
Abstract: With the growing adoption of machine-learning (ML) tools, there is an ever-increasing need for rigorous methods to assess the quality of their predictions and outputs. Despite this, fundamental questions about the connection between ML and probability remain unresolved. For example, do arbitrary ML models always have probabilistic interpretations? What does it mean for an ML model to be consistent with probability? And how can one extract probabilities from “hard” classifiers such as support vector machines? In this talk, I address these questions by deriving a level-set theory of classification that establishes an equivalence between certain types of self-consistent ML models and class-conditional probability distributions. I begin by considering the properties of binary Bayes classifiers, observing that the boundary sets separating classes can be reinterpreted as level sets of density ratios, which quantify the relative probability that a sample point belongs to a given class. I then demonstrate how these level sets can be ordered in terms of an affine parameter related to the prevalence (the fraction of elements in a class). This analysis implies that all Bayes classifiers have monotonicity and self-consistency properties, the latter being equivalent to the law of total probability. Reversing the analysis, I then discuss how, for any classifier, the monotonicity and self-consistency properties (along with a normalization condition) imply the existence of probability distributions for which the classifier is in fact Bayes optimal. This makes it possible to determine when classifiers can be equipped with probabilistic interpretations, and it yields the density ratios via the level-set theory. Throughout, I illustrate these ideas with real-world examples from diagnostics and image analysis.
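As a minimal illustration of the density-ratio viewpoint (the notation here is mine, not necessarily the speaker's): for binary classes with class-conditional densities p_0, p_1 and prevalence q = P(class 1), the Bayes classifier assigns a point x to class 1 precisely when

\[
\frac{p_1(x)}{p_0(x)} \;>\; \frac{1-q}{q},
\]

so the decision boundary is a level set of the density ratio p_1/p_0, and sweeping the prevalence q over (0,1) orders these level sets by the affine parameter (1-q)/q.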
Abstract: Gene drive alleles bias inheritance in their favor, allowing them to spread quickly through a population. They could combat disease by rapidly spreading a cargo gene that blocks pathogen transmission, or they could directly suppress vector populations. We have developed efficient systems in Anopheles stephensi for both population suppression and confined population modification with reduced resistance allele formation. Yet questions remain about how gene drives will perform after release in real-world populations. To address these, we developed several computational modeling frameworks. In an individual-based framework, we predict that Anopheles suppression drives may still fail in spatially structured natural populations because of the "chasing" phenomenon, which causes long-term persistence of both drive and wild-type alleles. Even without mosquito elimination, however, local malaria elimination can still succeed. We also used reaction-diffusion models and large-scale hex-based Culex models of Hainan island to predict optimal releases of spatially confined drive systems, where larger release sizes make deployment more challenging. Finally, we assessed new variants of self-limiting "temporary" suppression gene drive systems, which have dynamics similar to mature SIT and RIDL methods but substantially greater power. Thus, despite unexpected complexity, gene drive remains a flexible and effective method for protecting against vector-borne diseases.
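To make the inheritance bias concrete, here is a minimal deterministic sketch of a homing drive's spread under random mating; the conversion efficiency c = 0.95 and the absence of fitness costs are illustrative assumptions, not parameters from the authors' Anopheles models.

```python
# Heterozygotes transmit the drive allele with probability (1 + c) / 2,
# so under random mating the drive allele frequency p obeys
#     p' = p * (1 + c * (1 - p)).

def drive_frequency(p0: float, c: float, generations: int) -> list[float]:
    """Iterate the allele-frequency recursion from initial frequency p0."""
    freqs = [p0]
    for _ in range(generations):
        p = freqs[-1]
        freqs.append(p * (1 + c * (1 - p)))
    return freqs

if __name__ == "__main__":
    # A 1% release with high conversion efficiency approaches fixation
    # within roughly a dozen generations in this idealized setting.
    for gen, p in enumerate(drive_frequency(p0=0.01, c=0.95, generations=12)):
        print(f"generation {gen:2d}: drive allele frequency {p:.3f}")
```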
Abstract: In ongoing joint work with Jon Chaika and Florent Ygouf, we establish a Ratner-type orbit closure theorem for the horocycle flow on rank one loci in the moduli space of translation surfaces. The question of classifying horocycle-invariant ergodic measures remains open, and in contrast to Ratner's work, we describe all orbit closures without a corresponding measure classification theorem. I will give a crash course on the structure of these moduli spaces, emphasizing rel foliations and rel deformations. I will then describe our approach to the problem and indicate some of the main difficulties.
Abstract: A classical result of Erdős and Szemerédi shows that no subset A of the integers has both a small sumset A+A and a small productset AA. Bourgain, Katz and Tao proved an analogous result for subsets of finite prime fields of "medium size". We show that, under certain mild assumptions on a dimension theory, a medium-sized subset of a field with small sumset and productset, in the sense of not expanding in dimension, gives rise to a subfield of the same dimension. This is joint work with Sergei Starchenko.
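For reference, the sum-product theorem over the integers can be stated as follows: there exist absolute constants c, \varepsilon > 0 such that for every finite set A \subset \mathbf{Z},

\[
\max\bigl(|A+A|,\ |A\cdot A|\bigr) \;\geq\; c\,|A|^{1+\varepsilon},
\]

so |A+A| and |A\cdot A| cannot both be of order |A|.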
Abstract: In this talk we use Whitney forms on the primal mesh and the generalized Whitney forms developed by Christiansen on the dual mesh to express discrete exterior calculus (DEC) entirely in terms of differential forms. This places DEC in a setting closer to finite element exterior calculus (FEEC) and allows us to carry out a FEEC-style error analysis for the Hodge Laplacian. For the Hodge Laplacian on 0-forms this analysis was already performed by Schulz and Tsogtgerel, and we use some of their techniques to treat Hodge Laplacians of all degrees. Our results hold only for uniformly well-centered meshes, which is quite limiting in practice. We also prove superconvergence results when the meshes have symmetry. This is joint work with Pratyush Potu.
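For orientation, the lowest-order Whitney form attached to an oriented edge [i, j] of a simplex, written in barycentric coordinates \lambda_i, is the standard expression

\[
W_{[i,j]} \;=\; \lambda_i\, d\lambda_j \;-\; \lambda_j\, d\lambda_i ;
\]

the generalized Whitney forms of Christiansen used on the dual mesh are more involved and are not reproduced here.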
Abstract: Alice and Bob control a random walk: alternately, each of them flips a fair coin, is supposed to report the outcome, and the random walk advances according to the report. Suppose that the random walk did not return to the origin infinitely often, although a simple random walk on the integers almost surely does. We suspect that one of Alice and Bob misreported the outcomes of her or his coin. Can we identify the deviator?
More generally, several players are supposed to follow a prescribed profile of strategies (e.g., select each of Right and Left with probability 1/2). If they follow this profile, they will reach a given target (e.g., the random walk returns to the origin infinitely often). We show that if the target is not reached because some player deviates, then an outside observer can identify the deviator. We also construct identification methods in two nontrivial cases.
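As a toy illustration of the setup (and emphatically not the authors' identification method), suppose the deviator misreports with a fixed bias; then even a naive observer can flag them by comparing each player's empirical frequency of "Right" reports against the fair-coin baseline. The 0.7 bias and the z-score rule below are illustrative choices.

```python
import random

def play(n_rounds: int, deviator: int, bias: float = 0.7) -> list[list[int]]:
    """Players alternate reports (+1 = Right, -1 = Left) that are supposed
    to be fair coin flips; the deviator reports +1 with probability `bias`."""
    reports = [[], []]  # reports[0] = Alice's reports, reports[1] = Bob's
    for t in range(n_rounds):
        player = t % 2
        p_right = bias if player == deviator else 0.5
        reports[player].append(1 if random.random() < p_right else -1)
    return reports

def flag_deviator(reports: list[list[int]]) -> int:
    # z-score of each player's mean report against the fair baseline of 0;
    # an honest player's score stays O(1), a deviator's grows like sqrt(n).
    scores = [abs(sum(r)) / len(r) ** 0.5 for r in reports]
    return scores.index(max(scores))

random.seed(1)
print("flagged player:", flag_deviator(play(n_rounds=10_000, deviator=1)))
```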
Joint work with Noga Alon (Princeton and Tel Aviv University), Benjamin Gunby (Rutgers), Xiaoyu He (Georgia Tech), and Eran Shmaya (Stony Brook).
Abstract: Optimal transport is an old branch of the calculus of variations whose origins can be traced back to an important memoir of Monge in 1781, followed by remarkable contributions due to Kantorovich in 1942, and in the last 50 years by R.L. Dobrushin, Y. Brenier, and many others. Among the by-products of optimal transport is a family of distances metrizing the weak topology of Borel probability measures on Euclidean spaces. The analogy between Borel probability measures on phase space and the notion of density operators used in quantum mechanics suggests defining a notion of “pseudometric” which can be used to compare two (quantum) density operators, or a density operator with a probability density in phase space. The talk will discuss the main properties of this pseudometric, and compare them with the analogous results known for the optimal transport metrics defined for pairs of phase space probability densities.
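The distances alluded to are the Monge–Kantorovich (Wasserstein) distances: for p \geq 1 and Borel probability measures \mu, \nu on \mathbf{R}^d with finite p-th moments,

\[
W_p(\mu,\nu) \;=\; \Bigl(\,\inf_{\pi \in \Pi(\mu,\nu)} \int_{\mathbf{R}^d\times\mathbf{R}^d} |x-y|^p\,\pi(dx\,dy)\Bigr)^{1/p},
\]

where \Pi(\mu,\nu) denotes the set of couplings of \mu and \nu; convergence in W_p amounts to weak convergence together with convergence of p-th moments.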
This presentation is based on a series of works with E. Caglioti, C. Mouhot and T. Paul.
Abstract: In joint work with Pablo Carrasco, we extend the theory of thermodynamic formalism from hyperbolic systems to partially hyperbolic systems in several forms. Along the way we encounter several interesting open problems and find dynamical proofs of some results that are (to us) interesting; for example, we recover the Burger–Monod theorem on the vanishing of second bounded cohomology for higher-rank lattices. The goal of the talk is to discuss these developments.
Abstract: In 1979, R.L. Dobrushin used optimal transport to prove the validity of the mean-field limit in classical mechanics, and therefore of the Vlasov equation with Lipschitz-continuous force field, with a convergence rate. Following Dobrushin's remarkable result, optimal transport has been successfully applied to various PDE problems. This talk will discuss quantum analogues of the Dobrushin analysis based on an optimal transport “pseudometric” defined on the set of quantum density operators. We shall also present some results on the classical limit of quantum dynamics, with an approximation rate given in terms of this pseudometric. This presentation is based on a series of works with S. Jin, C. Mouhot and T. Paul.
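For context, Dobrushin's argument rests on a stability estimate for the Vlasov equation in the distance W_1: for two solutions f and g with a Lipschitz-continuous force field one has, schematically (with a constant \lambda depending only on the Lipschitz constant of the force),

\[
W_1\bigl(f(t),g(t)\bigr) \;\leq\; e^{\lambda t}\, W_1\bigl(f(0),g(0)\bigr),
\]

and the mean-field limit with a rate follows by taking one of the two solutions to be the empirical measure of the N-particle dynamics.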
Abstract: Studies on missing network data have largely focused on the impact of missing data on network structure rather than on inference from a statistical model. In particular, there has been very little research on the impact of missing data when fitting latent variable network models. We therefore examined the impact of common missing data mechanisms and of subsequent missingness treatment and imputation methods when working with latent variable network models, focusing on the latent space model (Hoff et al., 2002). Inducing missingness under these common mechanisms, our simulation study found large differences in inference, parameter recovery, and network feature recovery depending on the missingness mechanism and the treatment method. We also induced missingness in a real-world dataset and explored how treatment methods affected subsequent inference and network recovery. We found that missingness based on a node covariate that also predicted network ties was the most problematic form of missingness, and that complete case analysis and Bayesian estimation generally worked as well as or better than other methods.
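For concreteness, here is a minimal sketch of the distance form of the latent space model together with covariate-driven node missingness; the intercept, the covariate construction, and the 30% missingness rate are illustrative choices, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
z = rng.normal(size=(n, 2))        # latent positions in R^2

# Hoff et al. (2002) distance model: logit P(y_ij = 1) = alpha - |z_i - z_j|
# (directed ties drawn independently here, for simplicity).
alpha = 1.0
dist = np.linalg.norm(z[:, None, :] - z[None, :, :], axis=-1)
p_tie = 1.0 / (1.0 + np.exp(-(alpha - dist)))
y = (rng.random((n, n)) < p_tie).astype(float)
np.fill_diagonal(y, 0.0)

# Covariate-dependent node missingness: a covariate correlated with the
# latent positions (and hence with ties) drives which nodes go unobserved.
x = z[:, 0] + rng.normal(scale=0.5, size=n)
missing = rng.random(n) < np.where(x > np.median(x), 0.3, 0.0)
y[missing, :] = np.nan
y[:, missing] = np.nan
print(f"{missing.sum()} of {n} nodes unobserved")
```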