Abstract: I will present several developments that collectively give us a fruitful new perspective on a fundamental result in quantum information theory, the strong sub-additivity of quantum entropy. I will start by discussing quantum entropy and other information measures, and explain why they are emphatically more interesting and delicate than their commutative analogues. While exploring some of their properties we will encounter the Golden-Thompson inequality which illustrates the mathematical challenges faced when dealing with functions of matrices that do not commute. We will then see how the Golden-Thompson and other norm inequalities can be generalized from two to arbitrarily many matrices using complex interpolation theory. Closing the circle, we will see how the resulting multivariate trace inequality can be used to strengthen strong sub-additivity of quantum entropy, leading us to strong bounds on the recoverability of quantum information.
This talk is based on work in arXiv:1512.02615, arXiv:1604.03023, and arXiv:1609.01999.
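For readers unfamiliar with it, the Golden-Thompson inequality mentioned above can be stated briefly (a standard formulation; A and B here are generic Hermitian matrices, not notation from the talk):

```latex
% Golden-Thompson inequality: for Hermitian matrices A and B,
\operatorname{tr}\, e^{A+B} \;\le\; \operatorname{tr}\!\left(e^{A} e^{B}\right),
% with equality if and only if A and B commute.
```

Note that $e^{A+B} \ne e^{A}e^{B}$ in general when $AB \ne BA$, which is the source of the difficulty the talk addresses.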
Abstract: We study anomalous dissipation in hydrodynamic turbulence in the context of passive scalars. We give an example of a rough divergence-free velocity field that explicitly exhibits anomalous dissipation for passive scalars. The mechanism for scalar dissipation is a built-in direct energy cascade in the synthetic velocity field. Connections to the Obukhov-Corrsin monofractal theory of scalar turbulence and to inviscid mixing will be discussed. This is joint work with T. Elgindi, G. Iyer and I-J Jeong.
Abstract: Consider a (possibly time-dependent) vector field v on Euclidean space. The classical Cauchy-Lipschitz (also named Picard-Lindelöf) Theorem states that, if the vector field v is Lipschitz in space, then for every initial datum x there is a unique trajectory γ starting at x at time 0 and solving the ODE γ̇(t) = v(t, γ(t)). The theorem loses its validity as soon as v is slightly less regular. However, if we bundle all trajectories into a global map allowing x to vary, a celebrated theory put forward by DiPerna and Lions in the 1980s shows that there is a unique such flow under very reasonable conditions and for much less regular vector fields. A long-standing open question is whether this theory is the byproduct of a stronger classical result which ensures the uniqueness of trajectories for almost every initial datum. I will give a complete answer to the latter question and draw connections with partial differential equations, harmonic analysis, probability theory and Gromov's h-principle.
Abstract: Markov chains are among the best-known and most established probabilistic models and have long been developed, studied and applied to real-world problems. The simplicity of these models makes them interesting in theoretical studies as well as extremely flexible to analyze in applications. In comparison with classic Markov chains, the size and complexity of networks arising in contemporary applications have grown dramatically. In many applications, it is of interest to study transition processes in large and complex networks. Our goal is to develop efficient computational tools for the study of flows in large and complex time-irreversible Markov chains. In this prospectus, I will give an insight into my dissertation research and provide the necessary background for it. The research involves three major parts: (1) we propose a general framework for designing modified Markov chains in order to quantify transitions in time-irreversible Markov chains, (2) we provide a theorem justifying the construction, and (3) we propose a so-called "mutation analysis" for gene regulatory networks, allowing one to assess their robustness, and use the proposed tools to analyze a stochastic budding yeast gene regulatory network.
Abstract: Despite the overwhelming success of neural networks for pattern recognition, these models behave categorically differently from humans. Adversarial examples, small perturbations that are often undetectable to the human eye, easily fool neural networks, demonstrating that neural networks lack the robustness of human classifiers. This defense comprises two parts. First, we develop methods for hardening neural networks against an adversary. Second, we discuss several mathematical properties of the neural network models we use. These properties are of interest beyond robustness to adversarial examples, and they extend to the broad setting of deep learning.
Dissertation directed by: Wojciech Czaja
In view of recent developments on our campus, the defense will be held virtually on Zoom. The virtual meeting will open at 9:50 AM, and the defense will begin at 10:00 AM on March 30, 2020. As this is new for most of us, I ask you to join during this 10-minute period before 10 AM.
To attend the virtual defense, please join the Zoom session at the following link: https://umd.zoom.us/j/834093556 (For regular Zoom users, the meeting ID is: 834 093 556.)
Zoom sessions can be accessed in the browser or by downloading the Zoom app. UMD students and faculty can access and join Zoom at umd.zoom.us using their UMD credentials. Please note that not all web browsers work well with Zoom; therefore, downloading the app is encouraged, especially for habitual Firefox users.
4176 Campus Drive - William E. Kirwan Hall
College Park, MD 20742-4015
P: 301.405.5047 | F: 301.314.0827