Abstract: We discuss models of the interactions of human populations with the environment, with the population divided into poor and rich people, "Commoners" and "Elites". The Elites control the society's wealth and consume it at a higher rate than the Commoners, whose work produces the wealth. We say a model is "Elite-dominated" when the Elites' per capita population change rate is always at least as large as the Commoners'. We show that the model exhibits population crashes for all choices of parameter values for which it is Elite-dominated. But any such model with explicit equations raises questions of how the resulting behaviors depend on the details of the model. How important are the particular design features codified in the differential equations? We discard the differential equations, replacing them with qualitative conditions that the original model satisfies, and we prove that these conditions imply population collapse must occur. In particular, one condition is that the model is Elite-dominated. Our approach of introducing qualitative mathematical hypotheses can better reveal the underlying features of the model that lead to collapse. We also ask how societies can avoid collapse.
Abstract: Cortical neurons can be strongly or weakly coupled to the network in which they are embedded, firing in sync with the majority or firing independently. Both of these scenarios have potential computational advantages in motor cortex. Commands to the body might be more robustly conveyed by a strongly coupled population, whereas a motor code with greater information capacity could be implemented by neurons that fire more independently. Which of these scenarios prevails? Here we measure neuron-to-body coupling and neuron-to-population coupling for neurons in the motor cortex of freely moving rats. We find that neurons with high and low population coupling coexist, and that population coupling is tunable by manipulating inhibitory signaling. Importantly, neurons with different population coupling tend to serve different functional roles: those with strong population coupling are not involved with body movement, whereas neurons with high neuron-to-body coupling are weakly coupled to other neurons in the cortical population.
Abstract: Network science is a rapidly expanding field, with a large and growing body of work on network-based dynamical processes. Most theoretical results in this area rely on the so-called "locally tree-like approximation" (which assumes that one can ignore small loops in a network). This is, however, usually an "uncontrolled" approximation, in the sense that the magnitude of the error is typically unknown, although numerical results show that this error is often surprisingly small. In our work, we place this approximation on a more rigorous footing by calculating the magnitude of deviations away from tree-based theories in the context of network cascades (i.e., a network dynamical process describing the spread of activity through a network). For this widely applicable problem, we discuss the conditions under which tree-like approximations give good results, and we explain the reasons for deviations from this approximation. More specifically, we show that these deviations are negligible for networks with a large number of links, which explains why tree-based theories appear to work well for most real-world networks.
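To give a flavor of what a tree-based theory looks like in practice, here is a minimal sketch of one standard instance (illustrative, not the talk's specific calculation): the self-consistent equation for the final fraction S of activated nodes in a simple cascade on an Erdős–Rényi network with mean degree c, where each link transmits activity independently with probability p. The locally tree-like assumption yields S = 1 − exp(−c·p·S), solvable by fixed-point iteration:

```python
import math

def tree_based_cascade_size(c, p, tol=1e-10, max_iter=10000):
    """Solve S = 1 - exp(-c * p * S), the locally tree-like
    self-consistency equation for the final cascade fraction S
    on an Erdos-Renyi network with mean degree c and per-link
    transmission probability p."""
    s = 0.5  # nonzero starting guess so iteration can find the nontrivial root
    for _ in range(max_iter):
        s_new = 1.0 - math.exp(-c * p * s)
        if abs(s_new - s) < tol:
            return s_new
        s = s_new
    return s

# Above the cascade threshold (c * p > 1) a macroscopic cascade exists;
# below it, the iteration collapses to the trivial root S = 0.
print(tree_based_cascade_size(4.0, 0.8))   # large cascade fraction
print(tree_based_cascade_size(4.0, 0.1))   # subcritical: essentially zero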
Abstract: Co-authored with Tse-Chun Chen and Daisuke Hotta. The National Weather Service computes operational weather forecasts using a process called "data assimilation": a 6-hour forecast is computed starting from the current "analysis". The 6-hour forecast is then optimally combined with the observations collected 6 hours later to create the new analysis, which serves as the initial conditions for the next forecast. This process, known as the "analysis cycle", is repeated every 6 hours. Miyakoda (personal communication, ~1980) pointed out that using any future information to improve current forecasts should be considered "cheating" because it cannot be done in operational forecasting. Chen (2018, PhD thesis), Chen and Kalnay (2019a, MWR), and Chen and Kalnay (2019b, under review) developed an application of Ensemble Forecast Sensitivity to Observations (EFSO; Kalnay et al., 2012, Tellus) combined with Proactive Quality Control (PQC; Hotta et al., 2017). It uses future data (e.g., observations obtained 6 hours after the present analysis) to identify and delete detrimental observations in the present analysis. We found that making a late correction of every analysis after the new observations have been received accumulates improvements with time. The accumulated improvement is much larger than the last correction, which cannot be used in order to avoid cheating, so that forecasts are significantly improved "without cheating".
Abstract: The quantum adiabatic theorem governs the evolution of a wavefunction under a slowly time-varying Hamiltonian. I will consider the opposite limit of a Hamiltonian that is varied impulsively: a strong perturbation U(x,t) is applied over a time interval of infinitesimal duration ε → 0. When the strength of the perturbation scales like 1/ε², an interesting dynamical behavior emerges, characterized by an abrupt displacement of the wave function in coordinate space. I will solve for the evolution of the wavefunction in this situation. Remarkably, the solution involves a purely classical construction, yet describes the quantum evolution exactly, rather than approximately. I will use these results to show how appropriately tailored impulses can be used to control the behavior of a quantum wavefunction.
Abstract: The general problem of determining causal dependencies in an unknown time-evolving system from observations is of great interest in many fields. Examples include inferring neuronal connections from spiking data, deducing causal dependencies between genes from expression data, and discovering long-range spatial influences in climate variations. Previous work has tackled such problems by considering correlations, prediction impact, or information-transfer metrics. Here we propose a new method that leverages the ability of machine learning to generalize from examples, combined with concepts from dynamical systems theory. We test our proposed technique on numerical examples, obtaining results that suggest excellent performance for a large range of situations. An important, somewhat surprising, conclusion is that, although our rationale is based on noiseless deterministic systems, dynamical noise can greatly enhance our technique's effectiveness.
Abstract: The human brain is capable of diverse feats of intelligence. A particularly salient example is the ability to implicitly learn dynamics from experiencing the physical world. Analogously, artificial neural systems such as reservoir computing (RC) networks have shown great success in learning the long-term behavior of various complex dynamical systems from data, without knowing the explicit governing equations. Regardless of the marked differences between biological and artificial neural systems, one fundamental similarity is that they are essentially dynamical systems that are fine-tuned towards the imitation of other dynamical systems. To shed some light on how such a learning function may emerge from biological systems, we draw inspiration from observations of the human brain to propose a first-principles framework explicating its putative mechanisms. Within this framework, one biological or artificial dynamical system, regardless of its specific composition, implicitly and adaptively learns other dynamical attractors (chaotic or non-chaotic) by embedding the attractors into its own phase space through invertible generalized synchronization, and imitates those attractors by sustaining the embedded attractors through fine-tuned feedback loops. To demonstrate this general framework, we construct several distinct neural network models that adaptively learn and imitate multiple attractors. With these, we observe and explain the emergence of five distinct phenomena reminiscent of cognitive functions: (i) imitation of a dynamical system purely from learning the time series, (ii) learning of multiple dynamics by a single system, (iii) switching among the imitations of multiple dynamical systems, either spontaneously or driven by external cues, (iv) filling in missing variables from incomplete observations of a learned dynamical system, and (v) deciphering superimposed input from different dynamical systems.
Abstract: Whispering-gallery mode (WGM) resonators are disks, toroids, or spheres with micro- or millimetric radius and (sub-)nanometer surface roughness. They have the capability to trap laser light by total internal reflection for durations longer than a microsecond. In these ultra-high-Q resonators, the small confinement volume, high photon density, and long photon lifetime ensure a very strong light-matter interaction, which may excite the WGMs through various nonlinear effects, namely Kerr, Raman, or Brillouin. Quantum phenomena such as twin-photon generation, entanglement, and squeezing can also occur in these optical cavities. In this talk, we discuss some of the main challenges related to the understanding of nonlinear and quantum phenomena in WGM resonators, as well as some of the principal applications in aerospace and communication engineering.
Abstract: We consider the problem of data-driven forecasting of chaotic dynamical systems when the available data come from a sparse spatial sampling, i.e., the full state of the dynamical system cannot be observed directly. Recently, there have been several promising data-driven approaches to forecasting of chaotic dynamical systems using machine learning. Particularly promising among these are hybrid approaches that combine machine learning with a knowledge-based model, where a machine learning technique is used to correct the imperfections in the knowledge-based model. Such a hybrid approach is promising when a knowledge-based model is available but is imperfect due to incomplete understanding of the physical processes in the underlying dynamical system. However, previously proposed data-driven forecasting approaches assume knowledge of the full state of the dynamical system. We seek to relax this assumption by combining a data assimilation technique with machine learning in a novel technique that improves forecasts. We demonstrate that, using only partial measurements of the state of the dynamical system, we can train a machine learning model to correct model error in an imperfect knowledge-based model.
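Stripped of the data assimilation and partial-observation machinery, the core hybrid idea can be sketched in a minimal full-state form (entirely illustrative; the wrong-parameter Lorenz-63 model and the linear correction below are toy stand-ins for the talk's knowledge-based model and machine learning component):

```python
import numpy as np

def lorenz_step(v, rho, dt=0.01, sigma=10.0, beta=8.0 / 3.0):
    """One RK4 step of the Lorenz-63 system with the given rho."""
    def f(u):
        x, y, z = u
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    k1 = f(v)
    k2 = f(v + 0.5 * dt * k1)
    k3 = f(v + 0.5 * dt * k2)
    k4 = f(v + dt * k3)
    return v + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# "Truth" uses rho = 28; the imperfect knowledge-based model uses rho = 30.
v = np.array([1.0, 1.0, 1.0])
for _ in range(1000):                      # spin up onto the attractor
    v = lorenz_step(v, 28.0)

states, residuals = [], []
for _ in range(2000):                      # collect training pairs
    nxt = lorenz_step(v, 28.0)             # true next state
    pred = lorenz_step(v, 30.0)            # imperfect-model prediction
    states.append(v)
    residuals.append(nxt - pred)
    v = nxt

X = np.hstack([np.array(states), np.ones((2000, 1))])  # affine features
W, *_ = np.linalg.lstsq(X, np.array(residuals), rcond=None)

def hybrid_step(u):
    """Hybrid forecast: imperfect model plus learned residual correction."""
    return lorenz_step(u, 30.0) + np.append(u, 1.0) @ W
```

Because the model error here happens to be nearly linear in the state, even least squares recovers most of it; the talk's setting replaces this with a machine learning model trained through data assimilation from partial observations.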
Abstract: In this talk, I will describe experiments on a chaotic electronic circuit that can be used as a high-speed true random number generator. This circuit can be modified to act as a Physically Unclonable Function (PUF), a class of novel cybersecurity devices used for device authentication, tamper-proofing, and key generation.
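As a toy illustration of the underlying idea (not the actual circuit), a deterministic chaotic map can be thresholded to produce a bit stream; the logistic map, seed, and threshold below are illustrative stand-ins for the analog dynamics:

```python
def chaotic_bits(n, x=0.123456789, r=4.0):
    """Generate n bits by thresholding iterates of the logistic map,
    a standard toy model of deterministic chaos (not the talk's circuit)."""
    bits = []
    for _ in range(n):
        x = r * x * (1.0 - x)          # logistic map: fully chaotic at r = 4
        bits.append(1 if x > 0.5 else 0)
    return bits

stream = chaotic_bits(10000)
print(sum(stream) / len(stream))       # roughly balanced zeros and ones
```

A practical true random number generator would additionally harvest analog noise and post-process the raw bits; this sketch only shows why chaotic dynamics yield statistically balanced, hard-to-predict output.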
Abstract: Driven nonlinear dynamical systems can reside in either of two steady states at a single driving condition. This feature, known as bistability, is associated with emergent phenomena in phase transitions, scaling, and universal behavior. In descriptions of bistable systems, it is typically assumed that the nonlinear force responsible for bistability acts instantaneously on the system. In addition, the role of quantum fluctuations in bistability was until recently largely assumed to be irrelevant to experiments. In this talk, I will present two experiments where these two assumptions were challenged. Both of these experiments were based on nonlinear optical cavities driven by light, but similar physics is expected in other systems. The experiments we performed consisted of scanning a driving parameter (e.g., laser intensity or frequency) across an optical bistability at various speeds, and analyzing the resultant dynamic optical hysteresis. Intriguingly, both quantum fluctuations and non-instantaneous interactions lead to a universal power-law decay of the hysteresis area as a function of the scanning speed. However, whereas quantum fluctuations lead to universal scaling behavior in the limit of slow scans, non-instantaneous interactions lead to universal scaling behavior in the limit of fast scans. I will conclude with perspectives for realizing lattices of bistable optical cavities, and the opportunities that these open for performing analog computation and for studying stochastic nonlinear dynamics with light.
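The notion of dynamic hysteresis can be illustrated with a minimal numerical sketch (a deterministic toy model, not the optical experiment): sweep the drive λ of the bistable normal form dx/dt = x − x³ + λ up and then back down at a finite rate; the system lags the quasi-static branches, and faster scans enclose a larger loop. All parameter values here are illustrative.

```python
def sweep_loop(rate, lam_min=-2.0, lam_max=2.0, n=400, dt=0.001):
    """Sweep the drive of dx/dt = x - x**3 + lam up and back down at a
    given rate (forward-Euler integration) and return the area of the
    hysteresis loop traced out in the (lam, x) plane."""
    dlam = (lam_max - lam_min) / n
    steps_per_level = max(1, int(round(dlam / (rate * dt))))

    def relax(x, lam):
        for _ in range(steps_per_level):
            x += dt * (x - x**3 + lam)
        return x

    x = -1.52                       # lower stable branch at lam = -2
    up, down = [], []
    for i in range(n + 1):          # sweep the drive up
        x = relax(x, lam_min + i * dlam)
        up.append(x)
    for i in range(n + 1):          # sweep the drive back down
        x = relax(x, lam_max - i * dlam)
        down.append(x)
    # area between the up and down traces on the common drive grid
    return sum(abs(d - u) for u, d in zip(up, reversed(down))) * dlam

print(sweep_loop(0.02))   # slow scan: smaller loop
print(sweep_loop(0.2))    # faster scan: larger lag, larger loop
```

Note that this deterministic sketch captures only the fast-scan trend; adding fluctuations, as in the experiments, instead shrinks the loop in the slow-scan limit.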
Abstract: The guided migration of cells is a complex dynamical process involving carefully regulated polymerization and depolymerization of the elements of the cellular scaffolding, in particular actin. Recent work has shown that polymerizing and depolymerizing actin can be described as an excitable system which exhibits natural waves or oscillations on scales of hundreds of nm, and that wave-like dynamics can be seen in a wide range of natural contexts. I will show that physical signals nucleate and guide these wave-like dynamics, and that such guided actin waves control cell migration for a broad range of cell types. This opens up novel approaches to control cell behavior.
Abstract: Sleep is a behavioral state in which we spend nearly one third of our lives. This biological phenomenon clearly serves an important role in the lives of most species. Here, we present a mathematical model of human sleep-wake regulation with thermoregulatory functions to gain quantitative insight into the effects of ambient temperature on sleep quality. Numerical simulations provide quantitative answers regarding how human sleep dynamics might adjust in response to being challenged with ambient temperatures away from thermoneutral. We will discuss the dynamics associated with the model as well as how the model could be used as a foundation for in silico simulations pertaining to jet lag, sleep deprivation, and temperature effects on sleep.
Abstract: The ability to rapidly learn from high-dimensional data to make reliable predictions about the future of a given system is crucial in many contexts. This could be a fly avoiding predators, or the retina processing terabytes of data almost instantaneously to guide complex human actions. In this work we draw parallels between such tasks and the efficient sampling of complex molecules with hundreds of thousands of atoms. Such sampling is critical for predictive computer simulations in condensed matter physics and biophysics, including but not limited to problems such as crystal nucleation and drug unbinding. For this we use the Predictive Information Bottleneck (PIB) framework developed and used for the first two classes of problems, and re-formulate it for the sampling of biomolecular structure and dynamics, especially when plagued with rare events, and with minimal assumptions on the physics of the system [1-2]. Our method considers a given biomolecular trajectory expressed in terms of order parameters or basis functions, and uses a deep neural network to learn the minimally complex yet most predictive aspects of this trajectory, viz. the PIB. This information is used to perform iterative rounds of biased simulations that enhance the sampling along the PIB to gradually improve its accuracy, directly obtaining associated thermodynamic and kinetic information. We demonstrate the method on different test cases, where we calculate dissociation pathways and timescales slower than milliseconds. These include ligand dissociation from the protein lysozyme and from a flexible RNA. [1] Tiwary and Berne, PNAS 2016. [2] Wang, Ribeiro and Tiwary, Nature Commun. 2019.
Abstract: Noise has usually been considered an unwanted disturbance, and considerable research has been done on noise reduction in dynamical systems over the past decades. Alternatively, one can view noise as a means to alter the dynamic response of a nonlinear system. The nonlinear systems of interest are coupled oscillator arrays, which can be used to describe rotary systems and energy-harvesting systems. In these systems, response localizations can occur. With an appropriate choice of initial conditions and harmonic input, a coupled oscillator system can be excited to realize a periodic response, wherein one or more oscillators oscillate with much higher amplitudes than the rest. This type of spatial localization of energy can be suppressed by introducing Gaussian noise into the system. In this talk, we will focus on guiding the system response between different periodic orbits realized for harmonic forcing, through the addition of Gaussian noise to the input. We also explore how long it takes for the noise to suppress energy localization.
Abstract: In recent years I have given an annual talk about some important topic that is not one of my research areas. This lecture concerns ideas in the science of nutrition and metabolism that I feel most people should know about. We seem to know more about planets circling other stars than about metabolism. And there are good reasons for that.
Abstract: The talk is an overview of the latest results of research efforts to apply a hybrid (numerical-machine-learning) modeling approach developed at the University of Maryland to global weather prediction. The forecast performance of the hybrid model is assessed by comparing it to that of persistence, a numerical physics-based model, and a machine learning (ML) model whose prognostic state variables and resolution are identical to those of the hybrid model. The hybrid model typically provides realistic prediction of the weather for the entire globe for about two to three days. Both the hybrid and the ML model outperform persistence in the extratropics, but not in the tropics. While the relative performance of the ML model compared to the physics-based model is mixed, the hybrid model forecasts are more accurate than either the ML or the physics-based model forecasts at the shorter forecast times. Potential techniques to further improve the short-term hybrid and ML forecasts and extend the valid time of the forecasts are also discussed.
4176 Campus Drive - William E. Kirwan Hall
College Park, MD 20742-4015
P: 301.405.5047 | F: 301.314.0827