Abstract: We discuss models of interactions with the environment by human populations, both between poor and rich people, "Commoners" and "Elites". The Elites control the society's wealth and consume it at a higher rate than the Commoners, whose work produces the wealth. We say a model is "Elite-dominated" when the Elites' per capita population change rate is always at least as large as the Commoners'.
We show that the model exhibits population crashes for all choices of parameter values for which it is Elite-dominated. But any such model with explicit equations raises questions of how the resulting behaviors depend on the details of the model. How important are the particular design features codified in the differential equations?
We discard the differential equations, replacing them with qualitative conditions that the original model satisfies, and we prove these conditions imply population collapse must occur. In particular, one condition is that the model is Elite-dominated.
Our approach of introducing qualitative mathematical hypotheses can better show the underlying features of the model that lead to collapse. We also ask how societies can avoid collapse.
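The qualitative collapse mechanism discussed above can be caricatured numerically. The sketch below is a drastic simplification invented purely for illustration (a single population consuming a nonrenewable stock of wealth), not the Elite/Commoner model of the talk; all rates and units are arbitrary. The population grows while wealth remains, then crashes once the wealth is exhausted.

```python
# Hypothetical caricature: population x grows while wealth w > 0,
# and declines once the wealth is depleted. Euler integration.
dt = 0.01
x, w = 1.0, 50.0                     # population, wealth (arbitrary units)
birth, death, consume = 0.05, 0.02, 0.03
peak = x
for _ in range(100000):              # integrate for 1000 time units
    growth = birth if w > 0 else 0.0 # births require remaining wealth
    x += dt * x * (growth - death)
    w = max(0.0, w - dt * consume * x)
    peak = max(peak, x)
print(peak, x)                       # population peaks, then collapses
```

Under these assumptions the crash is unavoidable: the wealth stock is strictly decreasing whenever the population is positive, so the growth phase must end.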
Abstract: Cortical neurons can be strongly or weakly coupled to the network in which they are embedded, firing in sync with the majority or firing independently. Both scenarios have potential computational advantages in motor cortex. Commands to the body might be more robustly conveyed by a strongly coupled population, whereas a motor code with greater information capacity could be implemented by neurons that fire more independently. Which of these scenarios prevails? Here we measure neuron-to-body coupling and neuron-to-population coupling for neurons in the motor cortex of freely moving rats. We find that neurons with high and low population coupling coexist, and that population coupling is tunable by manipulating inhibitory signaling. Importantly, neurons with different population coupling tend to serve different functional roles. Those with strong population coupling are not involved with body movement. In contrast, neurons with high neuron-to-body coupling are weakly coupled to other neurons in the cortical population.
Abstract: Network science is a rapidly expanding field, with a large and growing body of work on network-based dynamical processes. Most theoretical results in this area rely on the so-called "locally tree-like approximation" (which assumes that one can ignore small loops in a network). This is, however, usually an 'uncontrolled' approximation, in the sense that the magnitude of the error is typically unknown, although numerical results show that this error is often surprisingly small. In our work, we place this approximation on more rigorous footing by calculating the magnitude of deviations from tree-based theories in the context of network cascades (i.e., a network dynamical process describing the spread of activity through a network). For this widely applicable problem, we discuss the conditions under which tree-like approximations give good results, and also explain the reasons for deviations from this approximation. More specifically, we show that these deviations are negligible for networks with a large number of links, explaining why tree-based theories appear to work well for most real-world networks.
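A minimal numerical check of the tree-based idea, with all parameters chosen arbitrarily for illustration: on an Erdős–Rényi random graph with mean degree c, a cascade in which each active node activates each neighbor independently with probability p should, under the locally tree-like approximation, reach a final active fraction ρ solving ρ = 1 − (1 − ρ0) exp(−c p ρ), where ρ0 is the seed fraction.

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(3)
n, c, p, rho0 = 20000, 8.0, 0.5, 0.01

# sample an Erdos-Renyi graph G(n, c/n) via a Poisson number of random pairs
m = rng.poisson(n * c / 2)
edges = rng.integers(0, n, size=(m, 2))
adj = [[] for _ in range(n)]
for u, v in edges:
    if u != v:
        adj[u].append(v); adj[v].append(u)

# run the cascade: each active node tries each neighbor once with prob p
active = np.zeros(n, dtype=bool)
seeds = rng.choice(n, size=int(rho0 * n), replace=False)
active[seeds] = True
queue = deque(seeds.tolist())
while queue:
    u = queue.popleft()
    for v in adj[u]:
        if not active[v] and rng.random() < p:
            active[v] = True
            queue.append(v)
sim = active.mean()

# tree-based prediction: iterate the self-consistency equation
rho = 0.5
for _ in range(100):
    rho = 1 - (1 - rho0) * np.exp(-c * p * rho)
print(sim, rho)
```

For a graph this dense the simulated final fraction and the tree-based prediction agree closely, consistent with the claim that deviations are negligible when the number of links is large.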
Abstract: Co-authored with Tse-Chun Chen and Daisuke Hotta. The National Weather Service computes operational weather forecasts using a process called "data assimilation": a 6-hour forecast is computed starting from the current "analysis". The 6-hour forecast is then optimally combined with the observations collected 6 hours later to create the new analysis, which serves as the initial conditions for the next forecast. This process, known as the "analysis cycle", is repeated every 6 hours. Miyakoda (personal communication, ~1980) pointed out that using any future information to improve current forecasts should be considered "cheating" because it cannot be done in operational forecasting. Chen (2018, PhD thesis), Chen and Kalnay (2019a, MWR), and Chen and Kalnay (2019b, under review) developed an application of Ensemble Forecast Sensitivity to Observations (EFSO; Kalnay et al., 2012, Tellus) combined with Proactive Quality Control (PQC; Hotta et al., 2017). It uses future data (e.g., observations obtained 6 hours after the present analysis) to identify and delete detrimental observations in the present analysis. We found that making a late correction to every analysis after the new observations have been received accumulates improvements with time. The accumulated improvement is much larger than the final correction, which cannot be used in order to avoid cheating, so that forecasts are significantly improved "without cheating".
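The analysis cycle itself can be sketched in a few lines. The toy below is purely illustrative (a scalar linear model, invented error variances, and a fixed optimal weight), not the operational NWS system or the EFSO/PQC method: each cycle, a forecast is run forward and then blended with a noisy observation to form the next analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # toy forecast model (here identical to the nature run, for simplicity)
    return 0.9 * x + 1.0

sigma_f2, sigma_o2 = 0.5, 0.2              # assumed forecast / obs error variances
K = sigma_f2 / (sigma_f2 + sigma_o2)       # optimal (Kalman) weight on the innovation

x_true, x_analysis = 10.0, 8.0             # nature state vs. initial analysis
for cycle in range(20):
    x_true = 0.9 * x_true + 1.0                          # nature run advances
    x_forecast = model(x_analysis)                       # "6 hour" forecast
    y_obs = x_true + rng.normal(0.0, np.sqrt(sigma_o2))  # observation arrives
    x_analysis = x_forecast + K * (y_obs - x_forecast)   # new analysis
print(abs(x_analysis - x_true))
```

Even in this caricature, the cycle steadily pulls the analysis toward the true state, which is the property the EFSO/PQC corrections exploit.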
Abstract: The quantum adiabatic theorem governs the evolution of a wavefunction under a slowly time-varying Hamiltonian. I will consider the opposite limit of a Hamiltonian that is varied impulsively: a strong perturbation U(x,t) is applied over a time interval of infinitesimal duration ε → 0. When the strength of the perturbation scales like 1/ε², there emerges an interesting dynamical behavior characterized by an abrupt displacement of the wave function in coordinate space. I will solve for the evolution of the wavefunction in this situation. Remarkably, the solution involves a purely classical construction, yet describes the quantum evolution exactly, rather than approximately. I will use these results to show how appropriately tailored impulses can be used to control the behavior of a quantum wavefunction.
Abstract: The general problem of determining causal dependences in an unknown time-evolving system from observations is of great interest in many fields. Examples include inferring neuronal connections from spiking data, deducing causal dependences between genes from expression data, and discovering long spatial-range influences in climate variations. Previous work has tackled such problems by consideration of correlations, prediction impact, or information transfer metrics. Here we propose a new method that leverages the ability of machine learning to generalize from examples, combined with concepts from dynamical systems theory. We test our proposed technique on numerical examples, obtaining results that suggest excellent performance for a large range of situations. An important, somewhat surprising, conclusion is that, although our rationale is based on noiseless deterministic systems, dynamical noise can greatly enhance our technique's effectiveness.
Abstract: The human brain is capable of diverse feats of intelligence. A particularly salient example is the ability to implicitly learn dynamics from experiencing the physical world. Analogously, artificial neural systems such as reservoir computing (RC) networks have shown great success in learning the long-term behavior of various complex dynamical systems from data, without knowing the explicit governing equations. Regardless of the marked differences between biological and artificial neural systems, one fundamental similarity is that they are essentially dynamical systems that are fine-tuned towards the imitation of other dynamical systems. To shed some light on how such a learning function may emerge from biological systems, we draw inspiration from observations of the human brain to propose a first-principles framework explicating its putative mechanisms. Within this framework, one biological or artificial dynamical system, regardless of its specific composition, implicitly and adaptively learns other dynamical attractors (chaotic or non-chaotic) by embedding the dynamical attractors into its own phase space through invertible generalized synchronization, and imitates those attractors by sustaining the embedded attractors through fine-tuned feedback loops. To demonstrate this general framework, we construct several distinct neural network models that adaptively learn and imitate multiple attractors. With these, we observe and explain the emergence of five distinct phenomena reminiscent of cognitive functions: (i) imitation of a dynamical system purely from learning the time series, (ii) learning of multiple dynamics by a single system, (iii) switching among the imitations of multiple dynamical systems, either spontaneously or driven by external cues, (iv) filling in missing variables from incomplete observations of a learned dynamical system, and (v) deciphering superimposed inputs from different dynamical systems.
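The embedding-by-synchronization step above rests on a property that is easy to demonstrate: a driven reservoir forgets its initial condition, so its state becomes a function of the drive signal alone (generalized synchronization, known in the RC literature as the echo state property). The sketch below is illustrative only; the network size, weight scaling, and drive signal are arbitrary choices, not the models of the talk.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 200
W = rng.normal(size=(N, N))
W *= 0.5 / np.linalg.norm(W, 2)   # spectral norm 0.5 => contraction, echo state
Win = rng.normal(size=N)          # input weights

def step(r, u):
    # leakless tanh reservoir driven by scalar input u
    return np.tanh(W @ r + Win * u)

r1 = rng.normal(size=N)           # two different initial reservoir states
r2 = rng.normal(size=N)
for t in range(500):
    u = np.sin(0.1 * t)           # common drive signal
    r1, r2 = step(r1, u), step(r2, u)

print(np.linalg.norm(r1 - r2))    # both trajectories collapse onto the drive
```

Because tanh is 1-Lipschitz and the weight matrix is a contraction, the distance between the two trajectories shrinks by at least half each step, so the reservoir state ends up determined by the input history, which is what makes a learned attractor embeddable in its phase space.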
Abstract: Whispering-gallery mode (WGM) resonators are disks, toroids, or spheres with micro- or millimetric radius and (sub-)nanometer surface roughness. They can trap laser light by total internal reflection for longer than a microsecond. In these ultra-high-Q resonators, the small confinement volume, high photon density, and long photon lifetime ensure a very strong light-matter interaction, which may excite the WGMs through various nonlinear effects, namely Kerr, Raman, or Brillouin. Quantum phenomena such as twin-photon generation, entanglement, and squeezing can also occur in these optical cavities. In this talk, we discuss some of the main challenges related to the understanding of nonlinear and quantum phenomena in WGM resonators, and present some of the principal applications in aerospace and communication engineering.
Abstract: We consider the problem of data-driven forecasting of chaotic dynamical systems when the available data comes from a sparse spatial sampling, i.e., the full state of the dynamical system cannot be observed directly. Recently, there have been several promising data-driven approaches to forecasting chaotic dynamical systems using machine learning. Particularly promising among these are hybrid approaches that combine machine learning with a knowledge-based model, where a machine learning technique is used to correct the imperfections in the knowledge-based model. Such a hybrid approach is promising when a knowledge-based model is available but is imperfect due to incomplete understanding of the physical processes in the underlying dynamical system. However, previously proposed data-driven forecasting approaches assume knowledge of the full state of the dynamical system. We seek to relax this assumption by using a data assimilation technique along with machine learning in a novel technique that improves forecasts. We demonstrate that, using partial measurements of the state of the dynamical system, we can train a machine learning model to correct model error in an imperfect knowledge-based model.
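The hybrid idea of correcting an imperfect knowledge-based model with a learned term can be illustrated in miniature. The sketch below is not the authors' method: it uses a fully observed scalar system (the logistic map), an "imperfect" model with a wrong parameter, and plain least squares in place of a machine learning model, all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
true_r, model_r = 3.9, 3.8                 # true vs. imperfect parameter (assumed)

def true_map(x):  return true_r * x * (1 - x)   # "nature"
def imperfect(x): return model_r * x * (1 - x)  # knowledge-based model

# training data: sampled states and the model error to be learned
x = rng.uniform(0.1, 0.9, 500)
residual = true_map(x) - imperfect(x)

# learn the correction by least squares on features [x, x^2, 1]
A = np.column_stack([x, x**2, np.ones_like(x)])
coef, *_ = np.linalg.lstsq(A, residual, rcond=None)

def hybrid(x):
    # imperfect model plus the learned correction term
    return imperfect(x) + coef[0] * x + coef[1] * x**2 + coef[2]

x_test = rng.uniform(0.1, 0.9, 100)
err_model = np.abs(true_map(x_test) - imperfect(x_test)).mean()
err_hybrid = np.abs(true_map(x_test) - hybrid(x_test)).mean()
print(err_model, err_hybrid)
```

Here the model error happens to be a quadratic in x, so the regression recovers it essentially exactly; the point of the talk's technique is to achieve an analogous correction when only partial, assimilated measurements of the state are available.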
Abstract: In this talk, I will describe experiments on a chaotic electronic circuit that can be used as a high-speed true random number generator. This circuit can be modified to act as a Physically Unclonable Function (PUF), a novel class of cybersecurity device used for device authentication, tamper-proofing, and key generation.
Abstract: Driven nonlinear dynamical systems can reside in either of two steady states under a single driving condition. This feature, known as bistability, is associated with emergent phenomena in phase transitions, scaling, and universal behavior. In descriptions of bistable systems, it is typically assumed that the nonlinear force responsible for bistability acts instantaneously on the system. In addition, the role of quantum fluctuations in bistability was, until recently, largely assumed to be irrelevant to experiments. In this talk, I will present two experiments in which these two assumptions were challenged. Both experiments were based on nonlinear optical cavities driven by light, but similar physics is expected in other systems. The experiments consisted of scanning a driving parameter (e.g., laser intensity or frequency) across an optical bistability at various speeds, and analyzing the resulting dynamic optical hysteresis. Intriguingly, both quantum fluctuations and non-instantaneous interactions lead to a universal power-law decay of the hysteresis area as a function of the scanning speed. However, whereas quantum fluctuations lead to universal scaling behavior in the limit of slow scans, non-instantaneous interactions lead to universal scaling behavior in the limit of fast scans. I will conclude with perspectives for realizing lattices of bistable optical cavities, and the opportunities these open for performing analog computation and for studying stochastic nonlinear dynamics with light.
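The sweep protocol described above can be mimicked with a classical caricature; the overdamped cubic model and all numbers below are illustrative assumptions, not the optical cavity experiments. The drive λ is scanned across the bistable region and back, and the hysteresis area enclosed by x(λ) grows with the scan speed because the state lags the drive and the jumps between branches are delayed.

```python
import numpy as np

def hysteresis_area(sweep_steps, dt=0.001):
    # sweep lambda from -1 to 1 and back across the bistable region
    lam_up = np.linspace(-1.0, 1.0, sweep_steps)
    lam_cycle = np.concatenate([lam_up, lam_up[::-1]])
    x, area = -1.3, 0.0                    # start near the lower branch
    prev_lam = lam_cycle[0]
    for lam in lam_cycle:
        # overdamped bistable dynamics: dx/dt = x - x^3 + lambda
        x += dt * (x - x**3 + lam)
        area += x * (lam - prev_lam)       # accumulate the loop integral
        prev_lam = lam
    return abs(area)

slow = hysteresis_area(200000)             # near quasi-static sweep
fast = hysteresis_area(20000)              # 10x faster: larger dynamic lag
print(slow, fast)
```

In this deterministic, instantaneous-force caricature the area increases with scan speed; the talk's point is how quantum fluctuations and non-instantaneous interactions modify this picture into universal power-law scalings in the slow- and fast-scan limits, respectively.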
4176 Campus Drive - William E. Kirwan Hall
College Park, MD 20742-4015
P: 301.405.5047 | F: 301.314.0827