<?xml version="1.0" encoding="UTF-8" ?>
	<rss version="2.0">
		<channel><title>Statistics</title><link>http://www-math.umd.edu/research/seminars.html</link><description></description><item>
	<title>Inference on weak signals in presence of an additive noise</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Thu, 06 Sep 2018 15:30:00 EDT</pubDate>
	<description><![CDATA[When: Thu, September 6, 2018 - 3:30pm<br />Where: Kirwan Hall 1313<br />Speaker: Abram Kagan (Dept. of Math. (Statistics program)) - http://math.umd.edu/~amk<br />
<br />]]></description>
</item>

<item>
	<title>Semiparametric Transformation Probit Models with Current-Status Data</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Thu, 13 Sep 2018 15:30:00 EDT</pubDate>
	<description><![CDATA[When: Thu, September 13, 2018 - 3:30pm<br />Where: Kirwan Hall 1313<br />Speaker: Dr. Jing Qin (National Institutes of Health) - <br />
<br />]]></description>
</item>

<item>
	<title>Generalized Group Testing: Some Results and Open Problems</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Thu, 20 Sep 2018 15:30:00 EDT</pubDate>
	<description><![CDATA[When: Thu, September 20, 2018 - 3:30pm<br />Where: Kirwan Hall 1313<br />Speaker: Yaakov Malinovsky (Dept. of Math. and Stat., UMBC) - <br />
<br />]]></description>
</item>

<item>
	<title>Biostatistical Methods for Wearable and Implantable Technology (WIT)</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Thu, 27 Sep 2018 15:30:00 EDT</pubDate>
	<description><![CDATA[When: Thu, September 27, 2018 - 3:30pm<br />Where: Kirwan Hall 1313<br />Speaker: Prof. Ciprian Crainiceanu (Dept. of Biostatistics, Johns Hopkins University) - <br />
Abstract: Wearable and Implantable Technology (WIT) is rapidly changing the Biostatistics data analytic landscape due to its reduced bias and measurement error as well as to the sheer size and complexity of the signals. In this talk I will review some of the most used and useful sensors in Health Sciences and the ever-expanding WIT analytic environment. I will describe the use of WIT sensors including accelerometers, heart monitors, and glucose monitors, and their combination with ecological momentary assessment (EMA). This rapidly expanding data eco-system is characterized by multivariate densely sampled time series with complex and highly non-stationary structures. I will introduce an array of scientific problems that can be answered using WIT and I will describe methods designed to analyze WIT data from the micro- (sub-second-level) to the macro-scale (minute-, hour- or day-level).<br />]]></description>
</item>

<item>
	<title>An efficient procedure to combine biomarkers with limits of detection for risk prediction</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Thu, 11 Oct 2018 15:30:00 EDT</pubDate>
	<description><![CDATA[When: Thu, October 11, 2018 - 3:30pm<br />Where: Kirwan Hall 1313<br />Speaker: Dr. Ruth Pfeiffer (National Cancer Institute, NIH) - <br />
Abstract: Much research seeks biomarkers for diagnosing disease and understanding disease etiology. <br />
As high-throughput technologies allow measuring multiple markers simultaneously, strategies for combining markers are needed, particularly if no single marker is highly discriminating. Statistical procedures to combine information from multiple markers need to account for correlations and for left and/or right censoring of the markers due to lower or upper limits of detection of the laboratory assays. We thus extend dimension reduction approaches, specifically likelihood-based sufficient dimension reduction, to regression or classification with censored predictors. Using an expectation maximization (EM) algorithm, we find linear combinations that contain all or most of the information contained in correlated markers for modeling and prediction of an outcome variable, while accounting for left and right censoring due to detection limits. We also allow for selection of important variables through penalization. We assess the performance of our methods extensively in simulations and apply them to data from a study conducted to assess associations between 47 inflammatory markers and lung cancer risk, and we build prediction models.<br />
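A minimal sketch of one ingredient of such EM algorithms (my illustration, assuming a univariate normal marker; the methods in the talk handle correlated multivariate markers): for a marker left-censored at a detection limit c, the E-step replaces the censored value by its conditional mean below c.

```python
from statistics import NormalDist

def censored_mean_below(c, mu=0.0, sigma=1.0):
    """E[X | X < c] for X ~ N(mu, sigma^2): the value an EM E-step
    imputes for a marker left-censored at detection limit c."""
    nd = NormalDist()
    z = (c - mu) / sigma
    # inverse Mills ratio for left truncation
    return mu - sigma * nd.pdf(z) / nd.cdf(z)

print(censored_mean_below(0.0))  # E[X | X < 0] for a standard normal
```

For a standard normal censored at 0 this gives -sqrt(2/pi), the familiar half-normal mean with a sign flip.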
<br />
This is joint work with Diego Tomassi, Liliana Forzani and Efstathia Bura.<br />
<br />]]></description>
</item>

<item>
	<title>Accuracy of High-Dimensional Deep Learning Networks</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Tue, 16 Oct 2018 15:30:00 EDT</pubDate>
	<description><![CDATA[When: Tue, October 16, 2018 - 3:30pm<br />Where: Kirwan Hall 1313 (notice change of time)<br />Speaker: Jason Klusowski (Dept. of Statistics, Rutgers University) - <br />
Abstract: It has been experimentally observed in recent years that <br />
multi-layer artificial neural networks have a surprising ability to <br />
generalize, even when trained with far more parameters than <br />
observations. Is there a theoretical basis for this? The best available <br />
bounds on their metric entropy and associated complexity measures are <br />
essentially linear in the number of parameters, which is inadequate to <br />
explain this phenomenon. Here we examine the statistical risk (mean <br />
squared predictive error) of multi-layer networks with $\ell^1$-type <br />
controls on their parameters and with ramp activation functions (also <br />
called lower-rectified linear units). In this setting, the risk is shown <br />
to be upper bounded by $[(L^3 \log d)/n]^{1/2}$, where $d$ is the input <br />
dimension to each layer, $L$ is the number of layers, and $n$ is the <br />
sample size. In this way, the input dimension can be much larger than <br />
the sample size and the estimator can still be accurate, provided the <br />
target function has such $\ell^1$ controls and that the sample size is <br />
at least moderately large compared to $L^3\log d$. The heart of the <br />
analysis is the development of a sampling strategy that demonstrates the <br />
accuracy of a sparse covering of deep ramp networks. Lower bounds show <br />
that the identified risk is close to being optimal. This is joint work <br />
with Andrew R. Barron.<br />]]></description>
</item>

<item>
	<title>On the construction of unbiased estimators for the group testing problem</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Thu, 25 Oct 2018 15:30:00 EDT</pubDate>
	<description><![CDATA[When: Thu, October 25, 2018 - 3:30pm<br />Where: Kirwan Hall 1313<br />Speaker: Dr. Gregory Hader (National Cancer Institute, NIH) - <br />
Abstract: <br />
While the use of group testing as a tool for estimation has been on the rise in recent decades, classical problems such as the large bias of the maximum likelihood estimator continue to hinder the implementation of such methods. This has led to the development of many estimators minimizing bias and, most recently, an unbiased estimator based on sequential binomial sampling. Previous research, however, has focused heavily on the simple case where no misclassification is assumed and only one trait is to be tested. In this talk, we consider the problem of unbiased estimation in these broader areas, giving constructions of such estimators for several cases. We show that, outside of the standard case addressed previously in the literature, it is impossible to find any proper unbiased estimator, that is, an estimator giving only values in the parameter space. This is shown to hold generally under any binomial or multinomial sampling plans. <br />
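As a numerical illustration of the bias the talk starts from (my sketch, not the speaker's code): with k pools of size s and prevalence p, the standard MLE p-hat = 1 - (1 - T/k)^(1/s) based on the number T of positive pools is biased upward, and its exact expectation is a finite binomial sum.

```python
from math import comb

def mle_bias(p, k, s):
    """Exact bias of the group-testing MLE p_hat = 1 - (1 - T/k)**(1/s),
    where T ~ Binomial(k, theta) counts positive pools and
    theta = 1 - (1 - p)**s is the probability a pool of size s tests positive."""
    theta = 1.0 - (1.0 - p) ** s
    e_phat = sum(
        comb(k, t) * theta**t * (1.0 - theta) ** (k - t)
        * (1.0 - (1.0 - t / k) ** (1.0 / s))
        for t in range(k + 1)
    )
    return e_phat - p

# The MLE over-estimates the prevalence (Jensen's inequality: the
# back-transformation is convex), so the bias is strictly positive.
print(mle_bias(0.05, 10, 5))
```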
 <br />]]></description>
</item>

<item>
	<title>Sample-Size Re-estimation in Two-Stage Bioequivalence Trials</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Thu, 01 Nov 2018 15:30:00 EDT</pubDate>
	<description><![CDATA[When: Thu, November 1, 2018 - 3:30pm<br />Where: Kirwan Hall 1313<br />Speaker: Eric Slud (Dept. of Mathematics (Statistics Program)) - <br />
Abstract: Bioequivalence studies are an essential part of the evaluation of generic drugs. The most common in-vivo bioequivalence (BE) study design is the two-period two-treatment open label crossover design, with a metric of bioavailability such as the log of an approximate integral of the measured concentration of the drug in the blood (log AUC). The observation of interest for each subject is the difference between the measurement in the first and second period of the crossover. When this quantity is assumed approximately normally distributed, the sample size for BE studies using the &quot;Two One-sided Tests&quot; approach is a function of the assumed mean difference, the assumed variance, equivalence margins, type I error rate, and desired power. Since BE studies are often rather small, there is a serious possibility that they are under-powered when the assumed variance turns out to be too small, and it would be preferable to have a blinded study design based on re-estimating the sample-size using only a preliminary estimate of variance calculated without unmasking the treatment labels. However, up to this time there has not been such a two-stage study design guaranteed to maintain experimentwise type I error rate in small samples, apart from inefficient procedures related to Stein&#039;s 1945 two-stage procedure.<br />
      In the research described in this talk, expanding on a portion of Meiyu Shen&#039;s 2015 UMD thesis, a two-stage sample-size re-estimation design will be presented. The idea, for second-stage sample size expressed as a function of first-stage estimated sample variance, is to calculate the second-stage rejection threshold in such a way that the experimentwise type I error probability maximized over the (unknown) true variance is equal to the prescribed alpha (usually 0.05). This idea is shown to be computationally and practically feasible in the setting of BE studies.<br />
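For orientation, the dependence of the first-stage TOST sample size on the assumed inputs can be sketched as follows (a minimal known-variance normal approximation of my own, not the two-stage design proposed in the talk; `sigma` is the assumed standard deviation of the within-subject period differences and `margin` the equivalence margin on the log scale).

```python
from statistics import NormalDist

def tost_power(n, delta, sigma, margin, alpha=0.05):
    """Approximate power of the Two One-Sided Tests (TOST) procedure for
    n within-subject differences ~ N(delta, sigma^2), equivalence region
    (-margin, margin), each one-sided test at level alpha (known variance)."""
    nd = NormalDist()
    se = sigma / n**0.5
    z = nd.inv_cdf(1.0 - alpha)
    power = nd.cdf((margin - delta) / se - z) + nd.cdf((margin + delta) / se - z) - 1.0
    return max(power, 0.0)

def tost_sample_size(delta, sigma, margin, alpha=0.05, target=0.80):
    """Smallest n with approximate TOST power >= target."""
    n = 2
    while tost_power(n, delta, sigma, margin, alpha) < target:
        n += 1
    return n

# Standard bioequivalence margin log(1.25) ~ 0.2231: a larger assumed
# variance forces a larger study, which is why an under-estimated variance
# leaves the trial under-powered.
print(tost_sample_size(0.0, 0.25, 0.2231))
print(tost_sample_size(0.0, 0.35, 0.2231))
```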
<br />
This work is joint with Meiyu Shen and Estelle Russek-Cohen of FDA.<br />
<br />]]></description>
</item>

<item>
	<title>Calibrating Dependence between Random Elements</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Thu, 08 Nov 2018 15:30:00 EST</pubDate>
	<description><![CDATA[When: Thu, November 8, 2018 - 3:30pm<br />Where: Kirwan Hall 1313<br />Speaker: Abram Kagan (UMCP) - <br />
Abstract: Properties of a measure of dependence will be presented that, in my opinion, should be satisfied by any natural measure of dependence.<br />
<br />
The main goal is the construction of a calibrated scale of dependence between random elements X and Y, based on the dimension of the range of the projection of the subspace L^{2}(X) of L^{2}(X, Y) onto L^{2}(Y).<br />
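To make the dimension statement concrete (my gloss, not part of the abstract): the projection of $g(X) \in L^{2}(X)$ onto $L^{2}(Y)$ is the conditional expectation operator,
$$ P g = E[\, g(X) \mid Y \,]. $$
If X and Y are independent, then $E[g(X) \mid Y] = E[g(X)]$ for every $g$, so the range of $P$ is the one-dimensional space of constants; per the abstract, a one-dimensional range in turn characterizes independence.<br />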
<br />
For independent X, Y the range is one-dimensional and this property is characteristic of independence.<br />]]></description>
</item>

<item>
	<title>On Reproducibility of Research Findings, Boundary of Meaning and Type S Errors</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Thu, 29 Nov 2018 15:30:00 EST</pubDate>
	<description><![CDATA[When: Thu, November 29, 2018 - 3:30pm<br />Where: Kirwan Hall 1313<br />Speaker: Prof. Ron Kenett ( KPA Ltd and the Samuel Neaman Institute, Technion, Israel) - <br />
Abstract: The question of reproducibility of research outcomes is discussed now in the open press with a potential negative<br />
impact on science as a whole. In dealing with this question, from a statistical view point, several methodological<br />
advances have been proposed (like FDR) and several clarification attempts have been published (like the ASA<br />
statement on the p value). These attempts seem to only partially address the rising concerns of the public and<br />
research funding agencies.<br />
Kenett and Shmueli in Clarifying the terminology that describes scientific reproducibility, Nature Methods, 12(8), p<br />
699, 2015, review the terminology used in this debate and refer to generalizability as a dimension that can clarify<br />
which research claims should be scrutinized as reproducible. Generalizability is one of the eight dimensions<br />
of the information quality (InfoQ) framework presented in Kenett and Shmueli, On information quality: The<br />
Potential of Data and Analytics to Generate Knowledge, John Wiley and Sons, 2016.<br />
In this talk, we expand on the idea of generalizability of research findings by referring to Type S errors proposed in<br />
Gelman and Carlin (2014) [Beyond power calculations: Assessing Type S (sign) and Type M (magnitude) errors,<br />
Perspectives on Psychological Science, Vol. 9(6), pp. 641–651]. The talk will first discuss methods for setting up a<br />
boundary of meaning used in generalizing research findings. It will then show how Type S errors and directional<br />
FDR methods fit with this generalizability approach. An example from research in localized colon cancer<br />
diagnostics will be used to demonstrate the approach.<br />]]></description>
</item>

<item>
	<title>Mathematical Aspects of Machine Learning</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Thu, 06 Dec 2018 15:30:00 EST</pubDate>
	<description><![CDATA[When: Thu, December 6, 2018 - 3:30pm<br />Where: Kirwan Hall 1313<br />Speaker: Wojtek Czaja (Dept. of Mathematics, UMCP) - <br />
Abstract: In recent years machine learning, with its focus on the predictive and <br />
generative abilities of learning algorithms, has become a focus of attention for <br />
researchers across many fields, including mathematics. In this talk we will <br />
present some of the aspects of mathematical contributions to machine <br />
learning, devoting our attention to approximation theory, optimization, <br />
and convolutional networks.<br />]]></description>
</item>

<item>
	<title>Marginal-ancillary parametric family of distributions</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Thu, 07 Feb 2019 15:30:00 EST</pubDate>
	<description><![CDATA[When: Thu, February 7, 2019 - 3:30pm<br />Where: Kirwan Hall 1313<br />Speaker: Abram Kagan (Dept. of Mathematics (Statistics program)) - http://math.umd.edu/~amk<br />
Abstract: A parametric family of distributions of a pair (X, Y) of random elements is called marginal-ancillary if the marginal distributions of X and Y are parameter-free. Thus all the information on the parameter is contained in the dependence between X and Y. A lower bound for the Fisher information on the parameter is obtained in the case when the parameter is the correlation coefficient.<br />]]></description>
</item>

<item>
	<title>A new method for the analysis of categorical data with repeated measurements</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Thu, 14 Feb 2019 15:30:00 EST</pubDate>
	<description><![CDATA[When: Thu, February 14, 2019 - 3:30pm<br />Where: Kirwan Hall  1313<br />Speaker: Dr. Tinghui Yu (MedImmune) - <br />
Abstract: The quality of an assay/survey with categorical output is usually characterized by its accuracy (bias) and precision (variation). To assess these parameters, one needs to perform a study testing a set of properly selected samples repeatedly under different conditions. A generalized linear mixed model (GLMM) can be fitted to the test results, providing control over the correlation structure within and between each design factor of concern. However, interpretation of the resulting GLMM, especially for the random effects, is not straightforward because the random effects are usually defined through a non-linear transformation (i.e., a link function). We introduce a new statistic to measure the variation in categorical data generated with multiple levels of control factors. The new method is based on the average agreement between the observed outcomes and hence offers intuitive probabilistic interpretations. It can be shown that this new statistic is closely related to the GLMM. We will also demonstrate the new method through simulations and examples with applications to clinical diagnostics.<br />
<br />]]></description>
</item>

<item>
	<title>Understanding Generative Adversarial Networks (GANs) in the Gaussian Setting</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Thu, 21 Feb 2019 15:45:00 EST</pubDate>
	<description><![CDATA[When: Thu, February 21, 2019 - 3:45pm<br />Where: Kirwan Hall 1313<br />Speaker: Prof. Soheil Feizi (Dept. of Computer Sci., Univ. of Maryland) - <br />
Abstract: Generative Adversarial Networks (GANs) have become a popular method to learn a probability model from data. In this talk, I will provide an understanding of some of the basic issues surrounding GANs including their formulation, generalization and stability on a simple benchmark where the data has a high-dimensional Gaussian distribution. Even in this simple benchmark, the GAN problem has not been well-understood as we observe that existing state-of-the-art GAN architectures may fail to learn a proper generative distribution owing to (1) stability issues (i.e., convergence to bad local solutions or not converging at all), (2) approximation issues (i.e., having improper global GAN optimizers caused by inappropriate GAN loss functions), and (3) generalizability issues (i.e., requiring a large number of samples for training). In this setup, we propose a GAN architecture which recovers the maximum-likelihood solution and demonstrates fast generalization. Moreover, we analyze the global stability of different computational approaches for the proposed GAN and highlight their pros and cons. Finally, we outline an extension of our model-based approach to design GANs in more complex setups than the considered Gaussian benchmark.<br />
<br />]]></description>
</item>

<item>
	<title>Uncovering the genotype-phenotype relationship through multiple-outcome multivariate regression</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Thu, 28 Feb 2019 15:30:00 EST</pubDate>
	<description><![CDATA[When: Thu, February 28, 2019 - 3:30pm<br />Where: Kirwan Hall 1313<br />Speaker: Dr. Yong Chen (Dept. of Biostatistics, Epidemiology and Informatics, Univ. of Pennsylvania) - <br />
Abstract: Pleiotropic and polygenic effects, where the former means that a genetic locus affects multiple<br />
phenotypes and the latter refers to many loci affecting one trait, offer significant insights into<br />
understanding the complex genotype-phenotype relationship. The increasing availability of<br />
medical and genomic data provides the opportunity to uncover such relationships by jointly<br />
modeling multiple phenotypes and genetic variants simultaneously. In this talk, I will share a<br />
few recently developed statistical models for detecting pleiotropic and polygenic effects. I will<br />
discuss some key techniques and considerations on modeling large-scale genetic information. I<br />
will also share our analyses on a large-scale biobank linked electronic health record (EHR) data,<br />
the Penn Medicine Biobank (PMBB), for studying complex genetic architectures and their<br />
impacts on multiple phenotypes.<br />]]></description>
</item>

<item>
	<title>A Case Study in Comparing Bayes Estimated Fixed Effects vs Frequentist Random Effects</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Thu, 07 Mar 2019 15:30:00 EST</pubDate>
	<description><![CDATA[When: Thu, March 7, 2019 - 3:30pm<br />Where: Kirwan Hall 1313<br />Speaker: Prof. Eric Slud (Dept. of Mathematics (Statistics Program)) - <br />
Abstract: Using data from the Current Population Survey, we consider model-based estimates of population subgroups in different employment categories in two successive months (June and July 2017), cross-classified by education, age, and state. These cross-classified population counts are often rather small, too small to be well estimated by design-based survey methods, but seem amenable to &#039;small area estimation&#039; models in which state- and other subgroup-effects are viewed as random. The random effects would be viewed differently in a Bayesian analysis and a frequentist one, although each of these different data analysis approaches provides useful information to the other. The talk will discuss computation, display and interpretation of model results, with particular reference to packages and computational tools in R. The theme of the data analysis is the contrast (by likelihood and prediction metrics) between fixed and random-effect models for area-level intercept effects.<br />
<br />]]></description>
</item>

<item>
	<title>Fisher Information, Mean Functions and Matrix Inequalities</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Thu, 14 Mar 2019 15:30:00 EDT</pubDate>
	<description><![CDATA[When: Thu, March 14, 2019 - 3:30pm<br />Where: Kirwan Hall 1313<br />Speaker: Paul J. Smith (STAT Program) - <br />
<br />]]></description>
</item>

<item>
	<title>The lag-lead debate on global temperature and carbon dioxide: a statistical look through curve registration</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Thu, 28 Mar 2019 15:30:00 EDT</pubDate>
	<description><![CDATA[When: Thu, March 28, 2019 - 3:30pm<br />Where: Kirwan Hall 1313<br />Speaker: Prof. Debasis Sengupta (Indian Statistical Institute) - <br />
Abstract: The close connection between global temperature variation and atmospheric carbon dioxide concentration has been central to the issue of climate change. The lag/lead between sets of longitudinal data on the two variables has implications for the causality of that connection. We consider this problem as one of curve registration. Most of the available solutions for this problem have been designed for the growth data application, where the number of observations is small and the number of replicates is large. We argue that a different emphasis is needed for the paleoclimatic application. We provide a new method, which is able to pool local information without smoothing and to match sharp landmarks without manual identification. We prove the consistency of the proposed method under fairly general conditions. Simulation results show that the proposed method outperforms two existing methods. Applying the proposed method to Antarctic ice core data leads to some interesting conclusions.<br />]]></description>
</item>

<item>
	<title>Data Privacy for a $\rho$-Recoverable Function</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Thu, 18 Apr 2019 15:30:00 EDT</pubDate>
	<description><![CDATA[When: Thu, April 18, 2019 - 3:30pm<br />Where: Kirwan Hall 1313<br />Speaker: Prof. Prakash Narayan (Dept. of Comp. and Electrical Engineering, Univ. of Maryland) - <br />
Abstract: This talk is based on joint work with Ph.D. student Ajaykrishnan Nageswaran.<br />
A user&#039;s data is represented by a finite-valued random variable. <br />
Given a function of the data, a querier is required to recover, <br />
with at least a prescribed probability, the value of the function<br />
based on a query response provided by the user. The user devises <br />
the query response, subject to the recoverability requirement, <br />
so as to maximize privacy of the data from the querier. <br />
Privacy is measured by the probability of error incurred <br />
by the querier in estimating the data from the query response. <br />
We analyze single and multiple independent query responses, <br />
with each response satisfying the recoverability requirement,<br />
that provide maximum privacy to the user. Achievability schemes <br />
with explicit randomization mechanisms for query responses are given <br />
and their privacy compared with converse upper bounds.<br />
More stringent forms of privacy, viz. predicate privacy and<br />
list privacy will also be mentioned.<br />
<br />
<br />]]></description>
</item>

<item>
	<title>An Overview of Statistical Machine Learning Techniques with Applications</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Tue, 30 Apr 2019 15:30:00 EDT</pubDate>
	<description><![CDATA[When: Tue, April 30, 2019 - 3:30pm<br />Where: Kirwan Hall 1313<br />Speaker: Dr. Amita Pal (Indian Statistical Institute) - <br />
Abstract: Statistical Machine Learning involves an algorithmic approach, derived from statistical models and implemented on computers, for solving certain problems that arise in the domain of Artificial Intelligence. Machine learning algorithms build a mathematical model of sample data, known as &quot;training data&quot;, in order to make predictions or decisions. Depending on whether the training data are labeled or unlabeled, a variety of supervised or unsupervised Statistical Machine Learning methods are available. An overview of the most widely used ones will be provided in this talk, and applications to the problems of automatic speaker recognition (ASR) and content-based image retrieval (CBIR) will be briefly described.<br />]]></description>
</item>

<item>
	<title>Event-Specific Win Ratios and Testing with Terminal and Non-Terminal Events</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Thu, 02 May 2019 15:30:00 EDT</pubDate>
	<description><![CDATA[When: Thu, May 2, 2019 - 3:30pm<br />Where: Kirwan Hall 1313<br />Speaker: Dr. Song Yang (National Heart, Lung, and Blood Institute, NIH) - <br />
Abstract: In clinical trials the primary outcome is often a composite one, defined as time to the first of two or more types of clinical events,<br />
such as cardiovascular death, a terminal event, and heart failure hospitalization, a non-terminal event. Thus if a patient experiences both types of events,<br />
a terminal event occurring after a non-terminal event does not contribute to the primary outcome, even though the terminal event is more important than the<br />
non-terminal event. If a substantial number of patients experience multiple events, the power of the test for treatment effect may be reduced due<br />
to the omission of some of the available data. In the win ratio approach, priorities are given to the clinically more important events, and potentially all available data are used. However, the win ratio approach may have low power in detecting a treatment effect if the effect is predominantly on the non-terminal events. We propose<br />
event-specific win ratios obtained separately on the terminal and non-terminal events. These ratios can then be used to form global tests such as a linear combination<br />
test, the maximum test, or a chi-square test. In simulations these tests often improve the power of the original win ratio test. Furthermore, when the<br />
terminal and non-terminal events experience differential treatment effects, the new tests often improve the power of the log-rank test for the<br />
composite outcome. Thus whether the treatment effect is primarily on the terminal events or the non-terminal events, the new tests based on the event-specific win ratios can be useful alternatives for testing treatment effect in clinical trials with time-to-event outcomes when different types of events are present.<br />
We illustrate the new tests with the primary outcome in the trial Aldosterone Antagonist Therapy for Adults With<br />
Heart Failure and Preserved Systolic Function (TOPCAT), where the new tests all reject the null hypothesis of no treatment effect while the composite outcome approach used in TOPCAT did not.<br />
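As a toy illustration of the basic win ratio idea (my simplification for uncensored data; the event-specific versions in the talk handle censoring and pair comparability), each treatment patient is compared with each control patient, with the terminal event taking priority:

```python
def win_ratio(treatment, control):
    """Pocock-style win ratio for uncensored toy data.  Each subject is
    (death_time, hospitalization_time); longer times are better.  A pair is
    decided on the terminal event first, then on the non-terminal event."""
    wins = losses = 0
    for t_death, t_hosp in treatment:
        for c_death, c_hosp in control:
            if t_death != c_death:       # terminal event has priority
                wins += t_death > c_death
                losses += t_death < c_death
            elif t_hosp != c_hosp:       # fall back to the non-terminal event
                wins += t_hosp > c_hosp
                losses += t_hosp < c_hosp
    return wins / losses

treatment = [(5, 3), (4, 2)]
control = [(5, 1), (3, 2)]
print(win_ratio(treatment, control))  # 3 wins vs 1 loss
```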
 <br />]]></description>
</item>


	</channel>
</rss>