<?xml version="1.0" encoding="UTF-8" ?>
	<rss version="2.0">
		<channel><title>RIT on Deep Learning</title><link>http://www-math.umd.edu/research/seminars.html</link><description></description><item>
	<title>Machine Unlearning: Linear Filtration for Logit-based Classifiers</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Fri, 30 Sep 2022 13:00:00 EDT</pubDate>
	<description><![CDATA[When: Fri, September 30, 2022 - 1:00pm<br />Where: Kirwan Hall 1310<br />Speaker: Gonzalo Benavides (AMSC) - <br />
Abstract: Recently enacted legislation grants individuals certain rights to decide in what fashion their personal data may be used, and in particular a &quot;right to be forgotten&quot;. This poses a challenge to machine learning: how to proceed when an individual retracts permission to use data which has been part of the training process of a model? From this question emerges the field of machine unlearning, which could be broadly described as the investigation of how to &quot;delete training data from models&quot;. Our work complements this direction of research for the specific setting of class-wide deletion requests for classification models (e.g. deep neural networks). As a first step, we propose linear filtration as an intuitive, computationally efficient sanitization method. Our experiments demonstrate benefits in an adversarial setting over naive deletion schemes.<br />]]></description>
</item>

<item>
	<title>On the Universality of Invariant Networks</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Fri, 14 Oct 2022 13:00:00 EDT</pubDate>
	<description><![CDATA[When: Fri, October 14, 2022 - 1:00pm<br />Where: Kirwan Hall 1310<br />Speaker: Shashank Sule (AMSC) - <br />
Abstract: Constraining linear layers in neural networks to respect symmetry transformations from a group G is a common design principle for invariant networks that has found many applications in machine learning. In this paper, we consider a fundamental question that has received very little attention to date: Can these networks approximate any (continuous) invariant function? We tackle the rather general case of G ≤ S_n, an arbitrary subgroup of the symmetric group, acting on R^n by permuting coordinates. This setting includes several recent popular invariant networks. We present two main results: First, G-invariant networks are universal if high-order tensors are allowed. Second, there are groups G for which higher-order tensors are unavoidable for obtaining universality. G-invariant networks consisting of only first-order tensors are of special interest due to their practical value. We conclude the paper by proving a necessary condition for the universality of G-invariant networks that incorporate only first-order tensors. Lastly, we propose a conjecture stating that this condition is also sufficient.<br />]]></description>
</item>

<item>
	<title>ADef: an Iterative Algorithm to Construct Adversarial Deformations</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Fri, 21 Oct 2022 13:00:00 EDT</pubDate>
	<description><![CDATA[When: Fri, October 21, 2022 - 1:00pm<br />Where: Kirwan Hall 1310<br />Speaker: Matthias Wellershoff (MATH) - <br />
Abstract: While deep neural networks have proven to be a powerful tool for many recognition and classification tasks, their stability properties are still not well understood. In the past, image classifiers have been shown to be vulnerable to so-called adversarial attacks, which are created by additively perturbing the correctly classified image. In this paper, we propose the ADef algorithm to construct a different kind of adversarial attack created by iteratively applying small deformations to the image, found through a gradient descent step. We demonstrate our results on MNIST with convolutional neural networks and on ImageNet with Inception-v3 and ResNet-101.<br />]]></description>
</item>

<item>
	<title>Group-invariant max filtering</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Fri, 28 Oct 2022 13:00:00 EDT</pubDate>
	<description><![CDATA[When: Fri, October 28, 2022 - 1:00pm<br />Where: Kirwan Hall 1310<br />Speaker: Stratos Tsoukanis (MATH) - <br />
Abstract: Given a real inner product space V and a group G of linear isometries, we construct a family of G-invariant real-valued functions on V that we call max filters. In the case where V = R^d and G is finite, a suitable max filter bank separates orbits, and is even bilipschitz in the quotient metric. In the case where V = L^2(R^d) and G is the group of translation operators, a max filter exhibits stability to diffeomorphic distortion like that of the scattering transform introduced by Mallat. We establish that max filters are well suited for various classification tasks, both in theory and in practice.<br />]]></description>
</item>

<item>
	<title>Deep Haar Scattering Networks</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Fri, 11 Nov 2022 13:00:00 EST</pubDate>
	<description><![CDATA[When: Fri, November 11, 2022 - 1:00pm<br />Where: Kirwan Hall 1310<br />Speaker: Canran Ji (MATH) - <br />
<br />]]></description>
</item>

<item>
	<title>A Universal Approximation Theorem of Deep Neural Networks for Expressing Probability Distributions</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Fri, 18 Nov 2022 13:00:00 EST</pubDate>
	<description><![CDATA[When: Fri, November 18, 2022 - 1:00pm<br />Where: Kirwan 1310<br />Speaker: Sanghoon Na (AMSC) - <br />
Abstract: This paper studies the universal approximation property of deep neural networks for representing probability distributions. Given a target distribution π and a source distribution p_z, both defined on R^d, we prove under some assumptions that there exists a deep neural network g: R^d → R with ReLU activation such that the push-forward measure (∇g)#p_z of p_z under the map ∇g is arbitrarily close to the target measure π. The closeness is measured by three classes of integral probability metrics between probability distributions: 1-Wasserstein distance, maximum mean discrepancy (MMD) and kernelized Stein discrepancy (KSD). We prove upper bounds for the size (width and depth) of the deep neural network in terms of the dimension d and the approximation error ε with respect to the three discrepancies. In particular, the size of the neural network can grow exponentially in d when the 1-Wasserstein distance is used as the discrepancy, whereas for both MMD and KSD the size of the neural network depends on d at most polynomially. Our proof relies on convergence estimates of empirical measures under the aforementioned discrepancies and semi-discrete optimal transport.<br />]]></description>
</item>


	</channel>
</rss>