<?xml version="1.0" encoding="UTF-8" ?>
	<rss version="2.0">
		<channel><title>RIT on Deep Learning</title><link>http://www-math.umd.edu/research/seminars.html</link><description></description><item>
	<title>Organizational Meeting</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Fri, 01 Sep 2017 12:00:00 EDT</pubDate>
	<description><![CDATA[When: Fri, September 1, 2017 - 12:00pm<br />Where: Kirwan Hall 1311<br />Speaker: Organizational Meeting () - <br />
<br />]]></description>
</item>

<item>
	<title>Introduction to Deep Learning</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Fri, 08 Sep 2017 12:00:00 EDT</pubDate>
	<description><![CDATA[When: Fri, September 8, 2017 - 12:00pm<br />Where: Kirwan Hall 1311<br />Speaker: Ilya Kavalerov (UMD) - <br />
Abstract: In this talk, I give an overview of the concepts and applications of deep neural networks. As the title suggests, this talk is introductory and requires no prior knowledge of the subject. Everyone is invited to attend! <br />]]></description>
</item>

<item>
	<title>Introduction to Convolutional Neural Networks</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Fri, 15 Sep 2017 12:00:00 EDT</pubDate>
	<description><![CDATA[When: Fri, September 15, 2017 - 12:00pm<br />Where: Kirwan Hall 3206<br />Speaker: Shujie Kang (UMD) - <br />
Abstract: This is a continuation of last week&#039;s talk. I introduce convolutional neural networks. <br />]]></description>
</item>

<item>
	<title>Training a Convolutional Neural Network</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Fri, 29 Sep 2017 12:00:00 EDT</pubDate>
	<description><![CDATA[When: Fri, September 29, 2017 - 12:00pm<br />Where: Kirwan Hall 3206<br />Speaker: Shujie Kang (UMD) - <br />
Abstract: I introduce commonly used algorithms to train a convolutional neural network. <br />]]></description>
</item>

<item>
	<title>A closer look at the ADAM optimizer</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Fri, 06 Oct 2017 12:00:00 EDT</pubDate>
	<description><![CDATA[When: Fri, October 6, 2017 - 12:00pm<br />Where: Kirwan Hall 3206<br />Speaker: Eric Oden (UMD) - <br />
Abstract: I build on Shujie&#039;s talk by taking a closer look at the ADAM optimizer she mentioned towards the end of her talk. <br />]]></description>
</item>

<item>
	<title>Reinforcement learning and Trust Region Policy Optimization</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Fri, 13 Oct 2017 12:00:00 EDT</pubDate>
	<description><![CDATA[When: Fri, October 13, 2017 - 12:00pm<br />Where: Kirwan Hall 3206<br />Speaker: Cheng Jie (UMD) - <br />
Abstract: I give an overview of the mathematical framework of reinforcement learning: the Markov decision process. I will introduce basic policy optimization algorithms used to train reinforcement learning models. Specifically, the talk will explore Trust Region Policy Optimization, a recently developed algorithm widely used in training deep reinforcement learning models. <br />]]></description>
</item>

<item>
	<title>Actor-Critic Method</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Fri, 20 Oct 2017 12:00:00 EDT</pubDate>
	<description><![CDATA[When: Fri, October 20, 2017 - 12:00pm<br />Where: Kirwan Hall 3206<br />Speaker: Nathaniel Monsoon (UMD) - <br />
<br />]]></description>
</item>

<item>
	<title>Analysis of Convergence of Back-Propagation</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Fri, 27 Oct 2017 12:00:00 EDT</pubDate>
	<description><![CDATA[When: Fri, October 27, 2017 - 12:00pm<br />Where: Kirwan Hall 3206<br />Speaker: Andrew Lauziere (UMD) - <br />
Abstract: I will present the paper Efficient BackProp by Yann LeCun et al.<br />]]></description>
</item>

<item>
	<title>Batch Training of Neural Networks</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Fri, 10 Nov 2017 12:00:00 EST</pubDate>
	<description><![CDATA[When: Fri, November 10, 2017 - 12:00pm<br />Where: Kirwan Hall 3206<br />Speaker: Roozbeh Yousefzadeh (UMD (CS)) - <br />
Abstract:  I will present two recent papers on batch training of neural networks. The first paper (Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour) focuses on the size of mini-batches and the optimization difficulties that arise in distributed training. The second paper (Train longer, generalize better: closing the generalization gap in large batch training of neural networks) proposes an algorithm called &quot;Ghost Batch Normalization&quot;.<br />]]></description>
</item>

<item>
	<title>A Proof of Convergence For The Stochastic Gradient Descent Method on Convex Cost Functions</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Fri, 17 Nov 2017 12:00:00 EST</pubDate>
	<description><![CDATA[When: Fri, November 17, 2017 - 12:00pm<br />Where: Kirwan Hall 3206<br />Speaker: Daniel Mourad (UMD) - <br />
<br />]]></description>
</item>

<item>
	<title>Optimization methods for discrete and saddle-point problems in machine learning</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Fri, 01 Dec 2017 12:00:00 EST</pubDate>
	<description><![CDATA[When: Fri, December 1, 2017 - 12:00pm<br />Where: Kirwan Hall 3206<br />Speaker: Tom Goldstein (UMD/CS) - https://www.cs.umd.edu/~tomg/<br />
Abstract: We&#039;ll discuss two recent advances in deep learning: adversarial neural networks and quantized nets.  Adversarial nets are a recent method for building generative models that requires the solution of complex saddle-point problems.  We develop a simple prediction method for stabilizing the training of saddle-point problems.  Then, we&#039;ll discuss quantized networks, which use low-precision weights to compress and accelerate neural networks.  We discuss the theory of quantized networks, and when/why they are trainable.  <br />]]></description>
</item>

<item>
	<title>Mathematics in Machine Learning: Present and Future</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Fri, 08 Dec 2017 12:00:00 EST</pubDate>
	<description><![CDATA[When: Fri, December 8, 2017 - 12:00pm<br />Where: Kirwan Hall 3206<br />Speaker: Wojtek Czaja (UMD (math)) - <br />
<br />]]></description>
</item>

<item>
	<title>NeuroEvolution of Augmenting Topologies (NEAT): A Genetic Algorithm for Neural Networks</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Wed, 07 Feb 2018 10:00:00 EST</pubDate>
	<description><![CDATA[When: Wed, February 7, 2018 - 10:00am<br />Where: Kirwan Hall 3206<br />Speaker:  Brandon Alexander (UMD) - <br />
<br />]]></description>
</item>

<item>
	<title>Training Quantized Nets: A Deeper Understanding</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Thu, 22 Feb 2018 10:00:00 EST</pubDate>
	<description><![CDATA[When: Thu, February 22, 2018 - 10:00am<br />Where: Kirwan Hall 3206<br />Speaker: Liam Fowl (UMD) - <br />
<br />]]></description>
</item>

<item>
	<title>Training Neural Networks Without Gradients: A Scalable ADMM Approach</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Thu, 01 Mar 2018 10:00:00 EST</pubDate>
	<description><![CDATA[When: Thu, March 1, 2018 - 10:00am<br />Where: Kirwan Hall 3206<br />Speaker: Zeyad Emam (UMD) - <br />
Abstract: I will present the paper Training Neural Networks Without Gradients: A Scalable ADMM Approach by Prof Goldstein et al. <br />]]></description>
</item>

<item>
	<title>Spectral Networks and Locally Connected Networks on Graphs</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Thu, 08 Mar 2018 10:00:00 EST</pubDate>
	<description><![CDATA[When: Thu, March 8, 2018 - 10:00am<br />Where: Kirwan Hall 3206<br />Speaker: Addison Bohannon (UMD) - <br />
Abstract: I will present the paper Spectral Networks and Locally Connected Networks on Graphs by Bruna et al. <br />]]></description>
</item>

<item>
	<title>FiLM: Visual Reasoning with a General Conditioning Layer</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Thu, 15 Mar 2018 10:00:00 EDT</pubDate>
	<description><![CDATA[When: Thu, March 15, 2018 - 10:00am<br />Where: Kirwan Hall 3206<br />Speaker: Matthew Guay (NIH/NIBIB) - <br />
Abstract: I will present on the paper named in the title, which has the abstract:<br />
<br />
We introduce a general-purpose conditioning method for neural networks called FiLM: Feature-wise Linear Modulation. FiLM layers influence neural network computation via a simple, feature-wise affine transformation based on conditioning information. We show that FiLM layers are highly effective for visual reasoning — answering image-related questions which require a multi-step, high-level process — a task which has proven difficult for standard deep learning methods that do not explicitly model reasoning. Specifically, we show on visual reasoning tasks that FiLM layers 1) halve state-of-the-art error for the CLEVR benchmark, 2) modulate features in a coherent manner, 3) are robust to ablations and architectural modifications, and 4) generalize well to challenging, new data from few examples or even zero-shot.<br />]]></description>
</item>

<item>
	<title>Mallat’s scattering transform, the Fourier scattering transform, and applications in hyperspectral imagery</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Thu, 29 Mar 2018 10:00:00 EDT</pubDate>
	<description><![CDATA[When: Thu, March 29, 2018 - 10:00am<br />Where: Kirwan Hall 3206<br />Speaker: Ilya Kavalerov (UMD (ECE)) - <br />
<br />]]></description>
</item>

<item>
	<title>Transforming machine learning heuristics into provable algorithms: classical, stochastic, and neural</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Thu, 05 Apr 2018 10:00:00 EDT</pubDate>
	<description><![CDATA[When: Thu, April 5, 2018 - 10:00am<br />Where: CHM 0115<br />Speaker: Cheng Tang (GWU) - https://sites.google.com/site/chengtanggwu/<br />
Abstract: A recurring pattern in many areas of machine learning is the empirical success of a handful of &quot;heuristics&quot;, i.e., any simple learning procedure favored by practitioners. Many of these heuristic techniques lack formal theoretical justification. For unsupervised learning, Lloyd&#039;s k-means algorithm, while provably exponentially slow in the worst-case, remains popular for clustering problems arising from different applications. For supervised learning, random forest is another example of a winning heuristic with many variants and applications. But the most prominent example is perhaps the blossoming field of deep learning, which is almost entirely composed of heuristics; the practical success of a deep learning algorithm usually relies on an experienced user skillfully and creatively combining heuristics. In this talk, I will discuss some of my thesis work in advancing the theoretical understanding of some of the most widely-used machine learning heuristics.<br />
<br />]]></description>
</item>

<item>
	<title>How non-convex are neural net loss functions, and what do they look like?</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Thu, 12 Apr 2018 10:00:00 EDT</pubDate>
	<description><![CDATA[When: Thu, April 12, 2018 - 10:00am<br />Where: Kirwan Hall 3206<br />Speaker: Thomas Goldstein (UMD (UMIACS)) - https://www.cs.umd.edu/~tomg/<br />
Abstract: This talk investigates the structure of neural network loss functions.  Using a range of visualization methods, we explore the non-convex and convex structures present in loss functions, and how neural network architecture impacts these structures.  I&#039;ll also discuss the implications this has on neural optimization, and present situations where bad actors can exploit neural loss functions to manipulate the behavior of classifiers. <br />]]></description>
</item>

<item>
	<title>Deep Learning - now and the future, an overview </title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Thu, 19 Apr 2018 10:00:00 EDT</pubDate>
	<description><![CDATA[When: Thu, April 19, 2018 - 10:00am<br />Where: Kirwan Hall 3206<br />Speaker: James Yorke (UMD ) - http://www.chaos.umd.edu/~yorke/<br />
<br />]]></description>
</item>

<item>
	<title>Accelerated Gradient Descent Escapes Saddle Points Faster than Gradient Descent</title>
	<link>http://www-math.umd.edu/research/seminars.html</link>
	<pubDate>Thu, 03 May 2018 10:00:00 EDT</pubDate>
	<description><![CDATA[When: Thu, May 3, 2018 - 10:00am<br />Where: Kirwan Hall 3206<br />Speaker: Wojtek Czaja (UMD (MATH)) - <br />
Abstract: I will present the paper Accelerated Gradient Descent Escapes Saddle Points Faster than Gradient Descent by Chi Jin, Praneeth Netrapalli, and Michael I. Jordan.<br />]]></description>
</item>


	</channel>
</rss>