RIT on Deep Learning Archives for Fall 2016 to Spring 2017


Why does deep and cheap learning work so well?

When: Wed, September 7, 2016 - 1:00pm
Where: Kirwan Hall 0411
Speaker: Matthew Guay (NIH, NIBIB)
Abstract: I will discuss the (unreasonable?) effectiveness of deep learning for a wide range of problems in machine learning and computer vision, using the recent paper "Why does deep and cheap learning work so well?" by Henry Lin and Max Tegmark as a focal point for the discussion.

Why does deep and cheap learning work so well? (Part 2)

When: Wed, September 14, 2016 - 1:00pm
Where: Kirwan Hall 0201
Speaker: Matthew Guay (NIH, NIBIB)
Abstract: I conclude my presentation of the paper "Why does deep and cheap learning work so well?", focusing on the justifications for deep network architectures.

Techniques for visualizing trained neural networks

When: Wed, September 21, 2016 - 1:00pm
Where: Kirwan Hall 0201
Speaker: Matthew Whiteway (UMD, AMSC)
Abstract: In the absence of analytical results that gauge deep neural network performance, other methods are needed to build intuition about how these networks operate. After a review of deep convolutional networks, I will discuss a collection of results on visualizing the activity of image-processing neural networks.
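
One widely used technique of this kind records a trained network's intermediate feature maps with forward hooks. Below is a minimal sketch, assuming PyTorch and torchvision are available; the model, layer choice, and names are illustrative assumptions, not taken from the talk.

    import torch
    from torchvision import models

    # A trained convolutional network; VGG-16 is a common subject of
    # visualization studies. (Loading these weights triggers a download.)
    model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).eval()

    activations = {}

    def save_activation(name):
        # Build a hook that stores the layer's output under `name`.
        def hook(module, inputs, output):
            activations[name] = output.detach()
        return hook

    # features[0] is the first convolutional layer of VGG-16.
    model.features[0].register_forward_hook(save_activation("conv1_1"))

    # A random tensor stands in for a preprocessed 224x224 RGB image.
    x = torch.randn(1, 3, 224, 224)
    with torch.no_grad():
        model(x)

    # Each of the 64 channels can now be rendered as a grayscale image.
    print(activations["conv1_1"].shape)  # torch.Size([1, 64, 224, 224])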

Understanding Deep Convolutional Networks

When: Wed, October 5, 2016 - 1:00pm
Where: Kirwan Hall 0201
Speaker: Franck Ndjakou Njeunje (UMD, AMSC)


Provable approximation properties for deep neural networks on manifolds

When: Wed, October 12, 2016 - 1:00pm
Where: Kirwan Hall 0201
Speaker: Alex Cloninger (Yale)
Abstract: We discuss the approximation of functions by deep neural networks. Given a function f on a d-dimensional manifold Γ ⊂ ℝ^m, we construct a sparsely connected stacked neural network and bound its error in approximating f. The size of the network depends on the dimension and curvature of the manifold Γ and on the complexity of f in terms of its wavelet description, but only weakly on the ambient dimension m. Essentially, our network computes wavelet functions, which are built from rectified linear units (ReLUs).
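
The basic building block here can be made concrete in a few lines. Below is a minimal sketch (the function names and evaluation grid are mine, not the paper's): a piecewise-linear "hat" bump written exactly as a linear combination of three ReLUs. Shifts, dilations, and differences of such bumps yield the wavelet-like functions the network computes.

    import numpy as np

    def relu(x):
        return np.maximum(x, 0.0)

    def hat(x):
        # Triangle supported on [0, 2] with peak value 1 at x = 1,
        # expressed exactly as a sum of three ReLU units.
        return relu(x) - 2.0 * relu(x - 1.0) + relu(x - 2.0)

    x = np.linspace(-1.0, 3.0, 9)
    print(hat(x))  # [0. 0. 0. 0.5 1. 0.5 0. 0. 0.]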

Fooling deep networks with adversarial inputs

When: Wed, October 19, 2016 - 1:00pm
Where: Kirwan Hall 0201
Speaker: Chau-Wai Wong (UMD, ECE)


Fully convolutional networks for image segmentation

When: Wed, October 26, 2016 - 1:00pm
Where: Kirwan Hall 0201
Speaker: Matthew Guay (NIH, NIBIB)


Fully convolutional networks for image segmentation

When: Wed, November 2, 2016 - 1:00pm
Where: Kirwan Hall 0201
Speaker: Matthew Guay (NIH, NIBIB)


Fully convolutional networks for image segmentation: Part 2

When: Wed, November 9, 2016 - 1:00pm
Where: Kirwan Hall 0201
Speaker: Matthew Guay (NIH, NIBIB)


Introduction to Deep Learning

When: Wed, February 15, 2017 - 12:00pm
Where: Kirwan Hall 1311
Speaker: Zeyad Emam (UMD, AMSC)
Abstract: In this talk, I give an overview of the concepts and applications of deep neural networks.
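
As a reference point for the overview: a feedforward network alternates affine maps with elementwise nonlinearities. A minimal NumPy sketch, with layer sizes and random initialization chosen purely for illustration:

    import numpy as np

    rng = np.random.default_rng(0)

    def relu(x):
        return np.maximum(x, 0.0)

    # One hidden layer of width 16 mapping 4 inputs to 3 output scores.
    W1, b1 = rng.normal(size=(16, 4)), np.zeros(16)
    W2, b2 = rng.normal(size=(3, 16)), np.zeros(3)

    def forward(x):
        h = relu(W1 @ x + b1)   # hidden feature representation
        return W2 @ h + b2      # raw output scores (logits)

    print(forward(rng.normal(size=4)))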

Introduction to Deep Learning, Part 2

When: Wed, February 22, 2017 - 12:00pm
Where: Kirwan Hall 1311
Speaker: Zeyad Emam (UMD, AMSC)
Abstract: A continued overview of the methods used in modern deep learning research and applications.

A neural algorithm for artistic style

When: Wed, March 1, 2017 - 12:00pm
Where: Kirwan Hall 1311
Speaker: Andrew Lauziere (UMD)


Neural Network Training

When: Wed, March 8, 2017 - 12:00pm
Where: Kirwan Hall 1311
Speaker: Zeyad Emam (UMD, AMSC)