Organizer: Wojtek Czaja (Math), Tom Goldstein (CS), and Matt Guay (NIBIB/NIH)
When: Fridays @ 12 noon
Where: Kirwan Hall 3206
Description: The RIT covers a range of topics in machine learning and computer vision, with a particular focus on deep neural networks. Everyone who is interested is encouraged to attend. We will hold one or two introductory talks on deep learning to kick off the semester. As the semester unfolds, participants will volunteer to give talks about papers of interest or their own research in the field.

Archives: 2011 | 2012 | 2013 | 2014 | 2015 | 2016 | 2017

  • Organizational Meeting

    When: Fri, September 1, 2017 - 12:00pm
    Where: Kirwan Hall 1311
  • Introduction to Deep Learning

    Speaker: Ilya Kavalerov (UMD)

    When: Fri, September 8, 2017 - 12:00pm
    Where: Kirwan Hall 1311

    Abstract: In this talk, I give an overview of the concepts and applications of deep neural networks. As the title suggests, this talk is introductory and requires no prior knowledge of the subject. Everyone is invited to attend!
  • Introduction to Convolutional Neural Networks

    Speaker: Shujie Kang (UMD)

    When: Fri, September 15, 2017 - 12:00pm
    Where: Kirwan Hall 3206

    Abstract: This is a continuation of last week's talk. I introduce convolutional neural networks.
  • Training a Convolutional Neural Network

    Speaker: Shujie Kang (UMD)

    When: Fri, September 29, 2017 - 12:00pm
    Where: Kirwan Hall 3206

    Abstract: I introduce commonly used algorithms to train a convolutional neural network.
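    As a point of reference, the workhorse among these training algorithms, stochastic gradient descent with momentum, can be sketched in a few lines (a minimal illustration on a toy quadratic loss; names and hyperparameters are our own, not taken from the talk):

```python
def sgd_momentum_step(theta, grad, velocity, lr=0.01, momentum=0.9):
    """One parameter update: accumulate a velocity, then move along it."""
    velocity = momentum * velocity - lr * grad
    return theta + velocity, velocity

# Toy example: minimize f(theta) = (theta - 3)**2, whose gradient is 2*(theta - 3).
theta, velocity = 0.0, 0.0
for _ in range(200):
    theta, velocity = sgd_momentum_step(theta, 2 * (theta - 3), velocity)
```

    When training a network, `grad` would be the backpropagated gradient of the loss on a mini-batch rather than an exact gradient.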
  • A closer look at the ADAM optimizer

    Speaker: Eric Oden (UMD)

    When: Fri, October 6, 2017 - 12:00pm
    Where: Kirwan Hall 3206

    Abstract: I build on Shujie's talk by taking a closer look at the ADAM optimizer she mentioned toward the end of her talk.
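    For readers who want the update rule in front of them: ADAM maintains exponential moving averages of the gradient and its square, with a bias correction for their zero initialization. A minimal scalar sketch in our own notation, following the standard presentation of Kingma and Ba (hyperparameters are illustrative only):

```python
import math

def adam_step(theta, grad, m, v, t, lr=0.1, beta1=0.9, beta2=0.999, eps=1e-8):
    """One ADAM update for a scalar parameter (t counts steps from 1)."""
    m = beta1 * m + (1 - beta1) * grad            # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad * grad     # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                  # bias-corrected estimates
    v_hat = v / (1 - beta2 ** t)
    return theta - lr * m_hat / (math.sqrt(v_hat) + eps), m, v

# Toy example: minimize f(theta) = theta**2 (gradient 2*theta) from theta = 5.
theta, m, v = 5.0, 0.0, 0.0
for t in range(1, 501):
    theta, m, v = adam_step(theta, 2 * theta, m, v, t)
```

    In a real network the same update is applied elementwise to every weight.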
  • Reinforcement learning and Trust Region Policy Optimization

    Speaker: Cheng Jie (UMD)

    When: Fri, October 13, 2017 - 12:00pm
    Where: Kirwan Hall 3206

    Abstract: I give an overview of the mathematical framework of reinforcement learning: the Markov decision process. I will introduce basic policy optimization algorithms used to train reinforcement learning models. Specifically, the talk will explore Trust Region Policy Optimization, a recently developed algorithm widely used in training deep reinforcement learning models.
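    For reference, the constrained objective at the heart of Trust Region Policy Optimization can be written as follows (our own summary of the common presentation by Schulman et al., not the speaker's notation): maximize the surrogate advantage under a KL trust-region constraint on the policy update,

```latex
\max_{\theta} \;
\mathbb{E}_{s,a \sim \pi_{\theta_{\mathrm{old}}}}
\left[ \frac{\pi_\theta(a \mid s)}{\pi_{\theta_{\mathrm{old}}}(a \mid s)}
\, A^{\pi_{\theta_{\mathrm{old}}}}(s,a) \right]
\quad \text{subject to} \quad
\mathbb{E}_{s}\!\left[ D_{\mathrm{KL}}\!\big( \pi_{\theta_{\mathrm{old}}}(\cdot \mid s)
\,\big\|\, \pi_\theta(\cdot \mid s) \big) \right] \le \delta .
```

    Here $A^{\pi_{\theta_{\mathrm{old}}}}$ is the advantage function of the current policy and $\delta$ is the trust-region radius.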
  • Actor-Critic Method

    Speaker: Nathaniel Monsoon (UMD)

    When: Fri, October 20, 2017 - 12:00pm
    Where: Kirwan Hall 3206
  • Analysis of Convergence of Back-Propagation

    Speaker: Andrew Lauziere (UMD)

    When: Fri, October 27, 2017 - 12:00pm
    Where: Kirwan Hall 3206

    Abstract: I will present the paper Efficient BackProp by Yann LeCun et al.
  • Batch Training of Neural Networks

    Speaker: Roozbeh Yousefzadeh (UMD/CS)

    When: Fri, November 10, 2017 - 12:00pm
    Where: Kirwan Hall 3206

    Abstract: I will present two recent papers on batch training of neural networks. The first paper (Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour) focuses on the size of mini-batches and the optimization difficulties that arise in distributed training. The second paper (Train longer, generalize better: closing the generalization gap in large batch training of neural networks) proposes an algorithm called "Ghost Batch Normalization".
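    The idea behind "Ghost Batch Normalization" is to compute batch-norm statistics over small virtual ("ghost") batches even when the actual batch is large. A bare-bones sketch of the normalization step (our own simplification; the real layer also has learnable scale and shift parameters and keeps running statistics for inference):

```python
import numpy as np

def ghost_batch_norm(x, ghost_size=32, eps=1e-5):
    """Normalize each ghost sub-batch of x (shape: batch x features)
    with its own mean and variance, instead of whole-batch statistics."""
    out = np.empty_like(x)
    for start in range(0, x.shape[0], ghost_size):
        chunk = x[start:start + ghost_size]
        mu = chunk.mean(axis=0)
        var = chunk.var(axis=0)
        out[start:start + ghost_size] = (chunk - mu) / np.sqrt(var + eps)
    return out

# A 128-sample batch is normalized as four independent 32-sample ghost batches.
x = np.arange(128 * 4, dtype=float).reshape(128, 4)
y = ghost_batch_norm(x)
```

    Each ghost batch therefore sees the same per-batch noise statistics it would in genuinely small-batch training.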
  • A Proof of Convergence for the Stochastic Gradient Descent Method on Convex Cost Functions

    Speaker: Daniel Mourad (UMD)

    When: Fri, November 17, 2017 - 12:00pm
    Where: Kirwan Hall 3206
  • Optimization methods for discrete and saddle-point problems in machine learning

    Speaker: Tom Goldstein (UMD/CS) - https://www.cs.umd.edu/~tomg/

    When: Fri, December 1, 2017 - 12:00pm
    Where: Kirwan Hall 3206

    Abstract: We'll discuss two recent advances in deep learning: adversarial neural networks and quantized nets. Adversarial nets are a recent approach to building generative models that requires the solution of complex saddle-point problems. We develop a simple prediction method for stabilizing the training of such saddle-point problems. Then we'll discuss quantized networks, which use low-precision weights to compress and accelerate neural networks. We discuss the theory of quantized networks, and when and why they are trainable.
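    The flavor of the prediction idea can be illustrated on the simplest saddle-point problem, min over x, max over y of f(x, y) = x*y, where plain simultaneous gradient descent/ascent spirals away from the saddle at the origin. The sketch below is our own toy construction, not code from the talk: predict the next x by linear extrapolation, and let the y-player respond to the prediction.

```python
def predictive_updates(x, y, x_prev, lr=0.1):
    """One round of updates for min_x max_y f(x, y) = x * y.
    The y-player ascends against a linearly extrapolated ('predicted') x."""
    x_pred = x + (x - x_prev)       # prediction step (extrapolate x's trajectory)
    y_new = y + lr * x_pred         # gradient ascent in y, against the prediction
    x_new = x - lr * y_new          # gradient descent in x
    return x_new, y_new, x

x, y, x_prev = 1.0, 1.0, 1.0
for _ in range(2000):
    x, y, x_prev = predictive_updates(x, y, x_prev)
```

    Without the prediction step (i.e., updating y with `y + lr * x`), the same iteration multiplies the distance to the saddle by sqrt(1 + lr**2) each round and diverges; with it, the iterates contract toward (0, 0).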
  • Mathematics in Machine Learning: Present and Future

    Speaker: Wojtek Czaja (UMD/Math)

    When: Fri, December 8, 2017 - 12:00pm
    Where: Kirwan Hall 3206