
Reading Group


List of Seminal Papers

General Theory

  • On the difficulty of training recurrent neural networks
  • Theoretical insights into the optimization landscape of over-parameterized shallow neural networks
  • The Landscape of Deep Learning Algorithms
  • The loss surface of deep and wide neural networks
  • The loss surfaces of multilayer networks

Spectral Decompositions in DNNs

  • Efficient Orthogonal Parametrisation of Recurrent Neural Networks Using Householder Reflections (see the sketch after this list)
  • Low-Rank Matrix Factorization for Deep Neural Network Training with High-Dimensional Output Targets
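
A minimal NumPy sketch of the Householder idea from the first paper above: a reflection H = I - 2 v v^T / ||v||^2 is orthogonal, so a product of such reflections parametrises an orthogonal recurrent weight matrix with no explicit re-orthogonalisation. The 4-dimensional example and the two-reflection product are illustrative choices, not taken from the paper.

```python
import numpy as np

# A Householder reflection H = I - 2 v v^T / ||v||^2 is orthogonal,
# so composing several reflections yields an orthogonal matrix that
# stays orthogonal under any parametrisation of the vectors v.
def householder(v):
    v = v / np.linalg.norm(v)
    return np.eye(len(v)) - 2.0 * np.outer(v, v)

rng = np.random.default_rng(0)
H = householder(rng.standard_normal(4)) @ householder(rng.standard_normal(4))
print(np.allclose(H @ H.T, np.eye(4)))  # True: the product is orthogonal
```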

Optimization

  • Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift (see the sketch after this list)
  • Self-Normalizing Neural Networks
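
A minimal sketch of the batch-norm forward pass (training mode) from the Ioffe & Szegedy paper above. The fixed gamma/beta values and the 32×8 input are illustrative assumptions, and the running statistics used at inference time are omitted.

```python
import numpy as np

# Batch norm, training mode: normalise each feature over the batch,
# then rescale and shift with the learnable parameters gamma and beta
# (held fixed here purely for illustration).
def batch_norm(x, gamma, beta, eps=1e-5):
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

x = np.random.default_rng(1).standard_normal((32, 8))  # batch of 32, 8 features
y = batch_norm(x, gamma=np.ones(8), beta=np.zeros(8))
print(y.mean(axis=0).round(6))  # ~0 per feature
print(y.std(axis=0).round(3))   # ~1 per feature
```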

Reading List (Courses)

  • Statistical Learning Theory - CMU
  • Statistical Learning Theory - Berkeley (Bartlett)
  • Multi-Armed Bandits (see the ε-greedy sketch after this list)
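
As a companion to the bandit reading, a minimal ε-greedy sketch on Bernoulli arms: with probability ε pull a random arm, otherwise exploit the best empirical mean. The arm means, horizon, and ε = 0.1 are illustrative assumptions, not anything prescribed by the course material.

```python
import numpy as np

# Epsilon-greedy on a Bernoulli multi-armed bandit.
def eps_greedy(true_means, steps=5000, eps=0.1, seed=0):
    rng = np.random.default_rng(seed)
    k = len(true_means)
    counts = np.zeros(k)   # pulls per arm
    values = np.zeros(k)   # running empirical mean reward per arm
    total = 0.0
    for _ in range(steps):
        # Explore a random arm with probability eps, else exploit.
        arm = rng.integers(k) if rng.random() < eps else int(values.argmax())
        reward = float(rng.random() < true_means[arm])
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
        total += reward
    return total / steps

# Average reward approaches the best arm's mean (0.8), minus the
# fixed exploration cost paid by the constant eps.
print(eps_greedy([0.2, 0.5, 0.8]))
```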
