A new paper by Zuckerman alumni scholar Dr. Nadav Cohen, written with his student Noam Razin, settles an open question by proving that the implicit regularization underlying generalization in deep learning cannot be explained by standard norm-based measures. Instead, rank minimization may be a more useful interpretation.
As the authors put it: “Our results suggest that rather than perceiving the implicit regularization via norms, a potentially more useful interpretation is minimization of rank. We demonstrate empirically that this interpretation extends to a certain class of non-linear neural networks and hypothesize that it may be key to explaining generalization in deep learning.”
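The rank-minimization view can be illustrated with a toy experiment of the kind studied in this line of work: running gradient descent on a deep (linear) matrix factorization, fit to a few observed entries of a low-rank matrix, tends to produce a low-rank product when initialized near zero. The sketch below is not taken from the paper; the dimensions, depth, learning rate, and initialization scale are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# Rank-1 ground-truth matrix with unit spectral norm, partially observed
u = rng.standard_normal((n, 1))
u /= np.linalg.norm(u)
target = u @ u.T
mask = rng.random((n, n)) < 0.6  # roughly 60% of entries observed

# Depth-3 linear network ("deep matrix factorization"): W = W3 @ W2 @ W1,
# initialized near zero -- the regime where the implicit low-rank bias appears
W1, W2, W3 = (0.05 * rng.standard_normal((n, n)) for _ in range(3))

lr = 0.2
for _ in range(20000):
    W = W3 @ W2 @ W1
    G = mask * (W - target)  # gradient of the squared loss w.r.t. the product W
    g1 = (W3 @ W2).T @ G     # backpropagate through the matrix product
    g2 = W3.T @ G @ W1.T
    g3 = G @ (W2 @ W1).T
    W1 -= lr * g1
    W2 -= lr * g2
    W3 -= lr * g3

W = W3 @ W2 @ W1
s = np.linalg.svd(W, compute_uv=False)
print("max error on observed entries:", np.abs(mask * (W - target)).max())
# A single dominant singular value indicates the product is numerically rank 1
print("singular values:", np.round(s, 4))
```

The observed entries are fit almost exactly, while the spectrum of the learned product concentrates in its top singular value, i.e., gradient descent selects a (numerically) low-rank completion without any explicit rank penalty.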
The paper, “Implicit Regularization in Deep Learning May Not Be Explainable by Norms,” was posted to arXiv several days ago and is already attracting significant attention. Dr. Cohen is currently an Assistant Professor at Tel Aviv University’s School of Computer Science.
According to Eye on AI, Cohen and Razin’s paper was the most widely shared AI work on social media in recent weeks, with more than 59 shares to date.