Physics for Neural Networks

Registration is now open.
Apr 17, 2023 – Apr 19, 2023
PCTS, Jadwin Hall, Room 407


Event Description


1) The workshop includes a poster session. If you want to present a poster, please email the title to [email protected] no later than April 10 at 5 pm. (Click on the link for Poster Session Instructions on this page.)

2) We will be live streaming and recording the talks. The video links will be listed on this webpage approximately three weeks after the workshop. (Click on the Live Stream Viewing link on this page.)

Organizers: Boris Hanin (PU ORFE), Giorgio Cipolloni (PCTS), William Bialek (PU Physics), Francesca Mignacco (PU Physics)

A hallmark of modern machine learning, exemplified by deep learning, is the use of exceptionally large models to fit complex datasets. While the empirical success of the neural network models used in deep learning is undeniable, a predictive formalism for describing and improving them is nascent. On the theoretical side, a promising avenue for making progress is to view neural networks as large collections of (often weakly) interacting parameters. From this perspective, it is natural to study them using the tools of modern physics coming from random matrix theory, statistical mechanics, the 1/n expansion, field theory, and so on. On the experimental side, neural networks present a unique opportunity for impactful empirical investigation in which the microscopic ground truth is known to arbitrary precision. The purpose of this workshop is to bring together a range of scientists, trained in a variety of disciplines related to modern physics, who are interested in developing both the theory and phenomenology of deep learning.
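As a minimal illustration of the random-matrix-theory perspective mentioned above (an assumption for this sketch, not material from the workshop itself): the eigenvalue spectrum of an untrained, randomly initialized weight layer is governed by classical random matrix laws. For an n × n matrix W with i.i.d. Gaussian entries of variance 1/n, the eigenvalues of W Wᵀ concentrate, as n grows, on the Marchenko–Pastur support [0, 4].

```python
import numpy as np

# Sketch: spectrum of a randomly initialized square "weight layer".
# Entries are i.i.d. N(0, 1/n), the standard scaling at initialization.
rng = np.random.default_rng(0)
n = 1000
W = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, n))

# Eigenvalues of the (symmetric, positive semi-definite) matrix W W^T.
eigvals = np.linalg.eigvalsh(W @ W.T)

# For large n, the empirical spectrum fills the Marchenko-Pastur
# interval [0, 4]; the edges are approached up to O(n^(-2/3)) fluctuations.
print(f"smallest eigenvalue: {eigvals.min():.4f}")
print(f"largest eigenvalue:  {eigvals.max():.4f}")
```

Deviations from this baseline spectrum as training proceeds are one concrete way such tools probe what a network has learned.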