Winning deep learning one post at a time.

Analytic Interpretability is a team of scientists scattered all over the world trying to understand the mysteries of deep learning. As Jamie put it, we're all exploring a big maze, looking for the exit. Most people in our field are wandering around rather unproductively (or are carefully mapping regions of the maze we know don't contain the exit). A few of us have been earnestly exploring the maze for several years now and have good ideas for promising places to look next. It makes sense to develop a collective map of the regions we've explored, flag the crucial splits, and start saying "you go left, I'll go right, report back."

The team is made up of PhD students, postdocs, and professors: