About The Newsletter

Every week, about 600 new machine learning papers are submitted to arXiv. I go through all of these submissions and identify 10-20 that I think are especially interesting, practical, or promising. I then write a summary of each one, often with some commentary. I disproportionately focus on papers that:

  • Improve understanding of deep learning, or

  • Show empirical gains on computer vision or NLP tasks

Here, “gains” means higher accuracy, greater speed, or both. There’s plenty of great work that doesn’t fit this focus, so don’t take it as a knock on a paper if I didn’t summarize it.

Note: I definitely get stuff wrong sometimes. Feel free to reach out on Twitter to let me know, ideally on the thread for a given paper if there is one.

As for numbers: there are currently over 1600 subscribers from across industry and academia, including most of the top universities and research labs.

About Me

I’m a research scientist at MosaicML, where I work on speeding up deep learning training. Here’s more info as a paste-able bio:

“Davis Blalock is a research scientist at MosaicML. He completed his PhD at MIT, advised by Professor John Guttag. His primary work is designing high-performance machine learning algorithms. He received his M.S. from MIT and his B.S. from the University of Virginia. He is a Qualcomm Innovation Fellow, NSF Graduate Research Fellow, and Barry M. Goldwater Scholar.”

I’ve had a lot of failures over the years, but also some successes. My favorite successes so far include:

tl;dr: I make stuff 10x more efficient.

Subscribe in one click: