About The Newsletter

Every week, there are about 600 new machine learning papers submitted to arXiv. I go through all these submissions and identify 10-20 that I think are especially interesting, practical, or promising.[1] I then write a summary of each one, often with some commentary. I disproportionately focus on papers that:

  • Improve understanding of deep learning, or

  • Show empirical gains on computer vision or NLP tasks

Here, “gains” means more accuracy, more speed, or both. There’s plenty of great work that doesn’t fit this focus, so don’t take it as a knock on a paper if I didn’t summarize it.

Note: I definitely get stuff wrong sometimes. If you spot an error, feel free to comment on the relevant post with a correction.

As for numbers: there are currently over 10,000 subscribers from across industry and academia, including most of the top universities and research labs.

About Me

I’m a research scientist at MosaicML, where I work on speeding up deep learning training. Here’s more info as a paste-able bio:

“Davis Blalock is a research scientist at MosaicML. He completed his PhD at MIT, advised by Professor John Guttag. His primary work is designing high-performance machine learning algorithms. He received his M.S. from MIT and his B.S. from the University of Virginia. He is a Qualcomm Innovation Fellow, NSF Graduate Research Fellow, and Barry M. Goldwater Scholar.”

I’ve had a lot of failures over the years, but also some successes. My favorites so far include:

tl;dr: I make stuff 10x more efficient.


[1] Note that I identify these 10-20 papers based on a funnel of title → abstract → quick skim → more detailed read. I don’t read the full text of 600+ papers each week, despite what people might say on Twitter.

