The top 5 /r/MachineLearning posts of the month of May are:
1. Deep Image Analogy
The thread is a discussion of the paper Visual Attribute Transfer through Deep Image Analogy.
From the abstract:
We propose a new technique for visual attribute transfer across images that may have very different appearance but have perceptually similar semantic structure. By visual attribute transfer, we mean transfer of visual information (such as color, tone, texture, and style) from one image to another. For example, one image could be that of a painting or a sketch while the other is a photo of a real scene, and both depict the same type of scene.
The technique makes use "of 'image analogy' with features extracted from a Deep Convolutional Neural Network for matching." Find code for the project here.
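At its core, the matching step means finding, for each location in one image's deep feature map, the most similar location in another's. A toy sketch of that idea (the function, the cosine-similarity criterion as written, and the random stand-in features are my own simplification; the paper's actual method is coarse-to-fine and bidirectional):

```python
import numpy as np

def nearest_neighbor_field(feat_a, feat_b):
    """For each spatial location in feat_a, find the most similar
    location in feat_b by cosine similarity of the feature vectors.
    Returns an (H, W, 2) array of matched (row, col) coordinates."""
    ha, wa, c = feat_a.shape
    hb, wb, _ = feat_b.shape
    a = feat_a.reshape(-1, c)
    b = feat_b.reshape(-1, c)
    # Normalize so dot products become cosine similarities.
    a = a / (np.linalg.norm(a, axis=1, keepdims=True) + 1e-8)
    b = b / (np.linalg.norm(b, axis=1, keepdims=True) + 1e-8)
    sim = a @ b.T                        # (ha*wa, hb*wb) similarity matrix
    idx = sim.argmax(axis=1)             # best match per location in feat_a
    return np.stack([idx // wb, idx % wb], axis=1).reshape(ha, wa, 2)

# Random feature maps stand in for real CNN activations here.
rng = np.random.default_rng(0)
fa = rng.standard_normal((4, 4, 8))
nnf = nearest_neighbor_field(fa, fa)   # matching a map to itself
```

Matching a feature map against itself should recover the identity mapping, which makes for a quick sanity check of the matcher.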
2. Example-Based Synthesis of Stylized Facial Animations
A very cool demonstration of a not-yet-published approach to style transfer for video. As no paper is available yet, there is ample speculation on the thread as to the exact approach, with guesses suggesting that more is at play than straightforward style transfer. Undoubtedly an interesting demonstration, however.
3. Google releases dataset of 50M vector drawings, open sources Sketch-RNN implementation
Let's get this directly from Google:
Over 15 million players have contributed millions of drawings playing Quick, Draw! These doodles are a unique data set that can help developers train new neural networks, help researchers see patterns in how people around the world draw, and help artists create things we haven’t begun to think of. That’s why we’re open-sourcing them, for anyone to play with.
Some handy links from the discussion thread:
Link to Blog Post announcement
Link to Paper
Link to GitHub Repo of Sketch-RNN code
Link to GitHub Repo of dataset
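The dataset stores each doodle as a sequence of pen offsets — the "stroke-3" format from the Sketch-RNN paper: a `(dx, dy, pen_lift)` triple per point. A minimal sketch of decoding that format into drawable polylines (the function name and the toy drawing are mine, not from the repo):

```python
def strokes_to_lines(seq):
    """Convert a stroke-3 sketch ((dx, dy, pen_lift) per point) into a
    list of absolute-coordinate polylines, one per pen-down stroke."""
    lines, current = [], []
    x = y = 0.0
    for dx, dy, lift in seq:
        x, y = x + dx, y + dy          # offsets accumulate into positions
        current.append((x, y))
        if lift:                       # pen lifted: this stroke ends here
            lines.append(current)
            current = []
    if current:                        # flush a trailing unfinished stroke
        lines.append(current)
    return lines

# A tiny hand-made example: two horizontal strokes of two points each.
sketch = [(0, 0, 0), (10, 0, 1), (0, 10, 0), (10, 0, 1)]
lines = strokes_to_lines(sketch)
```

Each polyline can then be passed straight to a plotting library to render the doodle.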
4. New massive medical image dataset coming from Stanford
The project website gives the description as:
A petabyte-scale, cloud-based, multi-institutional, searchable, open repository of diagnostic imaging studies for developing intelligent image analysis systems.
The Reddit thread discusses some of the apparent lack of clarity surrounding the project, and provides some tempered expectations. This may or may not turn out to be something, but it is worth keeping an eye on, especially given the project's home.
5. Everything that Works Works Because it's Bayesian: Why Deep Nets Generalize?
This is a discussion of the recent and well-travelled post by machine learning researcher Ferenc Huszár. The post is a well-written, non-inflammatory read. The Reddit discussion is (surprisingly) also well-mannered, given the topics covered in the blog post (and given Reddit, generally).
A great read which is short, to the point, and informative. Don't miss the Scooby Doo meme while you're there.
- Top /r/MachineLearning Posts, April: Why Momentum Really Works; Machine Learning with Scikit-Learn & TensorFlow
- Top /r/MachineLearning Posts, March: A Super Harsh Guide to Machine Learning; Is it Gaggle or Koogle?!?
- Top /r/MachineLearning Posts, February: Oxford Deep NLP Course; Data Visualization for Scikit-learn Results