This post surveys a variety of contemporary deep meta-learning methods, in which meta-data is manipulated to generate model architectures. Current meta-learning capabilities center on either searching for architectures or learning networks within networks.
Original Post: Taxonomy of Methods for Deep Meta Learning
What is it that distinguishes neural networks that generalize well from those that don’t? A satisfying answer to this question would not only help to make neural networks more interpretable, but it might also lead to more principled and reliable model architecture design.
Original Post: Understanding Deep Learning Requires Re-thinking Generalization
In this article we will focus on basic deep learning using Keras and Theano. We will work through two examples: one using Keras for basic predictive analytics, and the other a simple example of image analysis using VGG.
Original Post: Medical Image Analysis with Deep Learning , Part 3
The roadmap is constructed in accordance with the following four guidelines: from outline to detail; from old to state-of-the-art; from generic to specific areas; focus on state-of-the-art.
Original Post: Deep Learning Papers Reading Roadmap
Deep Image Analogy; Example-Based Synthesis of Stylized Facial Animations; Google releases dataset of 50M vector drawings, open sources Sketch-RNN implementation; New massive medical image dataset coming from Stanford; Everything that Works Works Because it’s Bayesian: Why Deep Nets Generalize?
Original Post: Top /r/MachineLearning Posts, May: Deep Image Analogy; Stylized Facial Animations; Google Open Sources Sketch-RNN
“As I understand it, the chance of having a derivative of zero in each of the thousands of directions is low. Is there some other reason besides this?”
Original Post: Why Does Deep Learning Not Have a Local Minimum?
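The quoted intuition can be made concrete with a toy model (an assumption for illustration, not code from the post): treat each of the d curvature directions at a critical point as an independent fair coin flip between "curves up" and "curves down". A true local minimum requires every direction to curve up, which happens with probability 2^-d, so in the thousands of dimensions typical of deep networks, almost all critical points are saddle points rather than minima.

```python
# Toy model of critical points in d dimensions: each curvature direction
# is independently "up" or "down" with probability 1/2. A local minimum
# needs all d directions to curve up.

def prob_local_minimum(d):
    """Probability that all d independent curvature signs are positive."""
    return 0.5 ** d

for d in (2, 10, 100, 1000):
    print(d, prob_local_minimum(d))
```

Even at d = 100 the probability is about 8e-31; under this (admittedly crude) independence assumption, the bad critical points an optimizer encounters in high dimensions are overwhelmingly saddles, which gradient noise can escape.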
This post outlines an entire 6-part tutorial series on the MXNet deep learning library and its Python API. In-depth and descriptive, this is a great guide for anyone looking to start leveraging this powerful neural network library.
Original Post: An Introduction to the MXNet Python API
Posted by Françoise Beaufays, Principal Scientist, Speech and Keyboard Team, and Michael Riley, Principal Scientist, Speech and Languages Algorithms Team

Most people spend a significant amount of time each day using mobile-device keyboards: composing emails, texting, engaging in social media, and more. Yet, mobile keyboards are still cumbersome to handle. The average user is roughly 35% slower typing on a mobile device than on a physical keyboard. To change that, we recently provided many exciting improvements to Gboard for Android, working towards our vision of creating an intelligent mechanism that enables faster input while offering suggestions and correcting mistakes, in any language you choose.

With the realization that the way a mobile keyboard translates touch inputs into text is similar to how a speech recognition system translates voice inputs into text, we leveraged our experience in Speech Recognition to pursue our vision.…
Original Post: The Machine Intelligence Behind Gboard
In short, you reach different resting places with different SGD algorithms. That is, different SGD variants give you differing convergence rates due to their different strategies, but we do expect that they all end up at the same result!
Original Post: The Two Phases of Gradient Descent in Deep Learning
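The claim above ("different rates, same destination") can be sketched on a simple convex objective. The example below is a hypothetical illustration, not code from the post: it minimizes f(x) = (x - 3)^2 with plain gradient descent and with a momentum variant. The two optimizers take different trajectories but settle at the same minimum x = 3.

```python
# Minimize f(x) = (x - 3)**2 two ways: plain gradient descent and
# gradient descent with momentum. Different update rules, same minimum.

def grad(x):
    """Gradient of f(x) = (x - 3)**2."""
    return 2.0 * (x - 3.0)

def gd(x=0.0, lr=0.1, steps=100):
    """Plain gradient descent."""
    for _ in range(steps):
        x -= lr * grad(x)
    return x

def gd_momentum(x=0.0, lr=0.1, beta=0.5, steps=100):
    """Gradient descent with a momentum (velocity) term."""
    v = 0.0
    for _ in range(steps):
        v = beta * v + grad(x)
        x -= lr * v
    return x

print(round(gd(), 6), round(gd_momentum(), 6))  # 3.0 3.0
```

On this convex bowl both methods provably reach the unique minimum; the interesting question the post raises is whether the same "different paths, same quality of result" behavior carries over to the non-convex loss surfaces of deep networks.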