By dropping ‘Hadoop’ from its name, @strataconf 2018 in San Jose signaled an emphasis on machine learning, cloud, streaming, and real-time applications.
Original Post: Key Takeaways from the Strata San Jose 2018
By the end of this article, you should have an idea of how these questions are answered and be able to test yourself on simple examples.
Original Post: Beginners Ask “How Many Hidden Layers/Neurons to Use in Artificial Neural Networks?”
This post is a collection of fantastic notes on the fast.ai deep learning part 1 MOOC, freely available online, as written and shared by a student. These notes are a valuable learning resource, either as a supplement to the courseware or on their own.
Original Post: fast.ai Deep Learning Part 1 Complete Course Notes
Join Nvidia for an on-demand webinar to learn how to tackle the challenges of scaling and building complex deep learning systems.
Original Post: Deep Learning and Challenges of Scale Webinar
This post is a distilled collection of conversations, messages, and debates on how to optimize deep models. If you have tricks you’ve found impactful, please share them in the comments below!
Original Post: Deep Learning Tips and Tricks
In this post, traditional and deep learning models for text classification are thoroughly investigated, including a discussion of both recurrent and convolutional neural networks.
Original Post: Overview and benchmark of traditional and deep learning models in text classification
Most deep learning frameworks currently focus on giving a best estimate as defined by a loss function. Occasionally, something beyond a point estimate is required to make a decision, and this is where a distribution is useful. This article focuses purely on inferring quantiles.
Original Post: Deep Quantile Regression
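The key ingredient in quantile regression is the pinball (quantile) loss, which penalizes under- and over-prediction asymmetrically. As a rough illustration (a generic NumPy sketch, not code from the linked article):

```python
import numpy as np

def pinball_loss(y_true, y_pred, q):
    """Quantile (pinball) loss: its minimizer is the q-th
    conditional quantile of the target distribution."""
    e = y_true - y_pred
    return np.mean(np.maximum(q * e, (q - 1) * e))

# For q = 0.9, predicting too low costs far more than predicting too high.
y = np.array([1.0, 2.0, 3.0])
under = pinball_loss(y, y - 1.0, 0.9)  # predictions 1 unit too low
over = pinball_loss(y, y + 1.0, 0.9)   # predictions 1 unit too high
```

Here `under` evaluates to 0.9 and `over` to 0.1, which is why minimizing this loss pushes the model toward the upper quantile rather than the mean.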
Check out this collection of 30 ML, DL, NLP & AI resources for beginners, starting from zero and slowly progressing to the point that readers should have an idea of where to go next.
Original Post: 30 Free Resources for Machine Learning, Deep Learning, NLP & AI
A personal account of why 2018 is going to be a fun year for machine learning engineers.
Original Post: What is it like to be a machine learning engineer in 2018?
In this blog post, I discuss the issues that arise from poor initialization of weight matrices and ways to mitigate them. Before that, let’s cover some basics and the notation we will use going forward.
Original Post: Deep Learning Best Practices – Weight Initialization
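Two initialization schemes commonly discussed in this context are Xavier/Glorot and He initialization, which scale the weight variance to the layer's fan-in so activations neither vanish nor explode. A minimal NumPy sketch (illustrative only, not the linked post's code):

```python
import numpy as np

rng = np.random.default_rng(0)

def xavier_init(fan_in, fan_out):
    """Glorot/Xavier: variance 2 / (fan_in + fan_out), suited to tanh/sigmoid."""
    std = np.sqrt(2.0 / (fan_in + fan_out))
    return rng.normal(0.0, std, size=(fan_in, fan_out))

def he_init(fan_in, fan_out):
    """He: variance 2 / fan_in, suited to ReLU layers."""
    std = np.sqrt(2.0 / fan_in)
    return rng.normal(0.0, std, size=(fan_in, fan_out))

# A 512 -> 256 ReLU layer initialized with He scaling.
W = he_init(512, 256)
```

With He scaling, the empirical standard deviation of `W` lands near `sqrt(2/512) = 0.0625`, keeping the variance of pre-activations roughly constant across layers.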