ICML 2017 Thoughts

ICML 2017 has just ended. While Sydney is remote for those in Europe and North America, the conference center is a wonderful venue (with good coffee!), and the city is a lot of fun. Everything went smoothly and the organizers did a great job.

You can get a list of papers that I liked from my Twitter feed, so instead I’d like to discuss some broad themes I sensed.

Multitask regularization to mitigate sample complexity in RL. Both in video games and in dialog, it is useful to add extra (auxiliary) tasks in order to accelerate learning (a minimal sketch appears below).

Leveraging knowledge and memory. Our current models are powerful function approximators, but in NLP especially we need to go beyond “the current example” in order to exhibit competence.

Gradient descent as inference. Whether it’s inpainting with a GAN or BLEU score maximization with an RNN, gradient descent is an…
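To make the auxiliary-task idea concrete, here is a minimal sketch (all names, shapes, and hyperparameters are my own placeholders, not from any ICML paper): a shared torso feeds both a policy head (the main RL task) and a reward-prediction head (the auxiliary task), and the two losses are simply summed.

    import torch
    import torch.nn as nn

    class AuxAgent(nn.Module):
        def __init__(self, obs_dim, n_actions, hidden=128):
            super().__init__()
            self.torso = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
            self.policy_head = nn.Linear(hidden, n_actions)  # main RL task
            self.reward_head = nn.Linear(hidden, 1)          # auxiliary task

        def forward(self, obs):
            h = self.torso(obs)
            return self.policy_head(h), self.reward_head(h)

    agent = AuxAgent(obs_dim=16, n_actions=4)
    opt = torch.optim.Adam(agent.parameters(), lr=1e-3)

    # Fake batch: observations, chosen actions, returns, and the auxiliary
    # target (the next reward); a real agent would get these from rollouts.
    obs = torch.randn(32, 16)
    actions = torch.randint(0, 4, (32,))
    returns = torch.randn(32)
    next_reward = torch.randn(32, 1)

    logits, pred_reward = agent(obs)
    logp = torch.log_softmax(logits, dim=-1)[torch.arange(32), actions]
    policy_loss = -(logp * returns).mean()                  # REINFORCE-style
    aux_loss = nn.functional.mse_loss(pred_reward, next_reward)
    loss = policy_loss + 0.5 * aux_loss  # 0.5 is an arbitrary aux weight

    opt.zero_grad()
    loss.backward()
    opt.step()

The auxiliary head changes nothing at test time; it only shapes the shared representation during training, which is where the sample-complexity savings come from.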
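And to illustrate the “gradient descent as inference” theme: a minimal sketch of GAN-style inpainting, where a trained generator is held fixed and gradient descent runs over its latent input to explain the observed pixels. The generator here is a tiny stand-in; only the optimization pattern matters.

    import torch

    # Stand-in for a trained GAN generator; in practice this would be a
    # real, pre-trained network. Its weights are frozen.
    G = torch.nn.Sequential(torch.nn.Linear(8, 64), torch.nn.Tanh())
    for p in G.parameters():
        p.requires_grad_(False)

    target = torch.randn(64)                # corrupted image, flattened
    mask = (torch.rand(64) > 0.5).float()   # 1 = observed pixel, 0 = missing

    z = torch.zeros(8, requires_grad=True)  # the latent code we infer
    opt = torch.optim.Adam([z], lr=0.05)

    for _ in range(200):
        opt.zero_grad()
        recon = G(z)
        # Only observed pixels constrain z; the generator's prior fills
        # in the missing ones.
        loss = ((mask * (recon - target)) ** 2).mean()
        loss.backward()
        opt.step()

    # Keep observed pixels, take the missing ones from the generator.
    inpainted = mask * target + (1 - mask) * G(z).detach()

The same pattern underlies BLEU maximization with an RNN: the model is fixed, and the variable being optimized is the input (or a relaxation of the output) rather than the weights.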