Attention: More Musings

The attention model I posed last post is still reasonable, but the comparison model is not. (These revelations are the fallout of a fun conversation between Nikos, Sham Kakade, and myself. Sham recently took a faculty position at the University of Washington, which is my neck of the woods.) As a reminder, the attention model is a binary classifier which takes…
Original post: Attention: More Musings
Source: Machined Learnings

Attention: Can we formalize it?

In statistics the bias-variance tradeoff is a core concept. Roughly speaking, bias is how well the best hypothesis in your hypothesis class would perform in reality, whereas variance is how much performance degradation is introduced from having finite training data. Last century both data and compute were relatively scarce, so models that had high bias but low variance (and low computational…
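The tradeoff described in the excerpt can be made concrete with a small simulation. The sketch below is my own illustration, not from the original post: it refits polynomials of low and high degree on many resampled noisy training sets, then estimates squared bias and variance at a grid of test points. A low-degree fit underfits (high bias, low variance), while a high-degree fit overfits (low bias, high variance).

```python
import numpy as np

rng = np.random.default_rng(0)

def true_f(x):
    return np.sin(2 * np.pi * x)

def bias_variance(degree, n_train, x_test, trials=200, noise=0.3):
    """Estimate squared bias and variance of a degree-`degree` polynomial
    fit at `x_test`, averaged over many resampled training sets."""
    preds = np.empty((trials, len(x_test)))
    for t in range(trials):
        x = rng.uniform(0, 1, n_train)
        y = true_f(x) + rng.normal(0, noise, n_train)
        coef = np.polyfit(x, y, degree)
        preds[t] = np.polyval(coef, x_test)
    mean_pred = preds.mean(axis=0)
    bias2 = np.mean((mean_pred - true_f(x_test)) ** 2)
    variance = np.mean(preds.var(axis=0))
    return bias2, variance

x_test = np.linspace(0.1, 0.9, 50)
b_low, v_low = bias_variance(degree=1, n_train=20, x_test=x_test)
b_high, v_high = bias_variance(degree=9, n_train=20, x_test=x_test)
# Expect: the linear model has the larger bias, the degree-9 model
# the larger variance.
```

The "last century" regime the post alludes to corresponds to preferring the degree-1 column here: with little data, the variance term dominates, so a biased but stable model wins.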

NIPS 2015 Review

NIPS 2015 was bigger than ever, literally: at circa 3700 attendees this was roughly twice as many attendees as last year, which in turn was roughly twice as many as the previous year. This is clearly unsustainable, but given the frenzied level of vendor and recruiting activities, perhaps there is room to grow. The main conference is single track, however,…

Sample Variance Penalization

Most of the time, supervised machine learning is done by optimizing the average loss on the training set, i.e. empirical risk minimization, perhaps with a (usually not data-dependent) regularization term added in. However, there was a nice paper a couple of years back by Maurer and Pontil introducing Sample Variance Penalization. The basic idea is to optimize a combination of…
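The basic idea can be sketched in a few lines. Below is a minimal, assumption-laden illustration in the spirit of the Maurer and Pontil objective (the function name and the penalty weight `lam` are my own choices, not the paper's notation): on top of the empirical mean loss, penalize the square root of the sample variance divided by n, so that of two hypotheses with similar average loss, the one with steadier per-example losses is preferred.

```python
import numpy as np

def svp_objective(losses, lam=1.0):
    """Sample-variance-penalized objective: empirical mean loss plus a
    penalty proportional to sqrt(sample variance / n)."""
    losses = np.asarray(losses, dtype=float)
    n = len(losses)
    mean = losses.mean()
    var = losses.var(ddof=1)  # unbiased sample variance
    return mean + lam * np.sqrt(var / n)

# A hypothesis with constant per-example loss incurs no penalty ...
steady = np.full(100, 0.5)
# ... while one with highly variable losses pays for its variance.
rng = np.random.default_rng(0)
spiky = rng.choice([0.0, 1.0], size=100)

print(svp_objective(steady), svp_objective(spiky))
```

In practice the paper treats this as the quantity to *optimize* over hypotheses, not merely to evaluate; directly minimizing it is harder than empirical risk minimization because the variance term couples the training examples.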

KDD Cup 2016 CFP

The KDD Cup is soliciting ideas for their next competition. Things have gotten tricky for the KDD Cup, because CJ’s class keeps winning. Essentially we have learned that lots of feature engineering and large ensembles do well in supervised learning tasks. But really CJ has done us a favor by directly demonstrating that certain types of supervised learning are extremely…

ECML-PKDD 2015 Review

ECML-PKDD was a delight this year. Porto is definitely on the short list of the best European cities in which to have a conference. The organizers did a wonderful job injecting local charm into the schedule, e.g., the banquet at Taylor’s was a delight. It’s a wine city, and fittingly wine was served throughout the conference. During the day I…

LearningSys NIPS Workshop CFP

CISL is the research group in which I work at Microsoft. The team brings together systems and machine learning experts, with the vision of having these two disciplines inform each other. This is also the vision for the LearningSys workshop, which was accepted for NIPS 2015, and is co-organized by Markus Weimer from CISL. If this sounds like your cup of…

America needs more H1B visas, but (probably) won't get them

The current US political climate is increasingly anti-immigration, including high-skilled immigration. This not only makes much-needed reforms of the H1B visa system increasingly unlikely, but suggests the program might be considerably scaled back. Unfortunately, I’ve been dealing with H1B-induced annoyances my entire career so far, and it looks set to continue. The latest: my attempt to hire an internal transfer at…

Paper Reviews vs. Code Reviews

Because I’m experiencing the NIPS submission process right now, the contrast with the ICLR submission process is salient. The NIPS submission process is a more traditional process, in which first (anonymous) submissions are sent to (anonymous) reviewers who provide feedback, and then authors have a chance to respond to the feedback. The ICLR submission process is more fluid: non-anonymous submissions…

Extreme Classification Code Release

The extreme classification workshop at ICML 2015 this year was a blast. We started strong, with Manik Varma demonstrating how to run circles around other learning algorithms using a commodity laptop; and we finished strong, with Alexandru Niculescu delivering the elusive combination of statistical and computational benefit via “one-against-other-reasonable-choices” inference. Check out the entire program! Regarding upcoming events, ECML 2015 will…