Monash: Academic Opportunities in Information Technology

Monash is currently recruiting in the areas of Data Science (including artificial intelligence, machine learning, modelling, optimisation and data visualisation), Computer Systems (including networks, cloud computing, internet of things, software engineering and cybersecurity), and IT for Energy (computer scientists working on applications in energy).
Original Post: Monash: Academic Opportunities in Information Technology

A Non-comprehensive List of Awesome Things Other People Did in 2016

Editor’s note: For the last few years I have made a list of awesome things that other people did (2015, 2014, 2013). As in previous years, I’m making this list right off the top of my head. If you know of other awesome things, make your own list or add them in the comments! I have also avoided talking about stuff I worked on or that people here at Hopkins are doing, because this post is supposed to be about other people’s awesome stuff. I write this post because a blog often feels like a place to complain, but we started Simply Stats as a place to be pumped up about the stuff people were doing with data. Thomas Lin Pedersen created the tweenr package for interpolating graphs in animations. Check out this awesome logo he made with it.…
Original Post: A Non-comprehensive List of Awesome Things Other People Did in 2016
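The core idea behind a tweening package like tweenr is interpolating between two data "states" to produce in-between animation frames. A minimal sketch of that idea in Python (my own toy, not tweenr's API, which is an R package):

```python
# Toy tweening: linearly interpolate between two data states to
# generate n_frames in-between frames for an animation.
def tween(start, end, n_frames):
    """Return n_frames states linearly interpolated from start to end."""
    frames = []
    for i in range(n_frames):
        t = i / (n_frames - 1)  # animation progress in [0, 1]
        frames.append([s + t * (e - s) for s, e in zip(start, end)])
    return frames

frames = tween([0.0, 10.0], [5.0, 0.0], 5)
print(frames[0])   # [0.0, 10.0]  (first frame equals the start state)
print(frames[-1])  # [5.0, 0.0]   (last frame equals the end state)
```

Real tweening libraries add easing functions (quadratic, cubic, bounce, etc.) in place of the linear `t` above.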

arXiv Paper Spotlight: Sampled Image Tagging and Retrieval Methods on User Generated Content

What is the feasibility of image tagging with user generated content in the wild? A recent paper by Karl Ni (Lab41, In-Q-Tel), Kyle Zaragoza (Lab41, In-Q-Tel), Charles Foster (Stanford University), Carmen Carrano (Lawrence Livermore National Laboratory), Barry Chen (Lawrence Livermore National Laboratory), Yonas Tesfaye (Lab41, In-Q-Tel), and Alex Gude (Lab41, In-Q-Tel), titled “Sampled Image Tagging and Retrieval Methods on User Generated Content,” attempts to address this issue. The research starts with the premise that carefully-curated image datasets are not ideal for proposed automated approaches to tagging and retrieving images, because small training label sets limit the number of keywords that can be used. Extending curated datasets requires supervision, where developed algorithms would need to be tolerant of inevitable labeling errors. Conversely, open source imagery datasets from Google Photos or Flickr that are created with user…
Original Post: arXiv Paper Spotlight: Sampled Image Tagging and Retrieval Methods on User Generated Content

arXiv Paper Spotlight: Why Does Deep and Cheap Learning Work So Well?

The recent paper at hand approaches explaining deep learning from a different perspective, that of physics, and discusses the role of “cheap learning” (parameter reduction) and how it relates back to this innovative perspective. Why does deep learning work so well? And… cheap learning? A recent paper by Henry W. Lin (Harvard) and Max Tegmark (MIT), titled “Why does deep and cheap learning work so well?” looks to examine from a different perspective what it is about deep learning that makes it work so well. It also introduces (at least, to me) the term “cheap learning.” First off, to be clear, “cheap learning” does not refer to using a low end GPU; instead, the following explains its relationship to parameter reduction: [A]lthough well-known mathematical theorems guarantee that neural networks can approximate arbitrary functions well, the class of functions of practical…
Original Post: arXiv Paper Spotlight: Why Does Deep and Cheap Learning Work So Well?
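One toy illustration of parameter reduction (my own, not an example from the paper): factoring a single wide linear map through a narrow bottleneck cuts the parameter count dramatically while still covering many functions of practical interest.

```python
# "Cheap learning" in miniature: count parameters of one wide dense
# layer vs. the same map factored through a narrow bottleneck layer.
def dense_params(n_in, n_out):
    return n_in * n_out + n_out  # weight matrix plus biases

direct = dense_params(1000, 1000)                           # 1000 -> 1000
factored = dense_params(1000, 30) + dense_params(30, 1000)  # 1000 -> 30 -> 1000

print(direct)    # 1001000
print(factored)  # 61030 -- roughly 16x fewer parameters
```

The factored network can only represent low-rank maps, but as Lin and Tegmark argue, the functions we care about in practice tend to live in such restricted classes.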

arXiv Paper Spotlight: Automated Inference on Criminality Using Face Images

This recent paper addresses the use of still facial images in an attempt to differentiate criminals from non-criminals, doing so with the help of four different classifiers. Results are as promising as they are unsettling. Are the faces of a society’s criminals significantly different than those of the non-criminals? A recent paper by Xiaolin Wu (McMaster University, Shanghai Jiao Tong University) and Xi Zhang (Shanghai Jiao Tong University), titled “Automated Inference on Criminality using Face Images,” explores this very idea. The research is based on the study of still images of the faces of criminals and non-criminals, and uses four classification techniques to attempt the discrimination: logistic regression, K-Nearest Neighbors, Support Vector Machines, and Convolutional Neural Networks. The study controls for race, gender, age, and facial expressions, and “nearly half” of the faces in the dataset were of…
Original Post: arXiv Paper Spotlight: Automated Inference on Criminality Using Face Images
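As a reminder of how simple some of the compared baselines are, here is a from-scratch sketch of one of the four techniques, K-Nearest Neighbors, on made-up 2-D points (not the paper's data or features):

```python
import numpy as np

# Minimal K-Nearest Neighbors classifier: majority vote among the
# k training points closest to the query point.
def knn_predict(X_train, y_train, x, k=3):
    dists = np.linalg.norm(X_train - x, axis=1)  # distance to every training point
    nearest = y_train[np.argsort(dists)[:k]]     # labels of the k closest points
    return int(np.bincount(nearest).argmax())    # majority vote

X = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]])
y = np.array([0, 0, 0, 1, 1, 1])
print(knn_predict(X, y, np.array([0.2, 0.2])))  # 0
print(knn_predict(X, y, np.array([5.5, 5.5])))  # 1
```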

arXiv Paper Spotlight: Stealing Machine Learning Models via Prediction APIs

Even when kept confidential, machine learning models with public-facing APIs are vulnerable to model extraction attacks, which attempt to “steal the ingredients” and duplicate functionality. The paper at hand investigates. In the era of prediction using Big Data, algorithms are the secret sauce. But just how secret can the ingredients be when models are opened up via API? A recent…
Original Post: arXiv Paper Spotlight: Stealing Machine Learning Models via Prediction APIs
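The extraction idea can be sketched in a few lines (my own toy, not the paper's attack): the "victim" here is assumed to be a secret linear model behind an API; the attacker sees only predictions, issues queries, and fits a look-alike surrogate.

```python
import numpy as np

rng = np.random.default_rng(0)
secret_w = np.array([2.0, -3.0, 0.5])  # hidden model parameters

def victim_api(X):
    """The attacker can call this but never sees secret_w directly."""
    return X @ secret_w

# Attack: choose query inputs, collect the API's answers, and fit a
# surrogate model to the (query, answer) pairs by least squares.
queries = rng.normal(size=(100, 3))
answers = victim_api(queries)
stolen_w, *_ = np.linalg.lstsq(queries, answers, rcond=None)

print(np.allclose(stolen_w, secret_w))  # True: parameters recovered
```

For an exactly linear victim, a handful of queries suffices; the paper studies realistic targets (logistic regression, trees, neural networks) where extraction is harder but still feasible.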

Learn Data Science for Excellence and not just for the Exams.

Are you currently pursuing your masters in Data Science? Overwhelmed with buzzwords and information? Don’t know where and how to start your study? Then start with this article and the starter kit it provides, but learn for excellence and not just for the exams. By Prasad Pore, Business Analytics Consultant. Dear future Masters of Data Science, Many of you might be…
Original Post: Learn Data Science for Excellence and not just for the Exams.

Deep Learning Reading Group: Deep Residual Learning for Image Recognition

Published in 2015, today’s paper offers a new architecture for Convolutional Networks, one which has since become a staple in neural network implementation. Read all about it here. By Alex Gude, Lab41. Today’s paper offers a new architecture for Convolutional Networks. It was written by He, Zhang, Ren, and Sun from Microsoft Research. I’ll warn you before we start: this paper…
Original Post: Deep Learning Reading Group: Deep Residual Learning for Image Recognition
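The paper's central trick is the residual (skip) connection: a block learns a residual function F(x) and outputs F(x) + x. A minimal NumPy sketch of that idea (a simplified two-layer block, not the paper's full architecture):

```python
import numpy as np

# Residual block: the output is the learned residual plus the input
# itself, carried around the block by a skip connection.
def residual_block(x, W1, W2):
    h = np.maximum(0, x @ W1)  # first layer with ReLU activation
    return x + h @ W2          # skip connection adds the input back

x = np.ones(4)
W1 = np.zeros((4, 4))  # if the block learns nothing (F(x) = 0) ...
W2 = np.zeros((4, 4))
print(residual_block(x, W1, W2))  # ... the block is the identity: [1. 1. 1. 1.]
```

This is why very deep ResNets train well: each block only needs to learn a perturbation of the identity, so gradients flow through the skip paths even when the residual weights are near zero.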

9 Key Deep Learning Papers, Explained

If you are interested in understanding the current state of deep learning, this post outlines and thoroughly summarizes 9 of the most influential contemporary papers in the field. By Adit Deshpande, UCLA. Introduction  In this post, we’ll go into summarizing a lot of the new and important developments in the field of computer vision and convolutional neural networks. We’ll look…
Original Post: 9 Key Deep Learning Papers, Explained

Deep Learning Reading Group: Deep Compression

A concise overview of a paper covering three methods of compressing a neural network in order to reduce the size of the network on disk, improve performance, and decrease run time. By Alex Gude, Lab41. The next paper from our reading group is by Song Han, Huizi Mao, and William J. Dally. It won the best paper award at ICLR 2016.…
Original Post: Deep Learning Reading Group: Deep Compression
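The first two of the paper's three stages, magnitude pruning and weight-sharing quantization, can be sketched on a made-up weight matrix (the third stage, Huffman coding of the quantized indices, is omitted here; the threshold and codebook below are illustrative choices, not the paper's):

```python
import numpy as np

W = np.array([[0.9, -0.05, 0.02],
              [-1.1, 0.8, 0.01]])

# Stage 1: magnitude pruning -- zero out weights below a threshold.
pruned = np.where(np.abs(W) < 0.1, 0.0, W)

# Stage 2: weight sharing -- snap each surviving weight to the
# nearest entry of a small shared codebook.
codebook = np.array([-1.0, 0.0, 1.0])
quantized = codebook[np.abs(pruned[..., None] - codebook).argmin(-1)]

print(pruned)     # small weights removed
print(quantized)  # remaining weights share 3 distinct values
```

After these two stages, the network stores only sparse codebook indices instead of dense floats, which is what makes the final Huffman-coding stage so effective.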