Five Takeaways from ODSC East 2018


The last four days in Boston have been nothing but attending talks and meeting great people. I was exposed to a variety of interesting topics, from data science and deep learning applications in healthcare and other fields to technical discussions and training sessions at different levels. The bottom line is that ODSC definitely exceeded my expectations. Here I have compiled some resources that you can use, and I also want to share some of my takeaways.

Keywords: Generative Adversarial Networks, Transfer Learning, Deep Learning, Fake News Detection, TensorFlow, Kubernetes, Blockchain

Network analysis

Github: https://github.com/ericmjl/Network-Analysis-Made-Simple
How-to: Scroll down to the README file, find the Binder section and click the launch binder badge. This way you can run the notebooks without having to download anything. You need to understand basic Python to do the exercises.
This was the most fun part of ODSC for me, not only because it was a well-designed course for getting started with the topic, and not only because Eric Ma, the presenter, was such a good teacher and a wonderful guy to talk with, but also because the next day we discussed at length how to apply machine learning on top of graphs, using paper plates:

Traditional machine learning vs. machine learning on graphs vs. link prediction using machine learning methods.
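
To make the link-prediction idea concrete, here is a toy sketch using networkx. This is my own example, not material from the tutorial: it scores pairs of unconnected nodes by the Jaccard similarity of their neighborhoods and proposes the top-scoring pairs as likely missing links.

```python
# Toy link prediction with networkx (my own example, not Eric Ma's material).
import networkx as nx

G = nx.karate_club_graph()

# jaccard_coefficient scores all non-edges by default: the overlap of two
# nodes' neighborhoods divided by their union.
scores = nx.jaccard_coefficient(G)
top = sorted(scores, key=lambda triple: triple[2], reverse=True)[:5]
for u, v, score in top:
    print(f"predicted link {u}-{v} (Jaccard {score:.2f})")
```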

Another big takeaway is that he couldn't stress enough the importance of writing unit tests for data science work, which you all don't do (just kidding, but I definitely don't).
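
For instance, a minimal pytest-style test for a data-cleaning step might look like this (the cleaning rule here is a made-up example, not from his materials):

```python
# A tiny example of unit-testing a data step, assuming pandas and pytest.
import pandas as pd

def clean_ages(df: pd.DataFrame) -> pd.DataFrame:
    """Drop rows with missing or impossible ages (a made-up sanity rule)."""
    return df[df["age"].between(0, 120)]

def test_clean_ages_removes_bad_rows():
    df = pd.DataFrame({"age": [25, -3, 200, None, 40]})
    cleaned = clean_ages(df)
    # Only the two plausible ages should survive.
    assert cleaned["age"].between(0, 120).all()
    assert len(cleaned) == 2
```
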
The tutorial on Github includes the students' notebooks for the exercises and the instructor's notebooks with the answers.

Generative Adversarial Networks

Github: https://github.com/dansbecker/odsc_2018
How-to: Download GANs.pdf and follow instructions.
This talk was given by Dan Becker, PhD, head of Kaggle Learn. He went through the basic concepts of GANs and the techniques for building generator and discriminator networks using Keras and the functional API in TensorFlow. Some interesting examples were shown during the presentation, such as transforming a video of a horse into a video of a zebra, and generating realistic images from simple sketches.
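
To give a flavor of what that looks like, here is a bare-bones GAN skeleton using the Keras functional API; the layer sizes and shapes are illustrative assumptions, not the tutorial's actual architecture.

```python
# Minimal GAN skeleton with the Keras functional API (illustrative only).
from tensorflow.keras.layers import Dense, Input, LeakyReLU
from tensorflow.keras.models import Model

latent_dim = 32

# Generator: noise vector -> fake sample (here a flat 784-dim "image").
z = Input(shape=(latent_dim,))
g = Dense(128)(z)
g = LeakyReLU(0.2)(g)
fake = Dense(784, activation="tanh")(g)
generator = Model(z, fake)

# Discriminator: sample -> probability that it is real.
x = Input(shape=(784,))
d = Dense(128)(x)
d = LeakyReLU(0.2)(d)
p = Dense(1, activation="sigmoid")(d)
discriminator = Model(x, p)
discriminator.compile(optimizer="adam", loss="binary_crossentropy")

# Combined model: freeze the discriminator and train the generator to
# make it label generated samples as "real".
discriminator.trainable = False
gan = Model(z, discriminator(generator(z)))
gan.compile(optimizer="adam", loss="binary_crossentropy")
```
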
And in case you haven't heard of it, Kaggle has a service called Kaggle Kernels, which lets people run notebooks on Kaggle's infrastructure for free (including GPU support) and is super useful for playing around with the tutorials. You can find the links to the tutorial kernels in the PDF file.
Extra code is available at https://github.com/Kaggle/learntools/tree/master/learntools/gans if you want to modify the implementation of the generator and discriminator in the tutorial, such as by adding dropout layers.

Deep learning for detecting fake news

The first and foremost question that should be asked is: why is a data scientist at Uber spending so much effort on this kind of stuff? Anyway, the talk raised an interesting point: identifying fake news is the wrong problem to solve, since fact-checking is just hard for machines to do. Instead, the right problem to solve is to classify journalism vs. not journalism, sensationalism vs. objectivism, and so on. Whether you like the argument or not, it does provide a viable way to apply existing natural language deep learning models to these kinds of problems. In short, they use (non-naive) doc2vec and LSTM models to extract features from news articles, and build classifiers on top of them to categorize an article as journalism or not journalism.
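
As a rough illustration of the doc2vec half of that pipeline (the LSTM part is omitted, and the articles and labels below are placeholders I made up), one could do something like this with gensim and scikit-learn:

```python
# Sketch of doc2vec features feeding a simple classifier (toy data).
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.linear_model import LogisticRegression

articles = [
    ("the city council voted on the budget after a public hearing", 1),
    ("you will not believe what this celebrity did next", 0),
]  # 1 = journalism, 0 = not journalism (made-up labels)

docs = [TaggedDocument(text.split(), [i]) for i, (text, _) in enumerate(articles)]
d2v = Doc2Vec(docs, vector_size=50, min_count=1, epochs=40)

# Embed each article, then train a classifier on top of the vectors.
X = [d2v.infer_vector(text.split()) for text, _ in articles]
y = [label for _, label in articles]
clf = LogisticRegression().fit(X, y)

new = "shocking secret they don't want you to know"
print(clf.predict([d2v.infer_vector(new.split())]))
```
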
They also created an application at www.fakerfact.com, which you can use to test whether an article sounds like journalism. There are many other classifiers being tested, but those are only exposed to their developers at this point.

Docker, Kubernetes and distributed TensorFlow

Github: https://github.com/Azure/kubeflow-labs
If you are interested in container technology such as Docker, and how one can use it to enable machine learning training at scale, this will be helpful to you. Kubernetes is a container orchestrator for distributed applications. I know it sounds complicated, but basically it creates a cluster of nodes running containers that contain your code to train a model. There were two sessions on using Kubernetes to distribute TensorFlow training. The first presenter was the founder of PipelineAI, a Silicon Valley start-up guy who started by offending all the women in the room by saying east coast girls look better than west coast ones (and he admitted he has a problem with self-control). The second presenter, from Microsoft, spoke in a better manner, and the git repo above is his. It is Microsoft, though, so all the examples are tied to Azure.
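
To make this less abstract: Kubeflow's TFJob injects a TF_CONFIG environment variable into each pod describing the cluster and that pod's role, and the training code reads it to join the cluster. Here is a minimal sketch in the TF 1.x style, simplified from what the labs actually do:

```python
# Sketch: how a distributed-TensorFlow worker bootstraps itself on
# Kubernetes (TF 1.x style; simplified, not the labs' actual code).
import json
import os

import tensorflow as tf

# TFJob sets TF_CONFIG in each container; it describes the cluster layout
# and which role (worker, ps, chief) this container plays.
tf_config = json.loads(os.environ.get("TF_CONFIG", "{}"))
cluster = tf.train.ClusterSpec(tf_config.get("cluster", {}))
task = tf_config.get("task", {"type": "worker", "index": 0})

# Start a TensorFlow server that joins the cluster; parameter servers
# just block and serve variables to the workers.
server = tf.train.Server(cluster, job_name=task["type"], task_index=task["index"])
if task["type"] == "ps":
    server.join()
```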

Blockchain

Interestingly, while there were keynotes and marketing talks about blockchain and how it's going to impact your business, there was not a single technical session about it. Not one. I guess that says enough. The keynote presenter (from MIT) talked about how they could find the hidden network between Bitcoin users, but to me that's just network analysis on the metadata in a blockchain rather than doing data science on encrypted data. He did talk about doing pattern recognition on encrypted data, but did not give any specific examples. Still, I guess doing data science on encrypted data is a real thing and could catch on soon. If you're interested in these kinds of marketing stunts, take a look at their website: https://www.endor.com/.

Miscellaneous

There was a talk about the evolution of color theory and technology, and how to choose colors for data science projects. The presenter explained how opposite colors on the color wheel work, and why Monet was a master of using white plus just a little color to create a pleasing result. It so happens that the Boston Museum of Fine Arts has the largest Monet collection in the States, so visiting it after the talk was a natural decision. I also learned from the presentation that there's an app called Adobe Capture which automatically turns photos into color palettes, which is quite fun:

Creating a color palette from Poppy Field Argenteuil by Claude Monet.
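
As a small aside on the color-wheel point: the "opposite" (complementary) of a color is just its hue rotated half-way around the wheel, which you can compute with Python's standard colorsys module:

```python
import colorsys

def complementary(rgb):
    """Return the color-wheel opposite of an RGB triple (0-1 floats)
    by rotating its hue 180 degrees."""
    h, l, s = colorsys.rgb_to_hls(*rgb)
    return colorsys.hls_to_rgb((h + 0.5) % 1.0, l, s)

print(complementary((1.0, 0.0, 0.0)))  # red -> (0.0, 1.0, 1.0), i.e. cyan
```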

There was a talk about automated machine learning. I saw DataRobot, a company that does this sort of thing, getting a lot of exposure at ODSC, so I'll just assume it's legit.

There was a talk by one of the creators of Julia about why Julia could become the next machine learning language by combining the ease of use of Python with the speed of C. I did not attend the talk, but its description says that all of Python's scikit-learn functionality can be imported into Julia, which is pretty cool.

There was a researcher talking about model interpretability using Local Interpretable Model-agnostic Explanations (LIME). There is no material to share, but you can just google the term and see how it tries to interpret a complex model by fitting locally linear approximations around individual predictions.
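
Here is a hedged sketch of what using it looks like, assuming the third-party lime package and scikit-learn are installed (not the researcher's own example):

```python
# Sketch: explaining one prediction of a black-box model with LIME.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
model = RandomForestClassifier(n_estimators=100).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=data.target_names,
)
# LIME perturbs copies of this instance and fits a locally linear
# surrogate model to approximate the black box near it.
exp = explainer.explain_instance(data.data[0], model.predict_proba, num_features=4)
print(exp.as_list())
```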
