JAX 2018 talk announcement: Deep Learning – a Primer

I am happy to announce that on Tuesday, April 24th 2018, Uwe Friedrichsen and I will give a talk about Deep Learning – a Primer at the JAX conference in Mainz, Germany. Deep Learning is one of the “hot” topics in the AI area – a lot of hype and a lot of inflated expectations, but also quite a few impressive success stories. With some AI experts already predicting that Deep Learning will become “Software 2.0”, it might be a good time to take a closer look at the topic. In this session we will try to give a comprehensive overview of Deep Learning. We will start with a bit of history and some theoretical foundations that we will use to create a little Deep Learning taxonomy. Then we will have a look at current and upcoming application areas: Where can we…
Original Post: JAX 2018 talk announcement: Deep Learning – a Primer

Sketchnotes from TWiML&AI #94: Neuroevolution: Evolving Novel Neural Network Architectures with Kenneth Stanley

These are my sketchnotes for Sam Charrington’s podcast This Week in Machine Learning and AI about Neuroevolution: Evolving Novel Neural Network Architectures with Kenneth Stanley: Sketchnotes from TWiMLAI talk #94: Neuroevolution: Evolving Novel Neural Network Architectures with Kenneth Stanley You can listen to the podcast here. Kenneth studied under TWiML Talk #47 guest Risto Miikkulainen at UT Austin, and joined Uber AI Labs after Geometric Intelligence, the company he co-founded with Gary Marcus and others, was acquired in late 2016. Kenneth’s research focus is what he calls Neuroevolution, which applies the idea of genetic algorithms to the challenge of evolving neural network architectures. In this conversation, we discuss the Neuroevolution of Augmenting Topologies (or NEAT) paper that Kenneth authored along with Risto, which won the 2017 International Society for Artificial Life’s Award for Outstanding Paper of the Decade 2002 –…
Original Post: Sketchnotes from TWiML&AI #94: Neuroevolution: Evolving Novel Neural Network Architectures with Kenneth Stanley

Join MünsteR for our next meetup on obtaining functional implications of gene expression data with R

In our next MünsteR R-user group meetup on March 5th, 2018, Frank Rühle will talk about bioinformatics and how to analyse genome data. You can RSVP here: http://meetu.ps/e/DDY1B/w54bW/f Next-generation sequencing and array-based technologies have provided a plethora of gene expression data in the public genomics databases. But how do we get meaningful information and functional implications out of this vast amount of data? Various R packages have been published by the Bioconductor user community for distinct kinds of analysis strategies. Here, several approaches will be presented for functional gene annotation, gene enrichment analysis and co-expression network analysis. A collection of wrapper functions for streamlined analysis of expression data can be found at: https://github.com/frankRuehle/systemsbio. Dr. Frank Rühle is a post-doctoral research fellow in the group of genetic epidemiology at the Institute of Human Genetics at the University of Münster. As a biologist with a focus on computational…
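To give a flavor of what such an enrichment analysis can look like in R, here is a minimal sketch of a GO over-representation analysis using the Bioconductor package clusterProfiler; the gene IDs are made up for illustration, and the packages Frank will actually present may well differ:

library(clusterProfiler) # Bioconductor package for enrichment analysis
library(org.Hs.eg.db)    # human gene annotation database

# hypothetical vector of differentially expressed genes (Entrez IDs)
de_genes <- c("3043", "3045", "7037", "2512")

# test which GO Biological Process terms are over-represented
ego <- enrichGO(gene          = de_genes,
                OrgDb         = org.Hs.eg.db,
                keyType       = "ENTREZID",
                ont           = "BP",
                pAdjustMethod = "BH",
                pvalueCutoff  = 0.05)

head(as.data.frame(ego))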
Original Post: Join MünsteR for our next meetup on obtaining functional implications of gene expression data with R

Sketchnotes from TWiML&AI #92: Learning State Representations with Yael Niv

These are my sketchnotes for Sam Charrington’s podcast This Week in Machine Learning and AI about Learning State Representations with Yael Niv: https://twimlai.com/twiml-talk-92-learning-state-representations-yael-niv/ Sketchnotes from TWiMLAI talk #92: Learning State Representations with Yael Niv You can listen to the podcast here. In this interview Yael and I explore the relationship between neuroscience and machine learning. In particular, we discuss the importance of state representations in human learning, some of her experimental results in this area, and how a better understanding of representation learning can lead to insights into machine learning problems such as reinforcement and transfer learning. Did I mention this was a nerd alert show? I really enjoyed this interview and I know you will too. Be sure to send over any thoughts or feedback via the show notes page: https://twimlai.com/twiml-talk-92-learning-state-representations-yael-niv/
Original Post: Sketchnotes from TWiML&AI #92: Learning State Representations with Yael Niv

How to make your machine learning model available as an API with the plumber package

Training and saving a model

Let’s say we have trained a machine learning model as in this post about LIME. I loaded a data set on chronic kidney disease, did some preprocessing (converting categorical features into dummy variables, scaling and centering), split it into training and test data and trained a Random Forest model with caret. We can use this trained model to make predictions for one test case with the following code:

library(tidyverse)

# load test and train data
load("../../data/test_data.RData")
load("../../data/train_data.RData")

# load model
load("../../data/model_rf.RData")

# take first test case for prediction
input_data <- test_data[1, ] %>%
  select(-class)

# predict test case using model
pred <- predict(model_rf, input_data)
cat("----------------\nTest case predicted to be", as.character(pred), "\n----------------")

## ----------------
## Test case predicted to be ckd
## ----------------

The input

For our API to work, we need to define the input,…
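The excerpt breaks off before the API definition itself, but a minimal plumber sketch could look like the following; the file name, route and JSON handling here are my assumptions, not necessarily the post’s exact code:

# plumber.R: a minimal sketch of exposing the trained model as an API
# (route name and input handling are assumptions, not the post's exact code)
library(plumber)

# load the trained Random Forest model
load("../../data/model_rf.RData")

#* Predict the class of one chronic kidney disease case
#* @post /predict
function(req) {
  # expect a JSON body containing the same features the model was trained on
  input_data <- as.data.frame(jsonlite::fromJSON(req$postBody))
  list(prediction = as.character(predict(model_rf, input_data)))
}

# in a separate R session, serve the API:
# pr <- plumber::plumb("plumber.R")
# pr$run(port = 8000)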
Original Post: How to make your machine learning model available as an API with the plumber package

Sketchnotes from TWiML&AI #91: Philosophy of Intelligence with Matthew Crosby

These are my sketchnotes for Sam Charrington’s podcast This Week in Machine Learning and AI about Philosophy of Intelligence with Matthew Crosby: You can listen to the podcast here. This week on the podcast we’re featuring a series of conversations from the NIPS conference in Long Beach, California. I attended a bunch of talks and learned a ton, organized an impromptu roundtable on Building AI Products, and met a bunch of great people, including some former TWiML Talk guests. This time around I’m joined by Matthew Crosby, a researcher at Imperial College London, working on the Kinds of Intelligence Project. Matthew joined me after the NIPS Symposium of the same name, an event that brought researchers from a variety of disciplines together towards three aims: a broader perspective of the possible types of intelligence beyond human intelligence, better measurements of intelligence,…
Original Post: Sketchnotes from TWiML&AI #91: Philosophy of Intelligence with Matthew Crosby

Looking beyond accuracy to improve trust in machine learning

I have written another blogpost about Looking beyond accuracy to improve trust in machine learning at my company codecentric’s blog: Traditional machine learning workflows focus heavily on model training and optimization; the best model is usually chosen via performance measures like accuracy or error, and we tend to assume that a model is good enough for deployment if it passes certain thresholds of these performance criteria. Why a model makes the predictions it makes, however, is generally neglected. But being able to understand and interpret such models can be immensely important for improving model quality, increasing trust and transparency, and reducing bias. Because complex machine learning models are essentially black boxes and too complicated to understand, we need to use approximations to get a better sense of how they work. One such approach is LIME, which stands for Local…
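To make the idea of local approximation concrete, here is a toy sketch in R (my own illustration, not code from the blog post): we probe a black-box model in the neighborhood of one instance and fit a proximity-weighted linear model as an interpretable surrogate, which is the core idea behind LIME:

set.seed(42)

# stand-in for a trained black-box model: some nonlinear function
black_box <- function(df) 1 / (1 + exp(-(df$a^2 - 3 * df$b)))

# the instance whose prediction we want to explain
x0 <- data.frame(a = 1, b = 0.5)

# sample the neighborhood of x0 by perturbing its features
perturbed <- data.frame(a = x0$a + rnorm(500, sd = 0.5),
                        b = x0$b + rnorm(500, sd = 0.5))

# weight the samples by proximity to x0: closer points count more
d <- sqrt((perturbed$a - x0$a)^2 + (perturbed$b - x0$b)^2)
w <- exp(-d^2)

# fit an interpretable surrogate: a weighted linear model
surrogate <- lm(black_box(perturbed) ~ a + b, data = perturbed, weights = w)
coef(surrogate) # the coefficients are the local feature effects around x0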
Original Post: Looking beyond accuracy to improve trust in machine learning

TWiMLAI talk 88 sketchnotes: Using Deep Learning and Google Street View to Estimate Demographics with Timnit Gebru

These are my sketchnotes taken from the “This week in Machine Learning & AI” podcast number 88 about Using Deep Learning and Google Street View to Estimate Demographics with Timnit Gebru: Sketchnotes from TWiMLAI talk #88: Using Deep Learning and Google Street View to Estimate Demographics with Timnit Gebru
Original Post: TWiMLAI talk 88 sketchnotes: Using Deep Learning and Google Street View to Estimate Demographics with Timnit Gebru

Registration now open for workshop on Deep Learning with Keras and TensorFlow using R

Recently, I announced my workshop on Deep Learning with Keras and TensorFlow. The next dates for it are January 18th and 19th in Solingen, Germany. You can register now by following this link: https://www.codecentric.de/schulung/deep-learning-mit-keras-und-tensorflow If any non-German-speaking people want to attend, I’m happy to give the course in English! Contact me if you have further questions. As a little bonus, I am also sharing my sketchnotes from a podcast I listened to when first getting into Keras: Sketchnotes: Software Engineering Daily – Podcast from Jan 29th 2016 Links from the notes:
Original Post: Registration now open for workshop on Deep Learning with Keras and TensorFlow using R

Explaining Predictions of Machine Learning Models with LIME – Münster Data Science Meetup

Slides from Münster Data Science Meetup

These are my slides from the Münster Data Science Meetup on December 12th, 2017. My sketchnotes were collected from these two podcasts: Sketchnotes: TWiML Talk #7 with Carlos Guestrin – Explaining the Predictions of Machine Learning Models & Data Skeptic Podcast – Trusting Machine Learning Models with Lime

Example Code

The following libraries were loaded:

library(tidyverse)  # for tidy data analysis
library(farff)      # for reading arff file
library(missForest) # for imputing missing values
library(dummies)    # for creating dummy variables
library(caret)      # for modeling
library(lime)       # for explaining predictions

Data

The Chronic Kidney Disease dataset was downloaded from UC Irvine’s Machine Learning repository: http://archive.ics.uci.edu/ml/datasets/Chronic_Kidney_Disease

data_file <- file.path("path/to/chronic_kidney_disease_full.arff")

# load data with the farff package
data <- readARFF(data_file)

Features

age – age
bp – blood pressure
sg – specific gravity
al – albumin
su – sugar
rbc…
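The excerpt cuts off after the data is loaded; the sketch below shows how the libraries listed above plausibly fit together downstream. This is a hedged reconstruction: the exact preprocessing and model settings in the original slides may differ.

# impute missing values with random-forest imputation (missForest)
data_imp <- missForest(data)$ximp

# split into training and test sets, stratified by the class label
# (assumes the outcome column is named "class" and comes last, as in the ARFF file)
idx        <- createDataPartition(data_imp$class, p = 0.8, list = FALSE)
train_data <- data_imp[idx, ]
test_data  <- data_imp[-idx, ]

# train a Random Forest classifier with caret
model_rf <- train(class ~ ., data = train_data, method = "rf")

# build a LIME explainer on the training features and explain the first test case
explainer   <- lime(train_data[, -ncol(train_data)], model_rf)
explanation <- explain(test_data[1, -ncol(test_data)], explainer,
                       n_labels = 1, n_features = 5)
plot_features(explanation)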
Original Post: Explaining Predictions of Machine Learning Models with LIME – Münster Data Science Meetup