Papers:
- Deisenroth, M. P., & Rasmussen, C. E. (2011). PILCO: A model-based and data-efficient approach to policy search. Proceedings of the 28th International Conference on Machine Learning, ICML 2011, 465–472.
- Gal, Y., McAllister, R. T., & Rasmussen, C. E. (2016). Improving PILCO with Bayesian Neural Network Dynamics Models. Data-Efficient Machine Learning Workshop, ICML, 1–7.
The papers show how to find good policies from relatively few observations on classical control problems (mountain car, pole swing-up, etc.) using probabilistic model-based reinforcement learning.
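At the heart of PILCO is a probabilistic (Gaussian-process) model of the system dynamics, learned from few observations and then used for planning instead of the real system. A minimal sketch of that first step in numpy, with a toy 1-D system and hand-picked kernel hyperparameters (not the paper's setup):

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel between two sets of 1-D inputs."""
    d = A[:, None] - B[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

# Toy dynamics: x_{t+1} = 0.9 * x_t + sin(x_t), observed with noise.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=20)  # only a few observed states (data efficiency)
Y = 0.9 * X + np.sin(X) + 0.05 * rng.standard_normal(20)

# GP posterior over the one-step dynamics function.
noise = 0.05 ** 2
K = rbf_kernel(X, X) + noise * np.eye(len(X))
K_inv = np.linalg.inv(K)

def predict(x_star):
    """Posterior mean and variance of the next state at test states x_star."""
    Ks = rbf_kernel(x_star, X)
    mean = Ks @ K_inv @ Y
    var = np.diag(rbf_kernel(x_star, x_star) - Ks @ K_inv @ Ks.T)
    return mean, var

mean, var = predict(np.array([0.5]))
```

PILCO then propagates state distributions through this posterior (via moment matching) to evaluate and improve a policy, so the model's uncertainty is taken into account instead of being ignored.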

Slides from my presentation on “Bayesian Probabilistic Matrix Factorization using Markov Chain Monte Carlo” by Salakhutdinov and Mnih. Paper link. The slides can be downloaded here.

For the first time I've actually made a summary of all the papers and presentations I found noteworthy at a conference (all right, there were more, but this is a start). Below are my notes, with links etc. The purpose of the notes is mainly for myself, to remember and revisit what I found interesting, but I see no reason not to share them with others. They do not include my own paper.

Ning Zhou, Audun Øygard and I got a paper in the KDD workshop Deep Learning Day. We provide some practitioner’s findings on applying deep learning recommendations in production! Link to paper here.
Together with @nzhou9 and @matsiyatzy, I am officially moving into academia after being an industrial observer: We got a paper in the #KDD2018 workshop Deep Learning Day. We provide some practitioner's findings on applying deep learning recommendations in production!


I have held an introductory machine learning workshop a couple of times. The aim is to classify who said what on Twitter among Norwegian celebrities.
Project: https://github.com/simeneide/tweet-classification
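The who-said-what task in the workshop can be sketched as a tiny bag-of-words naive Bayes classifier. The toy tweets and author names below are invented for illustration and are not from the workshop repo:

```python
from collections import Counter, defaultdict
import math

# Invented toy training data: (author, tweet text).
tweets = [
    ("erna", "budsjett og skatt for norge"),
    ("erna", "regjeringen satser på skole"),
    ("jonas", "arbeid til alle er jobb nummer en"),
    ("jonas", "fellesskap og arbeid for norge"),
]

# Count words per author.
word_counts = defaultdict(Counter)
author_counts = Counter()
for author, text in tweets:
    author_counts[author] += 1
    word_counts[author].update(text.split())

vocab = {w for counts in word_counts.values() for w in counts}

def classify(text):
    """Return the author with the highest log-probability (Laplace smoothing)."""
    scores = {}
    for author in author_counts:
        total = sum(word_counts[author].values())
        score = math.log(author_counts[author] / len(tweets))
        for w in text.split():
            score += math.log((word_counts[author][w] + 1) / (total + len(vocab)))
        scores[author] = score
    return max(scores, key=scores.get)

print(classify("arbeid for alle"))  # → jonas
```

The real workshop uses richer features and models, but the structure of the problem (text in, celebrity out) is the same.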

I am not a fan of New Year's resolutions. If I were, one of my resolutions would be to get better at writing down what I am doing. However, I held some talks and did some fun experiments in 2017, so the cheap way out is simply to link to those.
How to become a Data Scientist in 20 minutes (JavaZone 2017): At JavaZone 2017 I held a short talk on how building (drum roll) machine learning algorithms is actually pretty easy.

During a hackathon at FINN.no, we figured we wanted to learn more about deep NLP-models. FINN.no has a large database with ads of people trying to sell stuff (around 1 million active ads at any time), and they are categorized into a category tree with three or four layers. For example, full suspension bikes can be found under “Sport and outdoor activities” / “Bike sport” / “Full suspension bikes”.
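One straightforward way to turn such a category tree into a classification target is to flatten each leaf into its full path and predict one path per ad. A sketch, where the tree slice is illustrative (the second leaf is invented; FINN.no's actual taxonomy differs):

```python
# A tiny slice of a hierarchical category tree as nested dicts.
tree = {
    "Sport and outdoor activities": {
        "Bike sport": {
            "Full suspension bikes": {},   # leaf from the example in the text
            "Hardtail bikes": {},          # invented leaf for illustration
        },
    },
}

def leaf_paths(node, prefix=()):
    """Flatten the tree into full paths, one label per leaf."""
    if not node:
        yield " / ".join(prefix)
        return
    for name, child in node.items():
        yield from leaf_paths(child, prefix + (name,))

labels = list(leaf_paths(tree))
# A classifier then predicts one of these flattened labels per ad title.
```

With a few thousand leaves this becomes a large flat multi-class problem, which is one reason deep NLP models are attractive for it.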

Every three months or so I get really annoyed at Jupyter Notebook being so limited, and I usually spend half a day browsing alternatives like Spyder, PyCharm and Rodeo. My search phrase is usually "RStudio for Python", but after wasting half a day or more I still end up with Jupyter Notebook. Despite the many good alternatives, the fact that you can work in the browser directly on the server makes it very simple to set up.

Most people have a love-hate relationship with spreadsheets. The spreadsheet format is simple and intuitive, and doing calculations becomes really easy. However, spreadsheets also quickly become too complicated.
Jeremy Howard's lecture explaining embeddings was a great use of Excel, and I implemented my own version of his Excel sheet to illustrate how recommendation algorithms work.
Recommendations are everywhere: Netflix is trying to propose the most relevant movies, and Google is serving you personalised ads that are hopefully a little less annoying.
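The embedding trick in that spreadsheet boils down to one operation: a predicted user-item score is the dot product of a user vector and an item vector. A minimal numpy sketch with random toy embeddings (in practice they would be learned from observed ratings):

```python
import numpy as np

rng = np.random.default_rng(42)
n_users, n_items, dim = 4, 5, 3

# Toy embeddings; learning them from data is what matrix factorization does.
user_emb = rng.standard_normal((n_users, dim))
item_emb = rng.standard_normal((n_items, dim))

# Predicted score matrix: every user-item dot product at once.
pred = user_emb @ item_emb.T          # shape (n_users, n_items)

# Recommend: for user 0, the item with the highest predicted score.
best_item = int(np.argmax(pred[0]))
```

The Excel version does exactly this with SUMPRODUCT over rows and columns, which is why it works so well as an illustration.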

© 2018 · Powered by the Academic theme for Hugo.