

This short, blunt guide to machine learning was written by /u/thatguydr, whose succinctness is admirable. His entire guide - which is expanded upon, discussed, and has materials sourced by others in the ensuing discussion - is below:

First, read f***ing Hastie, Tibshirani, and whoever. If you don't understand it, keep reading it until you do. You can read the rest of the book if you want. You probably should, but I'll assume you know all of it. Do all the exercises in Matlab and python and R. Make sure you get the same answers with all of them. Now forget all of that and read the deep learning book. Put tensorflow or torch on a Linux box and run examples until you get it. Do stuff with CNNs and RNNs and just feed forward NNs. Once you do all of that, go on arXiv and read the most recent useful papers. The literature changes every few months, so keep up. Now you can probably be hired most places. If you need resume filler, do some Kaggle competitions. If you have debugging questions, use StackOverflow. If you have life questions, I have no idea.

It's beautiful in both its honesty and accuracy. And I'm a little upset that there is no life advice to follow, because this is the kind of guy (or gal) I expect would be pretty good at dishing it out.
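For the "run examples until you get it" step, here is a minimal sketch of the kind of starting point the guide has in mind, written with PyTorch (the "torch" option it names). The data, layer sizes, and hyperparameters below are placeholder choices for illustration, not anything prescribed by the guide.

    # Minimal feed-forward network on toy data; every number here
    # (shapes, sizes, learning rate) is an arbitrary illustrative choice.
    import torch
    import torch.nn as nn

    # Placeholder data: 256 examples, 20 features, 3 classes.
    X = torch.randn(256, 20)
    y = torch.randint(0, 3, (256,))

    # A plain feed-forward network - the "just feed forward NNs" case.
    model = nn.Sequential(
        nn.Linear(20, 64),
        nn.ReLU(),
        nn.Linear(64, 3),
    )

    loss_fn = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

    # A few epochs of full-batch gradient descent, just to watch the loss move.
    for epoch in range(20):
        optimizer.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        optimizer.step()
        if epoch % 5 == 0:
            print(f"epoch {epoch}: loss {loss.item():.3f}")

Swapping in a CNN or an RNN mostly means changing the model definition and the shape of the data, which is exactly the kind of tinkering the guide is pushing you toward.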

2. Google is acquiring data science community Kaggle

This shouldn't come as a huge surprise, given:

Kaggle has a bit of a history with Google, too, but that’s pretty recent. Earlier this month, Google and Kaggle teamed up to host a $100,000 machine learning competition around classifying YouTube videos. That competition had some deep integrations with the Google Cloud Platform, too.

We'll see what Google's endgame is vis-a-vis Kaggle, but there are all sorts of possibilities for crossover here. The event is either exciting or terrifying, depending on your views on potential monopolies. But it's a great branding move, and can help Google extend their recent interests in becoming the lone global machine learning behemoth.

3. Suggestion by Salesforce chief data scientist

Some advice via tweet from Salesforce's Chief Scientist, Richard Socher:

Even though this is an apples and oranges comparison ("Rather than eating a meat lovers pizza, just go chop some wood"), I can see where he is coming from to some extent. It would be a great educational undertaking, and could support the community at large, especially so if he were to encourage the sharing of said curated dataset afterward. But really, in general, I wouldn't think that everyone working on unsupervised learning problems would or should go out and label classification data. That clearly wouldn't lead to progress in unsupervised learning.

4. Andrew Ng announces his resignation from Baidu

In a post on his Medium blog, Andrew Ng announces his resignation from Baidu and what his upcoming plans are. Spoiler alert: he isn't opening a seafood restaurant in Van Nuys:

I will continue my work to shepherd in this important societal change. In addition to transforming large companies to use AI, there are also rich opportunities for entrepreneurship as well as further AI research. I want all of us to have self-driving cars; conversational computers that we can talk to naturally; and healthcare robots that understand what ails us. The industrial revolution freed humanity from much repetitive physical drudgery; I now want AI to free humanity from repetitive mental drudgery, such as driving in traffic.

5. Distill: An Interactive, Visual Journal for Machine Learning Research

Distill officially launches, under the vision and custodianship of founding editors Chris Olah and Shan Carter from Google Brain. Writes Michael Nielsen:

Distill is taking the web seriously. A Distill article (at least in its ideal, aspirational form) isn’t just a paper. It’s an interactive medium that lets users – “readers” is no longer sufficient – work directly with machine learning models. Ideally, such articles will integrate explanation, code, data, and interactive visualizations into a single environment. In such an environment, users can explore in ways impossible with traditional static media. They can change models, try out different hypotheses, and immediately see what happens. That will let them rapidly build their understanding in ways impossible in traditional static media.
