Tips
@dashster18 left a great comment on the Gitter channel on 28th March 2016:
Personally, 2.5 years ago I started dabbling with ML by teaching myself through the Andrew Ng Coursera course, which was awesome, but I only really learned the concepts at a cursory level. About 1.5 years ago, I doubled down on ML and started learning it quite seriously, getting to a point where I could apply it to business problems at work. Here are some of the things I found quite helpful when learning ML by yourself:
- Mindset - Most people here seem to be developers who want to learn ML and take an approach similar to how they would learn other advanced programming topics: by focusing on an environment and thinking code first. In my opinion, ML is different enough from other advanced areas of CS that a code-first approach won't set you up for success. You really want to nail the ML concepts down first, and that's probably the most complex part. Understanding things like supervised learning, unsupervised learning, cross validation, feature engineering, and hyperparameter optimization is extremely important, and these are not code related - they are more statistics related. As a corollary, I noticed folks arguing that language X is more important to learn than language Y; I think this is the wrong way to look at it, and efforts should instead be focused on understanding ML as a craft. Once you do that, it becomes much easier to transfer that knowledge over to different platforms. The concrete example where I experienced this is at work, where I started a project by doing my exploratory analysis and feature engineering in Python, but eventually moved to applying models in our internal ML framework since it plays nicely with our production environment. The learning curve for the other platform was pretty shallow, because the most difficult part was understanding the ML concepts, not the specific APIs.
- Math baseline - ML is more mathematical, in the continuous sense, than most other areas of CS. I really recommend that folks know multivariate calculus and linear algebra to better understand how ML works and to shine a light on the black box. However, you don't need to know those subjects like a mathematician does! For MV calculus, the most advanced thing you would need to do is take a partial derivative, which is super easy if you already know how to differentiate; don't even worry about multivariate integration. For linear algebra, you don't need to know the theoretical variety that math geeks use, but rather the practical kind that physicists use. Be able to do algebra with matrices effectively, and you'll have plenty of ammunition to learn ML well (see the numpy sketch after this list). Most courses online teach you as if you were training to be a mathematician, but I found one geared towards CS folks where you learn LA from an algorithmic perspective and implement the operations in Python; I self-learned from this and it has been plenty for ML: https://www.coursera.org/course/matrix
- ML concepts - learn these cold. Andrew Ng's Coursera course is by far the best introduction for this: https://www.coursera.org/learn/machine-learning. Make a couple of passes through it; I've done it 3-4 times now and I still learned something new each time. If you want to get a bit more theoretical and advanced after this, you can go with his more advanced class by watching the lecture videos on YouTube and following the materials here: http://cs229.stanford.edu/materials.html. Note, I have gone through this and I still don't have the material down cold, but I grew tremendously by meticulously working through the lectures and notes.
- Apply it! - this is how you learn the craft. Most ML classes focus on theory and on training you to become an ML researcher. However, many of the opportunities in ML today are in the applied domain, and the best way to learn that side is to apply it to real-life problems. If you have access to data at work and can apply ML there, that's the best option! I did this recently and I learned a TON. You learn things like feature engineering, but also more practical things that don't get taught in classes, such as: should I really compute this complex feature if it only increases my accuracy by 0.01? How do I design machine learning pipelines that take other ML pipelines as input and feed out to other ML pipelines? How do I think about model drift, and what should my model re-training strategy be when the underlying distributions change and my features become obsolete? (See the scikit-learn sketch below for a minimal version of some of these building blocks.)
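The math-baseline item above talks about taking partial derivatives and doing practical matrix algebra in Python. As a minimal sketch of what that looks like, here is an illustration using numpy; the library choice, the toy least-squares loss, and all names are my own assumptions for illustration, not something prescribed in the comment. It compares the analytic gradient (matrix algebra) against finite-difference partial derivatives:

```python
# Minimal sketch (my own example): the "practical" math from the comment -
# partial derivatives and matrix algebra - applied to a toy least-squares loss.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))   # 50 samples, 3 features
y = rng.normal(size=50)
w = np.zeros(3)                # current parameter vector

def loss(w):
    # L(w) = ||X w - y||^2
    residual = X @ w - y
    return residual @ residual

# Partial derivatives via the analytic gradient: dL/dw = 2 X^T (X w - y)
grad_analytic = 2 * X.T @ (X @ w - y)

# The same partial derivatives, one coordinate at a time, via central differences
eps = 1e-6
grad_numeric = np.array([
    (loss(w + eps * np.eye(3)[i]) - loss(w - eps * np.eye(3)[i])) / (2 * eps)
    for i in range(3)
])

print(np.allclose(grad_analytic, grad_numeric, atol=1e-4))  # True
```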
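Several of the ideas named in the list - cross validation, a feature-engineering step inside a pipeline, and hyperparameter optimization - can be seen together in a few lines of scikit-learn. This is only a sketch under my own assumptions; scikit-learn, the toy dataset, and the parameter grid are not something the comment mentions:

```python
# Hedged sketch (my own example): cross validation, a pipeline with a
# preprocessing step, and hyperparameter search in scikit-learn.
from sklearn.datasets import load_breast_cancer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

X, y = load_breast_cancer(return_X_y=True)

# A small pipeline: a feature-scaling step feeding a classifier
pipe = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])

# Hyperparameter optimization with 5-fold cross validation
search = GridSearchCV(pipe, {"clf__C": [0.01, 0.1, 1, 10]}, cv=5)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```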