The last time I blogged specifically about my fast.ai experience was after Week 2, where I talked about being introduced to great technical writing, learning through experimenting, and some of my initial trust issues with the top-down learning approach.
I finished the course back in February, and overall, it has been a wonderful, worthwhile journey of several months. I leave it feeling more capable of immediately applying what I’ve learned than I did at the end of Andrew Ng’s Machine Learning course back in 2014¹. I owe this confidence to fast.ai’s top-down teaching philosophy (despite the drawbacks discussed in Week 2).
I hosted my Ng course notes in OneNote², which turned out to be a great platform for supporting Ng’s heavy use of mathematical notation.
I’ve put my fast.ai notes up on GitHub, as part of my repo of course work. My notes for this course are less in-depth (versus my Ng notes) because the fast.ai course itself supplied excellent notes on the course wiki.
My notes aggregate tips from each lesson and provide a glossary of course terminology, a cheat sheet of commands, and a list of recurring resources.
As of the publish date of this post, my notes are based on the 2017/v1 edition of this course.
I plan to revisit the 2018/v2 edition of this course too, so I will gradually add notes on interesting differences between the editions to the same repository.
The most obvious difference: 2017/v1 uses Theano + Keras; 2018/v2 uses PyTorch + a custom library built for the course.
Take this course if you are:
This course is less helpful if you are:
So given these expectations, this course should have cost ~$31.50 in AWS spending. Haha.
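For reference, here’s the back-of-the-envelope arithmetic behind that estimate; the ~$0.90/hour p2.xlarge on-demand rate is my recollection of the going rate at the time, so treat it as an assumption:

```python
# Rough arithmetic behind the ~$31.50 estimate.
# Assumption: p2.xlarge on-demand rate of ~$0.90/hour (the rate at the time).
P2_HOURLY_RATE = 0.90    # USD per hour of P2 up-time
EXPECTED_SPEND = 31.50   # USD

expected_hours = EXPECTED_SPEND / P2_HOURLY_RATE
print(f"{expected_hours:.0f} hours of P2 time total")       # -> 35 hours
print(f"{expected_hours / 7:.0f} hours/week over 7 weeks")  # -> 5 hours/week
```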
My bill included significant charges that are, unfortunately, not mentioned by the course:

- the provisioned gp2 storage volume, billed per GB-month³
- the Elastic IP, billed hourly whenever it isn’t attached to a running instance⁴
These non-P2 charges cost me, on average, $16.72/month. In fact, only a third of my billed weeks (11 of 33) show a majority of the cost coming from up-time of the P2 instance. In the other two-thirds, 50%+ of the weekly cost came from the storage volume and the idle Elastic IP.
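Those two published rates are enough to ballpark the idle burn. A minimal sketch, assuming a 128 GB gp2 volume (the volume size is my assumption, based on the course’s standard setup; check your own):

```python
# Idle charges accrue even while the P2 instance is stopped.
# Rates come from the AWS pricing quotes in the footnotes; the 128 GB
# volume size is an assumption, not a figure from my bill.
GP2_RATE = 0.10        # USD per GB-month of gp2 storage
EIP_IDLE_RATE = 0.005  # USD per hour for an unattached Elastic IP
VOLUME_GB = 128
HOURS_PER_MONTH = 730  # average hours in a month

storage = GP2_RATE * VOLUME_GB             # $12.80/month
idle_ip = EIP_IDLE_RATE * HOURS_PER_MONTH  # ~$3.65/month
print(f"~${storage + idle_ip:.2f}/month before any instance up-time")  # ~$16.45
```

That lands in the same ballpark as the $16.72/month average above, before the instance is ever powered on.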
And here I was so worried about instance up-time…
My actual costs from July ’17 to February ’18? $296.92 in AWS spending.
There are caveats to these 170.5 hours of billed compute time.
The caveats still don’t explain away the gulf between ~24 hours and the expected 5-6 hours of compute per week. To have spent the time I did in only 7 weeks, I would have had to treat this course like a heavy part-time job.
In reality, my compute time was spread over 33 weeks rather than 7. If I remove the weeks with no compute spend, those 33 reduce to 22 weeks.
Over 22 weeks, my compute time averaged out to ~7.75 hours per week. So even at my extended pace, I was spending solidly >6 hours a week on compute alone for the course.
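Putting the pace comparison in one place (170.5 hours being my total billed P2 up-time):

```python
# Compare my actual compute pace against the intended 7-week schedule.
TOTAL_COMPUTE_HOURS = 170.5  # total billed P2 up-time

print(f"{TOTAL_COMPUTE_HOURS / 7:.1f} hrs/week at the intended 7-week pace")        # ~24.4
print(f"{TOTAL_COMPUTE_HOURS / 33:.1f} hrs/week spread over all 33 billed weeks")   # ~5.2
print(f"{TOTAL_COMPUTE_HOURS / 22:.2f} hrs/week over the 22 weeks with compute")    # 7.75
```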
Here’s another interesting way to look at my course spending: did more spending correlate with more work? I’m roughly correlating effort with net change in lines of code (additions - deletions), as told by GitHub.
Size of bubble == net code change.
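For anyone who wants to build a similar view, here’s a minimal matplotlib sketch. The data below is made up purely for illustration; my real inputs were AWS billing exports and GitHub’s per-week additions/deletions:

```python
import matplotlib.pyplot as plt

# Hypothetical sample data: weekly AWS spend (USD) and net lines of
# code changed that week (additions - deletions), per GitHub.
weeks = [1, 2, 3, 4, 5]
spend = [4.10, 9.75, 3.20, 12.40, 6.80]
net_loc = [120, 640, 80, 910, 350]

# Bubble area scales with net code change.
plt.scatter(weeks, spend, s=[n / 2 for n in net_loc], alpha=0.5)
plt.xlabel("Week")
plt.ylabel("AWS spend (USD)")
plt.title("Spending vs. effort (bubble size = net code change)")
plt.show()
```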
1. e.g., right now, I’m applying the character-level RNN text generation from Lesson 6 to a fun side project. ↩
2. Using OneNote back then also let me eat my own dog food. ↩
3. “$0.10 per GB-month of General Purpose SSD (gp2) provisioned storage” ↩
4. “$0.005 per Elastic IP address not attached to a running instance per hour (prorated)” ↩