Now, this spotlight is long overdue - Variance Explained was my first “model blog” in spirit, if not in post chronology.
David Robinson is so good at blogging. I was recently reminded of this while reading his latest post, “What’s the difference between data science, machine learning, and artificial intelligence?”1. His writing is engaging, plain-spoken, and friendly.
I greatly admire the look and feel of his blog - so much so that I’ve tried replicating his clean way of referencing, linking, and footnoting within posts. These touches keep his posts streamlined and highly readable.
I even chose to use Jekyll as the static site generator for this blog after noticing it in his footer.
I analyzed and reproduced (swapping in Python for R) one of his posts2 as one of my very first in-depth undertakings for this blog. It introduced me to the concept of tidy data and much more. Other posts I’d recommend:
I only discovered Variance Explained this past May, but the rest of his backlog looks just as impressive3.
I am a huge fan and grateful for David’s continued example.
1. True to style, it’s an awesome post. “Data science produces insights. Machine learning produces predictions. Artificial intelligence produces actions.” - an oversimplification made useful, followed by a breakdown of what he means by it. I also enjoyed his use of Twitter snark from the community. And the AI effect has its own Wikipedia page! ↩
2. This post was actually brought to my attention during the Johns Hopkins R Programming course that I took back in 2016. ↩
3. For example, “Understanding mixture models and expectation-maximization (using baseball statistics)” sounds like it could help reinforce some of the stuff I learned about topic models and Latent Dirichlet Allocation. ↩