What’s nice about being a young, black female in a room of 10 men¹, predominantly white and Asian and older, as part of a deep learning reading group at work², is the mental freedom to be (or even play) the n00b and ask questions as one.
I often actually like being the novice in a situation - whether picking up flag football or joining Microsoft out of university over 4 years ago. When I’m expected to know so little, it’s easier to set my ego aside.
Bonus: I feel like my line of questioning³ loosened up the rest of the room to ask questions as well, despite the fear of asking something too basic. And I never even got lost! We talked about Convolutional Neural Networks, finetuning, VGG, ensembling… and the combination of my fast.ai coursework⁴ and reading The Master Algorithm prepped me to follow along quite well. Next time, I’ll try to read the featured paper before the session.
1. Plus 60 people on Skype, with a few women sprinkled in there. ↩
2. The Deep Learning Reading Group is a Microsoft employee community effort to discuss cutting-edge research material in deep learning. They solicit paper recommendations from attendees, and anyone interested can pick up one of those papers and volunteer to moderate a session. Sessions are about 2 hours long, towards the end of the work day, once every two weeks. Today’s session was on “Progressive Neural Networks,” and it was the first one I attended. ↩
3. Questions like: “Is boosting the same thing as ensembling?” (answer: not quite; rather, boosting is one type of ensembling); and “If you implement the branching needed for multi-task learning with neural networks, how do you merge the gradients from each branch during backpropagation?” (answer: simple or weighted summation is enough; see the short sketch after these notes). ↩
4. For example, this Geoffrey Hinton paper on Knowledge Distillation came up as a reference during the session, and I recognized it as one of my assigned readings for week 4 of the class. So maybe I should read it (haha). ↩
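For the curious, here’s a minimal sketch of the gradient-merging answer from footnote 3. It assumes PyTorch, and the layer sizes and task heads are made up for illustration (nothing here comes from the paper or the session itself): when two task branches share a trunk, backpropagating a simple or weighted sum of the per-task losses adds each branch’s gradient contribution into the shared parameters.

```python
import torch
import torch.nn as nn

# Hypothetical shapes: a shared trunk with two task-specific heads.
trunk = nn.Sequential(nn.Linear(16, 32), nn.ReLU())   # shared representation
head_a = nn.Linear(32, 4)                             # task A branch (4 classes)
head_b = nn.Linear(32, 2)                             # task B branch (2 classes)

x = torch.randn(8, 16)                                # a toy batch
y_a = torch.randint(0, 4, (8,))
y_b = torch.randint(0, 2, (8,))

features = trunk(x)
loss_a = nn.functional.cross_entropy(head_a(features), y_a)
loss_b = nn.functional.cross_entropy(head_b(features), y_b)

# Weighted summation of the per-task losses; backward() then accumulates
# each branch's gradient into the shared trunk parameters by simple addition.
(0.5 * loss_a + 0.5 * loss_b).backward()

for name, p in trunk.named_parameters():
    print(name, p.grad.shape)  # trunk gradients now carry contributions from both tasks
```

The weights on the two losses are the usual knob for balancing tasks; with equal weights this reduces to the plain summation mentioned in the answer.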