The worst interview I ever had was at a tech startup. I interviewed there between working as a postdoc and getting my research gig at Google Brain. I'll tell the story briefly.
It's nothing compared to this gem of a horror story, which you should totally read and enjoy.👇
After I decided to forgo academic job interviews, I looked around briefly at startup companies. There was a company in the Bay Area that specialized in hardware for DNA sequencing. They were looking for an algorithms expert and, given my neuro/DL background, they gave me an interview.
The company is in Foster City, which is a big biotech hub. I drive there, find the office, arrive on time and everything is going well. I meet the interviewer, who seems professional enough and we proceed to the interview room.
The interviewer opens up by pointing to my resume, saying he was impressed with the number of Nature articles I had. (Sorry for the humble brag, but it's relevant to the story).
He then asks me to derive the Kalman filter on the board for him. Now, in and of itself, this is not an insane ask. It's a tough question, but a fair one. My approach for a research interview would typically be to delve into material related to the resume, but here we are.
So I get up to the board and attempt to derive the Kalman filter. Shit! I draw a blank! I had learned this and understood it, etc., but I'm there and I'm flailing HARD.
Ultimately, I put the marker down and admit defeat. I tell him I am unable to derive the Kalman filter.
The interviewer says to me, and this is nearly a verbatim quote, as it's been burned into my skull, "I do not see how you could honestly come by first authorship on those Nature papers if you cannot derive a Kalman filter."
In my mind I was like "What????" I might have even been close to tears. I would like to tell you I had the presence of mind to stop the interview right there and storm out in righteous indignation.
But I didn't. I was shell-shocked. I just stood there stupidly.
He asked me another technical question or two, and I went through the motions on the board, but my heart was no longer in it.
Finally, I had the presence of mind to say, "I don't think this interview is working out. I'm leaving." And I left the interview and the office.
And that was basically that. Some rando accused me of academic fraud because I couldn't derive the Kalman filter in real time during a pop quiz in an interview.
My only regret is that I didn't get angry.
The End, thanks for reading. 😎
Meanwhile, my colleagues are now, after reading this story, like, “Daaaaaaamm, Sussillo can’t derive a Kalman filter!” 🤣😂😅
1/7 For the past decade, our team at Meta Reality Labs (previously CTRL-labs) has been dedicated to developing a neuromotor interface.
Our goal is to address the Human-Computer Interaction challenge of providing effortless, intuitive, and efficient input to computers.
2/7 We developed a wristband device that can be easily put on and removed to non-invasively sense muscle activations in the wrist and hand via surface electromyography (sEMG). The sEMG technology uses metal contacts on the skin to detect muscle activity, allowing us to transform intentional neuromotor commands into computer input.
3/7 We created generic wrist-based sEMG neural network decoding models trained on data from thousands of paid volunteers who participated in our study.
These models generalize across people, eliminating the need for per-person or per-session calibration, which has traditionally been a challenge for biosignal interfaces.
Below, we show how offline handwriting decoding performance improves as the number of participants in the training dataset increases.
For instance, when a model is trained on data from over 6000 users, it achieves an offline performance of 7% character error rate, which translates to about one error every 14 characters and is approaching error rates comparable with mobile typing.
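(An aside on the metric, since it matters for interpreting that number: character error rate is the edit distance between the decoded text and the reference, divided by the reference length, so 7% works out to roughly one error every 1/0.07 ≈ 14 characters. Below is a minimal Python sketch of that standard calculation; it's my own illustration, not Meta's evaluation code.)

```python
# Minimal sketch of character error rate (CER), the standard metric:
# CER = edit_distance(decoded, reference) / len(reference).
# This is an illustration, not Meta's internal evaluation code.

def edit_distance(hyp: str, ref: str) -> int:
    """Levenshtein distance: minimum insertions, deletions, and substitutions."""
    prev = list(range(len(ref) + 1))
    for i, h in enumerate(hyp, start=1):
        curr = [i]
        for j, r in enumerate(ref, start=1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (h != r)))    # substitution (or match)
        prev = curr
    return prev[-1]

def character_error_rate(hyp: str, ref: str) -> float:
    return edit_distance(hyp, ref) / max(len(ref), 1)

# One wrong character in a 14-character reference comes out to ~7% CER.
print(character_error_rate("neuromotor cnd", "neuromotor cmd"))  # ~0.0714
```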
A few comments on GPT-3, OpenAI's latest one-step-ahead language model, and how I think it relates to task-based modeling in systems neuroscience.
In systems neuroscience, we are interested in how the brain works at the systems and circuit levels. Over the last ten years, more and more neuroscientists have used task-based modeling to understand specific neural circuits' function.
One attempts to learn an equivalent artificial circuit, typically a deep net, that performs the same task as an animal in an experiment. After training the network, one bridges back to the brain by comparing the internals and behavior of the biological and artificial circuits.
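To make that recipe a bit more concrete, here is a minimal sketch of the "bridge back to the brain" step, using synthetic placeholder data rather than anything from a real experiment: fit a regularized linear regression from the trained network's hidden states to recorded firing rates and ask how much held-out variance it explains.

```python
import numpy as np

# Sketch of the "bridge back to the brain" step: relate a trained network's
# hidden states to recorded firing rates with a regularized linear regression.
# The arrays below are synthetic placeholders standing in for real model states
# and real recordings; this is an illustration, not code from any one study.

rng = np.random.default_rng(0)
T, n_model_units, n_neurons = 500, 64, 20

# Stand-ins: H would be the trained RNN's hidden states on the task,
# R the neural firing rates recorded at the same task timepoints.
H = rng.standard_normal((T, n_model_units))
true_map = 0.1 * rng.standard_normal((n_model_units, n_neurons))
R = H @ true_map + rng.standard_normal((T, n_neurons))

def ridge_fit(X, Y, lam=1.0):
    """Closed-form ridge regression weights mapping X onto Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

# Simple train/test split in time (a real analysis would cross-validate).
split = T // 2
W = ridge_fit(H[:split], R[:split])
pred = H[split:] @ W

ss_res = np.sum((R[split:] - pred) ** 2)
ss_tot = np.sum((R[split:] - R[split:].mean(axis=0)) ** 2)
print(f"Held-out variance explained (R^2): {1 - ss_res / ss_tot:.3f}")
```

In practice people often use fancier comparisons (CCA, representational similarity, fixed-point analyses), but a regression from model units to neurons is a common and simple starting point.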
Ever wonder what the heck an Echo State Network (ESN) is, or what the fuss is all about and how it relates to recurrent neural networks (RNNs) more generally? Follow along in this tweet storm. 🐦⚡️
aka day 93 in quarantine = tweeting about random stuff
Preliminaries - An ESN is an RNN with fixed, random weights in the recurrent and input matrices; only the readout matrix is trained. Those readout weights are fit via linear regression to make the outputs look like desired targets.
In contrast, an RNN used in deep learning applications is trained with back-propagation through time (BPTT), which updates all the parameters. BPTT allows an error at time t to be combined with activity at times earlier than t to modify the weights.
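To make the ESN recipe concrete, here is a toy NumPy sketch of the setup described above (my own illustration, not a canonical implementation): the input and recurrent weights stay fixed and random, the reservoir is simply run forward, and only the linear readout is fit, here with ridge regression.

```python
import numpy as np

# Toy Echo State Network: input and recurrent weights are random and fixed;
# only the linear readout is trained, here by ridge regression.
# This is an illustrative sketch, not a reference implementation.

rng = np.random.default_rng(0)
n_in, n_res, T = 1, 200, 1000

# Fixed random weights. Rescaling the recurrent matrix to spectral radius < 1
# is the usual heuristic for keeping the reservoir dynamics stable.
W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

# Toy task: output a copy of the input delayed by 5 steps.
u = rng.uniform(-1, 1, size=(T, n_in))
y_target = np.roll(u, 5, axis=0)

# Run the reservoir forward and collect its states; W and W_in never change.
x = np.zeros(n_res)
X = np.zeros((T, n_res))
for t in range(T):
    x = np.tanh(W @ x + W_in @ u[t])
    X[t] = x

# Train only the readout weights with ridge (regularized linear) regression.
lam = 1e-4
W_out = np.linalg.solve(X.T @ X + lam * np.eye(n_res), X.T @ y_target)

y_pred = X @ W_out
mse = np.mean((y_pred[10:] - y_target[10:]) ** 2)  # skip the initial transient
print(f"Training MSE on the 5-step delay task: {mse:.4f}")
```

Note the contrast with BPTT: nothing inside the loop that runs the reservoir is ever updated; all of the learning happens in the single linear solve for W_out at the end.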
My senior year in high school, my Uncle Elliott twisted the arm of a friend of his, a physicist at CUNY in uptown Manhattan, into letting me intern in his lab that summer, just before college. I'd stay with my Aunt Maria in Bay Ridge, Brooklyn, and commute the 80 or so minutes on the subway.
The day came for the interview, so I got on the subway and dutifully found my way to the right building on the campus at 140th Street. I knocked, and a very friendly Russian postdoc opened the door and introduced himself. I don't remember his name.
Most of you know me as a successful neuroscientist / deep learning researcher but I have a story that I want to share briefly.
I grew up in a group home, which is basically an orphanage.
[Quoted tweet from @MaddowBlog]
Currently, the Trump administration has a policy of separating children from their parents in a purposeful effort to deter migration at our southern border. Right now there's an old Walmart full of kids, and "tent cities" filled with children are popping up.
Upwards of 2000 children are now being torn from their parents, processed, & housed in government-sanctioned group living structures. This amounts to a policy of emotional abuse of children for the purposes of deterring immigration.