denoising of discrete inputs has fascinated me ever since i read jmlr.org/papers/volume1… by Vincent & @hugo_larochelle et al., and yoshua has always motivated me to look into denoising for sequence modeling ever since 2013.
just watched The Social Dilemma netflix.com/title/81254224. to anyone who's been thinking about and following the various stories about social media and other "attention-grabbing" services, this documentary won't have much new stuff, ...
although it works as a great reminder that these services, which effectively surveil us 24/7 and profit by selling who we are, are embedded in every aspect of our lives. ...
it's an interesting watch after reading Steven Levy's Facebook: The Inside Story (amazon.com/dp/B07V8CL7RH/).
Two ironies:
1. i'm obviously writing about this documentary immediately after watching it on "Facebook" and "Twitter"
another @zoom_us tip/bug i learned friday: do not "Enable join before host" if you are not going to join right at the start. whichever random participant joins first becomes the host and stays so even when alternative hosts join the meeting. 😱
according to @zoom_us, one of the alternative hosts can manually claim the host role from the random host, but this should be automatic, not manual. when an alternative host (designated in the meeting settings) shows up, they should become the host rather than a random participant.
i'm quite embarrassed and wanted to sweep it under the rug, but let me share what happened here, largely for my own record/reminder and in the small hope that it might raise awareness.
1st & foremost, it was totally my oversight to miss that the keynote speaker lineup was composed entirely of male speakers, including myself, which would've reinforced the lack of diversity and potentially sent the wrong signal to many participants and others in our field.
i've been calling out similar cases myself, both publicly & privately (see e.g. slide 80 in drive.google.com/drive/u/0/fold…), but i've apparently fallen into the trap of seeing others' faults while failing to see my own. how embarrassing and eye-opening!
it all started with @_willfalcon casually reading the papers on DIM and CPC and talking about how he could come up with a better contrastive learning algo 1.5+ years ago. instead of adding yet another novel, sota, simple, awesome, principled contrastive learning algo, ..
@_willfalcon sat down, painstakingly implemented an effective & efficient framework for ML experimentation (which ended up being @PyTorchLightnin), talked with the authors of an ever-growing set of novel, sota, simple, awesome, principled contrastive learning algos, ..
reproduced them as best he could in a unified software & conceptual framework and experimented with them patiently. along the way, @_willfalcon and i have learned a lot about these recent algos, and @_willfalcon is releasing all his implementations at github.com/PyTorchLightni….
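for anyone who hasn't read the DIM/CPC papers: most of these contrastive algorithms optimize some variant of an InfoNCE-style objective, where each example is pulled toward its matching "positive" view and pushed away from all the other examples in the batch. here's a minimal numpy sketch of that idea (function name, shapes, and the default temperature are my own illustration, not the API of any of the implementations above):

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.1):
    """InfoNCE-style contrastive loss over a batch of embedding pairs.

    z1, z2: (N, D) arrays; row i of z2 is the positive view for row i
    of z1, and every other row in the batch serves as a negative.
    """
    # L2-normalize so the dot product is cosine similarity
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    # (N, N) similarity matrix, sharpened by the temperature
    logits = z1 @ z2.T / temperature
    # cross-entropy where the correct "class" for row i is column i:
    # subtract the row max first for numerical stability
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # positives sit on the diagonal; average their negative log-likelihood
    return -np.mean(np.diag(log_probs))
```

the loss is small when each embedding is far more similar to its own positive than to the rest of the batch, which is the shared intuition behind the DIM/CPC family, however the individual papers differ in how the two views and the encoder are built.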