It still confuses me that, in domains where humans can often generalize from just 2 or 3 examples, a software system is considered successful when it does so from millions of examples
Never mind the robustness issues
It's true that, you know, reading the entire internet is something that computers can do better than us. But why should they have to? I feel like the metric for success is just wild
Even if training ends up being a relatively small part of the cost of a system, I think it's the part the community could in many cases shrink down to almost nothing compared to what it currently is
And there are so many domains where we don't have the luxury of millions or even thousands of examples at our disposal to begin with
I recognize this is oversimplifying a whole field, and that, as someone whose work is what it is, I am extremely biased here. The less "hot take" version of this is that few-shot learning problems aren't emphasized enough
The comparison between AI and humans is fraught and has in the past killed the entire field for a while. I just think there are things humans are doing that are really impressive and should serve as inspiration for how we solve a large class of problems
Also, the question of what counts as success is a really important one; it steers entire research communities. It's important to reconsider it once in a while IMO, in every field
This inevitably leads to my neurosymbolic rant, which is one of 5 Talia rants, along with Eltana bagels tasting like cardboard
I hope this is more of an interesting and fun conversation for people than a heated argument. I'm enjoying all of the work being discussed, but I know I said this in a way that was inflammatory, mostly because I was sleepy
Another clarification I missed: I'm not saying "this is bad and we shouldn't do it," I'm saying "we shouldn't be satisfied because I know we can do better in the near future, and I think there are useful techniques to draw on for that and we should care more about them"
I worry a bit that, the way machine learning is heading, without any change it's going to hit a wall at some point due to the lack of true generalization and the reliance on massive amounts of data. Walls like that can cause mass disillusionment with an entire field
So I just want more people thinking a bit further into the future
• • •
On the "we" versus "I" debate about my thesis, I ended up going with this:
- "I" for things I did,
- "we" for mathematical handholding, and
- "Nate" and "RanDair" and so on for things my coauthors did.
Nonstandard I guess, but I deliberately designed projects to decouple work.
So it is actually very easy to point to the parts that my coauthors did. And for the things that we really did design together, I will add a note about this in a section at the end of the introduction, along with complete authorship statements.
I'm going to use the knowledge package that @16kbps recommended to introduce those authors and link back to the full authorship statements when I mention their names in later chapters. Credit is extremely important!!!
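For anyone curious, here's roughly what I have in mind, as a minimal sketch (this is just my understanding of the knowledge package's \knowledge / \intro / \kl commands; the chapter text here is an illustrative placeholder, not my actual thesis setup):

\usepackage{hyperref}
\usepackage{knowledge}  % Thomas Colcombet's knowledge package

% Declare each coauthor's name as a "knowledge" (a notion the
% package can track and hyperlink).
\knowledge{notion}
 | Nate
\knowledge{notion}
 | RanDair

% At the end of the introduction, next to the full authorship
% statements, \intro marks the canonical introduction point.
This thesis builds on joint work with \intro{Nate} and \intro{RanDair}.

% In later chapters, \kl typesets the name as a hyperlink back to
% that introduction point.
The work in this chapter extends joint work with \kl{Nate}.

The nice part is that every later mention of a coauthor's name then links back to the authorship statement, so the credit is always one click away.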
Oh man so I bought three books on the power dynamics of philanthropy, but hilariously, I accidentally bought "Winner Takes All" (a trashy high school romance novel) rather than "Winners Take All" (a scalding critique of the upper class and capitalism run wild). Oops
Time to find one of those little free libraries to drop the trashy high school romance novel
Definitely not a critique of the upper class hahahahaha
Writing my thesis, I'm just baffled by how well I subsumed my own work. The PUMPKIN Pi paper (arxiv.org/abs/2010.00774, PLDI 2021) completely subsumes DEVOID (dependenttyp.es/pdf/ornpaper.p…, ITP 2019). DEVOID just ends up being an example in my thesis. I'd be mad if anyone else did this.
Perhaps even more amusingly, in December 2019, I had the idea for PUMPKIN Pi, but also thought it was something I'd never be able to do without help from external experts. I didn't even set out to do it deliberately in the end, so I'm surprised it happened at all.
For real though, happy lesbian visibility day. A while back I did a blog interview series about LGBT computer science researchers. Eventually my life got too hard to continue it. But here's an interview with Deb Agarwal from back in the day.
If anyone wants to take over this project and start it up again, I'd be happy to pass along the knowledge I gained in the process, and give you access and so on.
Back then I really felt that there was a Don't Ask, Don't Tell culture in CS research. When I got the NSF Fellowship, suddenly people knew I existed, and I immediately decided to use this to try to fight that culture. I don't know how much it helped, but I really hope it did.