Next: Tim Vieira on learning to prune #acl2017nlp
we're going to learn to prune a coarse-to-fine DP by optimizing a linear combination of accuracy & runtime
in constituency parsing, runtime is a function of the number of chart cells we explore
& not all chart cells are equal
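(my gloss with my own notation; λ, m_ij, c_ij are not necessarily the speaker's: the reward trades parse accuracy against a runtime term counting the chart cells the pruning mask keeps, with per-cell weights since cells aren't all equal)

```latex
% hedged sketch of the objective; notation is mine, not the talk's
% m_{ij} \in \{0,1\}: keep (1) or prune (0) chart cell (i,j)
% c_{ij}: per-cell cost weight, since not all cells are equally expensive
r(m) \;=\; \mathrm{accuracy}(m) \;-\; \lambda \sum_{i < j} c_{ij}\, m_{ij}
```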
more interestingly, the badness of pruning decisions isn't independent: lots of complicated interactions between them
we'll explore local perturbations to pruning decisions to learn which ones are actually worth it: this is LOLS (locally optimal learning to search)
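(a toy, self-contained sketch of what I take "local perturbations" to mean; the reward function and every name here are my own illustration, not the paper's code: flip one pruning decision at a time, re-score, and use the reward difference as a cost-sensitive label for the pruning classifier)

```python
# Toy sketch of the local-perturbation idea (LOLS-style);
# the reward function and all names are illustrative, not the paper's code.
import random

def reward(mask, lam=0.1):
    """Stand-in for accuracy - lam * runtime: 'accuracy' here just favors
    keeping narrow spans, 'runtime' counts unpruned chart cells."""
    kept = [(i, j) for (i, j), keep in mask.items() if keep]
    accuracy = sum(1.0 / (j - i) for i, j in kept)
    runtime = len(kept)
    return accuracy - lam * runtime

def all_cells(n):
    """All spans (i, j) of a length-n sentence's chart."""
    return [(i, j) for i in range(n) for j in range(i + 1, n + 1)]

def local_perturbation_gains(mask, n):
    """Flip one pruning decision at a time, re-evaluate the reward, and
    record the gain from flipping. A positive gain means the current
    decision was bad; these values become cost-sensitive training
    targets for the pruning classifier."""
    base = reward(mask)
    gains = {}
    for cell in all_cells(n):
        flipped = dict(mask)
        flipped[cell] = not flipped[cell]
        gains[cell] = reward(flipped) - base
    return gains

if __name__ == "__main__":
    n = 5
    random.seed(0)
    mask = {cell: random.random() < 0.5 for cell in all_cells(n)}
    for cell, gain in sorted(local_perturbation_gains(mask, n).items()):
        print(cell, "keep" if mask[cell] else "prune", f"flip gain: {gain:+.3f}")
```

(with a real parser, each flip would need its own rollout, i.e. O(n^2) flips times an O(n^3) parse, which is the naive O(n^5) the next tweet is about)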
(now I know why @haldaume3 is excited)
but naively that's too slow; need to be clever. DP and various other tricks get the gradient in O(n^3) rather than O(n^5)
