It works like this: I name a model for a quantitative response Y, and then you guess whether or not IJALM (It's Just A Linear Model).
1/
IJALM but you’re not fitting it using least squares — it’s penalized/regularized least squares instead.
3/
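A minimal NumPy sketch of the point above, using made-up data and a hypothetical penalty value: ridge regression swaps the least-squares objective for a penalized one, but the fitted model is still linear in the features.

```python
import numpy as np

# Simulated data (hypothetical, for illustration only)
rng = np.random.default_rng(0)
n, p = 50, 3
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=n)

lam = 1.0  # penalty strength (an assumed, illustrative choice)

# Ridge: minimize ||y - Xb||^2 + lam * ||b||^2.
# Closed form: b = (X'X + lam*I)^{-1} X'y -- a linear model, just not
# the least-squares fit.
b_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Ordinary least squares, for comparison
b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
```

Either way, predictions are `X @ b`: a linear function of the features. The penalty only shrinks the coefficients toward zero.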
IJALM using a subset of the predictors. We fit the linear model using least squares on a subset of the predictors, though of course this isn’t the same as if we had performed least squares on ALL the predictors.
4/
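Sketching that in NumPy with simulated data: best-subset selection of size k just runs least squares on every k-predictor subset and keeps the best one. Each candidate fit is an ordinary linear model.

```python
import numpy as np
from itertools import combinations

# Simulated data (hypothetical): only predictors 0 and 2 matter
rng = np.random.default_rng(1)
n, p = 60, 4
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 1.0 * X[:, 2] + rng.normal(scale=0.1, size=n)

# Best subset of size k: least squares on each k-column subset,
# keeping the subset with the smallest residual sum of squares.
k = 2
best_rss, best_subset = np.inf, None
for subset in combinations(range(p), k):
    b, *_ = np.linalg.lstsq(X[:, subset], y, rcond=None)
    rss = np.sum((y - X[:, subset] @ b) ** 2)
    if rss < best_rss:
        best_rss, best_subset = rss, subset
```

The winning fit is least squares on fewer columns of X, not least squares on all of them, but it is still just a linear model.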
Well, no. The model is linear in the PCs, which are linear in the features, so, IJALM. Fit w/least squares.
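A quick NumPy sketch of why PCR is IJALM, on simulated data with an assumed number of components: the PC scores are linear combinations of the features, so the least-squares fit on the scores maps back to coefficients on the original features.

```python
import numpy as np

# Simulated data (hypothetical, for illustration only)
rng = np.random.default_rng(2)
n, p = 80, 3
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, 0.5, -1.0]) + rng.normal(scale=0.1, size=n)

# Principal components regression: project X onto its top PCs,
# then fit y on the scores by least squares.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
m = 2                       # number of components (an assumed choice)
Z = Xc @ Vt[:m].T           # scores: linear in the features
theta, *_ = np.linalg.lstsq(Z, y - y.mean(), rcond=None)

# Fold the PC loadings back in: coefficients on the ORIGINAL features
beta = Vt[:m].T @ theta
```

Since `Z = Xc @ Vt[:m].T`, the fitted values `Z @ theta` equal `Xc @ beta` exactly: a linear model in the original features.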
Partial least squares? IJALM, for the same reason. Fit using least squares.
10/
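Same story for PLS, sketched here as a one-component fit on simulated data (the direction is computed from the covariance of each feature with y, one simple way to get the first PLS direction): the score is a linear combination of the features, fit to y by least squares.

```python
import numpy as np

# Simulated data (hypothetical, for illustration only)
rng = np.random.default_rng(3)
n, p = 80, 3
X = rng.normal(size=(n, p))
y = X @ np.array([2.0, 0.0, -1.0]) + rng.normal(scale=0.1, size=n)

Xc = X - X.mean(axis=0)
yc = y - y.mean()

# First PLS direction: weight each feature by its covariance with y
w = Xc.T @ yc
w /= np.linalg.norm(w)

z = Xc @ w                  # score: still linear in the features
theta = (z @ yc) / (z @ z)  # least-squares fit of yc on z

# Coefficients back on the ORIGINAL features
beta = theta * w
```

The fitted values `theta * z` equal `Xc @ beta` exactly, so the supervised choice of direction doesn't change what the model is: linear.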
No, you didn’t fit a “super complicated non-linear model”. I bet you all my winnings from this round of IJALM that it was actually JALM.
Perhaps not linear in the original features, and perhaps fit using a variant of least squares. But, IJALM nonetheless.
12/
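One common version of the trick above, sketched in NumPy on simulated data: a cubic polynomial fit. The curve is non-linear in x, but the model is linear in the basis (1, x, x², x³), so it's fit by plain least squares. IJALM.

```python
import numpy as np

# Simulated data (hypothetical): a smooth non-linear trend plus noise
rng = np.random.default_rng(4)
x = rng.uniform(-2, 2, size=100)
y = np.sin(x) + rng.normal(scale=0.1, size=100)

# Basis expansion: columns 1, x, x^2, x^3. Non-linear in x,
# but linear in the basis, so ordinary least squares fits it.
B = np.vander(x, 4, increasing=True)
b, *_ = np.linalg.lstsq(B, y, rcond=None)
yhat = B @ b
```

The "complicated" curve `yhat` is just a linear model fit by least squares, with the basis columns playing the role of features.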
They might not be what your data thinks it wants, but they’re what your data needs, and they’re almost certainly what your data is going to get.
13/
Stay tuned for my next installment: IT’S JUST LOGISTIC REGRESSION (IJLR), which is literally this exact same thread, but now Y is a binary response.
14/14