alright, so I learned the most used and important terms in the AI space (from @gkcs_)
here are some of the most common ones:
- llm: predicts the next token from the input (yes, it first splits our input into tokens) — quick sketch after this list
- quantization: shrinking a model by storing its weights at lower precision (e.g. fp32 → int8), trading a little accuracy for less memory and faster inference — sketch below
- transformers: the architecture at the core of an llm that actually does the next-token prediction; each block pairs an attention layer with a feed-forward network (ffnn), and many of these blocks are stacked on top of each other (think of it as the engine of the car) — sketch below
- fine tuning: further training a pretrained llm on our own data so it gets good at our specific use cases — sketch below
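
here is a minimal sketch of the "predict the next token" idea — assuming the Hugging Face `transformers` library and the small `gpt2` checkpoint, purely as an illustration, not any specific model from the video:

```python
# tokenize some text, run it through a small LLM, and read off its guess
# for the next token
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

text = "The capital of France is"
ids = tokenizer(text, return_tensors="pt").input_ids   # text -> token ids
with torch.no_grad():
    logits = model(ids).logits                          # a score for every token in the vocab

next_id = logits[0, -1].argmax()                        # most likely next token
print(tokenizer.decode(next_id.item()))                 # the model's guess for what comes next
```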
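
and a toy version of quantization, using plain numpy on a fake weight matrix — real tooling does the same idea with much more care:

```python
# squeeze fp32 weights into int8: 1 byte per weight instead of 4
import numpy as np

weights = np.random.randn(4, 4).astype(np.float32)   # pretend these are model weights

scale = np.abs(weights).max() / 127                   # map the fp32 range onto int8
q = np.round(weights / scale).astype(np.int8)         # what actually gets stored
deq = q.astype(np.float32) * scale                    # what the model "sees" at inference time

print("max error:", np.abs(weights - deq).max())      # small precision loss, big memory win
```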
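
for transformers, here's a rough sketch of one block (attention + ffnn with residual connections), written with PyTorch — real llms just stack dozens of these:

```python
import torch
import torch.nn as nn

class TransformerBlock(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, x):
        attn_out, _ = self.attn(x, x, x)      # attention: every token looks at every other token
        x = self.norm1(x + attn_out)
        x = self.norm2(x + self.ffn(x))       # ffnn: each token is processed on its own
        return x

x = torch.randn(1, 10, 64)                    # 1 sequence, 10 tokens, 64-dim embeddings
print(TransformerBlock()(x).shape)            # torch.Size([1, 10, 64])
```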
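
and fine tuning is basically "keep training the pretrained model on your own examples" — here's a bare-bones sketch on the same `gpt2` checkpoint, with made-up support-bot data just for illustration:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

examples = [                                       # hypothetical use-case data
    "Q: how do I reset my password? A: go to settings > security > reset password",
    "Q: how do I cancel my order? A: open orders and tap cancel",
]

model.train()
for text in examples:
    ids = tokenizer(text, return_tensors="pt").input_ids
    loss = model(ids, labels=ids).loss             # same next-token objective, now on our data
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```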