What does a Real Time Search or Recommender System Design look like?
The diagram was inspired by the amazing work of @eugeneyan.
Recommender and Search Systems are among the biggest money makers for most companies when it comes to Machine Learning.
Both systems are inherently similar. Their goal is to return a list of recommended items given a certain context - it could be a search query on an e-commerce website, or a list of recommended songs given that you are currently listening to a certain song on Spotify.
The procedure in real-world setups usually consists of two stages - Candidate Retrieval and Candidate Ranking - each with an offline and an online part:
➡️ We train an embedding model that will be used to transform inventory items into vector representations.
➡️ Deploy the model as a service to be later used for real-time embedding.
➡️ We apply the previously trained embedding model to all owned inventory items.
➡️ Build an Approximate Nearest Neighbours (ANN) search index from the embeddings.
➡️ Define additional filtering rules for retrieved candidates (e.g. don't allow heavy metal songs for children under 7 years old).
➡️ Once an item comes in, we embed it into a vector representation using the model from the offline part.
➡️ Use the Approximate Nearest Neighbours index to find the n most similar vectors.
➡️ Apply the additional filtering rules defined in the offline part.
➡️ We build a Feature Store for item-level features and expose it for real-time feature retrieval.
➡️ Train a Ranking Model using the features.
➡️ Deploy the model as a service to be later used for real-time scoring.
➡️ Define additional business rules that overlay extra scores (e.g. downscore certain brands).
➡️ Chaining from the previous online step, we enrich the retrieved candidates with features from the Feature Store.
➡️ Apply the Ranking Model to the candidates to receive item scores.
➡️ Order by score and apply the additional business rules on top.
✅ By chaining the online parts of Candidate Retrieval and Candidate Ranking end-to-end, we get the recommendation list that is shown to the user.
✅ Results and the user actions against them are piped back to the Feature Store to improve future Ranking Models.
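The two chained online parts can be sketched in a few lines of Python. This is a toy, in-memory illustration only: the embeddings, the brute-force cosine search standing in for a real ANN index, the feature store contents, and the popularity-based "ranking model" are all made-up stand-ins, not a production API.

```python
import math

INVENTORY = {                      # item_id -> precomputed embedding (offline part)
    "song_a": [0.9, 0.1],
    "song_b": [0.8, 0.2],
    "song_c": [0.1, 0.9],
}

FEATURE_STORE = {                  # item-level features exposed for real-time retrieval
    "song_a": {"genre": "pop", "brand": "label_x", "popularity": 0.9},
    "song_b": {"genre": "metal", "brand": "label_y", "popularity": 0.7},
    "song_c": {"genre": "jazz", "brand": "label_y", "popularity": 0.4},
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def retrieve(query_vec, n=2):
    """Online retrieval: brute-force similarity search standing in for an ANN index."""
    ranked = sorted(INVENTORY, key=lambda i: cosine(query_vec, INVENTORY[i]), reverse=True)
    return ranked[:n]

def passes_filters(item, user_age):
    # Filtering rule from the offline part: no heavy metal for children under 7.
    return not (FEATURE_STORE[item]["genre"] == "metal" and user_age < 7)

def rank(candidates, downscored_brands=("label_y",)):
    """Online ranking: enrich with features, score, apply business rules, sort."""
    scores = {}
    for item in candidates:
        feats = FEATURE_STORE[item]       # enrichment from the Feature Store
        score = feats["popularity"]       # stand-in for a trained Ranking Model
        if feats["brand"] in downscored_brands:
            score *= 0.5                  # business rule: downscore certain brands
        scores[item] = score
    return sorted(scores, key=scores.get, reverse=True)

query = [1.0, 0.0]                        # embedding of the incoming item/query
candidates = [i for i in retrieve(query, n=3) if passes_filters(i, user_age=6)]
print(rank(candidates))                   # song_b is filtered out, song_c is downscored
```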
Apache Spark is an extremely popular distributed processing framework that utilizes in-memory processing to speed up task execution. Its higher-level libraries are built on top of the Spark Core layer.
As a warm-up exercise for later deeper dives and tips, today we focus on some architecture basics.
In its simplest form, a Data Contract is an agreement between Data Producers and Data Consumers on what the data being produced should look like, what SLAs it should meet, and what its semantics are.
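One minimal way to make that agreement executable is a contract check the consumer (or a pipeline gate) runs on incoming records. The contract below is purely illustrative - the fields, the non-negative-amount rule, and the one-hour freshness SLA are invented for the sketch:

```python
from datetime import datetime, timezone

# Illustrative data contract: schema, semantic rules, and a freshness SLA
# that the Data Producer commits to.
CONTRACT = {
    "schema": {"user_id": str, "amount": float, "created_at": str},
    "semantics": {"amount": lambda v: v >= 0},   # e.g. amounts are never negative
    "max_delay_seconds": 3600,                   # freshness SLA
}

def validate(record, contract=CONTRACT):
    """Return a list of contract violations for one record (empty list = valid)."""
    violations = []
    for field, expected_type in contract["schema"].items():
        if field not in record:
            violations.append(f"missing field: {field}")
        elif not isinstance(record[field], expected_type):
            violations.append(f"wrong type for {field}")
    for field, rule in contract["semantics"].items():
        if isinstance(record.get(field), (int, float)) and not rule(record[field]):
            violations.append(f"semantic rule broken for {field}")
    if isinstance(record.get("created_at"), str):
        age = (datetime.now(timezone.utc)
               - datetime.fromisoformat(record["created_at"])).total_seconds()
        if age > contract["max_delay_seconds"]:
            violations.append("freshness SLA broken")
    return violations

fresh = datetime.now(timezone.utc).isoformat()
print(validate({"user_id": "u1", "amount": 9.99, "created_at": fresh}))   # []
print(validate({"user_id": 1, "amount": -5.0, "created_at": fresh}))      # two violations
```

In practice the contract itself would live in a shared repository (often as YAML or a schema registry entry) rather than inline in consumer code, so producers and consumers version it together.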
Here is a short refresher on the ACID Properties of a DBMS (Database Management System).
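Atomicity and Consistency are easy to see in action with Python's built-in sqlite3 module: a money transfer either applies both legs or neither, and a CHECK constraint keeps the data consistent. The accounts table and the transfer helper are illustrative, and other DBMSs expose the same guarantees through their own transaction APIs:

```python
import sqlite3

# In-memory database with a consistency rule: balances can never go negative.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER CHECK (balance >= 0))")
con.executemany("INSERT INTO accounts VALUES (?, ?)", [("alice", 100), ("bob", 0)])
con.commit()

def transfer(con, src, dst, amount):
    try:
        with con:  # one transaction: commits on success, rolls back on any error
            con.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?", (amount, src))
            con.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?", (amount, dst))
    except sqlite3.IntegrityError:
        pass  # CHECK constraint violated -> the whole transaction is rolled back

transfer(con, "alice", "bob", 150)  # would make alice negative: nothing changes
transfer(con, "alice", "bob", 40)   # succeeds atomically
print(dict(con.execute("SELECT name, balance FROM accounts")))  # {'alice': 60, 'bob': 40}
```

Isolation and Durability are about concurrent transactions not seeing each other's partial state and committed data surviving crashes - harder to demonstrate in a toy script, but provided by the same transaction machinery.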
No Excuses Data Engineering Portfolio Template - next week I will enrich it with the missing Machine Learning and MLOps parts!
Lambda and Kappa are both Data Architectures proposed to solve the movement of large amounts of data for reliable Online access.
The most popular architecture has been and continues to be Lambda. However, with Stream Processing becoming more accessible to organizations of every size, you will be hearing a lot more about Kappa in the near future. Let's see how they differ.
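The core difference can be shown with a toy word-count over an event log. Lambda maintains two code paths - a batch view over history plus a speed (streaming) view over recent events, merged at query time - while Kappa keeps a single replayable log and one streaming code path, where reprocessing just means replaying the log. Everything below is a deliberately simplified stand-in:

```python
log = [("click", 1), ("click", 1), ("view", 1), ("click", 1)]  # immutable event log

# --- Lambda: batch layer over historical data + speed layer over recent events
def batch_view(events):
    view = {}
    for key, n in events:
        view[key] = view.get(key, 0) + n
    return view

batch = batch_view(log[:2])   # recomputed periodically over all history
speed = batch_view(log[2:])   # incremental view over events since the last batch run
lambda_view = {k: batch.get(k, 0) + speed.get(k, 0) for k in {*batch, *speed}}

# --- Kappa: one streaming job; reprocessing = replaying the log from the start
kappa_view = {}
for key, n in log:            # the same code path serves live and replayed events
    kappa_view[key] = kappa_view.get(key, 0) + n

print(lambda_view == kappa_view)  # True - both converge to the same answer
```

The trade-off hinted at here is maintenance: Lambda forces you to keep the batch and speed implementations logically in sync, while Kappa needs a log store capable of long retention and fast replay.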
Let's remind ourselves of what a Request-Response Model Deployment looks like - The MLOps Way.
You will find this type of model deployment to be the most popular when it comes to Online Machine Learning Systems.
Let's zoom in:
1: Version Control: the Machine Learning Training Pipeline is defined in code; once merged to the main branch, it is built and triggered.
2: Feature Preprocessing: features are retrieved from the Feature Store, validated, and passed to the next stage. Any feature-related metadata that is tightly coupled to the model being trained is saved to the Experiment Tracking System.
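On the serving side, the request-response cycle that the trained model eventually powers is simple: a request comes in, features are parsed and validated, the model scores them, and a response goes back. A framework-agnostic sketch, where the linear "model" and the feature names are invented stand-ins for whatever artifact the Training Pipeline produced:

```python
import json

MODEL_WEIGHTS = {"age": 0.02, "tenure_days": 0.001}   # stand-in for a trained model artifact

def predict(features):
    # Tiny linear model standing in for the real scoring function.
    return sum(MODEL_WEIGHTS[name] * value for name, value in features.items())

def handle_request(body: str) -> str:
    """One request-response cycle: JSON in, validated features, JSON score out."""
    request = json.loads(body)
    features = {name: float(request[name]) for name in MODEL_WEIGHTS}  # basic validation
    return json.dumps({"score": round(predict(features), 4)})

print(handle_request('{"age": 30, "tenure_days": 400}'))  # {"score": 1.0}
```

In a real deployment this handler would sit behind an HTTP endpoint (Flask, FastAPI, a cloud inference service, etc.), and each request/response pair would also be logged for monitoring and future retraining.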