In its simplest form, a Data Contract is an agreement between Data Producers and Data Consumers on what the Data being produced should look like, what SLAs it should meet, and what its semantics are.
👉 Schema Version - Data Sources evolve, so Producers have to ensure that schema changes can be detected and reacted to. Consumers should still be able to process Data written with the old Schema.
👉 SLA metadata - Quality: is it meant for Production use? How late can the data arrive? How many missing values could be expected for certain fields in a given time period?
👉 Semantics - what entity a given Data Point represents. Semantics, like the schema, can evolve over time.
👉 Lineage - Data Owners, Intended Consumers.
👉 …
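To make the pieces above concrete, here is one way such an agreement could be captured in code - a minimal, hypothetical Python sketch; the field names are illustrative, not any standard:

```python
from dataclasses import dataclass

# Hypothetical, minimal representation of a Data Contract --
# every field name below is illustrative, not a standard.
@dataclass
class DataContract:
    schema_version: str             # e.g. "1.2.0"; bumped on schema evolution
    schema: dict[str, str]          # field name -> type
    sla: dict[str, str]             # e.g. max lateness, completeness thresholds
    semantics: dict[str, str]       # field name -> business meaning
    owners: list[str]               # Data Producers accountable for the contract
    intended_consumers: list[str]   # lineage: who is expected to read this Data

contract = DataContract(
    schema_version="1.0.0",
    schema={"user_id": "string", "event_ts": "timestamp", "amount": "double"},
    sla={"max_lateness": "2h", "max_null_ratio_amount": "0.01"},
    semantics={"amount": "Order value in EUR after discounts"},
    owners=["checkout-team"],
    intended_consumers=["analytics", "fraud-detection"],
)
```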
Data Contracts exist to:

➡️ Ensure Quality of Data in the Downstream Systems.
➡️ Protect Data Processing Pipelines from unexpected outages.
➡️ Enforce Ownership of produced Data closer to where it was generated.
➡️ Improve scalability of your Data Systems.
➡️ Reduce the intermediate Data Handover Layer.
➡️ …
Example implementation of Data Contract Enforcement:

1️⃣ Schema changes are implemented in a git repository; once approved, they are pushed to the Applications generating the Data and to a central Schema Registry (see the first sketch after this list).
2️⃣ Applications push generated Data to Kafka Topics, with separate Raw Data Topics for CDC streams and for Direct emission.
3️⃣ Flink Application(s) consume Data from the Raw Data streams and validate it against schemas in the Schema Registry (see the second sketch after this list).
4️⃣ Data that does not meet the contract is pushed to a Dead Letter Topic.
5️⃣ Data that meets the contract is pushed to a Validated Data Topic.
6️⃣ Applications that need Real Time Data consume it directly from the Validated Data Topic or its derivatives.
7️⃣ Data from the Validated Data Topic is pushed to object storage for additional Validation.
8️⃣ On a schedule, Data in the Object Storage is validated against additional SLAs and pushed to the Data Warehouse to be Transformed and Modeled for Analytical purposes.
9️⃣ Consumers and Producers are alerted to any SLA breaches.
🔟 Data that was Invalidated in Real Time is consumed by Flink Applications that alert on invalid schemas. There could also be a recovery Flink Application with logic for fixing the Invalidated Data.
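Here is a minimal sketch of step 1️⃣, assuming a Confluent-compatible Schema Registry and the confluent-kafka Python client; the registry URL, subject name, and schema are placeholders:

```python
# Sketch of step 1: a CI job pushes an approved schema from the git
# repository to a central Schema Registry.
from confluent_kafka.schema_registry import SchemaRegistryClient, Schema

AVRO_SCHEMA = """
{
  "type": "record",
  "name": "OrderEvent",
  "fields": [
    {"name": "user_id", "type": "string"},
    {"name": "event_ts", "type": "long"},
    {"name": "amount", "type": "double"}
  ]
}
"""

client = SchemaRegistryClient({"url": "http://schema-registry:8081"})
schema_id = client.register_schema("orders-value", Schema(AVRO_SCHEMA, "AVRO"))
print(f"Registered schema id: {schema_id}")
```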
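And a simplified stand-in for steps 3️⃣-5️⃣: the design above uses Flink, but the routing logic is easier to see in a plain Python consumer. Topic names and the contract check are illustrative assumptions:

```python
# Steps 3-5, simplified: consume raw Data, validate it against the
# contract, and route it to the Validated Data Topic or the Dead Letter
# Topic. In production this would be a Flink job, not a Python loop.
import json
from confluent_kafka import Consumer, Producer

consumer = Consumer({
    "bootstrap.servers": "kafka:9092",
    "group.id": "contract-validator",
    "auto.offset.reset": "earliest",
})
producer = Producer({"bootstrap.servers": "kafka:9092"})
consumer.subscribe(["orders-raw"])

# Toy stand-in for real schema validation: required fields and types
REQUIRED = {"user_id": str, "event_ts": int, "amount": (int, float)}

def meets_contract(payload: bytes) -> bool:
    try:
        record = json.loads(payload)
        return all(isinstance(record.get(f), t) for f, t in REQUIRED.items())
    except (ValueError, TypeError):
        return False

while True:
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    topic = "orders-validated" if meets_contract(msg.value()) else "orders-dead-letter"
    producer.produce(topic, msg.value())
    producer.poll(0)  # serve delivery callbacks
```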
𝗔𝗽𝗮𝗰𝗵𝗲 𝗦𝗽𝗮𝗿𝗸 is an extremely popular distributed processing framework that uses in-memory processing to speed up task execution. Its higher-level libraries (Spark SQL, MLlib, Structured Streaming, GraphX) are built on top of the Spark Core layer.

As a warm-up exercise for later deeper dives and tips, today we focus on some architecture basics.
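As a tiny taste of that in-memory processing, here is a minimal PySpark sketch: caching a dataset keeps it in executor memory, so later actions reuse it instead of recomputing it.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("warm-up").getOrCreate()

df = spark.range(10_000_000).selectExpr("id", "id % 10 AS bucket")
df.cache()  # persist in memory across actions

print(df.count())                            # first action: computes and caches
print(df.groupBy("bucket").count().count())  # reuses the cached data
spark.stop()
```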
What does a 𝗥𝗲𝗮𝗹 𝗧𝗶𝗺𝗲 𝗦𝗲𝗮𝗿𝗰𝗵 𝗼𝗿 𝗥𝗲𝗰𝗼𝗺𝗺𝗲𝗻𝗱𝗲𝗿 𝗦𝘆𝘀𝘁𝗲𝗺 𝗗𝗲𝘀𝗶𝗴𝗻 look like?
The graph was inspired by the amazing work of @eugeneyan
Recommender and Search Systems are among the biggest money makers for most companies when it comes to Machine Learning.

Both Systems are inherently similar: their goal is to return a list of recommended items given a certain context. The context could be a search query on an e-commerce website, or the song you are currently listening to on Spotify.
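A minimal sketch of that shared core - scoring candidate items against a context embedding and returning the top-k - using NumPy with random stand-in embeddings:

```python
import numpy as np

rng = np.random.default_rng(42)
item_embeddings = rng.normal(size=(1000, 64))  # 1000 candidate items
context = rng.normal(size=64)                  # query / session context

# Cosine similarity between the context and every candidate item
scores = item_embeddings @ context
scores /= np.linalg.norm(item_embeddings, axis=1) * np.linalg.norm(context)

top_k = np.argsort(scores)[::-1][:10]  # indices of the 10 best items
print(top_k)
```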
Here is a short refresher on 𝗔𝗖𝗜𝗗 𝗣𝗿𝗼𝗽𝗲𝗿𝘁𝗶𝗲𝘀 𝗼𝗳 𝗗𝗕𝗠𝗦 (𝗗𝗮𝘁𝗮𝗯𝗮𝘀𝗲 𝗠𝗮𝗻𝗮𝗴𝗲𝗺𝗲𝗻𝘁 𝗦𝘆𝘀𝘁𝗲𝗺).
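As a quick illustration of the "A" (Atomicity), a small sqlite3 example: either both updates of a transfer commit together, or neither does.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INT)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")
conn.commit()

try:
    with conn:  # a transaction: commits on success, rolls back on error
        conn.execute("UPDATE accounts SET balance = balance - 50 WHERE name = 'alice'")
        raise RuntimeError("crash mid-transfer")  # simulate a failure
        conn.execute("UPDATE accounts SET balance = balance + 50 WHERE name = 'bob'")
except RuntimeError:
    pass

# The first UPDATE was rolled back: alice still has 100, bob still has 0
print(conn.execute("SELECT * FROM accounts").fetchall())
```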
𝗡𝗼 𝗘𝘅𝗰𝘂𝘀𝗲𝘀 𝗗𝗮𝘁𝗮 𝗘𝗻𝗴𝗶𝗻𝗲𝗲𝗿𝗶𝗻𝗴 𝗣𝗼𝗿𝘁𝗳𝗼𝗹𝗶𝗼 𝗧𝗲𝗺𝗽𝗹𝗮𝘁𝗲 - next week I will enrich it with the missing Machine Learning and MLOps parts!
Lambda and Kappa are both Data Architectures proposed to solve the problem of moving large amounts of data reliably for Online access.

The most popular architecture has been, and continues to be, Lambda. However, with Stream Processing becoming more accessible to organizations of every size, you will be hearing a lot more about Kappa in the near future. Let's see how they differ.
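The difference is easiest to see at query time. In this toy sketch the views are plain dicts standing in for precomputed stores: Lambda merges a complete-but-stale batch view with a fresh-but-partial speed view, while Kappa serves a single stream-derived view.

```python
# Lambda serving layer: merge the batch view with the speed view.
def lambda_query(key: str, batch_view: dict, speed_view: dict) -> int:
    return batch_view.get(key, 0) + speed_view.get(key, 0)

batch_view = {"page_a": 10_000}  # recomputed periodically from the raw data
speed_view = {"page_a": 42}      # maintained incrementally by the stream processor

print(lambda_query("page_a", batch_view, speed_view))  # -> 10042

# Kappa: no batch layer -- one stream-derived view answers queries, and
# reprocessing means replaying the log through the same streaming code.
```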
Let's remind ourselves of what a 𝗥𝗲𝗾𝘂𝗲𝘀𝘁-𝗥𝗲𝘀𝗽𝗼𝗻𝘀𝗲 𝗠𝗼𝗱𝗲𝗹 𝗗𝗲𝗽𝗹𝗼𝘆𝗺𝗲𝗻𝘁 looks like - 𝗧𝗵𝗲 𝗠𝗟𝗢𝗽𝘀 𝗪𝗮𝘆.
You will find this type of model deployment to be the most popular when it comes to Online Machine Learning Systems.
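Before zooming in, here is what the request-response interface itself might look like - a minimal FastAPI sketch in which the feature lookup and the model are stubbed placeholders, not the original design's code:

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

def fetch_features(user_id: str) -> list[float]:
    # Placeholder for a Feature Store lookup
    return [0.1, 0.5, 0.9]

def model_predict(features: list[float]) -> float:
    # Placeholder for the trained model loaded at service startup
    return sum(features) / len(features)

class PredictionRequest(BaseModel):
    user_id: str

class PredictionResponse(BaseModel):
    user_id: str
    score: float

@app.post("/predict", response_model=PredictionResponse)
def predict(req: PredictionRequest) -> PredictionResponse:
    features = fetch_features(req.user_id)
    return PredictionResponse(user_id=req.user_id, score=model_predict(features))

# Run with: uvicorn main:app (assuming this file is main.py)
```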
Let's zoom in:
𝟭: Version Control: the Machine Learning Training Pipeline is defined in code; once merged to the main branch, it is built and triggered.
𝟮: Feature Preprocessing: Features are retrieved from the Feature Store, validated, and passed to the next stage. Any feature-related metadata that is tightly coupled to the Model being trained is saved to the Experiment Tracking System (a sketch of this validation follows below).
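A small sketch of what the validation in step 𝟮 could look like; the Feature Store retrieval and the tracking payload are faked with plain dicts rather than any specific product's API:

```python
import math

def validate_features(rows: list[dict], required: list[str]) -> list[dict]:
    """Drop rows with missing or non-finite values in required features."""
    def ok(row: dict) -> bool:
        return all(
            row.get(name) is not None and math.isfinite(row[name])
            for name in required
        )
    return [row for row in rows if ok(row)]

# Stand-in for a Feature Store query result
rows = [
    {"user_id": 1, "avg_order_value": 42.0, "orders_30d": 3.0},
    {"user_id": 2, "avg_order_value": None, "orders_30d": 1.0},  # rejected
]
clean = validate_features(rows, ["avg_order_value", "orders_30d"])

# Metadata that would be saved to the Experiment Tracking System
metadata = {"feature_names": ["avg_order_value", "orders_30d"],
            "rows_in": len(rows), "rows_kept": len(clean)}
print(metadata)
```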