Testing your pipelines before merging is crucial to ensure they do not fail in production. However, testing data pipelines is complex (and expensive) due to data size, confidentiality constraints, and long pipeline run times.
🧵 #data #dataengineering #testing #dataops
Here are a few ways to get data for your tests:
1. Copying data: Testing against an exact copy of prod data ensures your changes don't break the pipeline, but it is expensive! A cheaper option is to test against a sample of prod data, accepting that you may miss edge cases (sampling sketch after this list 👇).
2. Data git: Projects like Nessie and LakeFS let you branch your data like code, giving you isolated environments without replicating the entire dataset (branching sketch after this list 👇).
3. Separate environment: A dedicated test environment populated by its own test pipelines. Its data tends to drift away from prod over time, which makes the tests less effective.
4. Test in Prod: Skip pre-merge tests, run the pipeline in prod, and validate the output (validation sketch after this list 👇). This may be feasible for fast-running, low-priority pipelines without multiple downstream consumers.
5. No tests
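For strategy 1, here is a minimal sketch of sampling prod data into a test table. The connection string, schema, and table names are hypothetical, and the TABLESAMPLE syntax shown is Postgres-specific; adjust for your warehouse.

```python
import sqlalchemy as sa

# Hypothetical prod connection string, for illustration only.
engine = sa.create_engine("postgresql://user:pass@prod-host/warehouse")

with engine.begin() as conn:
    # Copy ~1% of prod rows into a test table; a random sample keeps the
    # test cheap but may miss rare edge-case rows.
    conn.execute(sa.text("""
        CREATE TABLE test.orders AS
        SELECT * FROM prod.orders TABLESAMPLE BERNOULLI (1)
    """))
```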
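For strategy 2, a sketch of branching prod data with lakeFS, assuming the lakectl CLI is installed and configured; the repo and branch names are made up.

```python
import subprocess

# Create a zero-copy branch of the prod data ("main") to run the test
# pipeline against; prod data is untouched until you choose to merge.
subprocess.run(
    ["lakectl", "branch", "create",
     "lakefs://warehouse/test-my-change",
     "--source", "lakefs://warehouse/main"],
    check=True,
)
```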
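For strategy 4, a sketch of a post-run output check in prod. The table, column, and threshold are hypothetical; the point is to fail loudly before downstream consumers read a bad run.

```python
import sqlalchemy as sa

# Hypothetical prod connection string, for illustration only.
engine = sa.create_engine("postgresql://user:pass@prod-host/warehouse")

with engine.connect() as conn:
    # Check today's partition actually produced rows.
    row_count = conn.execute(sa.text(
        "SELECT COUNT(*) FROM prod.daily_orders WHERE run_date = CURRENT_DATE"
    )).scalar()

assert row_count and row_count > 0, "pipeline produced no rows for today"
```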
What are your strategies for testing data pipelines?
That's a wrap!
If you enjoyed this thread:
1. Follow me @startdataeng for more of these
2. RT the tweet below to share this thread with your audience
If you have worked in the data space, you have probably heard the term Metadata. It is used as a catch-all term. Here are a few things to think about when someone mentions Metadata 👇