Start Data Engineering
May 23 · 7 tweets · 2 min read
Testing your pipelines before merging is crucial to ensure they do not fail in production. However, testing data pipelines is complex (and expensive) due to data size, data confidentiality, and the time it takes to run a pipeline end to end.
🧵
#data #dataengineering #testing #dataops
Here are a few ways to get data for your tests:

1. Copying data: An exact copy of the prod data for testing ensures that your changes do not break the pipeline. This is expensive! You can instead test against a subset of the data, accepting that some edge cases may be missed.
2. Data versioning ("git for data"): Projects like Nessie and lakeFS can set up isolated branch environments without replicating the entire dataset.
3. Separate environment: A test environment whose data is populated by test pipelines. This data is likely to diverge from prod data, which can make the tests ineffective.
4. Test in prod: Skip pre-merge tests, run the pipeline in prod, and validate the output. This may be feasible for fast-running pipelines that do not have multiple consumers or are low priority.

5. No tests
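Option 1 (testing against a sampled subset of prod data) can be sketched as below. The `transform` function and the row schema are hypothetical stand-ins for a real pipeline step; in practice the "prod" rows would be read from a production table rather than generated.

```python
import random

def transform(rows):
    """Hypothetical pipeline step: keep valid orders and add a total column."""
    out = []
    for r in rows:
        if r["qty"] > 0 and r["price"] >= 0:
            out.append({**r, "total": r["qty"] * r["price"]})
    return out

# Stand-in for prod data; in a real test this would come from a prod table.
prod_rows = [
    {"qty": random.randint(-1, 5), "price": random.uniform(0, 10)}
    for _ in range(10_000)
]

# Test on a sample of prod data instead of a full copy:
# much cheaper, at the cost of possibly missing edge cases.
sample = random.sample(prod_rows, k=500)
result = transform(sample)

# Invariant checks on the output: no invalid rows survive, totals are consistent.
assert all(r["qty"] > 0 for r in result)
assert all(abs(r["total"] - r["qty"] * r["price"]) < 1e-9 for r in result)
```

The same invariant checks work unchanged whether you point them at a sample or at a full prod copy, so you can tighten coverage later without rewriting the test.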
What are your strategies for testing data pipelines?
That's a wrap!

If you enjoyed this thread:

1. Follow me @startdataeng for more of these
2. RT the tweet below to share this thread with your audience


More from @startdataeng

May 22
Data engineers work with multiple systems, so it's crucial to understand DevOps. Below are a few DevOps concepts to get familiar with:

1. Docker: docs.docker.com/get-started/
2. Kubernetes: kubernetes.io/docs/concepts/…
3. CI/CD: resources.github.com/ci-cd/

#dataengineering
#data
Jan 9
If you have worked in the data space, you have probably heard the term metadata, often used as a catch-all. Here are a few things to think about when someone mentions metadata 👇

#data #dataengineering #metadata #dataops
1. Orchestration: Time of run, re-run information, pipeline structure, the execution time for the pipeline, pipeline failure times, etc

2. Data processing: Input parameters, failure stack trace, number of rows processed, number of rows in output, number of discarded rows, etc
3. Data quality: Mean/sum/min/max, etc. for numerical columns, the set of observed values for enum columns, etc. (think DataFrame.describe in pandas)
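Data-quality metadata like the above can be sketched in plain Python as a rough stand-in for pandas' `DataFrame.describe`; the column names and rows here are hypothetical:

```python
import statistics

# Hypothetical rows produced by a pipeline.
rows = [
    {"amount": 10.0, "status": "paid"},
    {"amount": 25.5, "status": "pending"},
    {"amount": 7.25, "status": "paid"},
]

amounts = [r["amount"] for r in rows]

quality_metadata = {
    # Numerical column: summary statistics.
    "amount": {
        "count": len(amounts),
        "mean": statistics.mean(amounts),
        "sum": sum(amounts),
        "min": min(amounts),
        "max": max(amounts),
    },
    # Enum-like column: the set of observed values.
    "status": {"values": sorted({r["status"] for r in rows})},
}

print(quality_metadata)
```

Capturing this metadata on every run (and diffing it against previous runs) is what turns these numbers into an actual data-quality check.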
