📍 The need for data compression 📍
🧵
When we talk about computation time, we are also talking about money. Data compression is the most cost-effective way to close the gap between content creators and content consumers.
Compressed files are smaller, so they take less time and money to transfer and less money to store: content creators pay less to distribute their content, and content consumers pay less to consume it.
At the same time, companies in every sector need new ways to control the rapid growth of the heterogeneous data they generate every day, and compression and decompression techniques are among the most viable solutions to this problem. #bigdata
In fact, compression and decompression techniques are part of the DNA of many well-known distributed systems (DS), and part of their success comes from using them properly.
#ApacheSpark
#ApacheKafka
#ApacheFlink
#ApacheBeam
#ApacheHadoop
The DS above rely on a hidden universe called contextual data transformations: the raw data is not handed straight to a compression algorithm, it is first processed by a pipeline that rearranges its symbols so they respond better to the compressor.
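To make the idea concrete, here is a minimal Python sketch (my own toy code, not taken from any of the systems above) of two classic contextual transforms, the Burrows-Wheeler transform and move-to-front, which is essentially the rearranging step bzip2 applies before entropy coding: similar contexts end up next to each other and become runs of small numbers that compress much better.

```python
# Toy sketch of contextual transforms: BWT groups similar contexts together,
# and move-to-front (MTF) turns those groups into runs of small integers that
# an entropy coder handles well. Function names and data are my own examples.

def bwt(data: bytes, sentinel: bytes = b"\x00") -> bytes:
    """Naive O(n^2 log n) BWT: sort all rotations, keep the last column.
    Assumes the input contains no NUL bytes (used here as the sentinel)."""
    s = data + sentinel
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return bytes(rot[-1] for rot in rotations)

def mtf(data: bytes) -> list:
    """Move-to-front: recently seen bytes get small indices."""
    alphabet = list(range(256))
    out = []
    for b in data:
        idx = alphabet.index(b)
        out.append(idx)
        alphabet.pop(idx)          # move the byte to the front of the list
        alphabet.insert(0, b)
    return out

raw = b"banana_bandana_banana"
print(mtf(bwt(raw)))               # mostly small values and repeats -> highly compressible
```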
Bzip2 does not have the best compression and decompression times, but it has a feature that makes it special: compression and decompression can be split into independent blocks, which makes it powerful for DISTRIBUTED systems.
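Here is a small sketch of why that block structure matters, using Python's standard bz2 module (an illustration of the splitting idea only, not bzip2's internal 900 kB block mechanism): every chunk is compressed and decompressed independently, so the work can be spread across processes or across the nodes of a cluster.

```python
# Each chunk is a self-contained bz2 stream, so compression and decompression
# can run in parallel and any chunk can be restored without touching the others.
import bz2
from concurrent.futures import ProcessPoolExecutor

CHUNK_SIZE = 1 << 20  # 1 MiB per chunk; an arbitrary size for the example

def split(data: bytes, size: int = CHUNK_SIZE):
    return [data[i:i + size] for i in range(0, len(data), size)]

def compress_chunk(chunk: bytes) -> bytes:
    return bz2.compress(chunk, compresslevel=9)

def decompress_chunk(chunk: bytes) -> bytes:
    return bz2.decompress(chunk)

if __name__ == "__main__":
    data = b"some highly repetitive payload " * 200_000
    chunks = split(data)

    # One task per chunk; a distributed system would ship chunks to different nodes.
    with ProcessPoolExecutor() as pool:
        compressed = list(pool.map(compress_chunk, chunks))
        restored = b"".join(pool.map(decompress_chunk, compressed))

    assert restored == data
    print(len(data), "->", sum(len(c) for c in compressed), "bytes")
```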
I built my own version of bzip2, and I am trying to improve its compression and decompression times and add features that could help speed up both processes.

coraspe-ramses.medium.com/lossless-data-…

• • •

More from @RamsesCoraspe

Aug 26, 2021
If you are strengthening your skills in data engineering and machine learning with Python, here are some books with hands-on projects that I recommend. They will help with the common questions: why? and for what?
#Python #MachineLearning
Text Analytics with Python (Dipanjan Sarkar)
Hands-On Machine Learning with Scikit-Learn, Keras & TensorFlow (Aurélien Géron)
Data Science in Production (Ben G. Weber)
Jun 24, 2021
Evolution of the word clouds from the titles of books purchased on Amazon USA (1995 - 2015)

#AWS #DataEngineering #Python #BigData #100DaysOfCode #awscloud
The goal of this repository is to offer and explain an AWS EMR template that you can use quickly when your analysis involves working with millions of records; the template can easily be adapted to the size of your project:

Github: github.com/Wittline/pyspa…
EMR: EC2 Spot and On-Demand instances
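For context, here is a minimal PySpark sketch of the kind of aggregation such a cluster runs; the S3 path and column names below are hypothetical placeholders, not code from the repository above.

```python
# Hypothetical PySpark job: count words in book titles across millions of rows.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("book-title-wordcount").getOrCreate()

# Millions of records stored on S3; the EMR nodes read them in parallel.
titles = spark.read.parquet("s3://your-bucket/amazon-book-titles/")  # placeholder path

word_counts = (
    titles
    .select(F.explode(F.split(F.lower(F.col("title")), r"\s+")).alias("word"))
    .where(F.length("word") > 2)
    .groupBy("word")
    .count()
    .orderBy(F.desc("count"))
)

word_counts.write.mode("overwrite").parquet("s3://your-bucket/title-word-counts/")
```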
May 10, 2021
If you are a data-driven company and you are still stuck with R, here are some points to keep in mind for the transition to Python

1. Be focused on outcomes
2. Forget language and focus on the ecosystem
3. Cross-language libraries
4. Real datasets
5. Work Locally

👇
1. Be focused on outcomes:
Instead of learning everything about Python, focus first on building and training machine learning models: linear regression, logistic regression, KNN, SVM, NN

Check this out

Github: github.com/Wittline/Machi…
2. Forget learning the language and focus on the ecosystem: go right to NumPy, Pandas and scikit-learn

Medium: medium.com/personal-proje…
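A tiny example of points 1 and 2 in practice (toy data of my own, not from the article above): Pandas holds the table, NumPy generates the numbers, scikit-learn trains the model; the workflow matters more than language trivia.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic toy dataset: predict pass/fail from two numeric features.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "hours_studied": rng.uniform(0, 10, 500),
    "prior_score": rng.uniform(40, 100, 500),
})
df["passed"] = ((df["hours_studied"] * 6 + df["prior_score"]) > 90).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    df[["hours_studied", "prior_score"]], df["passed"], random_state=0
)

model = LogisticRegression().fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```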
May 1, 2021
There are several activities involved in the world of data engineering:

1. Describe data architecture
2. Understand data sources
3. Design your data model
4. Configure infrastructure
5. Run and monitor ETL processes
6. Business intelligence and analytics

Thread 👇👇
The project below will help you get some hands-on experience with data engineering; you will see all the steps mentioned above:

Uber expenses tracking

Data Architecture
Data Sources