The reason I was trying to solve this problem is that I have a directory with a huge number of PDF files (books).
I was almost sure there were some duplicates among these files (maybe even under different names, which is why I went with hashing).
I started by implementing a simple and straightforward script in Node.js.
It recursively traverses the directory and its sub-directories, collecting the files in each one. It then generates a hash value for each file, keeps a record of files that share the same hash, and finally prints the full paths of the duplicates.
That script ran peacefully on the books directory, which holds approximately 400 files (3.6 GB). It took ~1m 40s.
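The Node.js script itself isn't inlined in the thread, but the walk-hash-group idea is simple enough to sketch. Since the thread ends up in Go anyway, here is a rough, sequential Go version of the same approach; the hash function (SHA-256), the names, and the structure are my own assumptions, not the original code.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"io/fs"
	"os"
	"path/filepath"
)

// hashFile streams a file through SHA-256 so even large PDFs
// never have to be held in memory all at once.
func hashFile(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	root := "." // assumed: the books directory, taken from the command line
	if len(os.Args) > 1 {
		root = os.Args[1]
	}

	// Map each hash to every path that produced it.
	byHash := map[string][]string{}

	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		sum, err := hashFile(path)
		if err != nil {
			return err
		}
		byHash[sum] = append(byHash[sum], path)
		return nil
	})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	// Any hash with more than one path is a group of duplicate files.
	for sum, paths := range byHash {
		if len(paths) > 1 {
			fmt.Println(sum, paths)
		}
	}
}
```

Any two files that end up under the same hash key are byte-for-byte duplicates, no matter what they are named.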
But can we do better? I thought that by using promises and hashing the files concurrently we could drastically improve the performance. But I was completely wrong.
pastebin.com/ALZxXm8p
Running the promises version took forever and it was eating up all my RAM, so I had to kill the process. I decided to implement the solution in Go instead.
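The promises version lives only in the pastebin, so this is just a guess at the failure mode: if every file is read fully into memory at the same time before being hashed, peak memory grows toward the total size of the directory (~3.6 GB here). A purely illustrative Go analogue of that anti-pattern (not the pastebin code) would look like this:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"os"
	"sync"
)

// hashAllAtOnce is a purely illustrative anti-pattern, not the code from
// either pastebin: it starts hashing every file at once AND buffers each
// file completely before hashing it. With ~400 PDFs totalling 3.6 GB,
// peak memory can approach the size of the whole directory.
func hashAllAtOnce(paths []string) map[string]string {
	var (
		mu   sync.Mutex
		wg   sync.WaitGroup
		sums = make(map[string]string, len(paths))
	)
	for _, p := range paths {
		wg.Add(1)
		go func(p string) {
			defer wg.Done()
			data, err := os.ReadFile(p) // the whole file is held in memory
			if err != nil {
				return
			}
			sum := sha256.Sum256(data)
			mu.Lock()
			sums[p] = hex.EncodeToString(sum[:])
			mu.Unlock()
		}(p)
	}
	wg.Wait()
	return sums
}

func main() {
	for path, sum := range hashAllAtOnce(os.Args[1:]) {
		fmt.Println(sum, path)
	}
}
```

Streaming each file through the hasher instead of buffering it whole, which the later goroutine version appears to do given its ~6 MB footprint, keeps memory flat even with the same degree of concurrency.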
A simple implementation in Go took almost the same time as the first Node.js solution (about 10 seconds less).
Now let's try to improve the Go solution by adding concurrency with goroutines and wait groups. pastebin.com/0cfHSiY4
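The real code is in the pastebin; what follows is only a minimal sketch of the goroutine-and-WaitGroup shape, with the hash choice, channel plumbing, and names all assumed by me. Each file is hashed in its own goroutine, a WaitGroup closes the results channel once all of them finish, and streaming the files through the hasher keeps memory low.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"io/fs"
	"os"
	"path/filepath"
	"sync"
)

// result pairs a file path with its hash.
type result struct {
	path string
	sum  string
}

func main() {
	root := "." // assumed: the books directory, taken from the command line
	if len(os.Args) > 1 {
		root = os.Args[1]
	}

	var wg sync.WaitGroup
	results := make(chan result)

	// Spawn one goroutine per file; each streams the file through SHA-256,
	// so memory stays small even with many hashes in flight.
	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		wg.Add(1)
		go func() {
			defer wg.Done()
			f, err := os.Open(path)
			if err != nil {
				return
			}
			defer f.Close()
			h := sha256.New()
			if _, err := io.Copy(h, f); err != nil {
				return
			}
			results <- result{path, hex.EncodeToString(h.Sum(nil))}
		}()
		return nil
	})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	// Close the results channel once every hashing goroutine has finished.
	go func() {
		wg.Wait()
		close(results)
	}()

	// Group paths by hash and report groups with more than one file.
	byHash := map[string][]string{}
	for r := range results {
		byHash[r.sum] = append(byHash[r.sum], r.path)
	}
	for sum, paths := range byHash {
		if len(paths) > 1 {
			fmt.Println(sum, paths)
		}
	}
}
```

On a single HDD, hashing many files at once also forces the read head to jump between files, which is likely part of why this version came out slower despite its tiny memory footprint.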
This solution took roughly 9 minutes, but the cool thing is that it didn't consume much memory: usage stayed nearly constant at about 6 MB.
We can conclude that the bottleneck is I/O (the disk read speed; an HDD in my case). No matter how we try to increase performance, we run into the limits of the disk's physical movement, since every file has to be read from disk before its hash can be computed.
Let me know if I made any mistakes. 😁
I had so much fun today and I learned a lot.