@thellimist 1/ There are many examples from the history of algorithms and data structures. Here's one that comes to mind:

Computer Science has a phenomenon called "The Curse of Dimensionality"
en.wikipedia.org/wiki/Curse_of_…
@thellimist 2/ The curse of dimensionality states that for geometric problems (such as nearest neighbor search), the complexity is exponential in the number of dimensions. This means that such problems become intractable as the dimension increases.
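A quick numerical sketch of the curse (my illustration, not part of the thread): as the dimension grows, random points become nearly equidistant from a query, so exact nearest-neighbor data structures lose their edge over a brute-force scan.

```python
import numpy as np

rng = np.random.default_rng(0)

# As dimension d grows, the nearest and farthest of 1000 random points
# sit at nearly the same distance from a random query. With so little
# contrast between neighbors, space-partitioning structures (k-d trees
# etc.) degrade toward brute force -- one concrete face of the curse.
for d in (2, 10, 100, 1000):
    points = rng.random((1000, d))
    query = rng.random(d)
    dists = np.linalg.norm(points - query, axis=1)
    gap = (dists.max() - dists.min()) / dists.min()
    print(f"d={d:4d}: (farthest - nearest) / nearest = {gap:.3f}")
```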
@thellimist 3/ In particular, it means that many high-dimensional problems are intractable. For example: given a track of music with some noise, find the existing track in a database that matches it without the noise. This was believed to be a difficult problem in the ~80s.
@thellimist 4/ However, in the 80s and 90s, the curse of dimensionality began to crack. It turns out that yes, the complexity of geometric problems like nearest-neighbor-search seems exponential in num_dimensions. But if you only look for approximate solutions, the problem becomes easier.
@thellimist 5/ In particular, the 90s and 00s brought us extremely efficient approximate algorithms for high-dimensional problems. Maybe the most celebrated of those are the works by Piotr Indyk, some with his collaborator Alexandr Andoni, around Compressed Sensing
en.wikipedia.org/wiki/Compresse…
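The thread links compressed sensing; a sibling line of Indyk's work (much of it with Andoni) is locality-sensitive hashing (LSH) for approximate nearest neighbor search. Below is a toy sketch of the random-hyperplane variant -- my own illustration of the "approximate instead of exact" trade, not code from the thread:

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(1)
n, d, k = 10_000, 128, 16  # database size, dimension, hash bits

# A vector's hash is the sign pattern of its projections onto k random
# hyperplanes; vectors at a small angle tend to share a bucket.
planes = rng.standard_normal((k, d))

def signature(v):
    return tuple(planes @ v > 0)

db = rng.standard_normal((n, d))
buckets = defaultdict(list)
for i, v in enumerate(db):
    buckets[signature(v)].append(i)

# Query with a noisy copy of item 42: instead of scanning all n points,
# we only rank the query's bucket. Real systems use several independent
# tables to boost the chance that the true neighbor collides.
query = db[42] + 0.1 * rng.standard_normal(d)
candidates = buckets.get(signature(query), [])
cosine = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
best = max(candidates, key=lambda i: cosine(db[i], query), default=None)
print("bucket size:", len(candidates), "-> approximate NN:", best)
```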
@thellimist 6/ The most iconic early use of these results is Shazam (the music search app), but the use of these results became pervasive. If you asked someone in the 70s whether something like Shazam was possible, I think they'd say it isn't, because of the curse of dimensionality.
@thellimist 7/ It turns out that this "impossibility" was only partial, and could be avoided by cleverly redefining the problem.

This will happen with the supposed non-scalability of Bitcoin-type consensus as well. We'll relax the problem just enough to solve it, while still satisfying the applications.
@thellimist 8/ In general, computer scientists have proved themselves extremely adept at making things scale. Not everything -- NP-hard problems are still intractable on worst-case inputs, and products of two large random primes are still hard to factor. But almost everything.
@thellimist 9/ The history of computation is littered with problems that were challenging to scale being made to scale once there was enough motivation to put research effort into them. Scaling distributed ledgers will be one of those cases.
@thellimist 10/ The reason I'm so sure is because the patterns are already there -- we already have some good attempts at scalable solutions, such as multi-layer blockchains. These solutions are just unsatisfactory for various reasons, but there doesn't seem to be an inherent blocker.
@thellimist 11/ Another example of problems that got solved with time is Integer Programming. This is a class of problems that are intractable in theory, but through the years practitioners found great heuristics, so by now they're often tractable in practice. (Google "CP solvers").
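To make that concrete, here's a minimal integer program solved with Google's OR-Tools CP-SAT solver (the kind of tool that search turns up). The problem class is NP-hard in the worst case, yet instances like this -- and vastly larger ones -- solve in milliseconds in practice. A sketch, assuming `ortools` is installed:

```python
from ortools.sat.python import cp_model

# Maximize x + 2y subject to 2x + 3y <= 12 over non-negative integers.
model = cp_model.CpModel()
x = model.NewIntVar(0, 100, "x")
y = model.NewIntVar(0, 100, "y")
model.Add(2 * x + 3 * y <= 12)
model.Maximize(x + 2 * y)

solver = cp_model.CpSolver()
status = solver.Solve(model)
if status in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print("x =", solver.Value(x), "y =", solver.Value(y),
          "objective =", solver.ObjectiveValue())
```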
@thellimist 12/ Yet another example is Speech Recognition. It was obvious it would be solved, and gradual progress has been made since the 80s. The algorithms keep improving, and we're still on our way.
@thellimist 13/ These three examples should be contrasted with challenges that are *actually* hard, like building quantum computers or creating General AI. For these challenges there are good clear reasons why we might get stuck along the way.
@thellimist 14/ I'd compare the challenges as follows:
Blockchain scaling is like landing on the moon.
General AI is like developing faster-than-light travel.
@thellimist 15/ Landing on the moon was a challenge that took a lot of effort, but it was clear that if we just poured enough resources and brainpower into it, it would be solved. In contrast, we might never find a way to travel faster than light, no matter how many resources we pour into it.
@thellimist 16/ One of the things we learn as scientists, especially as computer scientists, is how to try to distinguish between these two types of challenges. NP-Completeness is all about that: we might never find tractable algorithms for NP-hard problems; it's like FTL travel.
@thellimist 17/ Computer scientists try very hard to figure out what computation we'll be able to do within our lifetimes, and what we cannot. I don't think there's yet enough awareness of blockchains and their scaling among computer scientists to have reached a consensus.
@thellimist 18/ But I'm pretty sure that, seeing how the technology develops (from Bitcoin, to PoS, to L2/lightning/plasma and DAG-based approaches and zkp and beyond), most well-informed computer scientists would guess that the problem is of the "moon landing" variety.
@thellimist @threadreaderapp unroll please -- might make this into a blog post :)