Let's think now about what it would mean to properly answer this question. 👇
Say we want to answer "yes". How can we do that? Do we have to check that *all* NP problems are actually in P?
👉 To begin, there are infinitely many problems in Computer Science.
There must be a better way...
🤔 What if we could prove that answering yes for just *one problem* would automatically answer yes for *any other problem*?
Imagine that: a single CS problem that represents them all, such that solving it would immediately give you a solution to any other (NP) problem.
Say we have two problems A and B. For example, say A = sorting an array (SORT), and B = finding the minimum element (MIN).
How can we prove that B is at least as easy as A? In other words, that if we have a polynomial algorithm for solving A, we must also have one for solving B. 👇
What we need is a polynomial-time reduction from B to A. This sounds complicated, but it's very intuitive.
👉 What we want is a way to implement MIN provided that we can call SORT, and that implementation has to have polynomial complexity. For example:
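Something like this minimal Python sketch (the function names are mine, just for illustration):

```python
# Reduction from MIN to SORT: implement MIN using SORT as a black box.
# If SORT runs in polynomial time, so does this whole procedure.

def sort(array):
    # Stand-in for any polynomial-time sorting algorithm.
    return sorted(array)

def minimum(array):
    # One call to SORT, plus O(1) extra work: read the first element.
    return sort(array)[0]

print(minimum([7, 2, 9, 4]))  # 2
```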
Now we can definitely say that MIN is at least as easy as SORT. Maybe it's even easier, and we know it is: MIN can be solved in O(N), while general-purpose (comparison-based) SORT takes O(N log N).
👉 The question now is, can we do this with one problem A for *all* other problems B (in NP)?
It turns out this is exactly what Stephen Cook and Leonid Levin independently showed in the early 1970s.
But how does this work? What does it mean to find such a problem? 👇
The problem in question is Boolean Satisfiability (SAT), and it works like this:
❓ Given a Boolean formula, with variables and Boolean operators, such as *x & y | z*, is there any assignment of True/False to each variable that makes the formula true?
Sometimes the formula is trivially satisfiable, but when it involves several copies of the same variable, some positive and some negated, it is far from trivial to decide satisfiability just by looking at it.
👉 However, a proposed assignment is very easy to validate, so SAT is clearly in NP.
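For instance, here is a sketch of that validation step in Python, representing the formula in the common list-of-clauses CNF form (the representation choice is mine):

```python
# Verifying a SAT certificate (an assignment) in linear time.
# Formula in CNF: a list of clauses, each clause a list of literals.
# A positive integer i means "x_i"; -i means "not x_i".

def verify(clauses, assignment):
    # The formula is satisfied iff every clause contains
    # at least one literal that evaluates to True.
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

# (x1 | ~x2) & (x2 | x3)
clauses = [[1, -2], [2, 3]]
print(verify(clauses, {1: True, 2: False, 3: True}))  # True
```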
🤔 Now, how can you go about proving that SAT is a problem to which every other problem in NP is polynomially reducible?
First, remember we are concerned *only* with decision problems, i.e., problems whose answer is True or False.
Here are some intuitions... 👇
Say we have a decision problem B that is in NP. This means there is a polynomial algorithm V that can verify whether a proposed solution of B for a specific input I is indeed valid, i.e., that the answer for I is True.
🤯 For any given input I, we can create a Boolean formula F that basically says "B answers True to input I".
This formula is quite large, but it is always possible to build it for a specific input I, by using the code of the verification algorithm.
☝️ That means (believe me here) that the formula remains polynomial in size with respect to the length of I.
👉 An intuitive way to see that such a formula must exist is to consider that whatever V is, in the end it is a program composed of a bunch of basic instructions.
☝️ We can always build a formula, in principle, that says "this instruction was executed", and so trace the execution of V in Boolean form.
Formally, this is done with the Turing-machine definition of V, but intuitively, think of it as disassembling the code of V.
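To make that slightly more concrete, here is how a single AND gate of a circuit can be turned into clauses (this is the idea behind the standard Tseitin encoding; the variable numbering is mine):

```python
# Encoding "c = a AND b" as CNF clauses (Tseitin-style).
# Variables: 1 = a, 2 = b, 3 = c; -i means "not x_i".
# "c = a & b" holds exactly when all three clauses hold:
and_gate = [
    [-1, -2, 3],  # if a and b, then c
    [1, -3],      # if c, then a
    [2, -3],      # if c, then b
]
# Chaining such encodings gate by gate "traces" a whole
# computation as one (polynomially large) Boolean formula.
```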
For any input I, we can build in polynomial time a formula F that is satisfiable if and only if B would answer True to I.
If we could solve SAT in polynomial time, we could then run SAT(F) and obtain a polynomial-time solution for B(I).
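In pseudocode, the whole pipeline would look like this (both helpers are hypothetical stubs: one stands for the construction above, the other for the imagined polynomial-time SAT solver):

```python
def encode(I):
    # Hypothetical: the polynomial-time construction described above,
    # producing a formula satisfiable iff B answers True to I.
    raise NotImplementedError

def sat_solver(F):
    # Hypothetical: a polynomial-time SAT decision procedure
    # (which nobody knows how to write, of course!).
    raise NotImplementedError

def solve_B(I):
    F = encode(I)          # polynomial-size formula, built in poly time
    return sat_solver(F)   # True <=> F satisfiable <=> B(I) is True
```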
✅ And that's it! We have one problem, SAT, that can solve any other problem in NP.
This problem, SAT, is the first of the so-called NP-Complete problems.
A problem in NP so general that any other problem in NP could be solved in polynomial time if you could solve that one problem.
But it's not the only one! 👇
The most surprising finding is not SAT itself, but rather that there is a huge number of such problems, seemingly very different, yet at their core all equivalent: graph coloring, the traveling salesman problem, knapsack, vertex cover, and thousands more.
As usual, if you like this topic, reply in this thread or @ me at any time. Feel free to ❤️ like and 🔁 retweet if you think someone else could benefit from knowing this stuff.
There seems to be a recent surge in the "HTML is/isn't a programming language" discussion.
While there are a lot of honest misconceptions and also some outright bullshit, I still think that, if we allow for some nuance, there is a meaningful discussion to be had about it.
My two cents 👇
First, to be bluntly clear: if a person is using this argument to make a judgment of character, to imply that someone is lesser because of their knowledge (or lack thereof) of HTML or any other skill, then that person is an asshole.
With that out of the way...
Why is this discussion meaningful at all?
If you are a newcomer to the dev world and have some misconceptions about it, you can find yourself getting into commitments you're not yet ready for, or letting go of options you could have taken.
One of the very interesting questions that really got me thinking yesterday (they all did, to some degree) was from @Jeande_d, regarding how to balance learning foundational/transferable skills vs. focusing on specific tools.
@Jeande_d My reasoning was that one should try hard not to learn too much of a tool, because any tool will eventually disappear. But tools are crucial to be productive, so one should still learn enough to really take advantage of the unique features of that tool.
@Jeande_d One way I think you can try to hit that sweet spot is to practice some sort of dropout regularization on your usual tool set.
In every new project, substitute one of your usual tools for some convenient alternative. It will make you a bit less productive, to be sure...
Today, I want to start discussing the different flavors of Machine Learning we can find.
This is a very high-level overview. In later threads, we'll dive deeper into each paradigm... 👇🧵
Last time we talked about how Machine Learning works.
Basically, it's about having some source of experience E for solving a given task T, that allows us to find a program P which is (hopefully) optimal w.r.t. some metric M.
According to the nature of that experience, we can define different formulations, or flavors, of the learning process.
A useful distinction is whether we have an explicit goal or desired output, which gives rise to the definitions of 1️⃣ Supervised and 2️⃣ Unsupervised Learning 👇
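As a teaser, here's that distinction in code, using scikit-learn with made-up toy data (purely illustrative):

```python
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X = [[0.0], [1.0], [2.0], [3.0]]  # inputs (features)
y = [0, 0, 1, 1]                  # explicit desired outputs (labels)

# 1️⃣ Supervised: learn from input/output pairs.
classifier = LogisticRegression().fit(X, y)

# 2️⃣ Unsupervised: no labels; look for structure in X alone.
clustering = KMeans(n_clusters=2, n_init=10).fit(X)
```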
A big problem with social and political sciences is that they *look* so intuitive and approachable that literally everyone has an opinion.
If I say "this is how quantum entanglement works" almost no one will dare to reply.
But if I say "this is how content moderation works"...
And the thing is, there is a huge amount of actual, solid science on almost any socially relevant topic, and most of us are as uninformed about it as we are about any dark corner of particle physics.
We just believe we can have an opinion, because the topic seems less objective.
So we are paying a huge disrespect to social scientists, who have to deal every day with the false notion that what they have been researching for years is something anyone can weigh in on after thinking about it for maybe five minutes. This is, of course, nonsense.