Most of what you know of the 1980s hacking scene wasn't Internet, but "phone phreaking" and "BBSs". I don't know much about those things. I was an Internet hacker instead -- on the net since back before DNS was a thing (when 'hosts.txt' was distributed by hand).
By the late 1980s, computers from Sun Microsystems were a big deal. Yet Sun (and other manufacturers) ignored notifications of vulnerabilities. Issues had to be handled by tech support, and if you didn't have a support contract, you didn't matter.
Here's the thing, the Big F***ing Thing: hackers don't have a support contract. Sure, given one set of assumptions, it's pointless fixing bugs that customers aren't complaining about. But it's the wrong set of assumptions.
The culture of 1980s Internet hacking was trading vulns, exploits, and textfiles. We'd get some from friends, and exchange them with other people to get more. Examples were the SMTP DEBUG and Finger exploits. We'd hack into computers without the victims knowing how.
So vendors ignored vulnerability reports. This encouraged Robert Tappan Morris to demonstrate the fallacy of that thinking. He wrote a "worm" that would exploit the vulns, then target other random machines on the Internet and exploit the vulns on those, too.
So I walked into the computer science center early one morning in November 1988 and spent the next several days battling the worm. One thing I tried was what's now called a "tarpit", which held open the connection, delaying the worm.
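The tarpit idea is simple enough to sketch in a few lines of modern Python. To be clear, this is an illustration, not the code from 1988; the port, banner, and delay are invented:

```python
import socket
import threading
import time

# A minimal tarpit sketch: accept a connection, then dribble out a
# banner one byte at a time so an automated attacker sits waiting
# on the open connection instead of moving on to its next victim.

def _handle(conn, delay):
    try:
        for ch in b"220 welcome\r\n":
            conn.sendall(bytes([ch]))  # send a single byte...
            time.sleep(delay)          # ...then stall before the next one
    except OSError:
        pass  # the attacker gave up and disconnected
    finally:
        conn.close()

def tarpit(port=2525, delay=10.0):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(64)
    while True:  # one thread per trapped connection
        conn, _ = srv.accept()
        threading.Thread(target=_handle, args=(conn, delay),
                         daemon=True).start()
```

Against a worm that attacks hosts serially, every second a connection is held open is a second the worm isn't infecting someone else.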
After the Morris worm, DARPA established the CERT/CC -- the Computer Emergency Response Team / Coordination Center. The idea was CERT would guard against future occurrences of such worms.
But it was a complete failure. They were still ignoring vuln reports, trying to cover them up and suppress them rather than getting vendors to fix them. We hackers were still trading vulns in the underground, causing mayhem that CERT couldn't see.
Then in 1993, the Bugtraq mailing list appeared. This was before the web as we know it, so there was no website where we could publish such information. A mailing list was used instead.
Hackers published vulns and exploits to the mailing list for everyone to see.
Then in 1995 SATAN was published -- a tool that collected vulns/exploits from Bugtraq into a tool that could be aimed at a victim to take it over.
This is the time of "script kiddies", those with little expertise or understanding who simply downloaded the "scripts" posted to Bugtraq in order to hack into systems.
This all sounds very bad, full of chaos, but here's the thing: vendors started paying attention and fixing vulns, even when notified of the problem by somebody who wasn't a customer.
All this is an application of Kerckhoffs's Principle from the 1880s about cryptography. The solution to vulnerabilities is not to cover them up but to expose them. This is the "full disclosure" movement -- bugs must be disclosed, even if it helps the adversary. And it must be public.
Today in the 2020s we see the opposite attitude from leading Internet companies (Microsoft, Amazon, Apple, Google, etc.). Instead of covering up vulnerabilities, they provide bounties for people to expose them. Even the Pentagon does this.
Bugtraq itself hasn't been important for almost 20 years, but that's simply because it won. We are all Bugtraq now.
• • •
If this sounds like a wackjob conspiracy theory, it's because this is a wackjob conspiracy theory. Signal's source code and algorithms are open. Just because some government departments have given it funding doesn't mean it's a secret plot by the CIA.
Signal uses well-known crypto algorithms. If they are insecure, well, then all cryptography is insecure and it doesn't matter which encrypted messaging app you use.
If there's a backdoor in the code, well, the code is open source and people would be able to find it.
Here's the "censorship episode" of the show "WKRP in Cincinnati", where you see Andy (the radio station's program director) argue "free enterprise" against preacher "Dr. Bob Hallier", who is using boycotts to get them to remove music from the radio:
This is an exceptionally lazy argument on a platform known for lazy arguments. There are people who consistently oppose censorship on principle, whether it's censoring Trumpists, censoring terrorists, or censoring any other disliked group.
If activists came to Signal with the phone numbers of identified Proudboys members, as well as the contents (retrieved from phones) of messages they sent via Signal planning an insurrection, what should Signal do?
Reverse engineering the Parler app to scrape all the public content from Jan 6 (including content marked "deleted" but not yet actually deleted) is a "hack" -- an unexpected and really cool thing.
I suppose this also is political, but what makes it a "hack" has nothing to do with politics. What makes it a hack is that people have orthodox beliefs about public scraping of websites that this challenged.
The "First Amendment" only deals with government restriction of free speech. You may not like a private company censoring your speech, but it's not a "First Amendment" issue. Indeed, the First Amendment means government can't stop private censorship.
Moreover, "Orwellian" is less about a totalitarian state and more about how politicians tell lies that sound truthful -- such as a senator claiming to be a constitutional lawyer while making one of the most common and basic mistakes about the First Amendment.
This "voter integrity" issue is more doublethink, by the way, and EXACTLY what Orwell was talking about. It was about searching desperately for any excuse that could plausibly be exploited to turn the election in Trump's favor.