LaurieWired
Jul 30

Programming languages used to be designed to be as powerful as possible.

Maximum possible utility given hardware constraints.

The pro move is to choose the *least* powerful, non-Turing-complete solution.

The entire web exists because of the Principle of Least Power:

Don’t take my word for it. Tim Berners-Lee (inventor of HTML, HTTP, etc.) had this to say:

“the less powerful the language, the more you can do with the data...”

HTML is purposefully *not* a real programming language.

The constraint pushed innovation toward data processing.

Imagine an alternate-reality Web, where HTML didn’t exist.

Java applets would have been a serious contender; they certainly allowed for rich interactivity.

Yet without a way to freely scrape simply-formatted data, search engines would have been a non-starter.

Berners-Lee warned that powerful languages have “all the attraction of being an open-ended hook into which anything can be placed”.

It’s hard to do, but sometimes you should ask yourself: can this be declared instead of coded?

Purposefully constraining yourself to the Principle of Least Power not only reduces the attack surface, but also opens up huge data-analysis capabilities later.
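
Here's a minimal sketch of what "declared instead of coded" can look like in C++ (the route table is a made-up example, not from the article):

```cpp
#include <cstdio>

// Least power: the routing policy is plain data. Any external tool can
// enumerate, validate, or transform this table without executing anything.
struct Route {
    const char* path;
    const char* handler;
};

constexpr Route kRoutes[] = {
    {"/home",  "render_home"},
    {"/about", "render_about"},
};

// Contrast with the "powerful" version: the same policy as an if/else
// chain buried in a function. To learn which paths exist, you'd have to
// run it or write a C++ parser.
const char* route_coded(const char* path);  // declaration only, for contrast

int main() {
    for (const Route& r : kRoutes)
        std::printf("%s -> %s\n", r.path, r.handler);
}
```

The data version can be linted, diffed, and scraped; the coded version can only be executed.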

Every programmer should take a look at Tim Berners-Lee's article.

Written in 1998, but still insanely relevant today:
w3.org/DesignIssues/P…

More from @lauriewired

Jul 29
When you make a bank ACH transaction, it’s literally just an SFTP upload.

It’s sent as a NACHA file: fixed-width ASCII, 94-character records padded out to a 940-byte block.

Bank-to-bank transactions cost ~0.2 cents. As long as the file travels through an encrypted tunnel, it’s compliant!

Here’s how the quirky system works:

Chase offers a sample NACHA file to look at.

Notice the rows padded with 9s. It’s an artifact of a 1970s rule about magnetic tape: "always fill the block".

To this day, the total line count *must* be a multiple of ten; otherwise the bank will drop the transaction.
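
A toy sketch of the "fill the block" rule (the helper name and layouts are made up; a real NACHA file also has file-header, batch, entry, and control records):

```cpp
#include <iostream>
#include <string>
#include <vector>

// NACHA records are fixed-width: 94 ASCII characters per line.
constexpr std::size_t kRecordLen = 94;
// The 1970s tape rule: records are grouped into blocks of 10.
constexpr std::size_t kBlockingFactor = 10;

// Pad with all-9s filler records until the line count is a multiple of
// ten; otherwise the receiving bank drops the file.
void pad_to_block(std::vector<std::string>& records) {
    const std::string filler(kRecordLen, '9');
    while (records.size() % kBlockingFactor != 0)
        records.push_back(filler);
}

int main() {
    // Stand-ins for real header/entry/control records.
    std::vector<std::string> records(4, std::string(kRecordLen, 'X'));
    pad_to_block(records);
    std::cout << records.size() << " lines, "   // 10 lines
              << records.size() * kRecordLen    // 940 bytes
              << " bytes of record data\n";
}
```

Ten 94-character records is exactly the 940-byte block mentioned above.
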
Of course, larger fintech firms (think Stripe) wrap it all in modern APIs, but SFTP is the default for most US banks.

Hilariously, the NACHA rules don’t clarify *how* transactions should be encrypted.

Only that “commercially reasonable” cryptography should be used.

Jul 28
Intel’s not doing so hot lately. Meanwhile, vendors are killing it at the RISC-V Summit in China.

One CPU got a SPECint2006 score of 10.4 per GHz!

To put that in perspective, an i7-4790K (Haswell) scores 8.1 per GHz, so the RISC-V part does roughly 28% more work per clock.

RISC-V is hitting high-end desktop territory FAST:

No one realizes how quickly a CPU supply chain completely independent of Western IP is progressing.

Not long ago, RISC-V performance was a joke.

Now it’s trading blows with x86 and high-end ARM!

In another generation or two, it’s going to be a *serious* contender.

UltraRISC (the vendor with the high SPECint2006 score) is fabbing on a pretty old node: TSMC 12nm.

They don’t even have full vector support yet!

Imagine, in a few years, a ~7nm chip with full vector extensions.

China will have *fast* CPUs with no license chokepoints!

Jul 22
Fading out audio is one of the most CPU-intensive tasks you can possibly do!

Values that get close to (but don’t quite reach) zero hit an underflow gap known as the "subnormal" range.

It’s a mathematical conundrum so tricky that both x86 and ARM added special flush-to-zero CPU controls just to handle it!

In computer science, mathematical real numbers are approximated by floating-point representations.

For a single 32-bit float, the smallest “normal” positive value we can hold is about 1.17 × 10^-38.

Tiny differences below that can get rounded off to zero, leading to divide-by-zero errors later!
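
A quick sketch you can compile to see the cliff yourself (plain standard C++, nothing vendor-specific):

```cpp
#include <cmath>
#include <cstdio>
#include <limits>

int main() {
    // Smallest "normal" positive float32: ~1.17 x 10^-38.
    float normal_min = std::numeric_limits<float>::min();
    // Smallest subnormal: ~1.4 x 10^-45. Everything in between is the
    // "gradual underflow" range IEEE 754 added.
    float subnormal_min = std::numeric_limits<float>::denorm_min();
    std::printf("normal min:    %g\n", normal_min);
    std::printf("subnormal min: %g\n", subnormal_min);

    // A fade-out in miniature: keep halving a sample and watch it drop
    // out of the normal range long before it reaches zero.
    float sample = 1.0f;
    int halvings = 0;
    while (std::fpclassify(sample) != FP_SUBNORMAL) {
        sample *= 0.5f;
        ++halvings;
    }
    std::printf("subnormal after %d halvings: %g\n", halvings, sample);
    // Arithmetic on subnormal values often takes a slow microcoded path,
    // which is why audio code enables flush-to-zero modes.
}
```
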
In the ’80s, companies were fighting over the IEEE 754 (floating-point) standard.

Intel wanted “good arithmetic”. DEC wanted to round off to zero.

As the most contested part of the proposal, Intel suggested a “subnormal” routine, so that 32-bit floats underflow gradually.

Jul 21
Windows is one massive (private) Git repo.

When I was at MS, the Windows source saw around ~3k PRs a day!

Regular Git didn’t scale to those levels at the time.

Internally, there was a progression from Git -> GVFS -> Scalar -> merged back into Git. Here’s how it worked:

Cloning Defender’s antivirus repositories alone would take a full day!

You may be wondering how we dealt with merge conflicts.

Teams were heavily siloed, with most of the effort put up front into the build process.

Syntax and such were checked with local builds before pushing.

Once a PR was made, a ton of automated checks would start.

For something small like a virus signature, they would make sure you didn’t break the Defender engine ;)

Of course, PRs were hand-reviewed, but honestly, the build process was so robust it caught just about anything.

Jul 14
Microsoft took ~30 years to become compliant with the C++ standard!

Seriously. From 1993 to 2020, MSVC’s preprocessor wasn’t feature-complete.

Code that compiled perfectly on Linux often broke.

Hold your judgment; there’s some interesting historical nuance:

A tech race in the ’80s led to unfixable debt.

An official standard wouldn’t exist until 1998.

MS engineers made “best guesses”, but they were competing with others (Borland, Watcom) for the C++ compiler market.

“We’ll clean it up after capturing market share.”

Popularity became their downfall.

Windows’ business model was dependent on legacy compatibility.

The mantra of “never break old code” effectively tied the compiler team’s hands.

Those early “ship it quick, good enough” preprocessor decisions? Kind of a problem...
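
One concrete example of the kind of code that compiled fine on Linux but broke under the old preprocessor (a well-known variadic-macro case; /Zc:preprocessor is the real conformance flag, added around 2020):

```cpp
// Traditional MSVC expands __VA_ARGS__ as a single token when it's
// forwarded into another macro.
#define ADD(a, b) ((a) + (b))
#define FORWARD(...) ADD(__VA_ARGS__)

// Conforming preprocessors (GCC, Clang, MSVC with /Zc:preprocessor):
// expands to ((1) + (2)).
// Traditional MSVC: "1, 2" lands in parameter `a`, `b` goes missing
// (warning C4003), and the expansion fails to compile.
int x = FORWARD(1, 2);

int main() { return x == 3 ? 0 : 1; }
```
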
Jul 10
There’s a cursed C++ competition where programmers try to create the largest possible error message.

Finalists created ~1.5GB of error messages from just 256 bytes of source.

Preprocessor exploits were so easy, the organizers had to create a separate division! Here are my favorites:

One contestant experimented with C-Reduce, a tool for searching out C++ programs that trigger unusual compiler behavior.

Maximizing a fitness function that rewards error output: no templates, no preprocessor hacks.

Just nested parentheses causing exponential error output!
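
The actual entries aren't reproduced here, but as a toy illustration of the amplification idea (this one uses template recursion, a different trick than the paren entry above), a few bytes of source can make the compiler emit a note per instantiation level:

```cpp
// Deliberately fails to compile: there is no base case, so the compiler
// recurses until it hits its instantiation depth limit (~900 in GCC).
template <int N>
struct Boom {
    using type = typename Boom<N - 1>::type;
};

Boom<900>::type x;  // error: template instantiation depth exceeded

// Compilers truncate the backtrace by default; with something like
// -ftemplate-backtrace-limit=0 (GCC/Clang), every "required from" level
// is printed, so tiny source yields a huge diagnostic.
```
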
For the “precision” subcategory, the aim is to produce exactly pi megabytes (pi × 1024 × 1024 bytes) of error output.

The source had to be under 256 bytes.

One anonymous entry hit it spot-on with just ~230 bytes of code.
