1/ In case you were wondering: Apple's replacement for Intel processors turns out to work really, really well. Some otherwise skeptical techies are calling it "black magic". Even translated Intel code runs extraordinarily well.
2/ The basic reason is that Arm and Intel architectures have converged. Yes, the instruction sets are different, but underneath, the architectural concerns have become very similar.
3/ The biggest hurdle was "memory-ordering": the order in which two CPUs see each other's writes to memory. It's the biggest problem affecting Microsoft's emulation of x86 on their Arm-based "Surface" laptops.
4/ So Apple simply cheated. They added Intel's memory-ordering (total store ordering) to their CPU. When running translated x86 code, they switch the CPU into a mode that conforms to Intel's memory ordering.
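To make that concrete, here's a minimal sketch (my own illustration, nothing to do with Apple's implementation) of the classic "message passing" test that separates the two memory models:

```cpp
// Message-passing litmus test. On x86's TSO the hardware makes stores
// visible in program order; on Arm's weaker model the CPU itself may
// reorder them, so "seen == 0" can occasionally happen here.
#include <atomic>
#include <cstdio>
#include <thread>

std::atomic<int> data{0}, flag{0};

void writer() {
    data.store(1, std::memory_order_relaxed);  // plain store
    flag.store(1, std::memory_order_relaxed);  // plain store
}

void reader(int* seen) {
    while (flag.load(std::memory_order_relaxed) == 0) { /* spin */ }
    *seen = data.load(std::memory_order_relaxed);
}

int main() {
    for (int i = 0; i < 1000000; i++) {
        data = 0; flag = 0;
        int seen = -1;
        std::thread t1(writer), t2(reader, &seen);
        t1.join(); t2.join();
        // An emulator must either insert barriers everywhere (slow)
        // or, as Apple did, add a TSO mode to the hardware.
        if (seen == 0) printf("reordering observed at iter %d\n", i);
    }
}
```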
5/ With the underlying architectural issues ironed out, running x86 code simply means translating those instructions to their Arm equivalents. This is very efficient, and often results in code that runs at the same speed as native.
6/ Sometimes there isn't a direct equivalent, so the translation produces slightly slower code, but benchmarks show translated x86 code consistently running at 70% or more of native speed.
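As a toy illustration of what translation means (Rosetta 2's real translator is vastly more sophisticated, and these mappings are simplified):

```cpp
// Hypothetical sketch: each x86 instruction maps to one or more Arm
// instructions. Some map 1:1; some, like rotate-through-carry, have
// no single Arm equivalent and expand to a short sequence.
#include <string>
#include <vector>

struct X86Insn { std::string mnemonic; };

std::vector<std::string> translate(const X86Insn& in) {
    if (in.mnemonic == "add eax, ebx")
        return {"add w0, w0, w1"};       // direct 1:1 equivalent
    if (in.mnemonic == "rcl eax, 1")     // rotate left through carry:
        return {"cset w2, cs",           //   save the old carry flag,
                "adds w0, w0, w0",       //   shift left (new C = bit 31),
                "orr  w0, w0, w2"};      //   bring old carry into bit 0
    return {"brk #0"};                   // unhandled: trap
}
```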
7/ In any case, a surprising number of popular apps already run on it. Apple seeded developer systems a few months back, allowing people to get their code ready.
8/ Normally, that wouldn't have been enough time. When you recompile code for a new architecture, it usually breaks. But as I said above: Arm and Intel architectures have converged enough that code is much less likely to break, making recompiling far easier.
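The classic sort of breakage (my own example, not specific to Apple's transition): code that assumes plain `char` is signed, which it is on x86 but not on some Arm ABIs, such as Linux's:

```cpp
// On x86, plain `char` is signed; on many Arm ABIs it is unsigned.
// There, the EOF comparison below can never succeed and the loop
// never terminates. (On signed-char platforms it has the opposite
// bug: a 0xFF byte in the file falsely compares equal to EOF.)
#include <cstdio>

int count_lines(FILE* f) {
    int lines = 0;
    char c;                         // BUG: should be `int`
    while ((c = getc(f)) != EOF) {  // EOF is -1; an unsigned char
        if (c == '\n') lines++;     // can never equal -1
    }
    return lines;
}
```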
9/ Apple has made some surprising choices. They've optimized JavaScript, with a special JavaScript-specific instruction, double-sized L1 caches, and probably other tricks I don't know of.
10/ Thus, as you browse the web, their new laptop will seem faster and last longer on battery because JavaScript runs faster, even though other benchmarks show the chip at roughly the same speed as Intel/AMD.
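That JavaScript-specific instruction is, as I understand it, FJCVTZS, which computes JavaScript's ToInt32 conversion in one step. Here's a software sketch of what that conversion requires without hardware help:

```cpp
// Software model of ECMAScript's ToInt32: truncate toward zero, then
// wrap modulo 2^32 into the signed 32-bit range. Arm's FJCVTZS does
// this in a single instruction; without it, engines need this whole
// multi-step sequence for every JS bitwise operation.
#include <cmath>
#include <cstdint>
#include <cstdio>

int32_t to_int32(double d) {
    if (std::isnan(d) || std::isinf(d)) return 0;
    double t = std::trunc(d);
    double m = std::fmod(t, 4294967296.0);  // reduce mod 2^32
    if (m < 0) m += 4294967296.0;           // shift into [0, 2^32)
    return (int32_t)(uint32_t)(uint64_t)m;
}

int main() {
    printf("%d\n", to_int32(3.7));           // 3
    printf("%d\n", to_int32(-1.0));          // -1
    printf("%d\n", to_int32(4294967296.0));  // 0 (wraps mod 2^32)
    printf("%d\n", to_int32(2147483648.0));  // -2147483648
}
```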
11/ The older MacBook Air had a dual-core CPU that ran at up to 3.8 GHz, but only 1.2 GHz when in low-power mode. Switching between fast and slow modes is how it conserves power on mobile.
12/ But it's ultimately inefficient. The Intel CPU is designed to run at 5 GHz. Downclocking to 1 GHz saves power -- but not as much as if you'd designed the processor to run at 1 GHz to begin with.
13/ Apple's strategy is to use two kinds of processors: one designed to run fast above 3 GHz, the other to run slow below 2 GHz. Apple calls these their "performance" and "efficiency" processors. Each is optimized for its own goal.
14/ When they need to conserve power, they turn off the "performance" processors and run code on their "efficiency" processors. They have 4x performance processors (twice that of their older Macs) plus 4x efficiency processors.
15/ All 8 can be active. When doing something that can use all 8, such as compiling code, it goes real REAL fast. Eight processors vs. the 2 in their old notebooks makes a difference.
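Software doesn't pick cores directly; on macOS you declare quality-of-service and the scheduler places background work on the efficiency cores. A minimal sketch using the pthread QoS API:

```cpp
// macOS-specific sketch: marking a thread QOS_CLASS_BACKGROUND makes
// it eligible to run on the efficiency cores, saving battery.
#include <pthread.h>
#include <pthread/qos.h>
#include <cstdio>

void* background_work(void*) {
    pthread_set_qos_class_self_np(QOS_CLASS_BACKGROUND, 0);
    // ... long-running, energy-insensitive work here ...
    return nullptr;
}

int main() {
    pthread_t t;
    pthread_create(&t, nullptr, background_work, nullptr);
    pthread_join(t, nullptr);
    printf("done\n");
}
```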
16/ A big part of this story is that Intel is about 3 years behind on Moore's Law. Apple Silicon uses the latest 5nm tech from TSMC, while Intel uses the older 10nm/7nm generation. Much of Intel's product line uses the even older 14nm/10nm generation.
17/ None of this is actual "black magic". It's all pretty understandable. It's just that all the various pieces have been executed really well, leading to a combined result that's a great leap forward.
18/ Another "magic" trick is how their "Swift" programming language uses "reference counting" instead of the "garbage collection" used on Android. They did something in their CPU to double the speed of reference counting's atomic operations.
19/ ...even when translating x86 code, all that reference-counting overhead (already more efficient than garbage collection) gets cut in half. Yet another weird performance enhancement to add to all the others.
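For the curious, reference counting boils down to atomic increments and decrements, as in this minimal sketch (the same pattern C++'s shared_ptr uses); making these atomics cheap is what the claimed speedup is about:

```cpp
// Minimal atomic reference counting. These operations execute
// constantly in a retain/release world, so their latency directly
// affects whole-program speed.
#include <atomic>

struct RefCounted {
    std::atomic<long> refs{1};
};

void retain(RefCounted* p) {
    // Increment can be relaxed: no ordering needed to take a ref.
    p->refs.fetch_add(1, std::memory_order_relaxed);
}

void release(RefCounted* p) {
    if (p->refs.fetch_sub(1, std::memory_order_acq_rel) == 1) {
        delete p;  // last reference dropped: freed immediately,
                   // no garbage-collector pause needed
    }
}
```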
Hi. Professional C/C++ programmer here. The open-source code I can find written by Adam Back and by Satoshi Nakamoto doesn't look remotely similar.
Back's code looks typical of academic Unix programmers who also hack their code to run on Windows.
Satoshi's code was written by a professional Windows programmer who also wrote for Unix.
Stylistically, they look nothing alike. There's not enough time between 2005 (the newest Adam Back code I can find) and January 2009 (when Satoshi published Bitcoin/0.1) to account for the change. Both are perfectly competent programmers, but stylistically they are completely different.
The NYTimes tried to compare their English language in posts/emails. I'm comparing their C/C++ language in their open-source code. The NYTimes merely points out that they both use C++, as if that's another corroborating detail, when the actual code seems to disqualify Adam Back.
I was a professional Windows C/C++ programmer throughout the 1990s who also had to make code work on Unix. Satoshi's code speaks to me -- that's exactly the sort of code I wrote, down to using 'printf' instead of 'cout'.
What I mean to say is that he's gotten rid of all the C++ class hierarchy nonsense and is primarily using C++ as a smarter C with lightweight objects.
It's a VERY distinctive choice. Conversely, the "style" (where he puts spaces and braces) is non-distinctive; it looks like everybody else's code.
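To illustrate what I mean by "C++ as a smarter C" (my own mock-up, not either man's actual code, though the identifier flavor echoes Bitcoin's source):

```cpp
// "Smarter C" style: plain structs with public members, concrete
// containers, Hungarian-ish prefixes, printf over cout. This is the
// opposite of class-hierarchy C++ (abstract bases, virtual methods,
// iostream operators everywhere).
#include <cstdio>
#include <string>
#include <vector>

struct CBlock {
    int nVersion = 1;
    std::vector<std::string> vtx;  // lightweight members, no getters
};

void PrintBlock(const CBlock& block) {
    printf("CBlock(nVersion=%d, vtx.size=%zu)\n",
           block.nVersion, block.vtx.size());
}
```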
Okay, here's how this lie works:
1. Everyone agreed that Russians did not hack election infrastructure.
2. Everyone agreed Russia meddled with the election in other ways, such as hacking the DNC and releasing emails from Podesta et al.
She correctly notes that the intelligence community concluded that Russia '"did not impact recent U.S. election results" by conducting cyber attacks on infrastructure'.
🧵So let's talk about the difficulties Netflix is having streaming the Tyson v. Paul fight, and how the stream gets from their servers to your TV/computer. This will be a longish thread.
In 1985, at the time of Tyson's first fight, TV technology was based upon "broadcasts". That meant sending one copy of a video stream to thousands, often millions, of receivers. A city would send the signal to a radio tower, which broadcast that signal across a wide area.
On today's Internet, though, everybody gets their own stream. There is no broadcasting, no sharing of streams. Every viewer gets their own custom stream from a Netflix server. That we can push so many point-to-point streams across the Internet is mind-boggling.
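Some back-of-the-envelope arithmetic (my numbers are assumptions, not Netflix's) shows why:

```cpp
// Why per-viewer unicast is hard: aggregate bandwidth is viewers
// times bitrate, with zero sharing. A broadcast tower serves any
// number of viewers with one signal; unicast delivers every bit
// separately.
#include <cstdio>

int main() {
    double viewers = 60e6;         // assumed concurrent viewers
    double mbps_per_stream = 5.0;  // assumed HD bitrate, Mbps
    double total_tbps = viewers * mbps_per_stream / 1e6;
    printf("aggregate: %.0f Tbps\n", total_tbps);  // 300 Tbps
}
```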
By the way, the energy density of C4 is 6.7 megajoules/kilogram.
The energy density of lithium-ion batteries is about 0.5 megajoules/kilogram.
C4 will "detonate" with a bang.
Lithium-ion batteries will go "woosh" with a fireball, if you can get them to explode at all. They conflagrate rather than detonate. They don't even deflagrate like gunpowder.
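Working the numbers (straight from the densities above):

```cpp
// Same mass, ~13x the energy in C4 -- and, more importantly, C4
// releases it nearly instantaneously while a battery burns off its
// energy over seconds to minutes.
#include <cstdio>

int main() {
    double c4_mj_per_kg = 6.7;     // from the thread
    double liion_mj_per_kg = 0.5;  // from the thread
    double mass_kg = 0.05;         // e.g., a 50 g device battery
    printf("C4:     %.3f MJ\n", c4_mj_per_kg * mass_kg);      // 0.335
    printf("Li-ion: %.3f MJ\n", liion_mj_per_kg * mass_kg);   // 0.025
    printf("ratio:  %.1fx\n", c4_mj_per_kg / liion_mj_per_kg); // 13.4x
}
```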
To get a lithium-ion battery to explode (in a fireball) at all, you have to cause physical damage, overcharge it, or heat it up.
Causing heat is the only way a hacker could remotely cause such an event.
I don't want to get into it, but I don't think Tavis is quite right. I mean, the original 25-million-view tweet is full of fail, and you should always assume Tavis is right ....
...but I'm seeing things a little differently.
🧵1/n
I'm a professional, so I can take the risk of disagreeing with Tavis. But this is just too dangerous for non-professionals, you'll crash and burn. Even I am not likely to get out of this without some scrapes.
3/n To be fair, we are all being lazy here. We haven't put the work in to fully reverse engineer this thing. We are just sifting the tea leaves. We aren't looking further than just these few lines of code.
The reason IT support people are so bitter is that YOU (I mean YOU) cannot rationally describe the problem:
You: The Internet is down
IT: How do you know the Internet is down?
You: I can't get email.
IT: Is it possible that the email servers are down and the Internet is working just fine? Can you visit Twitter on your browser?
You: Yes, I can visit the Twitter website.
IT: Is there any reason other than email to believe the Internet is down?
You: The last time I couldn't get email it was because the Internet was down.
The fact that IT doesn't call you a blithering idiot on every support call demonstrates saintly restraint, even if a little bit of their frustration leaks through.
A lot of good replies to my tweet, but so far this is the best: