Moore’s Law was a self-fulfilling prophecy: microprocessor manufacturers made keeping up with it their goal.
- Sophie Wilson, #qconlondon
4000 transistors is critical mass for a microprocessor. Fewer than that, and you can’t do enough.
The ARM1 had 25,000 transistors in 1985. - Sophie Wilson, designer of the ARM1 #qconlondon
ARM1 was designed using a computer. Its predecessor (the 6502) was laid out by hand. #qconlondon
Reduced instruction set -> smaller number of transistors in the instruction decoder.
32-bit (instead of 8-bit)
Better architecture made it faster
The Firepath microprocessor (2003) does the signal processing for DSL everywhere: it lives in those green cabinets in the road.
6 million transistors
entirely laid out by a computer
Much more complicated instruction set.
For more power, add more microprocessors…
… limited by Amdahl’s Law.
Adding more processors only helps the parallelizable parts.
Quickly, the sequential part of your program dominates.
If it’s half parallelizable, you can’t exceed a 2x speedup ever.
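To make the ceiling concrete, Amdahl’s Law gives the speedup from N processors when a fraction p of the work is parallelizable:

```latex
S(N) = \frac{1}{(1 - p) + p/N},
\qquad
\lim_{N \to \infty} S(N) = \frac{1}{1 - p}
```

With p = 0.5, even infinitely many processors give S = 1/(1 - 0.5) = 2: that’s the 2x ceiling above.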
No automatic compilation of scalar programming languages is going to work
to scale computing power with the increasing number of microprocessors in a computer.
We need a revolution in software.
- Sophie Wilson, #qconlondon
“Don’t write anything too slow, because you cannot assume that in the near future, a more powerful computer will come out and make that work.” Sophie Wilson, #QConLondon
For years, microprocessors increased in speed by 50%/year
but now it’s more like 3%.
All we can do is add more processors, so Amdahl’s Law rules.
Now we’re limited not by transistor size, but by power. They’re too hot.
Modern Intel processors have high burst performance, but most of the time they have to keep half the transistors dark.
They found best practices like: wrap each unit of work in a span; report errors in a standard field; use span names that are specific enough to tell you what’s happening but general enough for useful grouping.
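A minimal sketch of those three practices, assuming the OpenTelemetry Java API (the notes don’t name a library, so the tracer name, span name, and `doWork` helper below are my assumptions):

```java
import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.StatusCode;
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.context.Scope;

public class TracedWork {
    // Hypothetical instrumentation name; not from the talk.
    private static final Tracer tracer =
            GlobalOpenTelemetry.getTracer("example-instrumentation");

    static void processOrder(String orderId) {
        // Span name: specific enough to say what's happening,
        // general enough that all orders group together in one view.
        Span span = tracer.spanBuilder("process-order").startSpan();
        try (Scope ignored = span.makeCurrent()) {
            doWork(orderId); // the wrapped unit of work
        } catch (RuntimeException e) {
            span.recordException(e);          // error in the standard fields,
            span.setStatus(StatusCode.ERROR); // not ad-hoc attributes
            throw e;
        } finally {
            span.end();
        }
    }

    static void doWork(String orderId) { /* the actual work */ }
}
```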
A deep Java performance talk that I don’t have enough context for, by @PeterLawrey
Project Panama is about replacing JNI. In the meantime, if you want to share memory between processes cleverly and safely, you can use their chronicle-bytes library. #QConLondon
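Not the chronicle-bytes API, but a plain-JDK sketch of the underlying idea: two processes map the same file and see each other’s writes. The file path and field layout are my invention; the non-atomic update is exactly the unsafety a library like chronicle-bytes papers over.

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class SharedCounter {
    public static void main(String[] args) throws IOException {
        Path file = Path.of("/tmp/shared-counter.dat"); // hypothetical path
        try (FileChannel ch = FileChannel.open(file,
                StandardOpenOption.CREATE,
                StandardOpenOption.READ,
                StandardOpenOption.WRITE)) {
            // Every process that maps this file shares the same 8 bytes;
            // a write by one process is visible to the others without copying.
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, 8);
            long next = buf.getLong(0) + 1; // NOT atomic across processes --
            buf.putLong(0, next);           // that's the gap libraries fill.
            System.out.println("count = " + next);
        }
    }
}
```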
“If you go down to the low level for too long, you wind up writing systems that can’t be altered.” @PeterLawrey #QConLondon
This afternoon at #srecon, Adam Mckaig and Tahia Khan from @datadoghq talk about the evolution of their metrics backend.
The high-level architecture looks very familiar to me. The slightly more detailed one, less so: many parts!
For scale, break up incoming data and put it into Kafka.
hash(customer_id) -> partition_id
… but then one Kafka topic gets overloaded, so…
hash(customer_id) -> topic_id, partition_id
to send to topics in different clusters.
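A hypothetical sketch of that two-level routing (names and hashing details are mine, not Datadog’s): one hash of the customer id picks both a topic, which can live in a different Kafka cluster, and a partition within it.

```java
import java.util.List;

public class MetricRouter {
    record Route(String topic, int partition) {}

    private final List<String> topics; // e.g. one topic per cluster
    private final int partitionsPerTopic;

    MetricRouter(List<String> topics, int partitionsPerTopic) {
        this.topics = topics;
        this.partitionsPerTopic = partitionsPerTopic;
    }

    // hash(customer_id) -> (topic_id, partition_id)
    Route route(String customerId) {
        int h = Math.floorMod(customerId.hashCode(),
                topics.size() * partitionsPerTopic);
        return new Route(topics.get(h / partitionsPerTopic),
                h % partitionsPerTopic);
    }
}
```

Under the first scheme the same hash mapped straight to a partition of a single topic; adding the topic level lets a hot keyspace spread across clusters.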
Today at #srecon, @allspaw and @ri_cook give deep insight on real tools, incident timelines, and clumsy automation.
But not in person. 😭
Great tools (as opposed to machines) are near to hand and conform to the person who wields them. Like a hammer, or `top`. Yeah.
They are opinionated, but not prescriptive.
(machines do what they do, and you conform to them)
In software, tools like `top` help us see what’s going on in the digital space. @ri_cook et al see our work taking place on two sides of a divide. There’s meatspace (where we are) and digital space (where the software runs). You can’t reach out and feel digital stuff.
What can we learn from ALL the incidents? @courtneynash at @verica_io compiles reports from lots of companies into the VOID: Verica Open Incident Database. #SREcon
While every incident and every company is different, the distributions have the same shape. They are “positively skewed”: more short incidents than long ones.