The Internet Archive struggles mightily to save content when services shut down, but it is often a panicky, reactive process. I wonder if there could be a world where the IA acts as a default host-of-record for startups, with a super-easy CDN relationship such that the content is archived-by-default in a heavily bandwidth-restricted way, but your customers are served through conventional commercial means. I could imagine a CDN giving a nice rate for the "public good" of a continuously archived and mirrored service.

I have a vague feeling that something like this could combine with a blockchain-style technology to make internet applications that could outlive companies. A niche multiuser game that couldn't meet company revenue goals could still be "fed" by anyone who wanted to push resources at it, since the mechanisms would all be public and mirrored. There would be great temptation to keep "one critical piece" off of the public infrastructure, but being "certified pure" might be valuable to consumers as a sign that it won't just disappear.
Had a long talk with @ESYudkowsky about AI safety. The pessimists fear that we only have one shot at getting alignment right, and will probably blow it, but I think we will have ample experimental opportunities. Our core disagreement is over the power provided by superhuman intelligence.

Being Really Smart isn't going to let you deploy nanotechnology weapons overnight, or immediately hack human psychology to the level of mind control. That may be possible over years, but not days or weeks. Technology development is almost never a pure product of intellect.

Intelligence can lead to power, but by itself it isn't power. Tens of thousands of years passed while humans were still getting regularly eaten by wild animals.
People like the idea of hard and fast rules, but putting +/-infinity as a factor in a policy decision is almost never the best plan. Hiring is an obvious example, with "requirements" that filter out large chunks of applicants who don't have some kind of back-channel influence. Disqualifications are rarely explicitly stated, but they exist. If a company has a reasonable flow of applications, just tossing out all the ex-cons is an easy call. On the other hand, applicants often don't appreciate that interviewing prospects is a very non-trivial cost to an organization, and "interview everyone" is not viable for places in high demand. Given imperfect signals, a degree of random exploration is mathematically optimal, but "add up your stats and roll a D20 to see if you get a job interview" is unlikely to be very popular.
A lot of indie game devs want to do everything themselves, either by leaning on the asset store or by becoming a polymath coder/artist/modeler/sound designer. It isn't impossible, and everyone has their favorite example, but it definitely isn't the high-probability path to actually producing something successful. If it is a personal growth hobby, then fine, but if you want to compete in a very crowded market, expanding the team with complementary skill sets is usually critical. I think about this a lot as I sit here working on AI by myself.

To make this a little more actionable -- maybe the next development task you brace yourself for shouldn't be learning how to make hair in Blender, but rather scouting partners or contractors you can afford (do not value your time at zero!). There may be a middle ground for ...
Before the iPhone existed, I worked on a few games for what were called "feature phones": Doom RPG 1&2, Orcs&Elves 1&2, and Wolfenstein RPG. Qualcomm's native-code BREW platform had better versions, but I haven't seen any emulators or archives for it, so they may be lost at this point. The J2ME (Java mobile) versions are still floating around, and can be emulated.

My son wanted to get O&E2 running, so we set out on a little adventure. Kemulator ran the game, but audio was glitchy and it hung after you died in-game. Well, we are programmers, we should be able to fix it. Unlike most emulator projects, Kemulator turned out to be closed-source abandonware, so we moved over to freej2me, which is a live GitHub project. The hang didn't happen, but audio was even worse. Missing sound effects was a simple bug fix --
I get asked for career advice a lot, and while my "learn deeply" pitch may be good long-term advice, it doesn't help with breaking in -- being able to write your own toolchain with a hex editor is great and all, but it doesn't add value at most companies.

I suspect there is a useful path that I think of as the "tool master". Modern art and programming tools are enormously complex systems, and the typical user only touches a tiny fraction of their features. Picking a path of study that revolves around deeply learning a tool rather than building works is potentially backwards, but if you learn why every feature exists, you actually learn a lot about the craft the tool is used for, and you are very likely to be able to add value to a team almost immediately by teaching tricks to the existing developers.
After complaining that numpy took many hours to solve a 64k x 64k matrix, I broke out cuSolver, Nvidia's GPU linear algebra library. A 32k matrix gets solved (LU decomp) over 1000x faster than base numpy (with MKL not loving my AMD CPU), but a 64k matrix of floats is too big to solve directly on my 24 GB Titan RTX card. The nice thing about working with a low-level library is that you have to explicitly allocate the temporary working buffers, so when it doesn't fit on the device, I can put it in pinned host memory or on my other card connected by NVLink. The 64k matrix gets solved in 109 s with NVLink memory, which is still 200x faster. At 32k, the comparison is:

Local mem:  2.2 s
NVLink mem: 21.7 s
Host mem:   80.8 s

Clearly very bandwidth-bound! There is probably a super-linear speedup for explicit multi-GPU computation.
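A minimal sketch of that allocation pattern, using cuSolver's dense single-precision LU routine (cusolverDnSgetrf). The fallback logic and sizes here are my assumptions rather than the exact code from the thread, and it relies on unified addressing so the kernels can dereference a pinned host pointer; error checking is omitted.

    /* Sketch: LU-factor a 64k x 64k float matrix with cuSolver, falling
       back to pinned host memory when the matrix won't fit on the device. */
    #include <stdio.h>
    #include <cuda_runtime.h>
    #include <cusolverDn.h>

    int main(void) {
        const int n = 65536;                         /* 64k x 64k */
        const size_t bytes = (size_t)n * n * sizeof(float);

        cusolverDnHandle_t handle;
        cusolverDnCreate(&handle);

        /* Matrix storage: ~17 GB at 64k, too big for a 24 GB card once
           the workspace is added, so fall back to pinned host memory,
           which the GPU can read over PCIe/NVLink. */
        float *A;
        if (cudaMalloc((void **)&A, bytes) != cudaSuccess)
            cudaMallocHost((void **)&A, bytes);      /* pinned host fallback */

        /* ... fill A with the system to factor ... */

        /* cuSolver makes the temporary workspace explicit: query its
           size, then place it wherever it fits. */
        int lwork = 0;
        cusolverDnSgetrf_bufferSize(handle, n, n, A, n, &lwork);

        float *work; cudaMalloc((void **)&work, sizeof(float) * (size_t)lwork);
        int *ipiv;   cudaMalloc((void **)&ipiv, sizeof(int) * n);
        int *info;   cudaMalloc((void **)&info, sizeof(int));

        cusolverDnSgetrf(handle, n, n, A, n, work, ipiv, info);  /* LU decomp */
        cudaDeviceSynchronize();

        int h_info = 0;
        cudaMemcpy(&h_info, info, sizeof(int), cudaMemcpyDeviceToHost);
        printf("getrf info = %d\n", h_info);         /* 0 == success */

        /* A subsequent solve would call cusolverDnSgetrs with the factors. */
        cusolverDnDestroy(handle);
        return 0;
    }

Putting the matrix on a peer GPU instead is the same idea: allocate on the other device, enable peer access with cudaDeviceEnablePeerAccess, and pass that pointer; NVLink just makes the remote reads much cheaper than PCIe host reads, which is what the timings above show.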