I own a handful of large synthetic gems, in part because something in me is tickled by owning items that are cheap now but would've been readily recognized as hugely valuable 1,000 years ago. What, if anything, tops lab-created rubies for that ratio?
Penicillin would not be immediately recognized as valuable. Same problem with a giant tungsten cube. You can't trade it right after you get tossed back in time.
"Purple dye" is the most plausible reply so far imo (3x value by weight of gold), though I'm not sure how much they cared back then about getting the exact shade or properties of snail-derived Tyrian purple.
(A lot of other replies, in my opinion, vastly overestimate the willingness of some medieval merchant to pay a lot for a nonstandard good that they've never seen before and don't have confidence future buyers will pay them for.)
To everyone saying "maps": their world was full of bad maps. They can't tell you've got a true map by staring at the paper.
In futures anything like the present world, a major reason I expect AGI to have a discontinuous impact is that I expect AGI will not be *allowed* to do huge amounts of economic work before it is *literally* revolutionary, in the sense of political revolution.
What's AI gonna do in the current political environment? Design safe, reliable, effective vaccines in an hour instead of a day? That already wasn't the part that takes time; the part that takes time is the FDA, which doesn't go any faster with non-politically-revolutionary AI.
Is an AI going to build houses? We can already build houses, technologically speaking. It's just illegal.
Is an AI going to practice better medicine? With no MD??!?
Can an AI tutor kids? Colleges aren't about human capital. Knowledge isn't worth money; accreditation is.
@wolftivy @willdjarvis The scenario you generate is, in two critical ways, so unlike the real world that the answer to your scenario wouldn't generalize to the real world. One, I expect that any "true" AGI (as it would appear in the laboratory) scales to superintelligence by running deeper and wider.
@wolftivy @willdjarvis Second, AI is not currently on a trend to be controllable or alignable at all, at the superintelligent level. If alignable at all, it will be barely, narrowly alignable, on some task chosen and constrained to a bare minimum of ambition for alignability.
@wolftivy @willdjarvis So in the real world, when you "have" AGI first, what you mostly have is a non-survivable planetary emergency where you cannot scale it and then have it do anything nice, and as soon as somebody else steals or reproduces the code and scales it, everybody dies.
I shall observe for the record that those same people who proclaim "Don't worry about AGI, we have no idea how intelligence works and no idea how to build one!" also, by that same standard, have 'no idea' how to build OpenAI Codex or how GPT-3 works.
Suppose one conversely says: "To build Codex you just train it real hard to do a thing, and the way GPT-3 works is a bunch of layers that got trained real hard by gradient descent." If you can build things with that little knowledge, maybe it's possible to build AGI that way too.
Ah, but is AGI different? By what law of mind? Didn't you just get through telling me you had no idea how AGI worked? Then don't tell me you know it can't be a bunch of layers, and can't pop up if you train real hard on gradient descent on a loss function.
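(To make "a bunch of layers that got trained real hard by gradient descent on a loss function" concrete at toy scale, here's a minimal sketch in NumPy. Everything in it, the two layers, their sizes, the sine-of-sums target, is made up for illustration; it shows the technique, not anything about GPT-3 itself.)

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 8))               # toy inputs
y = np.sin(X.sum(axis=1, keepdims=True))    # toy regression target

W1 = rng.normal(size=(8, 32)) * 0.1         # first layer's weights
W2 = rng.normal(size=(32, 1)) * 0.1         # second layer's weights
lr = 0.05                                   # learning rate

for step in range(2000):
    h = np.maximum(X @ W1, 0.0)             # forward: linear + ReLU
    pred = h @ W2                           # forward: linear readout
    err = pred - y
    loss = (err ** 2).mean()                # the loss function

    # backward: hand-derived gradients of the loss w.r.t. each layer
    g_pred = 2 * err / err.size
    g_W2 = h.T @ g_pred
    g_h = g_pred @ W2.T
    g_W1 = X.T @ (g_h * (h > 0))

    W1 -= lr * g_W1                         # gradient descent update:
    W2 -= lr * g_W2                         # "train real hard" = repeat

print(f"loss after training: {loss:.4f}")
```

Scale the layer count, widths, and data up by many orders of magnitude and swap the toy target for next-token prediction, and this loop is, schematically, the whole recipe being gestured at.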
After many years, I think that the real core of the argument for "AGI risk" (AGI ruin) is appreciating the power of intelligence enough to realize that getting superhuman intelligence wrong, ON THE FIRST TRY, will kill you ON THAT FIRST TRY, not let you learn and try again.
From there, any oriented person has heard enough info to panic (hopefully in a controlled way). It is *supremely* hard to get things right on the first try. It supposes an ahistorical level of competence. That isn't "risk"; it's an asteroid spotted on direct course for Earth.
There is further understanding that makes things look worse, like realizing how little info we have even now about what actually goes on inside GPTs, and the likely results if that stays true and we're doing the equivalent of trying to build a secure OS without knowing its code.
I realize this take of mine may be controversial, but the modern collapse of sexual desire does seem to suggest that our civilization has become bored with vanilla sex and we must move towards a more BDSM / Slaaneshi aesthetic in order to survive.
look, I'm not saying we couldn't ALSO solve this problem by removing the supply constraints that turn housing, medicine, childcare, and education into Infinite Price Engines; but we're clearly NOT going to do that, leaving Slaanesh as our only option
so many commenters suggesting things that are FAR less politically realistic than a mass civilizational turn towards Slaaneshi decadence. get inside the Overton Window you poor naive bunnies.
Suppose the US govt announces a nontaxable $300/month universal benefit for all 18+ citizens (with no requirement to have, or not have, a job), paid for by a new tax on land values (so monetarily neutral). What is the effect on wages?
I'm sorry, I definitely should've clarified this: "Wages" as in "wages-per-hour" rather than as in "total wage income of all laborers".
If you don't expect this operation to be monetarily neutral, please answer for real wages.
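(For scale, a back-of-the-envelope sketch of what the land-value tax would have to raise for the scheme to be neutral in the stipulated sense. The adult-citizen count is my rough assumption, roughly 250 million, not a figure from the thread.)

```python
# Rough annual cost of the proposed benefit; the population figure is an
# assumption for illustration (~250M US citizens aged 18+), not exact.
adults = 250_000_000
monthly_benefit = 300
annual_cost = adults * monthly_benefit * 12
print(f"~${annual_cost / 1e12:.2f} trillion per year")  # ~$0.90 trillion
```

So, under that population assumption, the new land-value tax must raise on the order of $0.9 trillion per year to fund the benefit.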