This. Arguably the greatest achievement of the American republic was putting an end both to irreplaceable tyrants-for-life and to bloody wars of succession among pretenders to the throne. I can't think of anything more important in politics than preserving the peaceful transfer of power.
“The worst case, however, is not that Trump rejects the election outcome. The worst case is that he uses his power to prevent a decisive outcome against him.” theatlantic.com/magazine/archi…
“If you are a voter, think about voting in person after all. More than half a million postal votes were rejected in this year’s primaries, even without Trump trying to suppress them. If you are at relatively low risk for COVID-19, volunteer to work at the polls.
“If you know people who are open to reason, spread word that it is normal for the results to keep changing after Election Night. If you manage news coverage, anticipate extraconstitutional measures, and position reporters and crews to respond to them.
“If you are an election administrator, plan for contingencies you never had to imagine before. If you are a mayor, consider how to deploy your police to ward off interlopers with bad intent. If you are a law-enforcement officer, protect the freedom to vote.
“If you are a legislator, choose not to participate in chicanery. If you are a judge on the bench in a battleground state, refresh your acquaintance with election case law. If you have a place in the military chain of command, remember your duty to turn aside unlawful orders.
“If you are a civil servant, know that your country needs you more than ever to do the right thing when you’re asked to do otherwise.”
• • •
If “low-hanging fruit” or “ideas getting harder to find” was the main factor in the rate of technological progress, then the fastest progress would have been in the Stone Age.
Ideas were *very easy to find* in the Stone Age! There was *so much* low-hanging fruit!
Instead, the pattern we see is the opposite: progress accelerates over time. (Note that the chart below is *already on a log scale*)
Clearly, there is some positive factor that more than makes up for ideas getting harder to find / low-hanging fruit getting picked.
“Ideas getting harder to find” is ambiguous, so let me clarify.
In the econ literature it refers to a specific phenomenon, which is that it takes exponentially increasing R&D investment to sustain exponential growth. This is basically all the low-hanging fruit getting picked.
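As a rough illustration of that phenomenon (my own toy model, not from this thread): suppose idea production follows dA/A = α·S·A^(−β) with β > 0, so each unit of research effort S yields less growth as the knowledge stock A rises. Then holding the growth rate g constant forces S to grow exponentially. All names and parameter values here are illustrative assumptions.

```python
# Toy model (illustrative, not the thread's): idea production
#   dA/A = alpha * S * A**(-beta),  beta > 0,
# so sustaining constant growth g requires S = g * A**beta / alpha,
# which itself grows exponentially as A grows.

def required_research(g=0.02, alpha=1.0, beta=2.0, years=50):
    """Research effort needed each year to hold growth at rate g."""
    A = 1.0
    effort = []
    for _ in range(years):
        S = g * A**beta / alpha   # solve g = alpha * S * A**(-beta) for S
        effort.append(S)
        A *= 1 + g                # knowledge stock grows at constant rate g
    return effort

effort = required_research()
# Year-over-year growth of required effort is a constant (1+g)**beta,
# i.e. effort must grow exponentially -- the "harder to find" effect.
ratios = [effort[t + 1] / effort[t] for t in range(len(effort) - 1)]
```

With these assumed parameters, required effort compounds at (1.02)² ≈ 1.04 per year even though output growth stays at 2%.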
• The AI can do better at the goal if it can upgrade itself
• It will fail at the goal if it is shut down or destroyed (“you can’t get the coffee if you’re dead”)
• Less obviously, it will fail if anyone ever *modifies* its goals
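The coffee point can be made concrete with a minimal sketch (my own hypothetical setup, not from the thread): a planner that only maximizes the probability of reaching its goal ends up preferring to disable its off-switch, with no self-preservation goal programmed in.

```python
# Hypothetical toy (all names and numbers are my assumptions) showing
# "you can't get the coffee if you're dead": the agent's only objective
# is fetching coffee, yet a pure goal-maximizer prefers the action that
# prevents shutdown.

P_SHUTDOWN = 0.5  # assumed chance the off-switch gets pressed if left enabled

def success_prob(disable_switch: bool) -> float:
    """Probability the agent eventually fetches the coffee."""
    if disable_switch:
        return 1.0               # nothing can stop it from reaching the goal
    return 1.0 - P_SHUTDOWN      # it only succeeds if no one switches it off

# The maximizer picks whichever action gives a higher success probability:
best = max([False, True], key=success_prob)  # best == True: disable the switch
```

Self-preservation falls out of goal-maximization here; it is instrumental, not terminal.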
There is an AI doom argument that goes, in essence:
1. Sufficiently advanced AI will be smarter than us
2. Anything smarter than us, we cannot control
3. Having something in the world that we cannot control would be bad
∴ Sufficiently advanced AI would be bad. QED
One counter is to deny (1), e.g.: AI will never be that smart; intelligence is multi-dimensional, so it doesn't make sense to compare intelligences directly; superhuman intelligence is so far in the future that we shouldn't worry about it; etc.
This is becoming less popular recently as AI advances.
Another counter is to deny (2): we can build superintelligent systems, but have them be our tools or servants.
This is probably most popular among techno-optimists.
A technology can be:
1. So dangerous that no one can use it safely
2. Safe if used carefully, dangerous otherwise
3. Safe if used normally, dangerous in malicious hands
4. So safe that even bad actors cannot cause harm
Important to know which you are talking about.
Arguably:
Level 1 should be banned
Level 2 requires licensing/insurance schemes
Level 3 requires security against bad actors
Level 4 is ideal!
(All of this is a bit oversimplified but hopefully useful)
“Optimal Policies Tend to Seek Power” supposedly gives a theoretical basis for power-seeking behavior from AI
But it seems to just analyze a toy model and show that if you head towards a larger part of the state space, you are more likely to optimize a random reward function?
The intro claims that “power-seeking tendencies arise not from anthropomorphism, but from certain graphical symmetries present in many MDPs [Markov decision processes]”
But what is actually demonstrated seems much more trivial than that. What am I missing?
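To make the “larger part of the state space” reading concrete, here is a tiny Monte Carlo sketch (my own construction, not the paper's model): from a start state, one action leads to a dead end with one terminal state and the other leads to a junction with three. Sampling i.i.d. uniform rewards over terminal states, the option-preserving action is optimal for roughly 3/4 of reward functions.

```python
import random

# Toy MDP (my construction, not the paper's): "left" reaches a single
# terminal state; "right" reaches a junction with three terminal states.
# For a random reward function, count how often "right" is optimal.
LEFT_STATES = ["dead_end"]
RIGHT_STATES = ["a", "b", "c"]

def optimal_action(reward):
    """Return the first action whose best reachable reward is higher."""
    left_value = max(reward[s] for s in LEFT_STATES)
    right_value = max(reward[s] for s in RIGHT_STATES)
    return "right" if right_value > left_value else "left"

random.seed(0)
trials = 10_000
right_count = sum(
    optimal_action({s: random.random() for s in LEFT_STATES + RIGHT_STATES})
    == "right"
    for _ in range(trials)
)
# P(max of 3 uniforms beats 1 uniform) = 3/4, so this lands near 0.75.
frac_right = right_count / trials
```

This is exactly the flavor of result the tweet describes: keeping more states reachable is optimal for a larger share of random reward functions.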