here is the csv data and the process behind creating this heatmap of parler video GPS locations gist.github.com/kylemcdonald/8…
here is an interactive map for all the geocoded parler videos. you can look into specific cities and countries, and check timestamps and video ids kylemcdonald.net/parler/map/
a short tutorial on how to view the remaining parler videos by editing your DNS records. many have been made inaccessible, but some are still available gist.github.com/kylemcdonald/d…
if you plot the video metadata with time on the x axis and longitude on the y axis, you can see people moving from the white house (bottom) to the capitol (top). it looks like almost every second is covered from at least one angle.
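the "every second covered" claim can actually be checked mechanically. a minimal sketch, assuming each video reduces to a (start_epoch_seconds, duration_seconds) pair — the real parler metadata fields are different, this is just the shape of the computation:

```python
# Estimate what fraction of a time window is covered by at least one video.
# Input: list of (start_epoch_seconds, duration_seconds) tuples (illustrative
# field names -- not the actual Parler metadata schema).

def seconds_covered(videos, window_start, window_end):
    """Fraction of seconds in [window_start, window_end) covered by >= 1 video."""
    covered = set()
    for start, duration in videos:
        for t in range(int(start), int(start + duration)):
            if window_start <= t < window_end:
                covered.add(t)
    return len(covered) / (window_end - window_start)

# Toy data: two overlapping clips, then a gap, then a third clip.
videos = [(0, 60), (50, 60), (200, 30)]
print(seconds_covered(videos, 0, 240))  # 140 of 240 seconds covered
```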
1200+ videos uploaded to parler in the DC area on 1/6
wasn’t glenn the guy on backup vocals for poitras and snowden? really taking a new direction for his sophomore album.
there are around 14,000 parler videos that include personal computer usernames in the video metadata. these videos are typically edited with adobe products. (i replaced identifiable names with asterisks.)
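the masking step can be done with a single regex pass over the raw metadata text. a sketch assuming windows-style adobe paths like C:\Users\&lt;name&gt;\… (the actual fields in the dump vary):

```python
import re

# Mask usernames embedded in Windows-style user paths found in metadata text.
# Adobe tools often write paths like C:\Users\<name>\... into XMP metadata;
# this pattern is illustrative, not the exact structure of the Parler dump.
USER_PATH = re.compile(r'([A-Z]:\\Users\\)([^\\"]+)')

def mask_usernames(text):
    # Replace the username segment with asterisks of the same length.
    return USER_PATH.sub(lambda m: m.group(1) + '*' * len(m.group(2)), text)

print(mask_usernames(r'C:\Users\jsmith\Videos\clip.mp4'))
# C:\Users\******\Videos\clip.mp4
```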
a new study on bitcoin energy use gives us the most accurate picture yet, and it basically confirms what we already knew: bitcoin is using about as much energy as the entire internet (around 12GW or 100TWh/year).
previous work on bitcoin energy has been "top-down": broadly based on market-driven data, assuming miners spend a percentage of their profit on electricity. but when electricity prices drop or the bitcoin price rises, top-down approaches overestimate, by up to 50% in CBECI's case.
this new study is different because it measures tiny variations in the way that bitcoin mining equipment generates random numbers. these variations serve as a fingerprint allowing us to directly estimate the proportion of different machines.
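once you know the machine mix, the bottom-up power estimate is just hashrate times fleet efficiency. a sketch with illustrative numbers (not the study's figures):

```python
# Bottom-up network power: hashrate times fleet efficiency.
# The 350 EH/s and 34 J/TH below are illustrative placeholders,
# not numbers from the study.

def network_power_gw(hashrate_eh_s, efficiency_j_th):
    th_per_s = hashrate_eh_s * 1e6       # 1 EH/s = 1e6 TH/s
    watts = th_per_s * efficiency_j_th   # J/s = W
    return watts / 1e9                   # W -> GW

print(network_power_gw(350, 34))  # 11.9 GW, i.e. ~104 TWh/year
```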
one of the biggest features @OpenAI could add right now would be contextual generation constraints for their LLMs. for example, forcing responses to match a JSON template.
this is only possible when you have access to the inner sampling loop. the first company to offer this at scale is going to open up a huge market, because templated responses will lure in data scientists trying to make sense of big natural language corpora.
in practical terms: this would let a programmer point the chatgpt api at thousands of documents and turn them into a thousand-row spreadsheet/database with well-defined columns/schema. right now this is error-prone (though still faster than doing it manually).
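the document-to-spreadsheet workflow looks roughly like this. `call_llm` here is a hypothetical stand-in for a chat-completion API call; without real sampling-level constraints, you have to validate the template yourself and retry on failure:

```python
import json

# Sketch: turn one document into one well-typed row via an LLM.
# `call_llm` is a hypothetical stand-in for an API call -- not a real client.
SCHEMA = {"title": str, "author": str, "year": int}

def extract_row(document_text, call_llm):
    prompt = (
        "Return ONLY a JSON object with keys "
        f"{list(SCHEMA)} for this document:\n{document_text}"
    )
    raw = call_llm(prompt)
    row = json.loads(raw)             # raises on malformed output
    assert set(row) == set(SCHEMA)    # enforce the template by hand...
    for key, typ in SCHEMA.items():
        assert isinstance(row[key], typ)  # ...including column types
    return row

# Toy stand-in "LLM" that always answers with valid JSON:
fake_llm = lambda prompt: '{"title": "On Computable Numbers", "author": "Turing", "year": 1936}'
print(extract_row("...", fake_llm))
```

with real sampling-loop constraints, the `json.loads` and type checks would be guaranteed to pass instead of being a retry loop.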
is there any research on, like, LLM societies? 1. ask an LLM for unique 10-word descriptors for 1000 different people 2. have it fill out each person with a backstory 3. track their locations in a virtual space and run simulated interactions between them when they meet.
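steps 1–3 above could be skeletoned like this. `llm` is a hypothetical callable that would generate the backstories and dialogue; here the simulation only tracks positions and detects meetings:

```python
import random

# Toy skeleton of the "LLM society" loop: agents wander a grid, and whenever
# two land on the same cell, a (hypothetical) `llm` callable writes their
# interaction. Backstory generation and memory updates are omitted.

def simulate(agents, llm, steps=50, size=10, seed=0):
    rng = random.Random(seed)
    pos = {a: (rng.randrange(size), rng.randrange(size)) for a in agents}
    transcripts = []
    for _ in range(steps):
        for a in agents:  # each agent takes a random step, wrapping at edges
            x, y = pos[a]
            dx, dy = rng.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])
            pos[a] = ((x + dx) % size, (y + dy) % size)
        cells = {}
        for a, p in pos.items():
            if p in cells:  # two agents meet: generate their exchange
                transcripts.append(llm(f"{cells[p]} meets {a}: write their exchange"))
            cells[p] = a
    return transcripts

chat = simulate(["a quiet blacksmith", "a restless cartographer"], lambda prompt: prompt)
```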
the sims, feat gpt-4: turing’s purgatory
give the entire society a real challenge. natural disaster, impending war, rising fascism, climate change, and ask the LLM to read through millions of lines of interactions and find the most interesting ones, compile it into a book of plays. like an AI DAU gq.com/story/movie-se…
one year later, i've just published a final post-mortem of ethereum's proof-of-work era emissions to arxiv arxiv.org/abs/2112.01238. the final damage: 18.1 million tons of CO2, 20% more than the nord stream pipeline leaks. kylemcdonald.github.io/ethereum-emiss…
i opted not to add any new mining equipment benchmarks (which would have gone in the red circle), and not to make any other changes that would have modified my previous predictions. i just ran the model for the remaining amount of time until the merge.
for easy replication, and possibly useful for other folks doing historical analysis of ethereum: i published a ~200MB sqlite3 database that includes the miner address and extra_data field for every PoW-era block. see the readme for a link github.com/kylemcdonald/e…
seeing this anti-web3 link go around, "keep the web free, say no to web3". i think this takedown misunderstands some details, so i'm going to try and clarify which critiques i think are actually valuable yesterweb.org/no-to-web3/ind…
1 "quadratic voting means people vote with money" yes, but it favors 10 people spending $1 over 1 person spending $10. it's better than existing lobbying systems that only empower people w a ton of money. quadratic voting works when each person gets a fixed set of votes.
the bigger issue is that web3 has no way to give individual people a fixed set of votes ("proof of personhood"). so it will always fundamentally be about what you can afford—and in the worst cases, about how many accounts you're willing to create ("sybil attacks").
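the arithmetic behind the $1-vs-$10 point: under quadratic voting, n votes cost n² credits, so a budget of b buys √b votes.

```python
import math

# Quadratic voting: n votes cost n^2 credits, so a budget of b buys sqrt(b) votes.
def votes(budget):
    return math.sqrt(budget)

print(10 * votes(1))  # ten people spending $1 each: 10.0 votes total
print(votes(10))      # one person spending $10: ~3.16 votes
```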
New research & tracker for Ethereum energy + emissions. I estimate that every day Ethereum delays PoS, it emits ~20ktCO2. That’s comparable to two to three coal power plants. kylemcdonald.github.io/ethereum-emiss…
This is where I say “what I found will shock you!” and give some more scary numbers and comparisons. But basically, I found confirmation that previous estimates are probably in the right range. I’ll recap in this thread, but I also wrote a short summary kcimc.medium.com/ethereum-emiss…
If my numbers are right, Ethereum is using around 2.6GW right now, annualized to 23TWh/year. Comparable in scale to a small country, or a US state like Massachusetts (21TWh/year). But also, only 0.1% of global electricity.
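The annualization is just continuous power times hours per year:

```python
# Convert a continuous power draw to annual energy: GW * (hours/year) -> TWh/yr.
def gw_to_twh_per_year(gw):
    return gw * 8760 / 1000  # 8760 hours per year; 1 TWh = 1000 GWh

print(gw_to_twh_per_year(2.6))  # 22.776 TWh/yr, i.e. the ~23 TWh/yr figure above
```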