here is the csv data and the process behind creating this heatmap of parler video GPS locations gist.github.com/kylemcdonald/8…
here is an interactive map for all the geocoded parler videos. you can look into specific cities and countries, and check timestamps and video ids kylemcdonald.net/parler/map/
a short tutorial on how to view the remaining parler videos by editing your DNS records. many have been made inaccessible, but some are still available gist.github.com/kylemcdonald/d…
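the gist has the exact records, but the general shape of the trick is a hosts-file override: point the video hostname at an IP that still serves the content. both values below are placeholders, not real entries — pull the real ones from the gist.

```
# /etc/hosts — illustrative only; substitute the hostname and IP
# given in the gist above
<ip-from-gist>    <parler-video-hostname>
```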
you can see people moving from the white house (bottom) to the capitol (top), if you plot the video metadata with time on the x axis and longitude on the y axis. it looks like almost every second is covered from at least one angle.
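a minimal sketch of that plot, assuming a CSV with `Timestamp` and `Longitude` columns — the column names are guesses, the real CSV is in the gist above:

```python
import csv
from datetime import datetime

def load_points(csv_path):
    """Parse (time, longitude) pairs from the metadata CSV.
    Assumes ISO-8601 timestamps; column names are hypothetical."""
    times, lons = [], []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            times.append(datetime.fromisoformat(row["Timestamp"]))
            lons.append(float(row["Longitude"]))
    return times, lons

if __name__ == "__main__":
    # plotting stays behind the main guard so the parser is importable
    import matplotlib.pyplot as plt
    times, lons = load_points("parler-videos.csv")
    plt.scatter(times, lons, s=2)
    plt.xlabel("time")
    plt.ylabel("longitude")
    plt.show()
```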
1200+ videos uploaded to parler in the DC area on 1/6
wasn’t glenn the guy on backup vocals for poitras and snowden? really taking a new direction for his sophomore album.
there are around 14,000 parler videos that include personal computer usernames in the video metadata. these videos are typically edited with adobe products. (i replaced identifiable names with asterisks.)
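these usernames typically sit inside embedded windows file paths in the XMP metadata that editing software writes. a rough way to spot them, assuming that path shape — this is illustrative, not the pipeline used for the analysis:

```python
import re

# Windows-style user paths as they can appear in embedded XMP/Adobe
# metadata, e.g. "C:\Users\jsmith\Videos\clip.mp4" (hypothetical)
USER_PATH = re.compile(rb"[A-Z]:\\Users\\([^\\\x00\"<>|]+)")

def find_usernames(path):
    """Scan a file's raw bytes for usernames in embedded user paths."""
    with open(path, "rb") as f:
        data = f.read()
    return sorted({m.group(1).decode("utf-8", "replace")
                   for m in USER_PATH.finditer(data)})
```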
if you're not following the experiments happening with gemini, you are missing out. there are some deeply new & weird ideas here (mostly coming out of google creative lab) that are gonna be around long after the ghibli memes are gone 🧵
what is the future of human-AI interaction? is it virtual avatars/talking heads that feel like talking to another human? i don't think so. i think this is a fairly limited use case, maybe good for therapy, education, or customer service grisoon.github.io/INFP/
i think the future of human-AI interaction is probably a lot more like this co-drawing demo from @trudypainter. our shared medium with LLMs is typically text or voice, but most of the time we want a more native shared medium
i'm building an experimental tool for exploring 25 years of my old sketchbooks, with image and text recognition powered by gemini
the first step could not be automated—i pulled out my box of 33 sketchbooks, and took a photo of every single page. around 4300 pages in total.
it feels like a lot. it's almost my whole life. one of my high school art teachers required us to keep a sketchbook, and i never stopped. it's where i keep my thoughts organized. a lot of my thinking is visual or diagrammatic.
a new study on bitcoin energy use gives us the most accurate picture yet, and it basically confirms what we already knew: bitcoin is using about as much energy as the entire internet (around 12GW or 100TWh/year).
previous work on bitcoin energy has been “top-down”, broadly based on market-driven data, assuming that miners are spending a % of their profit on electricity. but when electricity prices go down or bitcoin price goes up, top-down approaches overestimate—by up to 50% for CBECI.
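the top-down logic in one toy calculation — every number here is illustrative, not from the study:

```python
# Top-down estimate: assume miners spend some fraction of their
# revenue on electricity, then divide by an assumed electricity
# price to back out energy use. All numbers are illustrative.
daily_revenue_usd = 30e6     # assumed total miner revenue per day
electricity_share = 0.5      # assumed fraction spent on power
price_usd_per_kwh = 0.05     # assumed average electricity price

kwh_per_day = electricity_share * daily_revenue_usd / price_usd_per_kwh
power_gw = kwh_per_day / 24 / 1e6   # kWh/day -> kW -> GW
print(round(power_gw, 1))           # 12.5

# halve the assumed electricity price and the estimate doubles,
# even though no actual hardware changed -- that's the
# overestimation risk of the top-down approach
```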
this new study is different because it measures tiny variations in the way that bitcoin mining equipment generates random numbers. these variations serve as a fingerprint allowing us to directly estimate the proportion of different machines.
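the rough idea, in a sketch of my own (not the paper's code): different mining hardware searches the nonce space with distinctive biases, so you can histogram a nonce byte over many blocks and match it against per-machine reference fingerprints. machine names and fingerprints below are made up.

```python
from collections import Counter

def nonce_histogram(nonces, bits=4):
    """Normalized histogram of the top `bits` bits of 32-bit nonces."""
    counts = Counter(n >> (32 - bits) for n in nonces)
    total = len(nonces)
    return [counts.get(i, 0) / total for i in range(2 ** bits)]

def l1_distance(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def closest_machine(nonces, fingerprints):
    """Match a sample of block nonces against per-machine reference
    histograms (`fingerprints`: name -> histogram, hypothetical)."""
    hist = nonce_histogram(nonces)
    return min(fingerprints, key=lambda m: l1_distance(hist, fingerprints[m]))
```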
one of the biggest features @OpenAI could add right now would be contextual generation constraints for their LLMs. for example, forcing responses to match a JSON template.
this is only possible when you have access to the inner sampling loop. the first company to offer this at scale is going to open up a huge market, because templated responses will lure in data scientists trying to make sense of big natural language corpora.
in practical terms: this would allow a programmer to point the chatgpt api at thousands of documents and turn them into a thousand-row spreadsheet/database with well-defined columns/schema. right now this is fraught with hiccups (though it's still faster than doing it manually).
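the core mechanism at toy scale: at each step, zero out every token the template doesn't permit, renormalize, and sample from what's left. this is just the shape of the idea, not anyone's actual implementation — `logits_fn` stands in for the LLM.

```python
import math, random

def constrained_sample(logits_fn, allowed_fn, max_steps=50):
    """Constrained decoding over a toy vocabulary.

    logits_fn(prefix)  -> dict token -> logit (stands in for the LLM)
    allowed_fn(prefix) -> set of tokens the template permits next;
                          an empty set means the template is complete.
    """
    prefix = []
    for _ in range(max_steps):
        allowed = allowed_fn(prefix)
        if not allowed:
            break
        logits = logits_fn(prefix)
        # mask: keep only template-legal tokens (unscored tokens get
        # logit 0), renormalize, then sample
        masked = {t: logits.get(t, 0.0) for t in allowed}
        z = sum(math.exp(v) for v in masked.values())
        r, acc = random.random() * z, 0.0
        for tok, v in masked.items():
            acc += math.exp(v)
            if acc >= r:
                prefix.append(tok)
                break
    return prefix
```

with a JSON template like `{"name": <string>}`, `allowed_fn` forces the structural tokens and only leaves the model a choice inside the value slots.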
is there any research on, like, LLM societies? 1. ask an LLM for unique 10-word descriptors for 1000 different people 2. have it fill out each person with a backstory 3. track their locations in a virtual space and run simulated interactions between them when they meet.
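steps 1–3, sketched with the LLM stubbed out so it runs offline — swap `describe` for a real model call:

```python
import random

def describe(kind, seed_text):
    """Placeholder for an LLM call that writes a descriptor or a
    backstory; here it just echoes so the sketch runs offline."""
    return f"{kind} for {seed_text}"

def simulate(n_people=50, steps=100, meet_radius=1.0, size=10.0, seed=1):
    rng = random.Random(seed)
    people = [{
        "id": i,
        "descriptor": describe("10-word descriptor", f"person {i}"),
        "backstory": describe("backstory", f"person {i}"),
        "pos": [rng.uniform(0, size), rng.uniform(0, size)],
    } for i in range(n_people)]
    meetings = []
    for t in range(steps):
        for p in people:  # random walk, clamped to the arena
            p["pos"][0] = min(size, max(0, p["pos"][0] + rng.uniform(-0.5, 0.5)))
            p["pos"][1] = min(size, max(0, p["pos"][1] + rng.uniform(-0.5, 0.5)))
        for i, a in enumerate(people):  # log encounters within radius
            for b in people[i + 1:]:
                dx = a["pos"][0] - b["pos"][0]
                dy = a["pos"][1] - b["pos"][1]
                if dx * dx + dy * dy <= meet_radius ** 2:
                    meetings.append((t, a["id"], b["id"]))
    return meetings
```

each logged meeting is then a prompt: hand both backstories to the model and ask it to write the interaction.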
the sims, feat gpt-4: turing’s purgatory
give the entire society a real challenge. natural disaster, impending war, rising fascism, climate change, and ask the LLM to read through millions of lines of interactions and find the most interesting ones, compile it into a book of plays. like an AI DAU gq.com/story/movie-se…
one year later, i've just published to arxiv a final post-mortem of ethereum's proof-of-work era emissions arxiv.org/abs/2112.01238 the final damage: 18.1 million tons of CO2, 20% more than the nord stream pipeline leaks kylemcdonald.github.io/ethereum-emiss…
i opted not to add any new mining equipment benchmarks (which would have gone in the red circle), and not to make any other changes that would have modified my previous predictions. i just ran the model for the remaining amount of time until the merge.
for easy replication, and possibly useful for other folks doing historical analysis of ethereum: i published a ~200MB sqlite3 database that includes the miner address and extra_data field for every PoW-era block. see the readme for a link github.com/kylemcdonald/e…
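a query sketch for that database — the table and column names here are my assumptions, check the repo README for the real schema:

```python
import sqlite3

# table/column names are assumptions; see the repo README for the
# actual schema of the published database
QUERY = """
SELECT miner, COUNT(*) AS blocks
FROM blocks
GROUP BY miner
ORDER BY blocks DESC
LIMIT 10
"""

def top_miners(db_path):
    """Top miner addresses by PoW-era block count."""
    with sqlite3.connect(db_path) as con:
        return con.execute(QUERY).fetchall()
```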