Mostly Peaceful Aztec Empire
after a video game trailer featuring an aztec warrior goes viral, posters bravely stand up against human sacrifice — and the knee-jerk defense of a uniquely evil society
In the prevailing cultural narrative, instilled in Americans from childhood, Europeans are bloodthirsty murderers in an Edenic pre-Columbian paradise: Pocahontas villains disrupting the peaceful lives of buckskin-clad Indians frolicking through the forest, singing to raccoons…
People who defend Aztec culture like to point out that other historical civilizations engaged in an array of brutality, including human sacrifice. This is true, but what makes Aztec society unique is not the presence of human sacrifice, but its centrality. An analog can be found…
If this debauched gender-goblin pride parade wasn’t bad enough, consider that the Aztecs also sacrificed children to the god Tlaloc, who was pleased by their tears because they resembled the rain he controlled. Archeological evidence suggests that, on the way to execution…
BASE REALITY: An Interview with Grimes
the dawn of ai: technocracy v. the artist class, grimes trains an AI clone, simulation theory, the optimistic moral imperative, abundance, and waking up the cosmic robot gods
—@micsolana
“That’s not their job though,” Grimes says. “That’s not their job. That is the job of the artist.”
It was a question we’d circle a few more times throughout the interview: why are the architects of artificial intelligence so terrible at explaining what artificial intelligence is…
“It’s still kind of ChatGPT,” Grimes says. “It’s also so funny being in Silicon Valley because I was like, yeah, my consciousness exists (NOTE: cloned). But then everyone else is like, ‘oh, I have one too.’ Everyone here just has a chatbot of themselves.”
USA Today wrote “Some experts argued roles like this should go to actors who naturally have this body type.” But nobody naturally weighs 600 pounds, as Brendan Fraser’s character does. @river_is_nice dives into the fat activist response to his Emmy win — piratewires.com/p/the-whale-do…
Roxane Gay on The Whale: "[The writer & director] considered fatness the ultimate human failure." But it's not controversial to view 600-pound obesity as a tragic human failure to avoid at all costs. And Gay, like all fat activists, implies a double standard for obesity here.
Some said the quiet part out loud and directly compared the fat suit to blackface. But being fat is not an immutable characteristic like sex or skin color. Nor is it an identity that people are born into and raised in, like certain religious affiliations or ethnicities.
Corporate cost-cutting, social media, and a weak justice system coalesced into the perfect 21st century cultural Molotov cocktail. Milwaukee’s Kia Boys lit the fuse, and an explosion of car thefts swept the nation. @NickAndrewRusso’s full-time PW debut — piratewires.com/p/kia-boy-stol…
Some Kias & Hyundais, lacking basic anti-theft features, are insanely easy to steal.
Kids in Milwaukee figured this out. They started stealing them and posting hot-wiring tutorials and joyriding POVs to TikTok.
Before long, “Kia Boys” had popped up in dozens of American cities.
Thieves as young as 10. High-profile victims including NFL players, undercover cops, and Fox News crews. A growing death toll. Dropped insurance coverage. Free steering wheel clubs. Class action lawsuits.
We ran some of the most viral chatbot screenshots through 6 different AI text detectors. The results: either some of the screenshots are hoaxes, or the tools don't work, or it's a mix of both. piratewires.com/p/ai-text-dete…
This GPT-3 screenshot, which has over 620k views, was unambiguously categorized by all but one detector as human-written. OpenAI's detector wouldn't analyze it because the text is too short. But some detectors categorized other screenshots tweeted in that same thread as from AI.
Complicating the picture, when we had the detectors analyze ChatGPT-written text that we prompted ourselves, they couldn't consistently agree on the right answer.