I'm one of those in-the-trenches LLM researchers. I haven't participated much in the AI or AI safety discourse on Twitter.
After reading and thinking through various arguments for and against AI existential risk, here's a summary of my current beliefs.
1. Superhuman AGI could be dangerous in the same way that a team of villainous high-IQ scientists would be dangerous. Among humans, we've been fortunate that intelligence tends to correlate with goodwill, but this is not necessarily the case for AI.