Mark Riedl
AI for storytelling, games, explainability, safety, ethics. Professor @GeorgiaTech. Associate Director @MLatGT. Time travel expert. Geek. Dad. he/him
Mar 22 6 tweets 2 min read
What’s wrong with “explainable A.I.”

(Making the rounds again.)

1. Most XAI algorithms were not designed with humans in mind, or with consideration of what doctors really need help with

2. I still believe in the power of XAI, but research is still nascent …

3. Verification, which is often advocated, serves a different purpose than explanation. Verification tells us whether a system complies with regulations. Explanation can tell us, on an instance-by-instance basis, whether to trust the system.

4. It’s not one or the other; do both.
Mar 21 4 tweets 1 min read
Rich white dudes will go to great lengths to avoid their money helping marginalized communities.

Tell me how this doesn’t just end up giving money to those who already have privileges? What does this individual look like? Someone who doesn’t have food or housing insecurity, so they can focus on their studies in K-12 and find themselves with the time and energy to do more.
Mar 20 8 tweets 2 min read
The US college system is under a lot of stress right now. Also, just about everything in this thread is wrong.

College debt is a huge problem. States have been underfunding public universities for the last 40 years. This is one of the main causes of exploding student debt. The states should honor their commitments to educating their populations. Boomers were the last generation to benefit
Jan 12 8 tweets 3 min read
I’ve been told my conversations with the author were influential to this book, and that it says nice things about my research.

“There’s no manual of human interaction, Riedl sighs”
Sep 6, 2021 7 tweets 2 min read
I would normally never send anyone to Lesswrong.com, but someone posted Sam Altman’s remarks about OpenAI’s plans for GPT-4, and I have thoughts… 1/7

GPT-4 will focus on coding (à la Codex). It will not be much bigger than GPT-3. The focus will instead be "line of sight" planning, which is not really planning; it just means bigger context windows and output windows. 2/7
Sep 4, 2021 5 tweets 2 min read
I used GPT-J to create new loot items.

Sometimes GPT gets a bit over-excited and starts to tell a story instead.
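A minimal sketch of how loot generation with a causal language model like GPT-J could be set up via few-shot prompting. The example items, the list format, and the `build_loot_prompt` helper are all hypothetical illustrations, not the author's actual code:

```python
# Hypothetical sketch of few-shot prompting for loot-item generation.
# The example items and the prompt layout are illustrative assumptions,
# not the author's actual setup.

FEW_SHOT_ITEMS = [
    "Sword of Embers: a blade that glows faintly near fire.",
    "Boots of the Marsh: allow silent movement through shallow water.",
    "Amulet of Echoes: repeats the last spell cast within earshot.",
]

def build_loot_prompt(examples):
    """Build a few-shot prompt asking the model to continue a loot list."""
    lines = ["Fantasy loot items:"]
    lines += [f"- {item}" for item in examples]
    # Leave a dangling bullet so the model continues the list pattern
    # (rather than drifting into telling a story).
    lines.append("-")
    return "\n".join(lines)

prompt = build_loot_prompt(FEW_SHOT_ITEMS)
# `prompt` would then be fed to a text-generation model, e.g. GPT-J
# loaded through a library such as Hugging Face transformers.
```

The dangling bullet at the end of the prompt is one common trick for keeping a language model in list-completion mode instead of free-form narration.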
Aug 1, 2021 7 tweets 2 min read
In about 3 weeks universities will be in session again. Many universities (like my own) want to pretend that things will be back to normal. The buildings and classrooms and quads will all be there and look the same. The routines of commuting to classes will be the same… 1/7

But WE will not be the same. We may still be suffering from mental fatigue. We may have developed new life routines and work habits that are suddenly incompatible with on-campus life. 2/7
Apr 17, 2021 9 tweets 7 min read
For some insane reason, my team submitted 7 papers to the NAACL Workshop on Narrative Understanding.

Even more insane: all seven were accepted!

1. Fabula Entropy Indexing: Objective Measures of Story Coherence
@lcastricato @spencerfrazier @JonathanBalloch

A new way to OBJECTIVELY measure the coherence of story generation systems. Grounded in narratology and validated in controlled studies.
May 15, 2020 10 tweets 3 min read
I’m finally ready to release my neural net based lyrics parody generation system…

Introducing: Weird A.I. Yankovic!

Runs on Google Colab: …

You can provide the rhyme scheme and syllables per line for an existing song, and it will write new lyrics to match.

In the true spirit of parody, here is a Michael Jackson song (“Beat It”) rewritten to be about food.

Then you can sing the song yourself, to the horror of others.
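The syllables-per-line constraint described above can be sketched as a simple template check. This is a hedged illustration, not the system's actual code: the vowel-group syllable counter is a crude approximation (a real system would presumably use a phonetic dictionary), and the function names are invented for this example:

```python
import re

def count_syllables(word):
    """Crude syllable estimate: count vowel groups.
    An approximation; a phonetic dictionary would be more accurate."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def line_syllables(line):
    """Total estimated syllables across the words of one lyric line."""
    return sum(count_syllables(w) for w in re.findall(r"[a-zA-Z']+", line))

def matches_template(candidate, target_counts):
    """Check candidate lyrics against per-line syllable targets
    taken from the original song."""
    lines = candidate.strip().split("\n")
    if len(lines) != len(target_counts):
        return False
    return all(line_syllables(l) == t
               for l, t in zip(lines, target_counts))
```

A generator could then sample candidate lines from a neural model and keep only those that satisfy the template (rhyme checking would be an analogous filter on line endings).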
Sep 21, 2019 7 tweets 1 min read
How do I get this type of gig?

Hollywood: It’s about an experimental AI that—

Me: that fails to converge until the programmer cleans up millions of lines of labeled data? Right! Very suspenseful! Never know if that’s going to work.

Hollywood: you’re fired
Feb 19, 2019 7 tweets 2 min read
This is a problem for the DOD and has been for a long time…

But it is addressable by fixing our higher education system, which is not producing enough AI/ML engineers and is skewed toward a small number of elite universities.

Under the belief that there are no secret algorithms, only secret engineering, the DOD mostly needs people who can build hardened AI/ML systems (let’s ignore the question of what these systems are for, for the moment).
Sep 17, 2018 4 tweets 2 min read
I’m super geeked ➡️ this is video of @MatthewGuz playing a game generated by an ML algorithm trained on video of Super Mario Bros., Kirby, and Mega Man.

Everything is learned from scratch: level design and mechanics/rules.

Paper:

What I like about this video is that the game is very different from any of the training examples.

The conceptual expansion algorithm is able to extrapolate beyond the training data, hypothesizing the existence of models that aren’t directly supported by the training data.