Julian Togelius
AI and games researcher. Associate professor at NYU; director of @NYUGameLab; co-founder of https://t.co/FnakJLkAXW.
Dec 9, 2023 4 tweets 1 min read
At least from this press release, it seems that the EU AI Act came out less bad than feared. We seem to have avoided any need for licensing or similar for foundation models, and open-source distribution is permitted and seemingly even encouraged.
europarl.europa.eu/news/en/press-…
The transparency requirements could be good or bad depending on the details. The copyright-law compliance requirement is more questionable, as it seems to me that copyright law should adapt to the age of AI rather than the other way around.
Aug 15, 2023 12 tweets 3 min read
We have a new, embarrassingly simple method for generating sprites and levels from text. It's also fast. We call it the five-dollar model.
Read the paper: arxiv.org/abs/2308.04052
And see it in action: https://t.co/QW0ZpLn8QH
By training on a few hundred human-created 2D levels and their annotations, we created a model that could reliably create levels that corresponded to simple text prompts. In milliseconds. Image
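The shape of such a model can be illustrated with a toy sketch: a fixed text embedding feeds a single layer that emits per-cell tile logits for the whole level at once, which is why inference takes milliseconds. Everything here (the tile set, vocabulary, bag-of-words encoder, and single linear layer) is an illustrative assumption, not the paper's actual architecture.

```python
import numpy as np

TILES = [".", "#", "E", "X"]   # hypothetical tile set: floor, wall, enemy, exit
VOCAB = ["many", "few", "enemies", "walls", "open", "maze"]
H, W = 8, 8                    # level dimensions

rng = np.random.default_rng(0)
# One (untrained) linear layer from a prompt embedding to per-cell tile logits.
weights = rng.normal(scale=0.1, size=(len(VOCAB), H * W * len(TILES)))

def embed(prompt: str) -> np.ndarray:
    """Bag-of-words embedding over a fixed toy vocabulary."""
    words = prompt.lower().split()
    return np.array([float(w in words) for w in VOCAB])

def generate(prompt: str) -> list:
    """Map a prompt to an H x W tile grid via a single matrix multiply."""
    logits = embed(prompt) @ weights
    grid = logits.reshape(H, W, len(TILES)).argmax(axis=-1)
    return ["".join(TILES[t] for t in row) for row in grid]

level = generate("many enemies few walls")
print("\n".join(level))
```

In the real model the weights would be trained on the annotated human-made levels; the point of the sketch is only that generation is one forward pass, with no search or iterative sampling.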
Apr 4, 2023 20 tweets 4 min read
Is Elden Ring an existential risk to humanity? That is the question I consider in my new blog post. Yes, it is a comment on the superintelligence/x-risk debate. Read the full thing here:
togelius.blogspot.com/2023/04/is-eld…
Or follow along for a few tweets first, if you prefer. First of all: Elden Ring is not an existential risk to humanity. It's a great game, and it's a ridiculous idea that it would kill us all. But why would anyone take seriously that "AI" could kill us all, when Elden Ring couldn't?
Mar 25, 2023 13 tweets 4 min read
Rob Long (@rgblong) bringing the heat. Like Rob, I, too, want to live in a world where reading William James would help me do AI research. But, according to Rob, it doesn't.
Feb 14, 2023 12 tweets 6 min read
Large language models can do... lots of things. But can they generate game levels? Playable game levels, where puzzles are solvable? Two papers announced today address this question. The first is by our team at @NYUGameLab - read on for more: In our paper, "Level Generation Through Large Language Models", by Graham Todd, @Smearle_RH, @utheprodigyn, @Bumblebor, and myself, we fine-tune GPT-2 and GPT-3 to generate Sokoban games encoded row-by-row as strings.
arxiv.org/abs/2302.05817
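The row-by-row string representation can be sketched as below. The specific character convention is an assumption (a common ASCII notation for Sokoban), not necessarily the one used in the paper; the sanity checks stand in for the validation that would precede running an actual solver on a sampled level.

```python
# '#' wall, '@' player, '$' box, '.' goal -- assumed ASCII convention.
LEVEL = [
    "#####",
    "#@$.#",
    "#####",
]

def encode(rows: list) -> str:
    """Flatten a grid into one newline-joined string for an LM to model."""
    return "\n".join(rows)

def decode(text: str) -> list:
    return text.split("\n")

def is_well_formed(rows: list) -> bool:
    """Cheap checks an LM-sampled level must pass before a solver
    verifies actual playability."""
    widths = {len(r) for r in rows}
    flat = "".join(rows)
    return (len(widths) == 1                        # rectangular
            and flat.count("@") == 1                # exactly one player
            and flat.count("$") == flat.count(".")  # one goal per box
            and flat.count("$") >= 1)               # at least one puzzle

assert decode(encode(LEVEL)) == LEVEL
print(is_well_formed(LEVEL))  # True for this toy level
```

The appeal of this encoding is that a fine-tuned GPT-2 or GPT-3 can emit levels as ordinary text; the hard part, solvability, is then checked separately.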
Dec 5, 2022 4 tweets 1 min read
My main advice going into AI research is to not do what everyone else is excited about right now because it gets amazing results. The next breakthrough is going to come from somewhere else. So go work on something half-obscure that has a chance of moving us in a new direction. I know that it might feel like the topic of the day is the pinnacle of AI and the future will be all about this thing. People thought so about LISP, A*, expert systems, decision trees, SVMs, Deep RL etc. Historically, an approach will plateau, and new perspectives come along.
Oct 13, 2022 21 tweets 26 min read
@kchonyc @davidchalmers42 In all, not very strong evidence for sentience. But maybe some weak evidence for. So is there any evidence against? @kchonyc @davidchalmers42 Okay. So what form should the evidence against take?
Aug 8, 2022 20 tweets 4 min read
Some people (still!) think that video games are not an important research topic, or at best a niche application for AI. So I wrote this "Apology for Video Games Research":
togelius.blogspot.com/2022/08/apolog…
First of all, this is an apology in the sense of Socrates' apology: a forceful defense. I am certainly not apologizing for studying video games, and neither should you. Video games are perhaps the most important research topic there is.
Aug 6, 2022 7 tweets 2 min read
A post by someone who really wants crypto to succeed, lamenting that none of the purported applications do anything useful. The interesting part: the applications he's most optimistic about are... in games! Let that sink in. That's how bad it is.
amirbolous.com/posts/crypto-f…
For context, no game designer worth their salt believes there is any use case for crypto technologies in games at all. The "crypto games" that come to market, buoyed by rotten venture capital, are terribly designed and/or outright scams.
Jun 25, 2022 7 tweets 2 min read
My position on AGI is now officially that I'm an "AGI noncognitivist". For reference, see theological noncognitivism.
en.wikipedia.org/wiki/Theologic…
Basically, I don't think the expression "artificial general intelligence" means anything, so discussions about when it will arrive or what risks or promises it might have are also meaningless. The same goes for every attempt I've seen at replacing the term with something better.
Jun 25, 2022 6 tweets 2 min read
Inspired by the fact that DALL-E (like the other image generators du jour) uses upscaling at the end, I asked it to generate "An extremely intricate oriental tiling pattern". Interestingly, the detail that was apparently generated in upscaling is nicely regular. Image Another one: a nice pattern, but I'm less impressed with the color choice here. Image
Jun 17, 2022 6 tweets 2 min read
The nonsense continues. Step by step. Image It is important to remind yourself that the system has no interest in the truth, because it has no concept of the truth, because it has no concept of the world. Image
Jan 10, 2022 22 tweets 4 min read
Call me naive and late to the party, but the individual "moral" appeal of web3 just struck me. As in: there are lots of idealistic people who want to be like the pioneers of the internet and modern software a few decades back, so they answer the call of the crypto sirens. There was a time when the people who built the tools and protocols that became the internet were the good guys. Revolutionaries, even. Making borderless communication possible, ushering in radical freedom of speech, opening up possibilities all around.
Dec 23, 2020 11 tweets 3 min read
As a student, I would sometimes look at the careers of researchers in my field and be surprised at how they abandoned promising research lines and started working on something less interesting. Now I understand they often simply did the research they could get funded. As a general rule (with many exceptions) it’s easier to get less interesting research funded. For the same reason that, as a student, it is easier to get an A if you choose a well-defined essay topic where you already know a lot and all the information is readily available.
Sep 26, 2020 10 tweets 2 min read
I understand you're annoyed about that rejection from NeurIPS. As an AC, I can confirm that I recommended rejection for a bunch of perfectly adequate papers. They were just not very exciting. Of course, all of the rejected papers had some kind of issue. Like the wrong benchmark, insufficient ablations, forgetting to cite some important work, or something. But all papers have faults. Think of the ML papers that influenced you the most. Are they perfect? Didn't think so.
Aug 22, 2020 5 tweets 1 min read
Nice piece by @GaryMarcus and Ernest Davis: GPT-3 simply predicts words and does not reason about the world. Consequently, it's just saying things that "sound good" without any regard to truth. It's like free association. You may say that this shouldn't need to be pointed out, as the network's developers never claimed anything else. However, with every impressive achievement in AI, there are people who claim that we have now cracked AI and AGI is around the corner.
Aug 18, 2020 8 tweets 2 min read
Watching for inspiration. Maybe it's time for a vacation soon. Image Image
Aug 7, 2020 5 tweets 4 min read
We can use reinforcement learning to learn to generate levels (and other functional objects). But how can we control and collaborate with these generators? We present RL Brush, a mixed-initiative level design tool.
Paper:
arxiv.org/abs/2008.02778
Try it:
rlbrush.app
The core idea is that because this learned generator is incremental (unlike, e.g., a search-based or GAN-based generator), we can display the next generation step as a suggestion to the user. If we have several generators (or parametrizations), we can show multiple suggestions.
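The incremental-suggestion idea can be sketched as follows. The random "policies" below are stand-ins for trained RL generators (the actual tool's policies, action space, and interface differ); what matters is that each generator proposes a single-tile edit, so every candidate edit can be surfaced to the designer as a preview.

```python
import random

TILES = ".#"   # toy tile set: empty / wall
H, W = 4, 4

def make_policy(seed: int):
    """Stand-in for a trained generator: pick one cell and a tile to place."""
    rng = random.Random(seed)
    def act(grid):
        y, x = rng.randrange(H), rng.randrange(W)
        return y, x, rng.choice(TILES)
    return act

def apply_edit(grid, edit):
    """Return a copy of the grid with one tile changed."""
    y, x, tile = edit
    new = [row[:] for row in grid]
    new[y][x] = tile
    return new

grid = [["." for _ in range(W)] for _ in range(H)]
generators = [make_policy(s) for s in range(3)]  # several parametrizations

# One next-step suggestion per generator; the designer picks (or rejects) one.
suggestions = [apply_edit(grid, g(grid)) for g in generators]
print(len(suggestions))  # 3
```

A search-based or GAN-based generator produces a whole level in one shot, which gives the designer nothing to steer; the step-at-a-time structure is what makes the mixed-initiative loop possible.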
Aug 5, 2020 4 tweets 1 min read
The automation of bullshit in the wild. Ideally, this leads to the delegitimization of bad writing. Disrupting the bullshit industry. But I'm an optimist. I mean, lots of people like reading low-content, unsubstantiated, unfalsifiable text. Like horoscopes, growth hacking, bad business writing and bad continental philosophy.
Jul 17, 2020 4 tweets 1 min read
I have the same impression. We can now automate the production of passable text on basically any topic. What's hard is to produce text that doesn't fall apart when you look closely. But that's hard for humans as well. GPT-3 often performs like a clever student who hasn't done their reading trying to bullshit their way through an exam. Some well-known facts, some half-truths, and some straight lies, strung together in what first looks like a smooth narrative.
Jul 17, 2020 4 tweets 2 min read
In this paper, we introduce a system that evolves players to play a game, then levels that challenge the agent more, then agents that can play the new levels, and so on... We show that we can gradually evolve more complex levels in two different games. Does it seem familiar somehow? Yes, indeed, we build on the POET algorithm by @ruiwang2uiuc @joelbot3000 @kenneth0stanley @jeffclune. But where POET created new simple walker environments, we create complete levels for games with comparatively complex rules.
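The alternating loop can be reduced to a toy sketch in the spirit of POET: here agents and levels are single numbers, where an agent "solves" a level if its skill exceeds the level's difficulty. The real system evolves neural-network players and full game levels; all names and numbers below are illustrative.

```python
import random

rng = random.Random(0)

def evolve_agent(skill: float, difficulty: float) -> float:
    """Hill-climb the agent until it can beat the current level."""
    while skill <= difficulty:
        skill = max(skill, skill + rng.uniform(-0.1, 0.3))  # keep improvements
    return skill

def harden_level(skill: float) -> float:
    """Mutate the level so it just exceeds the current agent's ability,
    while staying within reach of a slightly better agent."""
    return skill + rng.uniform(0.0, 0.2)

skill, difficulty = 0.0, 0.1
for generation in range(5):
    skill = evolve_agent(skill, difficulty)  # players that beat the level
    difficulty = harden_level(skill)         # levels that challenge them

print(f"final skill={skill:.2f}, difficulty={difficulty:.2f}")
```

The key property is the ratchet: each phase only terminates once it has overcome the other's latest output, so complexity of both agents and levels grows over generations.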