Dan Goodman
Computational neuroscientist @imperialcollege. I like to make stuff: @briansimulator @neuromatch. 🐘 @neuralreckoning@neuromatch.social
Oct 9, 2023 12 tweets 3 min read
I'm happy to announce the start of a new free and open online course on neuroscience for people with a machine learning or similar background, co-developed by @MarcusGhosh. YouTube videos and Jupyter-based exercises will be released weekly, and there is a Discord for discussions.

For more details about the structure of the course, and to watch the first video "Why neuroscience?", go straight to the course website: neuro4ml.github.io

Currently available are videos for "week 0" and exercises for "week 1", with more coming soon.
May 28, 2022 8 tweets 2 min read
Over the last few years, each time I review a paper or (even worse) a grant application and give a negative opinion, I feel bad. I know that by doing so I'm harming someone's career and also just giving them one of those days where you show your reviews to friends and want to cry.

I noticed that unconsciously I've been drifting towards increasingly positive reviews, to the point where I started asking myself whether I should just always recommend accept. But that feels kind of dishonest to the person asking you to review.

May 28, 2022 14 tweets 3 min read
This has led to some quite strongly worded disagreement, so perhaps it's worth expanding a bit on this to explain myself and to understand the disagreement better.

I find pre-publication peer review for journals, as currently done, ethically problematic for the reasons described in my thread. I have no objection to post-publication peer review strictly limited to questions of technical correctness, as long as it's not used to decide significance.
May 26, 2022 13 tweets 4 min read
The current system of journals and peer review is not serving science. I have therefore resigned from all editorial roles and will no longer do pre-publication peer review. I explain why in this article and in the thread below. Please consider joining me.

neural-reckoning.org/reviewing.html

Journals made sense historically as the only way to get work out to an audience. That's now a solved problem with preprint servers, and, by closing access, journals are actively hindering it. Their only role now is managing peer review and editorial curation.
May 10, 2022 4 tweets 1 min read
Let's play fantasy science! Imagine a future where science is run the way it should be. Don't hold back. For me, I'm thinking "what would science look like in the Star Trek universe?", but you don't have to be as nerdy as me. What does science look like in your ideal world?

Mine: we get to work on the problem we think is most interesting/promising, not someone else's idea. There's no abuse of power because nobody has power over others. We don't spend huge amounts of time writing papers in an outdated format that's hard to read and write. Etc.
Jun 16, 2021 11 tweets 3 min read
New preprint/tweeprint! 🧵👇

Modularity can be structural (what connects to what) or functional (specialised groups of neurons). Are these related? Yes, but more weakly than you might guess.

Work by PhD student Gabriel Béna - feedback appreciated!

arxiv.org/abs/2106.02626

TLDR: enforcing structural modularity in the architecture of a NN trained on a task naturally composed of subtasks leads to module specialisation on subtasks, but only at extreme levels. Even quite high levels of structural modularity lead to no functional specialisation.
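To make "enforcing structural modularity" concrete: one common way to do this (a minimal sketch, not the paper's actual code — the function name, arguments, and dense-within/sparse-between design are my assumptions) is to build a binary connectivity mask that is dense within modules and sparse between them, and multiply the weight matrix by it during training:

```python
import numpy as np

def modular_mask(n_per_module, n_modules, p_inter, rng=None):
    """Binary connectivity mask: dense within modules, sparse between.

    p_inter is the fraction of possible inter-module connections kept:
    p_inter=0 gives fully separate modules, p_inter=1 a fully connected
    (non-modular) network. Sweeping p_inter sweeps the level of
    structural modularity.
    """
    rng = np.random.default_rng(rng)
    n = n_per_module * n_modules
    mask = np.zeros((n, n), dtype=bool)
    for m in range(n_modules):
        s = slice(m * n_per_module, (m + 1) * n_per_module)
        mask[s, s] = True                       # dense within-module block
    inter = ~mask                               # candidate inter-module edges
    mask |= inter & (rng.random((n, n)) < p_inter)
    return mask

# During training, apply the mask at every forward pass so gradient
# descent can only use the allowed connections:
#   effective_W = W * modular_mask(...)
```

The thread's finding, in these terms, would be that functional specialisation only appears as p_inter approaches 0, not merely when it is small.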
Feb 17, 2021 9 tweets 4 min read
The spikes must flow!

I'd love to announce a new paper with that title, but sadly the editors at Neuron changed it.

Still v happy this paper is out, because there's a revolution taking place in spiking neural networks and I want everyone to know about it. 👇🧵

Two of the things that make the brain interesting are (a) it is intelligent: it lets us make sense of very complex, noisy sensory data, and (b) neurons use this super weird method of communicating. Now, for the first time, we can train spiking networks that can do hard tasks.
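The key trick behind this revolution (as widely used in the field — the function names and the sigmoid surrogate here are my illustrative choices, not code from the paper) is the surrogate gradient: the spike is a hard threshold whose derivative is zero almost everywhere, so during the backward pass it is replaced with the derivative of a smooth function, letting gradient descent flow through spiking neurons:

```python
import numpy as np

def spike(v, threshold=1.0):
    """Forward pass: hard threshold on membrane potential v.
    Derivative is zero almost everywhere, so backprop alone fails."""
    return (v >= threshold).astype(float)

def surrogate_grad(v, threshold=1.0, beta=10.0):
    """Backward pass: pretend the threshold was a steep sigmoid.
    d/dv sigmoid(beta*(v - threshold)) = beta * s * (1 - s)."""
    s = 1.0 / (1.0 + np.exp(-beta * (v - threshold)))
    return beta * s * (1.0 - s)
```

In an autodiff framework one registers `surrogate_grad` as the custom backward of `spike`; the forward pass still emits genuine binary spikes.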
Sep 24, 2020 10 tweets 3 min read
One of the things we're most excited about for #nmc3 is our new approach to reducing bias by eliminating editorial selection. We're replacing it with a scheduling algorithm that ensures people get to see the science they're interested in. Want to hear more? (1/10)

One of the big sources of bias is editorial selection. Conference organisers select a subset of talks to be featured in single tracks, and others are given posters or multi-track sessions. We wanted to find a way to eliminate this bias. (2/10)
Sep 1, 2020 4 tweets 1 min read
Workshop finished. Thanks to all our speakers and the over 500 people who came to the talks and discussions. We were so pleasantly surprised by how much interest there was that we're thinking of making it an annual event. Get in touch if you'd like to be involved.

Some things we'll think about for next time include contributed talks rather than just invited ones (which we would have done this time if we'd realised how many people would come), a practical session, and maybe a challenge announced 6 months in advance.
Aug 6, 2020 4 tweets 4 min read
Comrades! The time of the spike is finally at hand!

Come to our workshop on new approaches to training spiking neural networks, Aug 31 / Sep 1.

Details and free registration at neural-reckoning.github.io/snn_workshop_2…

Co-organised with @hisspikeness, with a virtual spike cake by Pamela Hathway.

Speakers are @SanderBohte @astronomind @FranzScherr @virtualmind Timothee Masquelier @ClopathLab @NeuroNaud Julian Goeltz.

There are no contributed talks or posters, but we will be having open ended discussions at the end of both days.
Jul 31, 2020 6 tweets 2 min read
GPT3 reminds me of @danieldennett's notion of a deepity: something that sounds profound but is actually empty. We know it doesn't have any lived experience, so the statements it produces don't have self-generated meaning. But this is exactly what makes it so useful. 👇 1/3

If we want to understand things that generate rather than regurgitate meaning, then GPT3 tells us what we don't need to understand. And what it tells us is surprising! Things we thought were important turn out not to be. This is massive progress. 2/3
Jul 30, 2020 4 tweets 2 min read
@briansimulator tutorial, Aug 7th 2-6pm BST. We will run it as a Zoom meeting. Free registration at the link below to get the URL (please don't share so we can avoid zoombombing). You'll get an email with a link to the event details page with URL.

eventbrite.co.uk/e/brian-online…

cc @neuromatch #NeuromatchAcademy @CNSorg @worldwideneuro
Jul 30, 2020 12 tweets 2 min read
Brian tutorial looks like it will probably happen Fri 7th August, 2-6pm BST; will send confirmation and joining instructions later. Ideas on how we should structure it? Please like messages in the thread below to indicate your interest, or reply with ideas. Can prob do more than one.

We could start with a how-to on installing Brian, working with Anaconda and virtual environments, and general best practices for working with Python and Brian.
Jul 21, 2020 4 tweets 4 min read
Would anyone be interested (or have students who would) in a live @briansimulator tutorial/workshop of a few hours? Thinking some time in the week or two after @neuromatch #NeuromatchAcademy. Maybe also of interest to some @CNSorg #CNS2020Online attendees? Trying to gauge interest.

Also, thoughts on a good platform? Would be interested to experiment with something other than Crowdcast and Zoom. Considering Microsoft Teams, YouTube, or one of those streaming services meant for game streaming.
Jul 15, 2020 6 tweets 1 min read
Open discussion. What would we like science to be like in an ideal world?

I'm looking for a positive vision of how the future could be. We all have a load of stuff we can rant about in the current broken system, but instead, can we imagine how we would like it to work? I'll start with a couple.

I'd like an inclusive world where algorithms (like those used by @neuromatch) connect people based on their scientific needs and interests (e.g. connecting the right experimentalist with the right theorist).
Jul 4, 2020 4 tweets 2 min read
I'm not involved in @neuromatch academy, except as a mentor. I think it's right that, for next year, it should no longer be US-based to avoid this, and that something non-US-based should be organised for Iranian students, but I don't think it's right to cancel or postpone. 1/

Too many people have made plans at this point, including financial ones, and it's starting in a week. If there had been a month to deal with this, it might have been feasible to reorganize, but not at this late notice. 2/
May 22, 2020 7 tweets 3 min read
Anyone want to run a public Minecraft server during #neuromatch next week? Maybe build a giant, functional redstone brain or neuron?

Or any other games that are not too focused on murder?
Mar 24, 2020 8 tweets 10 min read
Reminder: #neuromatch2020, the free online comp neuro unconference, is Mar 30-31. Speakers include Bengio @drkjjeffery @russpoldrack @neimarkgeffen @behrenstimb @VisualMemoryLab @DaniSBassett @KordingLab. Register by tomorrow for a talk/"poster" and mind-matching. neuromatch.io

What is "mind matching"? You paste a few of your abstracts when you register, and we automatically match you with 6 scientists with related research interests (excluding people you know) for 15-20m 1-to-1 chats. Great way to find new collaborators!
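The matching idea can be sketched in a few lines. This is a toy illustration, not neuromatch's actual pipeline (which, to my understanding, uses more sophisticated text representations); the function names, bag-of-words similarity, and `exclude` parameter are my assumptions:

```python
import numpy as np
from collections import Counter

def similarity(abstract_a, abstract_b):
    """Cosine similarity between bag-of-words vectors of two abstracts."""
    ca, cb = Counter(abstract_a.lower().split()), Counter(abstract_b.lower().split())
    words = sorted(set(ca) | set(cb))
    va = np.array([ca[w] for w in words], dtype=float)
    vb = np.array([cb[w] for w in words], dtype=float)
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

def top_matches(abstracts, i, k=6, exclude=frozenset()):
    """Indices of the k participants most similar to participant i,
    skipping i themselves and anyone in `exclude` (e.g. people they know)."""
    scores = [(similarity(abstracts[i], a), j)
              for j, a in enumerate(abstracts) if j != i and j not in exclude]
    return [j for _, j in sorted(scores, reverse=True)[:k]]
```

A real system would also need to make the matches mutual and fit them into a chat schedule, which turns this ranking step into an assignment problem.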
Feb 28, 2020 7 tweets 2 min read
Ok, time to resolve the "is the brain a computer?" debate once and for all (ha ha).

In terms of its symbol manipulation capabilities only, the brain is equivalent to a computer (in some local sense, ignoring size/time/error limitations).

I think everyone can agree on this? This leaves open the possibility that there are differences outside the realm of symbols.

Note that this formulation is not trivially true, and some people do think that the brain can do some symbol manipulation tasks that a computer cannot, but they're a small minority I guess.
Oct 30, 2019 8 tweets 2 min read
OK so I've had a chance to read this paper now. This is really exciting stuff that could fundamentally change how neuroscience is done. It's worth reading. That said, I'm on board but not exactly. I think I'm a reluctant machine learner. (1/8)

Let's start with where I agree. What makes the brain interesting is that it can perform well at different tasks. For too long, neuroscientists have studied how the brain can solve simple tasks, and so they came up with simple models that didn't scale to difficult tasks. (2/8)
Jul 14, 2019 4 tweets 2 min read
#CNS2019Barcelona attendees, come and check out my students' work today. Starting with "Discovering the Building Blocks of Hearing" by Lotte Weerts at 12 (talk given by me as Lotte couldn't make it), and our posters P99-P101 this evening (5.30-9). 👇

"Neural Topic Modelling" (Pamela Hathway, P100): using techniques from large-scale text analysis to analyse high-dimensional neural data, e.g. Neuropixels, in a scalable way.