One of the things we're most excited about for #nmc3 is our new approach to reducing bias by eliminating editorial selection. We're replacing it with a scheduling algorithm that ensures people get to see the science they're interested in. Want to hear more? (1/10)
One of the big sources of bias is editorial selection. Conference organisers pick a subset of talks to feature in single-track sessions, while the rest are relegated to posters or parallel multi-track slots. We wanted to find a way to eliminate this bias. (2/10)
At #nmc3 every submission gets a talk. After submission closes, we'll ask participants to go through a (blinded) list of abstracts to select the ones they're interested in. Using participants' and presenters' free times, we'll automatically build an optimal schedule. (3/10)
Some talks will be of wide interest, and they'll end up scheduled close to single-track; others will be more specialised and will end up multi-track, but at times that don't conflict with other talks appealing to the same audience. (4/10)
We'll also group related talks together using the same topic modelling algorithms used for neuromatching in previous iterations of NMC (and CCN). In particular, the "interactive talks" (5m + 10m Q&A) will be grouped into 2h sessions to form communities of talks. (5/10)
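Not the exact neuromatching pipeline, but as a rough sketch of how topic-based grouping can work (toy abstracts, with scikit-learn's LDA standing in for whatever topic model is actually used):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
import numpy as np

# Toy stand-ins for the submitted abstracts
abstracts = [
    "spiking neural networks and synaptic plasticity",
    "deep learning models of visual cortex",
    "plasticity rules in recurrent spiking networks",
    "convolutional networks predict cortical responses",
]

# Bag-of-words representation of the abstracts
X = CountVectorizer(stop_words="english").fit_transform(abstracts)

# Fit a topic model and take each talk's dominant topic
lda = LatentDirichletAllocation(n_components=2, random_state=0)
dominant_topic = lda.fit_transform(X).argmax(axis=1)

# Talks sharing a dominant topic are candidates for the same 2h session
sessions = {t: np.where(dominant_topic == t)[0] for t in range(2)}
```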
The scheduling problem can be posed as a massive integer programming problem with potentially millions of decision variables and constraints, and a complex objective function taking into account the number of watched hours and the thematic grouping. But it can be solved. (6/10)
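To make that concrete, here's a heavily simplified toy version of the integer program in PuLP. The preference data and variable names are illustrative, and the real model also handles availability windows, thematic grouping and a vastly larger scale:

```python
import pulp

n_talks, n_slots = 4, 3
# pref[p][t] = 1 if participant p marked talk t as interesting (toy data)
pref = [[1, 0, 1, 0],
        [0, 1, 1, 0],
        [1, 1, 0, 1]]
n_people = len(pref)

prob = pulp.LpProblem("nmc_schedule", pulp.LpMaximize)

# x[t][s] = 1 if talk t is scheduled in slot s
x = [[pulp.LpVariable(f"x_{t}_{s}", cat="Binary")
      for s in range(n_slots)] for t in range(n_talks)]
# y[p][t][s] = 1 if person p watches talk t in slot s
y = [[[pulp.LpVariable(f"y_{p}_{t}_{s}", cat="Binary")
       for s in range(n_slots)] for t in range(n_talks)]
     for p in range(n_people)]

# Each talk is scheduled exactly once
for t in range(n_talks):
    prob += pulp.lpSum(x[t]) == 1

for p in range(n_people):
    for s in range(n_slots):
        # A person can watch at most one talk per slot...
        prob += pulp.lpSum(y[p][t][s] for t in range(n_talks)) <= 1
        for t in range(n_talks):
            # ...and only if that talk is actually in that slot
            prob += y[p][t][s] <= x[t][s]

# Maximise total wanted talks watched, a proxy for watched hours
prob += pulp.lpSum(pref[p][t] * y[p][t][s]
                   for p in range(n_people)
                   for t in range(n_talks)
                   for s in range(n_slots))
prob.solve()
```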
Another problem is gathering people's preferences. We can't ask everyone to read all the abstracts submitted, so we will have an abstract browser with manual keyword search and automated suggestions based on topic modelling. This will be quick and painless. (7/10)
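Think of it as a "more like this" feature. A minimal sketch using TF-IDF similarity (an assumption on our part for illustration; the actual suggestions come from the topic model):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

abstracts = [
    "spiking neural networks and synaptic plasticity",
    "deep learning models of visual cortex",
    "plasticity rules in recurrent spiking networks",
]
liked = [0]  # indices of abstracts the participant already selected

tfidf = TfidfVectorizer(stop_words="english").fit_transform(abstracts)
# Score every abstract by mean similarity to the participant's picks
scores = cosine_similarity(tfidf[liked], tfidf).mean(axis=0)
suggestions = scores.argsort()[::-1]  # best matches first
```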
A side benefit of all this is that for everyone who gives their preferences, we will generate a personalised suggested schedule including alternatives for each hour of the conference. (8/10)
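Roughly, once the global schedule is solved, the personalised version is just a ranking per time slot. A sketch with hypothetical names:

```python
# Given the solved schedule (slot -> talk ids) and one participant's
# preference scores, suggest a top pick plus alternatives per hour.
def personal_schedule(slot_talks, scores, n_alternatives=2):
    plan = {}
    for slot, talks in slot_talks.items():
        ranked = sorted(talks, key=lambda t: scores.get(t, 0), reverse=True)
        plan[slot] = ranked[:1 + n_alternatives]  # top pick + alternatives
    return plan

# e.g. personal_schedule({"Mon 10h": [3, 7, 9]}, {7: 0.9, 3: 0.4})
# -> {"Mon 10h": [7, 3, 9]}
```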
We're hoping it will lead to one of the most diverse, inclusive and interesting neuroscience conferences ever, where the aim is not to have the flashiest talks highlighted, but for everyone to be able to see the science that they are most interested in. (9/10)
Thanks also to @titipat_a, @tulakann and @patrickmineault for their heroic efforts putting everything together in no time at all to make this possible. See you at #nmc3! Don't forget to submit your abstract by Oct 2. (10/10)
I'm happy to announce the start of a new free and open online course on neuroscience for people with a machine learning or similar background, co-developed by @MarcusGhosh. YouTube videos and Jupyter-based exercises will be released weekly. There is a Discord for discussions.
For more details about the structure of the course, and to watch the first video "Why neuroscience?" go straight to the course website:
Currently available are videos for "week 0" and exercises for "week 1", but more coming soon.
neuro4ml.github.io
Why did I create this course? Well, I think both neuroscience and ML can be enriched by knowing about each other, and my feeling is that a general-purpose intro to neuro or comp-neuro isn't the right way to inspire people in ML to be interested in neuro.
Over the last few years, each time I review a paper or (even worse) a grant application and give a negative opinion, I feel bad. I know that by doing so I'm harming someone's career and also just giving them one of those days where you show your reviews to friends and want to cry.
I noticed that I've unconsciously been drifting towards increasingly positive reviews, to the point where I started asking myself whether I should just always recommend acceptance. But that feels kind of dishonest to the person asking me to review.
This is partly what led me to consider stopping reviewing. I didn't feel I could honestly satisfy both my conscience about the right way to treat the person being reviewed and my obligation to the person asking for the review. So perhaps the only solution was to stop?
This has led to some quite strongly worded disagreement, so perhaps it's worth expanding a bit on this to explain myself and for me to understand the disagreement better.
I find pre-publication peer review for journals, as currently done, ethically problematic for the reasons described in my thread. I have no objection to post-publication peer review strictly limited to questions of technical correctness, as long as it's not used to decide significance.
As a consequence, I don't think I should be doing pre-publication peer review. Should I be made to do this even if I think it's ethically wrong?
The current system of journals and peer review is not serving science. I have therefore resigned from all editorial roles and will no longer do pre-publication peer review. I explain why in this article and in the thread below. Please consider joining me.
Journals made sense historically as the only way to get work out to an audience. That's now a solved problem with preprint servers, and, by closing access, journals are actively hindering this. Their only role now is managing peer review and editorial curation.
Peer review can catch errors in papers before they're published, but we know that many errors still get through and it fails to correct problems found after publication. So we cannot rely on peer review alone. Post-publication review does a much better job.
Let's play fantasy science! Imagine a future where science is run the way it should be. Don't hold back. For me, I'm thinking "what would science look like in the Star Trek universe?" but you don't have to be as nerdy as me. What does science look like in your ideal world?
Mine: we get to work on the problem we think is most interesting/promising, not someone else's idea. There's no abuse of power because nobody has power over others. We don't spend huge amounts of time writing papers in an outdated format that's hard to read and write. Etc.
We work collaboratively, not competitively and communicate our ideas and results as we get them. And of course, most importantly, theorists are crowned as God emperors, as they should be.
Modularity can be structural (what connects to what) or functional (specialised groups of neurons). Are these related? Yes, but more weakly than you might guess.
Work by PhD student Gabriel Béna - feedback appreciated!
TLDR: enforcing structural modularity in the architecture of a NN trained on a task naturally composed of subtasks leads to module specialisation on those subtasks, but only at extreme levels of modularity. Even quite high levels of structural modularity lead to no functional specialisation.
We looked at the simplest possible case: two modules, each densely connected internally, with sparse connections between them. This lets us precisely control structural modularity, from maximal (a single interconnect) to none (fully interconnected).
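For illustration (our sketch here, not the paper's actual code), that setup can be implemented as a masked linear layer in PyTorch, where n_interconnect dials structural modularity from a single cross-module connection up to fully connected:

```python
import torch
import torch.nn as nn

class TwoModuleLinear(nn.Module):
    def __init__(self, size, n_interconnect):
        super().__init__()
        self.linear = nn.Linear(2 * size, 2 * size)
        mask = torch.zeros(2 * size, 2 * size)
        mask[:size, :size] = 1.0   # dense within module A
        mask[size:, size:] = 1.0   # dense within module B
        # Sparse cross-module connections: n_interconnect per direction,
        # from a single wire (max modularity) to size*size (no modularity)
        idx = torch.randperm(size * size)[:n_interconnect]
        rows, cols = idx // size, idx % size
        mask[rows, size + cols] = 1.0  # B -> A
        mask[size + rows, cols] = 1.0  # A -> B
        self.register_buffer("mask", mask)

    def forward(self, x):
        # Masked weights zero out all non-allowed connections
        return nn.functional.linear(x, self.linear.weight * self.mask,
                                    self.linear.bias)

net = TwoModuleLinear(size=16, n_interconnect=1)  # maximal modularity
out = net(torch.randn(8, 32))
```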