#CNS2019Barcelona attendees, come and check out my students' work today. Starting with "Discovering the Building Blocks of Hearing" by Lotte Weerts at 12 (talk given by me as Lotte couldn't make it), and our posters P99-P101 this evening (5.30-9) 👇
#CNS2019Barcelona "Neural Topic Modelling" (Pamela Hathway, P100). Using techniques from large scale text analysis for analysing high dimensional neural data, e.g. NeuroPixels, in a scalable way
#CNS2019Barcelona "Generalisation of frequency mixing and temporal interference phenomena through Volterra analysis" (Nicolas Perez, P99)
#CNS2019Barcelona "An Attentional Inhibitory Feedback Network for Multi-label Classification" (Yang Chu, P101)
I'm happy to announce the start of a new free and open online course on neuroscience for people with a machine learning or similar background, co-developed by @MarcusGhosh. YouTube videos and Jupyter-based exercises will be released weekly. There is a Discord for discussions.
For more details about the structure of the course, and to watch the first video "Why neuroscience?" go straight to the course website:
Currently available are videos for "week 0" and exercises for "week 1", but more are coming soon. neuro4ml.github.io
Why did I create this course? Well, I think both neuroscience and ML can be enriched by knowing about each other and my feeling is that a general purpose intro to neuro or comp-neuro isn't the right way to inspire people in ML to be interested in neuro.
Over the last few years, each time I review a paper or (even worse) a grant application and give a negative opinion, I feel bad. I know that by doing so I'm harming someone's career and also just giving them one of those days where you show your reviews to friends and want to cry.
I noticed that unconsciously I've been drifting towards increasingly positive reviews to the point where I started asking myself whether or not I should just always recommend accept. But that feels kind of dishonest to the person asking you to review.
This is partly what led me to consider stopping reviewing. I didn't feel like I could honestly satisfy both my conscience about the right way to treat the person being reviewed and the person asking for the review. So perhaps the only solution was to stop?
This has led to some quite strongly worded disagreement, so perhaps it's worth expanding a bit on this to explain myself and for me to understand the disagreement better.
I find pre-publication peer review for journals as currently done ethically problematic for the reasons described in my thread. I have no objection to post-publication peer review strictly limited to questions of technical correctness, as long as it's not used to decide significance.
As a consequence, I don't think I should be doing pre-publication peer review. Should I be made to do this even if I think it's ethically wrong?
The current system of journals and peer review is not serving science. I have therefore resigned from all editorial roles and will no longer do pre-publication peer review. I explain why in this article and in the thread below. Please consider joining me.
Journals made sense historically as the only way to get work out to an audience. That's now a solved problem with preprint servers, and, by closing access, journals are actively hindering this. Their only role now is managing peer review and editorial curation.
Peer review can catch errors in papers before they're published, but we know that many errors still get through and it fails to correct problems found after publication. So we cannot rely on peer review alone. Post-publication review does a much better job.
Let's play fantasy science! Imagine a future where science is run the way it should be. Don't hold back. For me, I'm thinking "what would science look like in the star trek universe?" but you don't have to be as nerdy as me. What does science look like in your ideal world?
Mine: we get to work on the problem we think is most interesting/promising, not someone else's idea. There's no abuse of power because nobody has power over others. We don't spend huge amounts of time writing papers in an outdated format that's hard to read and write. Etc.
We work collaboratively, not competitively, and communicate our ideas and results as we get them. And of course, most importantly, theorists are crowned as God emperors, as they should be.
Modularity can be structural (what connects to what) or functional (specialised groups of neurons). Are these related? Yes, but more weakly than you might guess.
Work by PhD student Gabriel Béna - feedback appreciated!
TLDR: enforcing structural modularity in the architecture of a NN trained on a task naturally composed of subtasks leads to module specialisation on subtasks, but only at extreme levels of modularity. Even quite high levels of structural modularity lead to no functional specialisation.
We looked at the simplest possible case of two modules, each densely connected with sparse connections between them. This lets us precisely control structural modularity from maximum (single interconnect) to no modularity (fully interconnected).
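The two-module setup described above can be sketched as a connectivity mask: dense within-module blocks with a controllable number of cross-module connections. This is a minimal illustrative sketch in NumPy, not the code from the paper; the module size, random placement of interconnects, and function name are all my own assumptions.

```python
import numpy as np

def modular_mask(n=8, interconnects=1, seed=0):
    """Connectivity mask for two modules of n neurons each.

    Within-module blocks are dense; `interconnects` sets the number
    of connections in each cross-module direction, from 1 (maximum
    structural modularity) up to n*n (fully interconnected, i.e. no
    structural modularity). Hypothetical sketch, not the paper's code.
    """
    rng = np.random.default_rng(seed)
    mask = np.zeros((2 * n, 2 * n), dtype=bool)
    mask[:n, :n] = True          # module A -> module A (dense)
    mask[n:, n:] = True          # module B -> module B (dense)
    # place the sparse cross-module connections at random positions
    idx = rng.choice(n * n, size=interconnects, replace=False)
    rows, cols = np.unravel_index(idx, (n, n))
    mask[rows, n + cols] = True  # A -> B
    mask[n + rows, cols] = True  # B -> A
    return mask

# maximum structural modularity: a single interconnect each way
m = modular_mask(n=8, interconnects=1)
```

Applying such a mask elementwise to a weight matrix during training would enforce the chosen level of structural modularity while leaving the within-module weights free to learn.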