Discover and read the best of Twitter Threads about #tweeprint

Most recent (22)

It's a new paper day, so here goes my very first #tweeprint! It was a pleasure to work on this with the incredible Ramanujan Srinath and Doug Ruff.…
We can use visual information in very flexible ways (you might grab or ignore an ice cream cone depending on whether it's yours), which means that visual information must be routed through our brains flexibly through processes like selective attention 2/10
Contrary to my favorite hypothesis (!), we showed recently that attention does not substantially change the amount of visual information in visual cortex.… 3/10
Read 10 tweets
New Preprint 🎉

"Methodological considerations for studying neural oscillations"

With Natalie Schaworonkow (@nschawor) and Bradley Voytek (@bradleyvoytek), we review key methodological issues and concerns for analyzing oscillatory neural activity.

We use simulated data to demonstrate and describe 7 key issues we think should always be considered for measuring neural oscillations.

We review and pull together recommendations, citing and combining topics from across the current literature.

#tweeprint #AcademicEEG [Image: graphical abstract of the paper.]
#1: Neural oscillations are not always present.

Neural activity contains aperiodic activity, which has power across all frequencies and can appear rhythmic.

To validate oscillation-specific power, analyses should start with a detection step verifying that an oscillation is present. [Figure 1 from the paper; panel A shows a Dirac delta function.]
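The detection step the authors call for can be illustrated with a toy sketch (my own illustration, not the paper's method): before claiming an oscillation, compare band power against a 1/f-like aperiodic fit. The `detect_oscillation` helper, the 1–80 Hz fit range, and the 3x threshold are all assumptions made for this example.

```python
import numpy as np
from scipy import signal

def detect_oscillation(x, fs, f_range=(8, 12), ratio=3.0):
    """Toy oscillation-detection step (illustrative, not the paper's
    method): flag a band as oscillatory only if its power clearly
    exceeds an aperiodic (1/f-like) background fit."""
    freqs, psd = signal.welch(x, fs=fs, nperseg=2 * fs)
    mask = (freqs > 1) & (freqs < 80)
    # Fit the aperiodic background as a straight line in log-log space.
    coef = np.polyfit(np.log10(freqs[mask]), np.log10(psd[mask]), 1)
    aperiodic = 10 ** np.polyval(coef, np.log10(freqs[mask]))
    band = (freqs[mask] >= f_range[0]) & (freqs[mask] <= f_range[1])
    return float(np.max(psd[mask][band] / aperiodic[band])) > ratio

rng = np.random.default_rng(0)
fs = 500
t = np.arange(0, 20, 1 / fs)
brown = np.cumsum(rng.standard_normal(t.size))  # aperiodic 1/f^2-like activity
alpha = 5 * np.sin(2 * np.pi * 10 * t)          # genuine 10 Hz oscillation
print(detect_oscillation(brown, fs))            # aperiodic only
print(detect_oscillation(brown + alpha, fs))    # aperiodic + oscillation
```

Without the detection step, band-pass filtering the purely aperiodic signal would still produce an "alpha rhythm" to measure, which is exactly issue #1 above.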
Read 14 tweets
Very happy to announce that my very first paper just came out in PNAS.
“Neuronal circuits overcome imbalance in excitation and inhibition by adjusting connection numbers”… #tweeprint below (1/7)
The paper is a result of a collaboration between @uni_tue @MPICybernetics and @WeizmannScience. A joint effort of @SelfOrgAnna, Moses lab, and Segal labs.
Hippocampal and cortical networks typically have about 20-30% inhibitory neurons. But would they work with other percentages? We looked at the activity of networks grown using a novel protocol to precisely control E/I ratios. (3/7)
Read 7 tweets
1/ Hark, a #tweeprint! Our new paper is up on #bioRxiv! It’s the results from our #OpenScope project with the @AllenInstitute about learning from unexpected events in the neocortical microcircuit!

Here's a thread with more details...…
2/ In this paper, we show that unexpected events drive different changes in the responses of somata and distal apical dendrites in primary visual cortex pyramidal neurons. [Schematic of a pyramidal neuron.]
3/ Previous research has shown that neurons in sensory areas respond to unexpected events. It’s hypothesized that these responses guide our brain in learning a predictive, hierarchical model of the world in an unsupervised (or self-supervised) manner.
Read 20 tweets
Excited to share my first #tweeprint, a work together with @EngelTatiana and @SelfOrgAnna:
Are you interested in timescales, but estimate them by fitting exponential functions to data autocorrelations? You might need to reconsider.…
1/5 We show that the standard procedure of estimating timescales by fitting exponential decay functions on data autocorrelations often fails to recover the correct timescales, exhibiting large estimation errors due to a statistical bias in autocorrelations of finite data samples.
2/5 We propose a new method based on Approximate Bayesian Computation that estimates the timescales by fitting the autocorrelation of sample data using a generative model and returns a posterior distribution of timescales quantifying the estimation uncertainty.
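The bias in the standard procedure is easy to reproduce. Below is a minimal sketch (the OU-style simulation, sample length, and fitting window are my own choices, not from the paper): simulate a process with a known timescale, compute the sample autocorrelation, and fit an exponential decay to it.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

# Simulate an OU-like (AR(1)) process with a known timescale (tau = 20 steps).
tau_true, n = 20.0, 500
x = np.zeros(n)
for i in range(1, n):
    x[i] = x[i - 1] * np.exp(-1 / tau_true) + rng.standard_normal()
x -= x.mean()

# Sample autocorrelation -- the standard estimator, biased for short data.
ac = np.correlate(x, x, mode="full")[n - 1:] / (np.arange(n, 0, -1) * x.var())

# Standard procedure: fit an exponential decay to the early lags.
lags = np.arange(60)
(tau_fit,), _ = curve_fit(lambda t, tau: np.exp(-t / tau), lags, ac[lags], p0=[10.0])
print(f"true tau = {tau_true}, fitted tau = {tau_fit:.1f}")
```

With short samples like this, the fitted timescale typically deviates from the true one; the authors' ABC approach instead fits a generative model to the sample autocorrelation and returns a posterior over timescales.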
Read 6 tweets
My new work with Jonathan Pillow @jpillowtime! "High-contrast 'gaudy' images improve the training of deep neural network models of visual cortex."

We found that gaudy images can train DNNs with little data---perfect for neuro experiments! #tweeprint
Our goal is to predict visual cortical responses from natural images. Often, linear regression is used to map image features to responses b/c of a lack of experimental data. Here, we use a DNN (readout network) and avoid overfitting b/c our gaudy images are tailored for training!
The gaudy transformation is simple: Push pixel intensities to the extremes (either 0 or 255). We were inspired by 50-year-old active learning theory that says (under certain cases) the optimal training images are the ones that increase the variance in every pixel dimension.
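As described, the transformation is a per-pixel push to the extremes. A minimal sketch (thresholding each channel at its mean is my reading of the thread; the paper's exact cutoff may differ):

```python
import numpy as np

def gaudy(img):
    """Gaudy transform as described in the thread: push every pixel
    intensity to an extreme (0 or 255). Thresholding each channel at
    its mean is an assumption; the paper may use a different cutoff."""
    img = np.asarray(img, dtype=float)
    mean = img.mean(axis=(0, 1), keepdims=True)  # per-channel mean
    return np.where(img >= mean, 255, 0).astype(np.uint8)

rng = np.random.default_rng(0)
natural = rng.integers(0, 256, size=(4, 4, 3))  # stand-in for a natural image
extreme = gaudy(natural)
print(np.unique(extreme))  # only 0 and 255 remain
```

Pushing every pixel to an extreme maximizes per-pixel variance across the training set, which is the active-learning intuition cited above.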
Read 8 tweets
Very excited to share my first #tweeprint today!

Work by me, Daniel Trotter, @NeuroNaud and @mossy_fibers.

We address short-term plasticity with a linear-nonlinear model and find interesting algorithmic similarities between single synapses and CNNs.…
2/ In this paper, we question how to best describe complex short-term plasticity (STP) dynamics in a computational model.

Typically, people tend to categorize synapses into either facilitating (STF) or depressing (STD) types.
3/ This STF-STD dichotomy, however, is an oversimplification. Some synapses display more complex dynamics.

At hippocampal mossy fiber synapses, for example, facilitation is supra-linear in low (arguably more physiological) extracellular [Ca2+].
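For context, the standard STF/STD picture is usually captured by a Tsodyks-Markram-style model. Here is a minimal sketch of one common variant (parameter values are arbitrary; this is background, not the paper's linear-nonlinear model):

```python
import numpy as np

def tsodyks_markram(spike_times, U=0.2, tau_f=500.0, tau_d=200.0):
    """One common variant of the Tsodyks-Markram STP model:
    u tracks facilitation, r tracks vesicle availability, and the
    release on each spike is proportional to u * r."""
    u, r, last = U, 1.0, None
    releases = []
    for t in spike_times:
        if last is not None:
            dt = t - last
            u = U + (u - U) * np.exp(-dt / tau_f)      # facilitation decays back to U
            r = 1.0 + (r - 1.0) * np.exp(-dt / tau_d)  # resources recover toward 1
        releases.append(u * r)
        u += U * (1.0 - u)  # each spike increments facilitation...
        r -= u * r          # ...and depletes resources
        last = t
    return releases

# A 20 Hz train (times in ms): with these parameters the first
# responses facilitate before depression takes over.
rel = tsodyks_markram(np.arange(0, 500, 50.0))
print([round(x, 3) for x in rel])
```

Supra-linear facilitation of the kind described at mossy fiber synapses is the sort of dynamics this simple two-variable model can struggle to capture, which is part of the motivation for a more flexible linear-nonlinear description.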
Read 25 tweets
Computational neuroscience has lately had great success at modeling perception with ANNs - but it has been unclear if this approach translates to higher cognitive systems. We made some exciting progress in modeling human language processing… #tweeprint 1/
This work is the result of a terrific collaboration with @ibandlank @GretaTuckute @KaufCarina @eghbalhosseini @Nancy_Kanwisher Josh Tenenbaum and @ev_fedorenko; @mitbrainandcog @MIT_CBMM @mcgovernmit 2/
Work by @ev_fedorenko and others has localized the language network as a set of regions that support high-level language processing (e.g.…) BUT the actual mechanisms underlying human language processing have remained unknown. 3/
Read 15 tweets
I’m so excited and proud to finally be able to share my most recent 2nd author paper in @CellCellPress and the cover I designed.👩🏻‍🔬👩🏻‍🎨 #sciart #science #tweeprint

Neuronal Inactivity Co-opts LTP Machinery to Drive Potassium Channel Splicing and Homeostatic Spike Widening

Neurons can change many of their biochemical and electrical properties, a.k.a. plasticity. It’s hypothesized that this ability could be a substrate for learning and memory, as well as other key brain functions. Neuronal plasticity is typically categorized into two kinds:

(1) Hebbian plasticity: “positive feedback” that lets a neuron reinforce responses to new stimuli. However, left unchecked, Hebbian plasticity leads to instability. Enter (2) homeostatic plasticity: “negative feedback” that stabilizes a neuron after a prolonged change in stimulation.

Read 12 tweets
Excited to share a preprint of our work "Learning is shaped by abrupt changes in neural engagement," advised by Aaron Batista, Steve Chase, and Byron Yu.…

I'm (even more?) excited to finally make my own #tweeprint! (1/n)
Internal states such as our attention and motivation involve brain-wide changes in neural activity. We know changes in these states can impact your behavior. For example, when someone surprises you:
If changes in internal states can impact immediate behavior, maybe they can also impact how you *learn* new behaviors. To learn, neural activity must change in particular ways. But what if internal state changes move you in the wrong way?
Read 12 tweets
1/ Need a distraction from the pandemic? It's #tweeprint time!!!

I'm very excited to share here with you new work from myself, @NeuroNaud, @guerguiev, Alexandre Payeur, and @hisspikeness:…

We think our results are quite exciting, so let's go!
2/ Here, we are concerned with the credit assignment problem. How can feedback from higher-order areas inform plasticity in lower-order areas in order to ensure efficient and effective learning?
3/ Based on the LTP/LTD literature (e.g.…), we propose a "burst-dependent synaptic plasticity" rule (BDSP). It says, if there is a presynaptic eligibility trace, then:

- postsynaptic burst = LTP
- postsynaptic single spike = LTD
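Taken literally, the rule as stated in the thread can be sketched as follows (a toy scalar version written for illustration; the paper's actual formulation is richer):

```python
def bdsp_update(w, eligibility, postsyn_event, lr=0.01):
    """Toy scalar sketch of the burst-dependent plasticity rule as
    stated in the thread: with a presynaptic eligibility trace present,
    a postsynaptic burst gives LTP and a single spike gives LTD."""
    if eligibility <= 0:
        return w                      # no presynaptic trace: no change
    if postsyn_event == "burst":
        return w + lr * eligibility   # LTP
    if postsyn_event == "single":
        return w - lr * eligibility   # LTD
    return w

w = 0.5
w = bdsp_update(w, eligibility=1.0, postsyn_event="burst")   # LTP step
w = bdsp_update(w, eligibility=1.0, postsyn_event="single")  # LTD step
print(w)
```

The appeal of this scheme is that feedback from higher areas only needs to control whether a postsynaptic cell bursts or spikes singly, which gives it a handle on the sign of plasticity at lower-order synapses.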
Read 34 tweets
Are you skeptical about successor representations? Want to know how our new model can learn cognitive maps and context-specific representations, do transitive inference, and perform flexible hierarchical planning? #tweeprint...(1) @vicariousai @swaroopgj @rvrikhye…
As @yael_niv pointed out in her recent article, learning context-specific representations from aliased observations is a challenge. Our agent can learn the layout of a room from severely aliased random walk sequences, with only 4 unique observations in the room!
And it works even when the room is empty, with no unique observations in the center of the room. The observations are now severely aliased and correlated, but it still recovers the map of the room.
Read 14 tweets
New preprint with @andpru @paulgribble using @KINARMLab to go from feedback to feedforward control. #tweeprint below...
(1/6) Much previous work has shown that learning new feedforward motor commands transfers to fast feedback responses (i.e. reflexes) to mechanical perturbations.…
(2/6) Whether this transfer takes place from feedback responses to feedforward control is currently unknown, as it is hard to elicit learning in reflexes without engaging associated voluntary responses.
Read 7 tweets
Out now! New preprint on the effects of repetition on performance in sequence production…

Work with Danny Kwon and @DiedrichsenJorn, @diedrichsenlab.

#tweeprint in thread below 👇
1/5. When trying to master a motor skill, we tend to practice by repeating the same movements over and over. Indeed, repetition benefits have been shown for single, point-to-point reaching movements.
2/5. Here we used a discrete sequence production (DSP) task to ask whether this also applies to movement sequences, and which processes (planning vs. execution) mediate such effects.
Read 6 tweets
1/ #tweeprint time! If you are interested in #interneurons, #hippocampus and developmental mechanisms of neural circuit function, check out our new preprint “Hippocampal hub neurons maintain unique functional properties throughout their lifetime”…
2/ Ten years ago, the Cossart lab discovered that rare GABAergic neurons are able to single-handedly orchestrate network dynamics in the CA3 region of the hippocampus. As these cells showed a remarkably high functional connectivity, they were termed ‘hub cells’.
3/ A few years later, @PicardoMichel and others in the lab (in collaboration with @GordFishell lab) found that many hub cells were GABA cells born early in embryogenesis (thus termed ‘early-born GABA cells’ or ‘ebGABAs’).
Read 13 tweets
time for a #tweeprint!

Our study of human executive function is out in @NatureNeuro today:…
In this study we sought to understand the neural correlates of cognitive control, the process by which the brain controls and optimizes thoughts and behaviors. 2/n
To do so, we examined the activity of individual neurons and intracranial electrical signals recorded from human neurosurgical patients, undergoing monitoring for epilepsy or surgery for deep brain stimulation, while they performed a supercharged Stroop task. 3/n
Read 7 tweets
#tweeprint time: rapid feedback adaptation to force fields of different directions and kinds within 250 ms (thread):…
A simplistic view of reach control and adaptation is that compensation for external loads is supported by impedance control within a trial and by trial-by-trial adaptation. This paper continues a series in which we show that the story is likely more complex...
First, unpredictable loads induce changes in co-contraction that correlate with changes in feedback gains; we suggested that, instead of making the limb stiff, co-contraction makes the neural controller more robust:…
Read 6 tweets
1/ #tweeprint time everybody! It's about neural coding (and I mean that literally). We asked the following Q: if info is encoded in the neocortex with both rate and synchrony of spikes, do different subtypes of neurons display differential sensitivity to these two info streams?
2/ In the example image above, a binary signal is encoded with either a rate or synchrony code. The rate code uses high rate = 1, low rate = 0. In contrast, in the synchrony code the cells have a constant rate-of-fire, but high synch = 1, low synch = 0.
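The two codes in that example can be sketched with toy spike rasters (my own illustration; cell counts, rates, and probabilities are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

def rate_code(bit, n_cells=20, n_bins=100, hi=0.3, lo=0.05):
    """Rate code: bit = 1 -> high firing rate, bit = 0 -> low rate."""
    p = hi if bit else lo
    return rng.random((n_cells, n_bins)) < p

def synchrony_code(bit, n_cells=20, n_bins=100, p=0.1):
    """Synchrony code at a constant overall rate: bit = 1 -> all cells
    share the same spike times; bit = 0 -> cells spike independently."""
    if bit:
        common = rng.random(n_bins) < p
        return np.tile(common, (n_cells, 1))
    return rng.random((n_cells, n_bins)) < p

sync = synchrony_code(1)
print(sync.mean())                          # overall rate stays near p
print(bool((sync.std(axis=0) == 0).all()))  # every bin is fully synchronous
```

The key property, as in the thread's example image, is that the synchrony code changes the coincidence structure across cells without changing the mean firing rate, so a decoder sensitive only to rate would miss it.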
3/ We hypothesized that PV+ and SST+ interneurons would be differentially sensitive to these two codes, due to their intrinsic and synaptic properties. For example, the short-term depressing synapses of PV+ cells may make them more sensitive to synchrony.
Read 19 tweets
New preprint with @KoenVdvoorde where we investigate what goes wrong with the explicit component of motor adaptation in elderly adults.

#tweeprint time:

It all started from the following observation: @KoenVdvoorde previously showed that the explicit component of adaptation but not the implicit one declines with aging (…)

but this contrasts with papers showing that elderly adults usually rely on cognition to maintain good performance in motor tasks:

Read 9 tweets
#tweeprint time!
How do we generate the right muscle commands to grasp objects? We present a neural network model that replicates the vision to action pipeline for grasping objects and shows internal activity very similar to the monkey brain.…
Monkeys grasped and lifted many objects while we recorded neural activity in the grasping circuit (AIP, F5, & M1 - see original paper). All of these areas have been shown to be necessary for properly pre-shaping the hand during grasping.
We show that the advanced layers of a convolutional neural network trained to identify objects (AlexNet) have features very similar to those in AIP, and may therefore be reasonable inputs to the grasping circuit, while muscle velocity was most congruent with activity in M1.
Read 10 tweets
1/8 Our paper “Movement science needs different pose tracking algorithms” is on arXiv. #tweeprint
2/8 In this paper, we give ideas for how pose estimation algorithms should change to best serve movement science -- by quantifying different variables, providing better ground truth, tracking in time, and more...
3/8 Many fields of science and engineering rely on movement data for research. Insights from movement data impact neuroscience, bioengineering, sports science, psychology, physiology, biophysics, robotics and even more fields
Read 8 tweets
Universality and individuality in neural dynamics across large populations of recurrent networks

With fantastic collaborators @niru_m, @ItsNeuronal, @MattGolub_Neuro, @SuryaGanguli.
Many recent studies find striking similarities between representations in biological brains 🧠 and artificial neural networks 🤖 trained to solve analogous tasks.
This is pretty crazy when you think about it because brains and ANNs have serious differences in their biophysical/architectural details.
Read 15 tweets
