Discover and read the best of Twitter Threads about #Tweeprint


Interested in the effect of #psychedelics on the brain? Did you know you can image the cortex and extract valuable physiological information at the same time in an ecologically valid setup? We used @kernelco Flow1 to do exactly that! A #tweeprint:
Flow1 is a Time Domain (TD) fNIRS device, which measures cortical brain activity (oxy- and deoxy-hemoglobin concentrations) and physiology such as pulse rate (PR) and pulse rate variability (PRV).
In our study, 15 participants underwent subanesthetic #ketamine and saline injections. First, we found that ketamine administration caused significant changes in systemic physiology: increase in PR and decrease in PRV.
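For concreteness, pulse rate and a common pulse-rate-variability metric can be computed directly from inter-beat intervals. The data and the choice of RMSSD below are illustrative, not from the paper:

```python
import numpy as np

def pulse_metrics(ibi_s):
    """Pulse rate (bpm) and RMSSD variability from inter-beat intervals (seconds)."""
    ibi = np.asarray(ibi_s, dtype=float)
    pr = 60.0 / ibi.mean()                       # mean beats per minute
    rmssd = np.sqrt(np.mean(np.diff(ibi) ** 2))  # a standard time-domain variability metric
    return pr, rmssd

# Synthetic example: four beats roughly 0.8 s apart
pr, rmssd = pulse_metrics([0.80, 0.82, 0.78, 0.81])
```

A drug-induced increase in PR with decreased PRV would show up here as shorter, more uniform inter-beat intervals.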
We have a new #preprint out! It’s about what happens in your ears when your eyes move. A #tweeprint on work by @Stephschlebusch, @cindyking40, David L.K. Murphy et al. 1/
First some background: A couple years ago, we (Gruters, Murphy et al.) discovered that your ears *make* tiny sounds when your eyes move. See theatlantic.com/science/archiv… by @edyong209 2/
And the original paper here, open access: pnas.org/doi/abs/10.107… 3/
🚨🚨🚨#TWEEPRINT TIME🚨🚨🚨
A big mystery in brain research is which neural mechanisms drive individual differences in higher-order cognitive processes. Here we present a new theoretical framework w/ @brody_lab @mikio_aoi @SussilloDavid @ValerioMante @jpillowtime
1/16
First, we trained rats to perform flexible evidence accumulation (like the monkeys in Mante et al 2013). Rats were presented with a train of auditory pulses, and were cued to selectively accumulate location (ignoring frequency) or to accumulate frequency (ignoring location).
2/16
Using an automated, high-throughput training procedure we trained 20 rats to solve the task with high performance, collecting more than 120,000 trials for each rat! While rats performed the task, we recorded neural activity in frontal cortex to study population activity.
3/16
Neurophysiology data is so expensive and valuable. Let’s not waste it! Let's share it and reuse it!

Our @eLife paper (elifesciences.org/articles/78362) outlines the NWB software ecosystem for standardizing, analyzing, and sharing neurophysiology data #tweeprint 1/11
Neurophysiology experiments span many species, tasks, and recording modalities. Labs are also diverse in the analysis tools and programming languages they use. This diversity has for a long time resulted in silos in the field. 2/11
Neurophysiology data collected for one purpose is often useful for answering other questions, but sharing data is difficult and tedious: data formats are diverse and complex, and essential metadata may be missing. We need a way to bridge the gap between groups. 3/11
🚨⏰ 🚨 #TWEEPRINT TIME 🚨⏰ 🚨

💫🎊🥳My postdoc work is now online! 🎉🌝💫

@shenoystanford @SussilloDavid and I have been working to understand how neural networks perform multiple related/interfering computations using the computation through dynamics framework.
1/15
We identified a neural substrate for compositional computation through reverse engineering multitasking artificial recurrent neural networks. We call these building blocks dynamical motifs, and they can be composed in different ways to implement different tasks.
2/15
To identify shared motifs, we interpolated across static inputs that configured the network to perform different tasks & tracked fixed points for each interpolated input setting. This is something like an empirical bifurcation diagram of a high dimensional dynamical system.
3/15
Excited to share our new preprint on how network dynamics and structured connectivity jointly define the spatial and temporal profiles of neural correlations. Work w/ @roxana_zeraati, @SelfOrgAnna, @EngelTatiana: arxiv.org/abs/2207.07930 Here is a #tweeprint:
1/8 Correlated fluctuations in the activity of neural populations occur across multiple temporal and spatial scales, which relate to computations in many cognitive tasks.
2/8 While temporal and spatial correlations are usually studied separately, they emerge from the same spatiotemporal dynamics. So how are they related to each other?
I am super excited to share my first 🚨#tweeprint🚨 for my first @biorxivpreprint with @spiros1776 & @YiotaPoirazi on how to efficiently add dendrites to SNNs using Dendrify and @briansimulator:
biorxiv.org/content/10.110…
A thread 🧵 below👇 (1/11)
The problem🤔:
Decades of experimental and theoretical research show that neuronal dendrites do not just receive input from other neurons, but also perform complex functions semi-independently from the soma. (2/11)
Although dendritic operations greatly affect single-cell computations, their role in network functions remains largely unexplored (with a few notable exceptions, e.g. @tyrell_turing @NeuroNaud). (3/11)
1/ Very excited to announce the 🚨 #tweeprint 🚨 for our new paper "Cortical integration of higher-order thalamic inputs is lineage-dependent" - check it out here biorxiv.org/content/10.110… #biorxiv_neursci or read on for some serious devneuro/systems crossover mashup...
2/ Sensory processing requires that the brain is able to integrate info about a stimulus with info that reflects the context in which it was experienced. In the cortex, the major sources of these types of info are the first-order (sensory) and higher-order (context) thalamus
3/ Higher-order inputs, in particular, have been implicated in several fundamental cognitive processes like attention, plasticity and conscious perception, and are known to shape the sensory response properties of cortical neurons
New preprint alert!

“Going Beyond the Point Neuron: Active Dendrites and Sparse Representations for Continual Learning”

Work by Karan Grewal, @jerem_forest, Ben Cohen, and @SubutaiAhmad.

#tweeprint below 👇 (1/13)

biorxiv.org/content/10.110…
Can the properties of biological dendrites add value to artificial neural networks?

TLDR: Yes, we augmented standard artificial neurons with properties of active dendrites and found that our model can learn continually much better than standard ANNs. (2/13)
The commonly used point neuron model is nothing like its biological counterpart. It assumes a simple linear integrate-and-fire mechanism, while biological neurons are significantly more sophisticated and display a wide range of complex non-linear integrative properties. (3/13)
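The contrast can be sketched schematically: a point neuron versus a neuron whose output is gated by the best-matching dendritic segment for the current context. This is only in the spirit of the model described; the exact gating function (a sigmoid of the strongest segment response) is our simplification:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def point_neuron(w, x):
    """Standard point neuron: weighted sum through a ReLU."""
    return max(0.0, float(w @ x))

def active_dendrite_neuron(w, x, segments, context):
    """Feedforward drive modulated by the best-matching dendritic segment.

    `segments` is a (k, d) array of dendritic weights matched against a
    context vector; the strongest match controls a multiplicative gate.
    """
    ff = max(0.0, float(w @ x))
    gate = sigmoid(float((segments @ context).max()))
    return ff * gate

w = np.array([1.0, 1.0])
x = np.array([0.5, 0.5])
segments = np.array([[4.0, 0.0],    # segment tuned to context A
                     [0.0, 4.0]])   # segment tuned to context B
ctx_a = np.array([1.0, 0.0])
out_on = active_dendrite_neuron(w, x, segments, ctx_a)    # matching context
out_off = active_dendrite_neuron(w, x, segments, -ctx_a)  # mismatched context
```

Context-dependent gating like this is what lets different subnetworks stay active for different tasks, which is the intuition behind reduced interference in continual learning.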
Excited to share my new paper on representational drift! Thanks to my collaborators @CPehlevan @fgh_shiva @dlipshutz @AnirvanMS @chklovskii #tweeprint (biorxiv.org/content/10.110…)
1) Many thought long-term memories and stable task performance were supported by stable neuronal representations. Surprisingly, recent experiments showed that neural activity in several brain regions continuously changes
even after animals have fully learned and stably perform their tasks. The underlying mechanisms and dynamics of this “representational drift” remain largely unknown.
Thrilled to share our latest pre-print: Stimulating peripheral nerves from within blood vessels with a millimeter-sized battery-free implant.

Buckle up, it's #bioelectronics #tweeprint time!

biorxiv.org/content/10.110…
The main idea is that by making bioelectronics small enough to fit through a catheter, we can use a vascular neurosurgical approach to avoid open surgery and hit hard-to-reach targets like the spinal cord.
This is hard to do because we need a way to deliver data and power to a millimeter-sized implant deep within tissue. As we discussed in a recent review, magnetoelectric (ME) materials have many advantages for solving this problem: onlinelibrary.wiley.com/doi/full/10.10…
As a scientist on Twitter, you want to get your research known (right?).
The best way to do this is to write cool threads about your papers/preprints!🧵

-- Here's some (personal) advice on how to make #SciTwitter threads --

Comments & further tips welcome! 🙏
1/
Why do a thread on your paper/preprint (aka #Tweeprint)?
- Spread the knowledge
- Get feedback
- Something to pin to your profile
- Increase your chances of being cited, invited, known by your peers
- Add your personal twist & info that is not in the paper

2/
Preparation
- Use a text editor for drafting. Threads might not be savable as drafts in Twitter
- Choose ONE main message
- Draft 1 sub-message per slide, supported by up to 4 figures, or 1 gif or video.

- Use spaces to increase readability

- Add a tweet counter ↙️
3/
#tweeprint
Who tells the #hippocampus what and when to learn? Our latest article, together with @adriamilcar @IvanRaikov7 @rennocosta @annamurav @SolteszLab @PaulVerschure, is out in @TrendsCognSci. cell.com/trends/cogniti…
Link to open access:
authors.elsevier.com/c/1cype4sIRvHk…
We describe the entorhinal-hippocampal circuit (EHC) components enabling self-supervised learning. Cortical projections enter the hippocampus via the entorhinal cortex and loop over the DG and CA fields, functioning as a comparator that reconstructs its inputs (see Lörincz & Buzsáki 2000).
How can it compare raw and reconstructed inputs? GABAergic neuron activation relates to the learning stage, and their projections run mostly counter-current to the perforant pathway, suggesting they are part of a circuit implementing backpropagation of the error within the EHC.
It's a new paper day, so here goes my very first #tweeprint! It was a pleasure to work on this with the incredible Ramanujan Srinath and Doug Ruff. doi.org/10.1101/2021.0…
1/10
We can use visual information in very flexible ways (you might grab or ignore an ice cream cone depending on whether it's yours), which means that visual information must be routed through our brains flexibly through processes like selective attention 2/10
Contrary to my favorite hypothesis (!), we showed recently that attention does not substantially change the amount of visual information in visual cortex. nature.com/articles/s4159… 3/10
New Preprint 🎉

"Methodological considerations for studying neural oscillations"

With Natalie Schaworonkow (@nschawor) and Bradley Voytek (@bradleyvoytek), we review key methodological issues and concerns for analyzing oscillatory neural activity.

🧵:
psyarxiv.com/hvd67/
We use simulated data to demonstrate and describe 7 key issues we think should always be considered for measuring neural oscillations.

We review and pull together recommendations, citing and combining topics from across the current literature.

#tweeprint #AcademicEEG
#1: Neural oscillations are not always present.

Neural activity contains aperiodic activity, which has power across all frequencies, and can appear rhythmic.

To validate oscillation-specific power, analyses should start with a detection step verifying that an oscillation is present.
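The point about aperiodic activity is easy to see in simulation: a signal with a 1/f power spectrum has energy at every frequency yet contains no oscillation, so naive band power is misleading. A minimal numpy sketch with synthetic data (ours, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n, fs = 2 ** 14, 1000.0  # samples and sampling rate (Hz)

# Shape white noise in the frequency domain to a 1/f power profile
freqs = np.fft.rfftfreq(n, d=1 / fs)
spectrum = rng.normal(size=freqs.size) + 1j * rng.normal(size=freqs.size)
spectrum[1:] /= np.sqrt(freqs[1:])   # amplitude ~ 1/sqrt(f)  =>  power ~ 1/f
spectrum[0] = 0.0
signal = np.fft.irfft(spectrum, n=n)

# Estimate the power spectrum and its log-log slope (expected near -1)
psd = np.abs(np.fft.rfft(signal)) ** 2
mask = (freqs > 1) & (freqs < 100)
slope = np.polyfit(np.log10(freqs[mask]), np.log10(psd[mask]), 1)[0]
```

Any fixed frequency band of this purely aperiodic signal carries nonzero power, which is exactly why a detection step should precede oscillation measures.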
Very happy to announce that my very first paper just came out in PNAS.
“Neuronal circuits overcome imbalance in excitation and inhibition by adjusting connection numbers”
pnas.org/content/118/12… #tweeprint below (1/7)
The paper is the result of a collaboration between @uni_tue @MPICybernetics and @WeizmannScience: a joint effort of @SelfOrgAnna and the Moses and Segal labs.
(2/7)
Hippocampal and cortical networks typically have about 20-30% inhibitory neurons. But would they work with other percentages? We looked at the activity of networks grown using a novel protocol to precisely control E/I ratios. (3/7)
1/ Hark, a #tweeprint! Our new paper is up on #bioRxiv! It’s the results from our #OpenScope project with the @AllenInstitute about learning from unexpected events in the neocortical microcircuit!

Here's a thread with more details...

biorxiv.org/content/10.110…
2/ In this paper, we show that unexpected events drive different changes in the responses of somata and distal apical dendrites in primary visual cortex pyramidal neurons.
3/ Previous research has shown that neurons in sensory areas respond to unexpected events. It’s hypothesized that these responses guide our brain in learning a predictive, hierarchical model of the world in an unsupervised (or self-supervised) manner.
My new work with Jonathan Pillow @jpillowtime! "High-contrast 'gaudy' images improve the training of deep neural network models of visual cortex."

We found that gaudy images can train DNNs with little data---perfect for neuro experiments!
arxiv.org/abs/2006.11412 #tweeprint
Our goal is to predict visual cortical responses from natural images. Often linear regression is used to map image features to responses b/c lack of experimental data. Here, we use a DNN (readout network) and avoid overfitting b/c our gaudy images are tailored for training!
The gaudy transformation is simple: Push pixel intensities to the extremes (either 0 or 255). We were inspired by 50-year-old active learning theory that says (under certain cases) the optimal training images are the ones that increase the variance in every pixel dimension.
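The transformation as described fits in a few lines of numpy. The tweet only says pixels are pushed to 0 or 255, so thresholding at the per-channel mean is our assumption; the paper's exact rule may differ:

```python
import numpy as np

def gaudy(img):
    """Gaudy transform sketch: push every pixel intensity to 0 or 255.

    Thresholds at the per-channel mean (our assumption; the paper may
    use a different threshold).
    """
    img = np.asarray(img, dtype=float)
    mean = img.mean(axis=(0, 1), keepdims=True)   # per-channel mean
    return np.where(img >= mean, 255, 0).astype(np.uint8)

# Toy 2x2 RGB image
out = gaudy(np.array([[[10, 200, 90], [120, 40, 250]],
                      [[60, 180, 30], [240, 90, 160]]]))
```

Binarizing each channel maximizes per-pixel variance, which is the active-learning intuition the thread cites.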
Very excited to share my first #tweeprint today!

Work by me, Daniel Trotter, @NeuroNaud and @mossy_fibers.

We address short-term plasticity with a linear-nonlinear model and find interesting algorithmic similarities between single synapses and CNNs.

biorxiv.org/content/10.110…
2/ In this paper, we question how to best describe complex short-term plasticity (STP) dynamics in a computational model.

Typically, people tend to categorize synapses into either facilitating (STF) or depressing (STD) types.
3/ This STF-STD dichotomy, however, is an oversimplification. Some synapses display more complex dynamics.

At hippocampal mossy fiber synapses, for example, facilitation is supra-linear in low (arguably more physiological) extracellular [Ca2+].
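For readers unfamiliar with the STF/STD dichotomy the thread is questioning, here is the classic Tsodyks-Markram model (not the paper's linear-nonlinear model) simulated for a regular spike train; all parameters are illustrative:

```python
import math

def tm_amplitudes(U, tau_f, tau_d, n_spikes=8, isi=0.05):
    """Relative PSP amplitudes for a regular spike train under the
    Tsodyks-Markram short-term plasticity model.

    u: utilization (facilitation variable), R: available resources.
    """
    u, R = 0.0, 1.0
    amps = []
    for _ in range(n_spikes):
        u += U * (1 - u)                          # facilitation jump at a spike
        amps.append(u * R)                        # released fraction ~ amplitude
        R -= u * R                                # resource depletion
        u *= math.exp(-isi / tau_f)               # decay until the next spike
        R = 1 - (1 - R) * math.exp(-isi / tau_d)  # recovery until the next spike
    return amps

facilitating = tm_amplitudes(U=0.1, tau_f=0.5, tau_d=0.05)  # STF-like synapse
depressing = tm_amplitudes(U=0.6, tau_f=0.02, tau_d=0.5)    # STD-like synapse
```

With one parameter set amplitudes grow across the train and with the other they shrink; the thread's point is that real synapses can mix or exceed these two regimes.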
Computational neuroscience has lately had great success at modeling perception with ANNs - but it has been unclear if this approach translates to higher cognitive systems. We made some exciting progress in modeling human language processing biorxiv.org/content/10.110… #tweeprint 1/
This work is the result of a terrific collaboration with @ibandlank @GretaTuckute @KaufCarina @eghbalhosseini @Nancy_Kanwisher Josh Tenenbaum and @ev_fedorenko; @mitbrainandcog @MIT_CBMM @mcgovernmit 2/
Work by @ev_fedorenko and others has localized the language network as a set of regions that support high-level language processing (e.g. sciencedirect.com/science/articl…) BUT the actual mechanisms underlying human language processing have remained unknown. 3/
I’m so excited and proud to finally be able to share my most recent 2nd author paper in @CellCellPress and the cover I designed.👩🏻‍🔬👩🏻‍🎨 #sciart #science #tweeprint

Neuronal Inactivity Co-opts LTP Machinery to Drive Potassium Channel Splicing and Homeostatic Spike Widening

1/n
Neurons can change many of their biochemical and electrical properties, a.k.a. plasticity. It’s hypothesized that this ability could be a substrate for learning and memory, as well as other key brain functions. Neuronal plasticity is typically categorized into two kinds:

2/n
(1) Hebbian plasticity: “positive feedback” that lets a neuron reinforce new stimuli. However, left unchecked, Hebbian plasticity leads to instability. In comes: (2) Homeostatic plasticity: “negative feedback” that stabilizes a neuron after a long period of stimulus changes.

3/n
Excited to share a preprint of our work "Learning is shaped by abrupt changes in neural engagement," advised by Aaron Batista, Steve Chase, and Byron Yu.

biorxiv.org/content/10.110…

I'm (even more?) excited to finally make my own #tweeprint! (1/n)
Internal states such as our attention and motivation involve brain-wide changes in neural activity. We know changes in these states can impact your behavior. For example, when someone surprises you:
(2/n)
If changes in internal states can impact immediate behavior, maybe they can also impact how you *learn* new behaviors. To learn, neural activity must change in particular ways. But what if internal state changes move you in the wrong way?
(3/n)
1/ Need a distraction from the pandemic? It's #tweeprint time!!!

I'm very excited to share here with you new work from myself, @NeuroNaud, @guerguiev, Alexandre Payeur, and @hisspikeness:

biorxiv.org/content/10.110…

We think our results are quite exciting, so let's go!
2/ Here, we are concerned with the credit assignment problem. How can feedback from higher-order areas inform plasticity in lower-order areas in order to ensure efficient and effective learning?
3/ Based on the LTP/LTD literature (e.g. jneurosci.org/content/26/41/…), we propose a "burst-dependent synaptic plasticity" rule (BDSP). It says, if there is a presynaptic eligibility trace, then:

- postsynaptic burst = LTP
- postsynaptic single spike = LTD
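The rule as stated can be sketched as a toy event-driven simulation. The trace time constant, learning rate, and event encoding are our illustrative assumptions, not the paper's full model:

```python
import math

def bdsp_weight_change(events, eta=0.05, tau_e=0.1):
    """Toy burst-dependent synaptic plasticity (BDSP) sketch.

    A presynaptic spike sets an eligibility trace; a postsynaptic burst
    converts the trace into LTP, a postsynaptic single spike into LTD.
    `events` is a list of (time_s, kind), kind in {"pre", "burst", "single"}.
    """
    w, e, t_last = 0.0, 0.0, 0.0
    for t, kind in sorted(events):
        e *= math.exp(-(t - t_last) / tau_e)  # eligibility trace decays
        t_last = t
        if kind == "pre":
            e = 1.0                           # presynaptic spike sets the trace
        elif kind == "burst":
            w += eta * e                      # trace + burst -> LTP
        elif kind == "single":
            w -= eta * e                      # trace + single spike -> LTD
    return w

# Pre spikes closely followed by bursts should potentiate ...
w_up = bdsp_weight_change([(0.00, "pre"), (0.01, "burst"),
                           (0.20, "pre"), (0.21, "burst")])
# ... while pre spikes followed by single spikes should depress.
w_dn = bdsp_weight_change([(0.00, "pre"), (0.01, "single"),
                           (0.20, "pre"), (0.21, "single")])
```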
Are you skeptical about successor representations? Want to know how our new model can learn cognitive maps and context-specific representations, perform transitive inference, and do flexible hierarchical planning? #tweeprint...(1) @vicariousai @swaroopgj @rvrikhye biorxiv.org/content/10.110…
As @yael_niv pointed out in her recent article, learning context-specific representations from aliased observations is a challenge. Our agent can learn the layout of a room from severely aliased random-walk sequences, with only 4 unique observations in the room!
And it works even when the room is empty, with no unique observations in the center of the room. The observations are now severely aliased and correlated, but it still recovers the map of the room.
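As a baseline for comparison, the successor representation the thread mentions is compact to state: given a transition matrix T and discount gamma, M = inv(I - gamma * T) gives discounted expected future state occupancies, one common notion of a "cognitive map". A toy example (ours, not from the paper):

```python
import numpy as np

# 3-state chain 0 -> 1 -> 2, where state 2 absorbs (self-loops)
T = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 1.0]])
gamma = 0.5

# Successor representation: M = sum_k gamma^k T^k = (I - gamma * T)^{-1}
M = np.linalg.inv(np.eye(3) - gamma * T)
```

Each row of M summarizes where the agent expects to spend future time starting from that state; a purely observation-aliased agent cannot build such a map without first disambiguating context, which is the problem the thread's model addresses.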