Day 1 at #ccn18 was quite stimulating. Short summary of interesting information ahead. All the papers can be found at - ccneuro.org/Papers/Accepte…
@AlisonGopnik argued that the "bugs" of childhood might actually be developmental features. The phase of exploring the world without many priors might be important in rapidly inferring and learning associations.
Ugurcan Mugan argued that long-range vision + sufficiently complex environments are important for the development of high-level planning. The argument was based on a simple predator-prey game in which the prey's access to information and the kinds of obstacles were manipulated.
@thisismyhat presented his work on feedback alignment in training NNs. That the random feedback could be construed as the feedforward pass for another network was interesting. Still have to mull over the potential consequences 🤯
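For readers unfamiliar with feedback alignment: a minimal NumPy sketch of the idea for a toy two-layer network, assuming a squared-error loss. The layer sizes, learning rate, and toy regression task are my own illustrative choices, not from the talk; the key point is that the backward pass uses a fixed random matrix B instead of the transpose of the forward weights, and one can read B as the forward weights of a second network running in the opposite direction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny two-layer network: x -> h = tanh(W1 x) -> y = W2 h
n_in, n_hid, n_out = 10, 32, 2
W1 = rng.normal(0, 0.1, (n_hid, n_in))
W2 = rng.normal(0, 0.1, (n_out, n_hid))

# Feedback alignment: a fixed random matrix B replaces W2.T in the backward pass.
# B is never updated.
B = rng.normal(0, 0.1, (n_hid, n_out))

lr = 0.01
for step in range(1000):
    x = rng.normal(size=n_in)
    target = rng.normal(size=n_out)  # toy regression target

    # Forward pass
    a1 = W1 @ x
    h = np.tanh(a1)
    y = W2 @ h

    # Error signal for a squared-error loss
    e = y - target

    # Backward pass: propagate the error through B rather than W2.T
    delta_h = (B @ e) * (1 - np.tanh(a1) ** 2)

    # Weight updates
    W2 -= lr * np.outer(e, h)
    W1 -= lr * np.outer(delta_h, x)
```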
Had an interesting discussion with @KohitijKar about how malleable the representations in IT cortex are. Apparently, in monkeys, IT representations have been observed not to change much when they learn categories, indicating that some downstream area is the decoder 🤷‍♂️
Aria Wang's work showed that object affordances could be computed in a bottom-up fashion using CNNs. Another step in qualifying the bottom-up computability of factors that are traditionally considered "conceptual".
Had a nice discussion with Ruyuan Zhang about their work on assessing the connection between the usual CNNs and the brain by considering adversarial examples. Adversarial examples might provide windows into the nature of the features the network cares about in performing its task.
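For context, here is a minimal sketch of one standard way of generating adversarial examples, the fast gradient sign method, applied to a toy logistic-regression "network" in pure NumPy. This is only an illustration of the general idea, not the specific setup from their work.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear classifier: p(y=1|x) = sigmoid(w.x + b)
w = rng.normal(size=100)
b = 0.0
x = rng.normal(size=100)
y = 1  # true label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient of the cross-entropy loss with respect to the input x
p = sigmoid(w @ x + b)
grad_x = (p - y) * w

# FGSM: nudge the input in the direction that increases the loss
eps = 0.1
x_adv = x + eps * np.sign(grad_x)

print("p(y=1) clean:      ", sigmoid(w @ x + b))
print("p(y=1) adversarial:", sigmoid(w @ x_adv + b))
```

The sign of the perturbation reveals which input directions the classifier is most sensitive to, which is the sense in which adversarial examples probe what features the network actually relies on.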
Andrew Zaharia presented a cool, and seemingly informative, technique for visualizing representational spaces by establishing spherical categorical bounds and then reducing the dimensionality of the distances between the hyperspheres. The results look beautiful.
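One possible reading of that pipeline, sketched in NumPy under my own assumptions (bounding hypersphere = category centroid plus radius to its farthest point, surface-to-surface distances between spheres, classical MDS down to 2D); the actual method may differ in the details.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy representational space: 5 categories, 50 samples each, 64-dim features
n_cat, n_per, dim = 5, 50, 64
feats = {c: rng.normal(loc=rng.normal(size=dim) * 3, size=(n_per, dim))
         for c in range(n_cat)}

# Enclose each category in a hypersphere (centroid + radius to farthest point)
centers = {c: f.mean(axis=0) for c, f in feats.items()}
radii = {c: np.linalg.norm(feats[c] - centers[c], axis=1).max() for c in feats}

# Distance between sphere surfaces (clipped at 0 if the spheres overlap)
D = np.zeros((n_cat, n_cat))
for i in range(n_cat):
    for j in range(n_cat):
        if i != j:
            gap = np.linalg.norm(centers[i] - centers[j]) - radii[i] - radii[j]
            D[i, j] = max(gap, 0.0)

# Classical MDS on the inter-sphere distances, down to 2D for plotting
J = np.eye(n_cat) - np.ones((n_cat, n_cat)) / n_cat
B = -0.5 * J @ (D ** 2) @ J
vals, vecs = np.linalg.eigh(B)
order = np.argsort(vals)[::-1][:2]
coords = vecs[:, order] * np.sqrt(np.maximum(vals[order], 0))
print(coords)  # 2D layout of the category spheres
```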
@rockNroll87q presented preliminary results about the nature of "predictive" feedback to EVC. It's interesting that the "inpainting" decoder's features are better matched to EVC features in the non-occluded region! 7T might shed more/better light (by assessing cortical layers)