How do we generate the right muscle commands to grasp objects? We present a neural network model that replicates the vision-to-action pipeline for grasping and exhibits internal activity closely resembling that of the monkey brain.
Monkeys grasped and lifted many objects while we recorded neural activity in the grasping circuit (AIP, F5, and M1; see the original paper: elifesciences.org/articles/15278). All of these areas have been shown to be necessary for properly pre-shaping the hand during grasping.
We show that the advanced layers of a convolutional neural network trained to identify objects (AlexNet) have features very similar to those in AIP, and may therefore provide reasonable inputs to the grasping circuit, while muscle velocity was most congruent with activity in M1.
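One common way to compare CNN layer features with neural population activity is representational similarity analysis (RSA): build a dissimilarity matrix over objects for each representation and correlate the two matrices. The sketch below uses entirely synthetic stand-in data (`cnn_layer`, `aip_units`, and all dimensions are hypothetical), and the original paper's exact analysis may differ:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(features):
    """Representational dissimilarity matrix: pairwise correlation
    distance between per-object response patterns (condensed vector)."""
    return pdist(features, metric="correlation")

def rsa_similarity(features_a, features_b):
    """Spearman correlation between two representations' RDMs."""
    rho, _ = spearmanr(rdm(features_a), rdm(features_b))
    return rho

# Toy example: 20 objects; all sizes below are made up for illustration.
rng = np.random.default_rng(0)
objects = rng.normal(size=(20, 50))               # stand-in object descriptions
cnn_layer = objects @ rng.normal(size=(50, 100))  # fake CNN layer responses
aip_units = objects @ rng.normal(size=(50, 40))   # fake AIP population responses

print(round(rsa_similarity(cnn_layer, aip_units), 3))
```

Because both toy representations are linear functions of the same underlying object variables, their RDMs correlate strongly; unrelated representations would score near zero.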