Published: 21st April 2021
Deep Neural Networks can be made more human-like by training with large datasets: IISc study
The team studied 13 different perceptual effects and found that although the object representations of convolutional or deep neural networks match coarsely with those in the brain, the networks are still outperformed by humans
Researchers from the Indian Institute of Science (IISc) have found crucial qualitative differences between the human brain and deep neural networks, and suggest these gaps could be narrowed by training the networks on larger datasets, incorporating more constraints, or modifying network architecture.
The team from the Centre for Neuroscience (CNS) studied 13 different perceptual effects and found that convolutional or deep neural networks whose object representations match coarsely with the brain's are still outperformed by humans. "Lots of studies have been showing similarities between deep networks and brains, but no one has really looked at systematic differences," said SP Arun, Associate Professor at CNS and senior author of the study, in a note from the institute. Identifying these differences can push us closer to making these networks more brain-like, he added.
In their paper, "Qualitative similarities and differences in visual object representations between brains and deep networks" (Nature Communications, 2021), Georgin Jacob, RT Pramod, Harish Katti and SP Arun note that although deep neural networks have revolutionized computer vision, and their object representations across layers match coarsely with visual cortical areas in the brain, it remains unresolved whether these representations exhibit the qualitative patterns seen in human perception or brain representations. In their study, they found that phenomena such as the Thatcher effect, mirror confusion, Weber's law, relative size, multiple object normalization and correlated sparseness emerged in deep neural networks after the networks were trained for object recognition.
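Weber's law, one of the effects the study found in trained networks, states that the smallest perceptible change in a stimulus grows in proportion to the stimulus magnitude. A minimal numeric sketch of the idea (illustrative only; the Weber fraction `k` and the function below are assumptions for the sake of example, not the study's code):

```python
# Toy illustration of Weber's law: the just-noticeable difference (JND)
# scales with baseline intensity, so the ratio JND / intensity is constant.

WEBER_FRACTION = 0.1  # hypothetical constant k; real values depend on the stimulus

def just_noticeable_difference(intensity, k=WEBER_FRACTION):
    """Smallest detectable change for a given baseline intensity, per Weber's law."""
    return k * intensity

for intensity in [10, 100, 1000]:
    jnd = just_noticeable_difference(intensity)
    print(f"intensity={intensity:5d}  JND={jnd:6.1f}  ratio={jnd / intensity:.2f}")
```

Under this law, a change of 1 unit is noticeable against a baseline of 10, but a baseline of 1000 requires a change of about 100 units before it is perceived.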
However, phenomena such as 3D shape processing, surface invariance, occlusion, natural parts and the global advantage were absent in trained networks. Explaining one of the experiments -- global advantage -- Georgin Jacob, first author and PhD student at CNS, said: "For example, in an image of a tree, our brain would first see the tree as a whole before noticing the details of the leaves in it. Similarly, when presented with an image of a face, humans first look at the face as a whole, and then focus on finer details like the eyes, nose, mouth and so on. Surprisingly, neural networks showed a local advantage. This means that, unlike the brain, the networks focus on the finer details of an image first. Therefore, even though these neural networks and the human brain carry out the same object recognition tasks, the steps followed by the two are very different." The study provides hints about what could be incorporated into deep networks to improve them.