Our ability to compare sensory stimuli is a fundamental cognitive function, which is known to be affected by two biases: choice bias, which reflects a preference for a given response, and contraction bias, which reflects a tendency to perceive stimuli as similar to previous ones. To test whether both reflect supervised processes, we designed feedback protocols aimed at modifying them and tested these protocols in human participants. Choice bias was readily modifiable. However, contraction bias was not. To compare these results with those predicted from an optimal supervised process, we studied a noise-matched optimal linear discriminator (Perceptron). In this model, both biases were substantially modified, indicating that the “resilience” of contraction bias to feedback does not maximize performance. These results suggest that perceptual discrimination is a hierarchical, two-stage process. In the first stage, stimulus statistics are learned and integrated with representations in an unsupervised process that is impenetrable to external feedback. In the second, a binary judgment, learned in a supervised way, is applied to the combined percept.
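The supervised reference model can be sketched in a few lines: a Perceptron classifying which of two noisy stimulus observations is larger, with error feedback updating a bias term and the weights alike. The stimulus range, noise level, and learning rate below are illustrative assumptions, not the study's fitted parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.2   # assumed sensory noise level (illustrative)
eta = 0.05    # assumed learning rate (illustrative)

w = np.zeros(2)   # weights on the two noisy stimulus observations
b = 0.0           # additive term playing the role of a choice bias

for _ in range(5000):
    f1, f2 = rng.uniform(0.5, 1.5, size=2)            # the two stimuli
    x = np.array([f1, f2]) + rng.normal(0, sigma, 2)  # noisy percepts
    choice = 1 if w @ x + b > 0 else -1               # report "f1 > f2"?
    label = 1 if f1 > f2 else -1                      # external feedback
    if choice != label:                               # Perceptron rule:
        w += eta * label * x                          # errors modify the
        b += eta * label                              # weights and the bias

print(w, b)
```

In such an idealized supervised learner, feedback reshapes the bias term just as readily as the weights, which is why both biases are modifiable in the model, in contrast to the human participants.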
SIGNIFICANCE STATEMENT The seemingly effortless process of inferring physical reality from the sensory input is highly influenced by previous knowledge, leading to perceptual biases. Two common ones are contraction bias (the tendency to perceive stimuli as similar to previous ones) and choice bias (the tendency to prefer a specific response). Combining human psychophysical experiments with computational modeling, we show that they reflect two different learning processes. Contraction bias reflects unsupervised learning of stimulus statistics, whereas choice bias results from supervised or reinforcement learning. This dissociation reveals a hierarchical, two-stage process. The first, where stimulus statistics are learned and integrated with representations, is unsupervised. The second, where a binary judgment is applied to the combined percept, is learned in a supervised way.
Experiences are represented in the brain by patterns of neuronal activity. Ensembles of neurons representing an experience undergo activity-dependent plasticity and are important for learning and recall; they are thus considered cellular engrams of memory. Yet the cellular events that bias neurons to become part of a neuronal representation are largely unknown. In rodents, turnover of structural connectivity has been proposed to underlie the turnover of neuronal representations and to be a cellular mechanism defining the duration for which memories are stored in the hippocampus. If these hypotheses are true, structural dynamics of connectivity should be involved in the formation of neuronal representations and concurrently important for learning and recall. To tackle these questions, we used deep-brain 2-photon (2P) time-lapse imaging in transgenic mice in which neurons expressing the Immediate Early Gene (IEG) Arc (activity-regulated cytoskeleton-associated protein) could be permanently labeled during a specific time window. This enabled us to investigate the dynamics of excitatory synaptic connectivity (using dendritic spines as proxies) of hippocampal CA1 (cornu ammonis 1) pyramidal neurons (PNs), exploiting Arc expression as an indicator of membership in a neuronal representation. We discovered that neurons that will prospectively express Arc have slower turnover of synaptic connectivity, suggesting that synaptic stability prior to an experience can bias neurons to become part of representations, or possibly engrams. We also found a negative correlation between the stability of structural synaptic connectivity and the ability to recall features of a hippocampal-dependent memory, which suggests that faster structural turnover in hippocampal CA1 might be functional for memory.
Sensory information is processed in the visual cortex in distinct streams of different anatomical and functional properties. A comparable organizational principle has also been proposed to underlie auditory processing. This raises the question of whether a similar principle characterizes the somatosensory domain. One property of a cortical stream is a hierarchical organization of the neuronal response properties along an anatomically distinct pathway. Indeed, several hierarchies between specific somatosensory cortical regions have been identified, primarily using electrophysiology, in non-human primates. However, it has been unclear how these local hierarchies are organized throughout the cortex. Here we used phase-encoded bilateral full-body light touch stimulation in healthy humans under functional MRI to study the large-scale organization of hierarchies in the somatosensory domain. We quantified two measures of hierarchy of BOLD responses, selectivity and laterality, and measured how they change as we move away from the central sulcus within four gross anatomically-distinct regions. We found that both selectivity and laterality decrease in three directions: parietal, posteriorly along the parietal lobe; frontal, anteriorly along the frontal lobe; and medial, inferiorly-anteriorly along the medial wall. The decline of selectivity and laterality along these directions provides evidence for hierarchical gradients. In view of the anatomical segregation of these three directions, the multiplicity of body representations in each region and the hierarchical gradients in our findings, we propose that as in the visual and auditory domains, these directions are streams of somatosensory information processing.
Penfield’s description of the ‘homunculus’, a ‘grotesque creature’ with large lips and hands and small trunk and legs depicting the representation of body-parts within the primary somatosensory cortex (S1), is one of the most prominent contributions to the neurosciences. Since then, numerous studies have identified additional body-parts representations outside of S1. Nevertheless, it has been implicitly assumed that S1’s homunculus is representative of the entire somatosensory cortex. Therefore, the distribution of body-parts representations in other brain regions, the property that gave Penfield’s homunculus its famous ‘grotesque’ appearance, has been overlooked. We used whole-body somatosensory stimulation, functional MRI and a new cortical parcellation to quantify the organization of the cortical somatosensory representation. Our analysis showed first, an extensive somatosensory response over the cortex; and second, that the proportional representation of body parts differs substantially between major neuroanatomical regions and from S1, with, for instance, much larger trunk representation at higher brain regions, potentially in relation to the regions’ functional specialization. These results extend Penfield’s initial findings to the higher level of somatosensory processing and suggest a major role for somatosensation in human cognition.
Yosef Grodzinsky, Isabelle Deschamps, Peter Pieperhoff, Francesca Iannilli, Galit Agmon, Yonatan Loewenstein, and Katrin Amunts. 11/4/2019. “Logical negation mapped onto the brain.” Brain Structure and Function, 10, pp. 1-13.
High-level cognitive capacities that serve communication, reasoning, and calculation are essential for finding our way in the world. But whether and to what extent these complex behaviors share the same neuronal substrate are still unresolved questions. The present study separated the aspects of logic from language and numerosity—mental faculties whose distinctness has been debated for centuries—and identified a new cytoarchitectonic area as correlate for an operation involving logical negation. A novel experimental paradigm that was implemented here in an RT/fMRI study showed a single cluster of activity that pertains to logical negation. It was distinct from clusters that were activated by numerical comparison and from the traditional language regions. The localization of this cluster was described by a newly identified cytoarchitectonic area in the left anterior insula, ventro-medial to Broca’s region. We provide evidence for the congruence between the histologically and functionally defined regions on multiple measures. Its position in the left anterior insula suggests that it functions as a mediator between language and reasoning areas.
The induction of immediate-early gene (IEG) expression in brain nuclei in response to an experience is necessary for the formation of long-term memories. Additionally, the rapid dynamics of IEG induction and decay motivates the common use of IEG expression as markers for identification of neuronal assemblies (“ensembles”) encoding recent experience. However, major gaps remain in understanding the rules governing the distribution of IEGs within neuronal assemblies. Thus, the extent of correlation between coexpressed IEGs, the cell specificity of IEG expression, and the spatial distribution of IEG expression have not been comprehensively studied. To address these gaps, we utilized quantitative multiplexed single-molecule fluorescence in situ hybridization (smFISH) and measured the expression of IEGs (Arc, Egr2, and Nr4a1) within spiny projection neurons (SPNs) in the dorsal striatum of mice following acute exposure to cocaine. Exploring the relevance of our observations to other brain structures and stimuli, we also analyzed data from a study of single-cell RNA sequencing of mouse cortical neurons. We found that while IEG expression is graded, the expression of multiple IEGs is tightly correlated at the level of individual neurons. Interestingly, we observed that region-specific rules govern the induction of IEGs in SPN subtypes within striatal subdomains. We further observed that IEG-expressing assemblies form spatially defined clusters within which the extent of IEG expression correlates with cluster size. Together, our results suggest the existence of IEG-expressing neuronal “superensembles,” which are associated in spatial clusters and characterized by coherent and robust expression of multiple IEGs.
An idiosyncratic tendency to choose one alternative over others in the absence of an identified reason is a common observation in two-alternative forced-choice experiments. It is tempting to account for it as resulting from the (unknown) participant-specific history and thus treat it as measurement noise. Indeed, idiosyncratic choice biases are typically considered a nuisance: care is taken to account for them by adding an ad hoc bias parameter or by counterbalancing the choices to average them out. Here we quantify idiosyncratic choice biases in a perceptual discrimination task and a motor task, and report substantial and significant biases in both cases. We then present theoretical evidence that even in idealized experiments, in which the settings are symmetric, idiosyncratic choice bias is expected to emerge from the dynamics of competing neuronal networks. We thus argue that idiosyncratic choice bias reflects the microscopic dynamics of choice and is therefore virtually inevitable in any comparison or decision task.
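A reduced illustration of the theoretical point: here, participant-specific differences in the competing neuronal populations are abstracted into a small frozen gain asymmetry, drawn once per simulated participant. Even with perfectly symmetric inputs, the spread of choice biases across "participants" exceeds what trial-to-trial noise alone would produce. All numbers are illustrative assumptions, not the study's model.

```python
import numpy as np

trial_noise = np.random.default_rng(0)

def simulated_participant(seed, n_trials=2000):
    rng = np.random.default_rng(seed)
    # frozen, participant-specific asymmetry between the two competing
    # populations (a stand-in for random differences in connectivity)
    gain = 1 + rng.normal(0, 0.05, 2)
    wins = 0
    for _ in range(n_trials):
        # perfectly symmetric input, corrupted by trial-to-trial noise
        evidence = gain * (1 + trial_noise.normal(0, 0.5, 2))
        wins += evidence[0] > evidence[1]
    return wins / n_trials

biases = np.array([simulated_participant(s) for s in range(50)])
binomial_sd = np.sqrt(0.25 / 2000)  # spread expected from noise alone
print(biases.std(), binomial_sd)    # across-participant spread is larger
```

The point of the sketch is that the biases are reproducible within a participant (the asymmetry is frozen) yet look like unexplained preferences when averaged across participants.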
Qualitative psychological principles are commonly utilized to influence the choices that people make. Can this goal be achieved more efficiently by using quantitative models of choice? Here, we launch an academic competition to compare the effectiveness of these two approaches.
Behavior deviating from our normative expectations often appears irrational. For example, even though behavior following the so-called matching law can maximize reward in a stationary foraging task, actual behavior commonly deviates from matching. Such behavioral deviations are interpreted as a failure of the subject; here, however, we suggest instead that they reflect an adaptive strategy, suitable for uncertain, non-stationary environments. To demonstrate this, we analyzed the behavior of primates performing a dynamic foraging task. In such a non-stationary environment, learning on both fast and slow timescales is beneficial: fast learning allows the animal to react to sudden changes, at the price of large fluctuations (variance) in the estimates of task-relevant variables; slow learning reduces the fluctuations but introduces a bias that causes systematic behavioral deviations. Our behavioral analysis shows that the animals solved this bias-variance tradeoff by combining learning on both fast and slow timescales, suggesting that learning on multiple timescales can be a biologically plausible mechanism for optimizing decisions under uncertainty.
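The bias-variance tradeoff described above can be made concrete with exponential-moving-average reward estimators running at two timescales. The learning rates, reward probabilities, and switch point below are illustrative, not the task's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
# reward probability switches abruptly at trial 500
p_true = np.r_[np.full(500, 0.8), np.full(500, 0.2)]

def track(alpha):
    est, trace = 0.5, []
    for p in p_true:
        r = float(rng.random() < p)  # Bernoulli reward on this trial
        est += alpha * (r - est)     # exponential moving average
        trace.append(est)
    return np.array(trace)

fast = track(0.20)  # reacts quickly to the switch but fluctuates (variance)
slow = track(0.02)  # smooth but lags behind the switch (bias)

# during the stationary stretch, the fast estimate fluctuates more ...
print(fast[300:500].std(), slow[300:500].std())
# ... while just after the switch, the slow estimate is more biased
print(abs(slow[510:530].mean() - 0.2), abs(fast[510:530].mean() - 0.2))
```

Combining the two traces (e.g., averaging them, or weighting them by recent reliability) trades off the variance of the fast learner against the post-switch bias of the slow one, which is the strategy the behavioral analysis attributes to the animals.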
Our goal in this study was to behaviorally characterize the property (or properties) that render negative quantifiers more complex in processing compared to their positive counterparts (e.g. the pair few/many). We examined two sources: (i) negative polarity; (ii) entailment reversal (aka downward monotonicity). While negative polarity can be found in other pairs in language such as dimensional adjectives (e.g. the pair small/large), only in quantifiers does negative polarity also reverse the entailment pattern of the sentence. By comparing the processing traits of negative quantifiers with those of non-monotone expressions that contain negative adjectives, using a verification task and measuring reaction times, we found that negative polarity is cognitively costly, but in downward monotone quantifiers it is even more so. We therefore conclude that both negative polarity and downward monotonicity contribute to the processing complexity of negative quantifiers.
Recent experiments demonstrate substantial volatility of excitatory connectivity in the absence of any learning. This challenges the hypothesis that stable synaptic connections are necessary for long-term maintenance of acquired information. Here we measure ongoing synaptic volatility and use theoretical modeling to study its consequences on cortical dynamics. We show that in the balanced cortex, patterns of neural activity are primarily determined by inhibitory connectivity, despite the fact that most synapses and neurons are excitatory. Similarly, we show that the inhibitory network is more effective in storing memory patterns than the excitatory one. As a result, network activity is robust to ongoing volatility of excitatory synapses, as long as this volatility does not disrupt the balance between excitation and inhibition. We thus hypothesize that inhibitory connectivity, rather than excitatory, controls the maintenance and loss of information over long periods of time in the volatile cortex.
It is generally believed that during economic decisions, striatal neurons represent the values associated with different actions. This hypothesis is based on studies in which the activity of striatal neurons was measured while the subject was learning to prefer the more rewarding action. Here we show that these publications are subject to at least one of two critical confounds. First, we show that even weak temporal correlations in the neuronal data may result in an erroneous identification of action-value representations. Second, we show that experiments and analyses designed to dissociate action-value representation from the representation of other decision variables cannot do so. We suggest solutions for identifying action-value representations that are not subject to these confounds. Applying one such solution to previously identified action-value neurons in the basal ganglia, we fail to detect action-value representations. We conclude that the claim that striatal neurons encode action values must await new experiments and analyses.
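The first confound can be demonstrated in a few lines: simulated neurons whose firing rates are random walks, unrelated to value by construction, are nonetheless "identified" as action-value coding far more often than the nominal error rate when tested naively. The learning rule, drift model, and threshold below are illustrative assumptions, not the analyses used in the reviewed studies.

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_neurons = 200, 500

# a Q-value-like regressor: a slowly drifting estimate produced by a
# simple learning rule tracking Bernoulli(0.7) rewards
q = np.zeros(n_trials)
for t in range(1, n_trials):
    q[t] = q[t - 1] + 0.1 * ((rng.random() < 0.7) - q[t - 1])

# neurons whose rates are random walks, unrelated to q by construction
rates = np.cumsum(rng.normal(0, 1, (n_neurons, n_trials)), axis=1)

r = np.array([np.corrcoef(q, x)[0, 1] for x in rates])
crit = 2 / np.sqrt(n_trials)      # rough p<0.05 threshold for i.i.d. data
frac = np.mean(np.abs(r) > crit)  # fraction "identified" as value-coding
print(frac)                       # far above the nominal 5%
```

Because both series are temporally correlated, the effective number of independent samples is far smaller than the trial count, so an i.i.d.-based significance threshold wildly overstates the evidence for value coding.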
Exploration is a fundamental aspect of reinforcement learning, typically implemented using stochastic action selection. Exploration, however, can be more efficient if directed toward gaining new world knowledge. Visit counters have proven useful both in practice and in theory for directed exploration. However, a major limitation of counters is their locality. While there are a few model-based solutions to this shortcoming, a model-free approach is still missing. We propose E-values, a generalization of counters that can be used to evaluate the propagating exploratory value over state-action trajectories. We compare our approach to commonly used RL techniques, and show that using E-values improves learning and performance over traditional counters. We also show how our method can be implemented with function approximation to efficiently learn continuous MDPs. We demonstrate this by showing that our approach surpasses state-of-the-art performance in the Freeway Atari 2600 game.
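The idea of counters generalized by a value-like update can be sketched as follows: an E-value starts at 1 for never-visited state-action pairs and is learned like a Q-function with zero reward and its own discount, so "unexploredness" propagates along trajectories instead of staying local. The toy chain environment, learning rate, and exploration discount here are illustrative assumptions, not the paper's benchmarks.

```python
from collections import defaultdict

n_states, alpha, gamma_e = 5, 0.5, 0.9
E = defaultdict(lambda: 1.0)  # E-values start at 1: fully unexplored

def step(s, a):               # deterministic 5-state chain, a in {+1, -1}
    return min(max(s + a, 0), n_states - 1)

s = 0
for _ in range(200):
    # directed exploration: greedily pick the action with the highest E-value
    a = max((+1, -1), key=lambda a_: E[(s, a_)])
    s2 = step(s, a)
    # Q-learning-style update with zero reward shrinks E toward 0 and
    # pulls in the successor's E, propagating exploratory value backward
    E[(s, a)] = (1 - alpha) * E[(s, a)] + alpha * gamma_e * max(E[(s2, +1)], E[(s2, -1)])
    s = s2

visited_states = {sa[0] for sa in list(E) if E[sa] < 1.0}
print(sorted(visited_states))  # the E-greedy agent sweeps the whole chain
```

Unlike a plain visit counter, E at a state-action pair also decreases when its successors become well explored, which is what makes the signal non-local.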
According to the synaptic trace theory of memory, activity-induced changes in the pattern of synaptic connections underlie the storage of information for long periods. In this framework, the stability of memory critically depends on the stability of the underlying synaptic connections. Surprisingly, however, synaptic connections in the living brain are highly volatile, which poses a fundamental challenge to the synaptic trace theory. Here we review recent experimental evidence that links the initial formation of a memory with changes in the pattern of connectivity, but also evidence that synaptic connections are considerably volatile even in the absence of learning. We then consider different theoretical models that have been put forward to explain how memory can be maintained with such volatile building blocks.
Though often ignored, many studies have shown that implicit stimulus-specific expectations play an important role in perception. However, it is not clear what information about the prior distribution of stimuli is integrated into these perceptual expectations, or how this information is utilized in the process of perceptual decision making.
Here we address this question for the case of a simple two-tone discrimination task. We find a large perceptual bias favoring the mean of previous stimuli, i.e., “contraction bias”: small magnitudes are overestimated and large magnitudes are underestimated. We propose a biologically plausible computational model that accounts for this phenomenon in the general population.
We then apply this proposed model to a specific population, dyslexics, to characterize their poorer performance in this task computationally. Our findings show that dyslexics’ perceptual deficit can be accounted for by inadequate weighting of their implicit memory of past trials relative to their internal noise. Underweighting the stimulus statistics decreases dyslexics’ ability to compensate for noisy observations. This study provides the first description of a specific computational deficit associated with dyslexia.
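One simple instantiation of this class of model: the percept is a weighted mixture of the noisy observation and the mean of past stimuli, so small magnitudes are pulled up and large ones pulled down. The weight w, noise level, and stimulus range below are illustrative assumptions, not the fitted values from the study.

```python
import numpy as np

rng = np.random.default_rng(3)
sigma = 1.0  # internal noise level (illustrative)
w = 0.3      # weight on the implicit memory of past stimuli (illustrative)

mu = 10.0    # mean of the (past) stimulus distribution
stims = rng.uniform(5, 15, 10000)

noisy = stims + rng.normal(0, sigma, stims.size)  # noisy observations
percepts = (1 - w) * noisy + w * mu               # contraction toward the mean

# small magnitudes are overestimated, large ones underestimated
low, high = stims < 8, stims > 12
print(np.mean(percepts[low] - stims[low]), np.mean(percepts[high] - stims[high]))
```

In this framework, underweighting the stimulus statistics (a small w relative to sigma) leaves percepts dominated by internal noise, which is the computational account of the dyslexic deficit proposed above.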
It has long been known that we subjectively experience longer stimuli as being more intense. A recent study sheds light on the neural mechanisms underlying this bias by tracking the formation of a percept of intensity in the rat brain.
The selection and timing of actions are subject to determinate influences such as sensory cues and internal state as well as to effectively stochastic variability. Although stochastic choice mechanisms are assumed by many theoretical models, their origin and mechanisms remain poorly understood. Here we investigated this issue by studying how neural circuits in the frontal cortex determine action timing in rats performing a waiting task. Electrophysiological recordings from two regions necessary for this behavior, medial prefrontal cortex (mPFC) and secondary motor cortex (M2), revealed an unexpected functional dissociation. Both areas encoded deterministic biases in action timing, but only M2 neurons reflected stochastic trial-by-trial fluctuations. This differential coding was reflected in distinct timescales of neural dynamics in the two frontal cortical areas. These results suggest a two-stage model in which stochastic components of action timing decisions are injected by circuits downstream of those carrying deterministic bias signals.