  • Detection of synchronized oscillations in the electroencephalogram: an evaluation of methods.

    24 October 2018

    The signal averaging approach typically used in ERP research assumes that peaks in ERP waveforms reflect neural activity that is uncorrelated with activity in the ongoing EEG. However, this assumption has been challenged by research suggesting that ERP peaks reflect event-related synchronization of ongoing EEG oscillations. In this study, we investigated the validity of a set of methods that have been used to demonstrate that particular ERP peaks result from synchronized EEG oscillations. We simulated epochs of EEG data by superimposing phasic peaks on noise characterized by the power spectrum of the EEG. When applied to the simulated data, the methods in question produced results that have previously been interpreted as evidence of synchronized oscillations, even though no such synchrony was present. These findings suggest that proposed analysis methods may not effectively disambiguate competing views of ERP generation.
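
    As a rough illustration of the simulation approach described above (a hypothetical sketch, not the authors' code; the sampling rate, peak latency, amplitude and noise scaling are all assumed values), the snippet below superimposes a fixed-latency phasic peak on 1/f-type background noise and computes the signal-averaged waveform:

    ```python
    # Hypothetical sketch: simulate epochs as 1/f-type background noise plus a
    # phasic Gaussian-shaped peak that is uncorrelated with the noise, then
    # compute the signal-averaged ERP. All parameter values are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    fs, n_samples, n_epochs = 250, 500, 100           # assumed sampling rate and sizes
    t = np.arange(n_samples) / fs

    def eeg_noise(exponent=1.0):
        """Noise whose power falls off roughly as 1/f**exponent, as in the EEG."""
        freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
        amp = np.zeros_like(freqs)
        amp[1:] = freqs[1:] ** (-exponent / 2.0)      # amplitude spectrum ~ f^(-exponent/2)
        phases = rng.uniform(0.0, 2.0 * np.pi, size=freqs.shape)
        return np.fft.irfft(amp * np.exp(1j * phases), n=n_samples)

    # Phasic peak with fixed latency (300 ms) and width (30 ms), added to every epoch.
    peak = 2.0 * np.exp(-0.5 * ((t - 0.3) / 0.03) ** 2)
    epochs = np.stack([5.0 * eeg_noise() + peak for _ in range(n_epochs)])

    erp = epochs.mean(axis=0)                         # signal-averaged ERP waveform
    print("latency of the largest averaged peak: %.3f s" % t[np.argmax(erp)])
    ```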

  • Parameterization of connectionist models.

    24 October 2018

    We present a method for estimating the parameters of connectionist models so that the model's output fits empirical data as closely as possible. The method minimizes a cost function that measures the difference between statistics computed from the model's output and statistics computed from the subjects' performance. An optimization algorithm finds the parameter values that minimize this cost function. The cost function also indicates whether the model's statistics differ significantly from those of the data. In some cases, the method can find the optimal parameters automatically; in others, it may facilitate the manual search for optimal parameters. The method has been implemented in Matlab, is fully documented, and is available for free download from the Psychonomic Society Web archive at www.psychonomic.org/archive/.
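
    The general fitting scheme can be sketched as follows (a hedged toy example, not the authors' Matlab toolbox; the "model", its statistics and the data values are invented for illustration): a cost function compares summary statistics of the model's simulated output with the corresponding statistics of the data, and an optimizer searches for the parameters that minimize it.

    ```python
    # Hedged toy example: fit model parameters by minimizing the mismatch between
    # summary statistics of simulated output and of observed data. The 'model'
    # and all numbers below are illustrative assumptions.
    import numpy as np
    from scipy.optimize import minimize

    def model_statistics(params, n_trials=2000):
        """Simulate a toy response-time model and return [mean RT, SD of RT]."""
        drift, noise = params
        rng = np.random.default_rng(1)                # fixed seed keeps the cost deterministic
        rts = 0.3 + np.abs(rng.normal(1.0 / max(drift, 1e-6), abs(noise), size=n_trials))
        return np.array([rts.mean(), rts.std()])

    data_stats = np.array([0.75, 0.20])               # e.g. mean and SD of subjects' RTs (s)

    def cost(params):
        # Squared difference between model and data statistics; a chi-square-style
        # weighting by the variability of each statistic could be used instead.
        return float(np.sum((model_statistics(params) - data_stats) ** 2))

    result = minimize(cost, x0=[2.0, 0.2], method="Nelder-Mead")
    print("fitted parameters:", np.round(result.x, 3), "  final cost: %.4f" % result.fun)
    ```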

  • The restricted influence of sparseness of coding on the capacity of familiarity discrimination networks.

    24 October 2018

    Much evidence indicates that the perirhinal cortex is involved in the familiarity discrimination aspect of recognition memory. It has previously been shown that, under certain conditions, neural networks performing familiarity discrimination can achieve very high storage capacity, dealing with many times more stimuli than associative memory networks can in associative recall. The capacity of associative memories for recall has been shown to depend strongly on the sparseness of coding. However, previous work on the networks of Bogacz et al., Norman and O'Reilly, and Sohal and Hasselmo that model familiarity discrimination in the perirhinal cortex has not investigated the effects of sparseness of coding on capacity. This paper explores how sparseness of coding influences the capacity of each of these published models and establishes that it affects the different models in different ways. The capacity of the Bogacz et al. model can be made independent of the sparseness of coding. Capacity increases as coding becomes sparser for a simplified version of the neocortical part of the Norman and O'Reilly model, whereas capacity decreases as coding becomes sparser for a simplified version of the Sohal and Hasselmo model. Thus in general, and in contrast to associative memory networks, sparse coding gives little or no advantage for the capacity of familiarity discrimination networks. Hence it may be less important for coding to be sparse in the perirhinal cortex than it is in the hippocampus. Additionally, it is established that the capacities of the networks depend strongly on the precise form of the learning rules (synaptic plasticity) used in the network. This finding indicates that the precise characteristics of synaptic plasticity in the real brain are likely to have a major influence on storage capacity.
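
    To make the role of sparseness concrete, the sketch below sets up a generic energy-based familiarity discriminator (an illustrative toy, not a faithful implementation of any of the published models discussed above): sparse binary patterns are stored with a Hebbian rule, familiar patterns are recognized by their lower energy, and the sparseness parameter can then be varied to probe capacity.

    ```python
    # Illustrative toy, not any of the published models: energy-based familiarity
    # discrimination with Hebbian storage of sparse binary patterns. Stored
    # (familiar) patterns yield much lower energy than novel ones; varying the
    # 'sparseness' parameter probes how performance depends on coding sparseness.
    import numpy as np

    rng = np.random.default_rng(2)
    n_units, n_stored, sparseness = 200, 50, 0.1      # assumed sizes; fraction of active units

    def sparse_pattern():
        x = np.zeros(n_units)
        x[rng.choice(n_units, int(sparseness * n_units), replace=False)] = 1.0
        return x

    stored = np.stack([sparse_pattern() for _ in range(n_stored)])

    dev = stored - sparseness                         # Hebbian learning on deviations
    W = dev.T @ dev                                   # from the mean activity level
    np.fill_diagonal(W, 0.0)

    def energy(x):
        d = x - sparseness
        return -0.5 * d @ W @ d

    familiar = np.array([energy(p) for p in stored])
    novel = np.array([energy(sparse_pattern()) for _ in range(n_stored)])
    print("mean energy  familiar: %.1f   novel: %.1f" % (familiar.mean(), novel.mean()))
    ```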

  • The physics of optimal decision making: a formal analysis of models of performance in two-alternative forced-choice tasks.

    24 October 2018

    In this article, the authors consider optimal decision making in two-alternative forced-choice (TAFC) tasks. They begin by analyzing 6 models of TAFC decision making and show that all but one can be reduced to the drift diffusion model, which implements the statistically optimal algorithm (most accurate for a given speed or fastest for a given accuracy). They prove further that there is always an optimal trade-off between speed and accuracy that maximizes various reward functions, including reward rate (percentage of correct responses per unit time) and several other objective functions, some of which weight accuracy more heavily. They use these findings to address empirical data and make novel predictions about performance under optimality.
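
    As a rough illustration of the drift diffusion model referred to above (a hedged sketch with invented parameter values, not the authors' analysis), the simulation below estimates the error rate, mean decision time and reward rate for one choice of threshold:

    ```python
    # Hedged sketch with illustrative parameters: simulate the drift diffusion
    # model and estimate error rate, mean decision time and reward rate.
    import numpy as np

    rng = np.random.default_rng(3)

    def simulate_ddm(drift=0.1, noise=0.1, threshold=0.08, dt=0.001, n_trials=2000):
        errors, dts = 0, 0.0
        for _ in range(n_trials):
            x, t = 0.0, 0.0
            while abs(x) < threshold:                 # integrate noisy evidence to a bound
                x += drift * dt + noise * np.sqrt(dt) * rng.normal()
                t += dt
            errors += int(x < 0)                      # lower bound = incorrect response
            dts += t
        return errors / n_trials, dts / n_trials      # error rate, mean decision time

    er, mean_dt = simulate_ddm()
    t_nondecision, intertrial = 0.3, 1.5              # assumed non-decision time and delay (s)
    reward_rate = (1.0 - er) / (mean_dt + t_nondecision + intertrial)
    print("ER=%.3f  DT=%.3f s  reward rate=%.3f correct responses per second"
          % (er, mean_dt, reward_rate))
    ```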

  • Improved conditions for the generation of beta oscillations in the subthalamic nucleus–globus pallidus network.

    24 October 2018

    A key pathology in the development of Parkinson's disease is the occurrence of persistent beta oscillations, which are correlated with difficulty in movement initiation. We investigated the network model composed of the subthalamic nucleus (STN) and globus pallidus (GP) developed by A. Nevado Holgado et al. [(2010) Journal of Neuroscience, 30, 12340-12352], who identified the conditions under which this circuit could generate beta oscillations. Our work extended their analysis by deriving improved analytic stability conditions for realistic values of the synaptic transmission delay between STN and GP neurons. The improved conditions were significantly closer to the results of simulations across the range of synaptic transmission delays measured experimentally. Furthermore, our analysis explained how changes in cortical and striatal input to the STN-GP network influenced the oscillations generated by the circuit. Because we have identified the conditions under which a system of mutually connected populations of excitatory and inhibitory neurons can generate oscillations, our results may also find applications in the study of neural oscillations produced by assemblies of excitatory and inhibitory neurons in other brain regions.
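
    A schematic version of this kind of circuit can be sketched as follows (a hedged toy with illustrative parameters, not the fitted STN-GP model of Nevado Holgado et al.): two mutually connected populations, one excitatory and one inhibitory, interact through a transmission delay, and with strong enough coupling and a long enough delay the loop produces sustained oscillations.

    ```python
    # Hedged toy, not the fitted STN-GP model: a delayed excitatory-inhibitory
    # firing-rate loop integrated with the Euler method. Parameters are
    # illustrative and chosen so that the loop oscillates.
    import numpy as np

    dt, T, delay = 0.1, 1000.0, 6.0                   # ms; transmission delay between populations
    steps, d_steps = int(T / dt), int(delay / dt)

    def f(x, max_rate=300.0):                         # sigmoidal firing-rate function (spk/s)
        return max_rate / (1.0 + np.exp(-0.1 * (x - 30.0)))

    tau_s, tau_g = 6.0, 14.0                          # membrane time constants (ms)
    w_gs, w_sg, w_gg = 1.3, 4.9, 0.5                  # coupling weights (illustrative)
    ctx, striatum = 30.0, 40.0                        # cortical and striatal drive

    S = np.full(steps, 20.0)                          # STN-like excitatory population rate
    G = np.full(steps, 60.0)                          # GP-like inhibitory population rate
    for i in range(d_steps, steps - 1):
        S[i + 1] = S[i] + dt / tau_s * (-S[i] + f(ctx - w_gs * G[i - d_steps]))
        G[i + 1] = G[i] + dt / tau_g * (-G[i] + f(w_sg * S[i - d_steps]
                                                  - w_gg * G[i - d_steps] - striatum))

    x = S[steps // 2:] - S[steps // 2:].mean()        # steady-state part of the STN-like rate
    freqs = np.fft.rfftfreq(x.size, d=dt / 1000.0)    # convert dt to seconds for Hz
    print("dominant frequency: %.1f Hz" % freqs[np.argmax(np.abs(np.fft.rfft(x)))])
    ```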

  • Optimal decision network with distributed representation.

    24 October 2018

    On the basis of detailed analysis of reaction times and neurophysiological data from tasks involving choice, it has been proposed that the brain implements an optimal statistical test during simple perceptual decisions. It has been shown recently how this optimal test can be implemented in biologically plausible models of decision networks, but this analysis was restricted to very simplified localist models that include abstract units describing the activity of whole cell assemblies rather than individual neurons. This paper derives the optimal parameters in a model of a decision network consisting of individual neurons, in which the alternatives are represented by distributed patterns of neuronal activity. It is also shown how the optimal weights in the decision network can be learnt via iterative rules using information accessible to individual synapses. Simulations demonstrate that the network with the optimal synaptic weights achieves better performance and matches fundamental behavioural regularities observed in choice tasks (Hick's law and the relationship between error rate and decision time) better than a network with synaptic weights set according to a standard Hebb rule.
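
    A generic illustration of reading out a distributed representation is sketched below (a hedged toy, not the specific network or learning rules derived in the paper; the patterns, noise levels and trial counts are assumed): the two alternatives are encoded as overlapping firing-rate patterns across a population, and a linear accumulator with likelihood-ratio weights integrates the noisy population activity before the choice is read off its sign.

    ```python
    # Hedged toy, not the paper's network: the alternatives are two overlapping
    # distributed firing-rate patterns; a linear readout with Gaussian
    # likelihood-ratio weights accumulates the noisy population activity.
    import numpy as np

    rng = np.random.default_rng(4)
    n_neurons, n_steps, n_trials = 50, 5, 5000

    base = rng.uniform(5.0, 15.0, size=n_neurons)     # shared baseline pattern (a.u.)
    delta = rng.normal(0.0, 0.2, size=n_neurons)      # small pattern difference
    mu = np.stack([base + delta, base - delta])       # mean pattern for each alternative
    sd = rng.uniform(2.0, 6.0, size=n_neurons)        # per-neuron noise level

    # Log-likelihood-ratio weights and bias for independent Gaussian noise.
    w = (mu[0] - mu[1]) / sd**2
    b = np.sum((mu[0]**2 - mu[1]**2) / (2.0 * sd**2))

    correct = 0
    for _ in range(n_trials):
        alt = rng.integers(2)                         # which alternative is presented
        x = mu[alt] + sd * rng.normal(size=(n_steps, n_neurons))
        llr = (x @ w).sum() - n_steps * b             # accumulated log-likelihood ratio
        correct += int((llr > 0) == (alt == 0))
    print("accuracy of the likelihood-ratio readout: %.3f" % (correct / n_trials))
    ```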

  • Do humans produce the speed-accuracy trade-off that maximizes reward rate?

    24 October 2018

    In this paper we investigate the trade-offs between speed and accuracy that humans produce when confronted with a sequence of choices between two alternatives. We assume that the choice process is described by the drift diffusion model, in which the speed-accuracy trade-off is primarily controlled by the value of the decision threshold. We test the hypothesis that participants choose the decision threshold that maximizes reward rate, defined as the average number of rewards per unit of time. In particular, we test four predictions derived from this hypothesis in two behavioural experiments. The data from all participants in our experiments support only some of the predictions, and on average the participants are slower and more accurate than predicted by reward rate maximization. However, when we limit our analysis to subgroups of 30-50% of participants who earned the highest overall rewards, all the predictions are satisfied by the data. This suggests that a substantial subset of participants do select decision thresholds that maximize reward rate. We also discuss possible reasons why the remaining participants select thresholds higher than optimal, including the possibility that participants optimize a combination of reward rate and accuracy, that they compensate for the influence of timing uncertainty, or both.
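
    A hedged worked example of what "the threshold that maximizes reward rate" means is given below (the closed-form error rate and decision time are the standard expressions for the pure drift diffusion model, not anything taken from this paper, and all parameter values are invented): the reward rate is evaluated over a grid of thresholds and its maximum located numerically.

    ```python
    # Hedged worked example with standard DDM expressions and illustrative values:
    # error rate and mean decision time as functions of the threshold, and the
    # threshold that maximizes reward rate.
    import numpy as np

    A, c = 0.1, 0.1                                   # drift and noise (illustrative)
    t0, iti = 0.3, 1.5                                # non-decision time and inter-trial interval (s)

    z = np.linspace(0.001, 0.4, 2000)                 # candidate thresholds
    er = 1.0 / (1.0 + np.exp(2.0 * A * z / c**2))     # error rate
    dt = (z / A) * np.tanh(A * z / c**2)              # mean decision time
    rr = (1.0 - er) / (dt + t0 + iti)                 # correct responses per second

    best = np.argmax(rr)
    print("reward-rate-maximizing threshold: z=%.3f (ER=%.2f, DT=%.2f s, RR=%.3f /s)"
          % (z[best], er[best], dt[best], rr[best]))
    ```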

  • Posterior weighted reinforcement learning with state uncertainty.

    24 October 2018

    Reinforcement learning models generally assume that a stimulus is presented that allows a learner to unambiguously identify the state of nature, and the reward received is drawn from a distribution that depends on that state. However, in any natural environment, the stimulus is noisy. When there is state uncertainty, it is no longer immediately obvious how to perform reinforcement learning, since the observed reward cannot be unambiguously allocated to a state of the environment. This letter addresses the problem of incorporating state uncertainty in reinforcement learning models. We show that simply ignoring the uncertainty and allocating the reward to the most likely state of the environment results in incorrect value estimates. Furthermore, using only the information that is available before observing the reward also results in incorrect estimates. We therefore introduce a new technique, posterior weighted reinforcement learning, in which the estimates of state probabilities are updated according to the observed rewards (e.g., if a learner observes a reward usually associated with a particular state, this state becomes more likely). We show analytically that this modified algorithm can converge to correct reward estimates and confirm this with numerical experiments. The algorithm is shown to be a variant of the expectation-maximization algorithm, allowing rigorous convergence analyses to be carried out. A possible neural implementation of the algorithm in the cortico-basal-ganglia-thalamic network is presented, and experimental predictions of our model are discussed.
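
    The general idea can be sketched as follows (a hedged toy with assumed Gaussian stimulus and reward noise; not the exact algorithm or parameters of the letter): the posterior over states is first computed from the noisy stimulus, then re-weighted by how well the observed reward matches each state's current value estimate, and the prediction-error update is distributed across states in proportion to this posterior.

    ```python
    # Hedged toy of posterior-weighted value updating under state uncertainty.
    # The Gaussian stimulus/reward noise and all parameter values are assumptions.
    import numpy as np

    rng = np.random.default_rng(5)
    true_reward = np.array([1.0, 5.0])                # mean reward of states 0 and 1
    obs_noise, reward_noise, alpha = 1.0, 1.0, 0.05
    Q = np.zeros(2)                                   # value estimate for each state

    def gauss(x, mean, sd):
        return np.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

    for trial in range(5000):
        s = rng.integers(2)                           # hidden state of the environment
        obs = s + obs_noise * rng.normal()            # noisy stimulus (state identity + noise)
        r = true_reward[s] + reward_noise * rng.normal()

        prior = gauss(obs, np.array([0.0, 1.0]), obs_noise)
        prior /= prior.sum()                          # p(state | stimulus)
        post = prior * gauss(r, Q, reward_noise)      # re-weight by the reward likelihood
        post /= post.sum()                            # p(state | stimulus, reward)

        Q += alpha * post * (r - Q)                   # posterior-weighted value update

    print("estimated values:", np.round(Q, 2), "  true values:", true_reward)
    ```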

  • Optimal decision making on the basis of evidence represented in spike trains.

    24 October 2018

    Experimental data indicate that perceptual decision making involves integration of sensory evidence in certain cortical areas. Theoretical studies have proposed that the computation in neural decision circuits approximates statistically optimal decision procedures (e.g., the sequential probability ratio test) that maximize the reward rate in sequential choice tasks. However, these previous studies assumed that the sensory evidence was represented by continuous values drawn from Gaussian distributions with the same variance across alternatives. In this article, we make the more realistic assumption that sensory evidence is represented in spike trains described by Poisson processes, which naturally satisfy the mean-variance relationship observed in sensory neurons. We show that, for such a representation, neural circuits involving cortical integrators and the basal ganglia can approximate the optimal decision procedures for two-alternative and multiple-alternative choice tasks.
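
    A hedged sketch of the simplest two-alternative case is given below (assumed firing rates and bin width, not the cortical/basal-ganglia circuit of the article): when evidence arrives as Poisson spike counts from two populations whose rates are swapped between the alternatives, the log-likelihood ratio accumulated by the sequential probability ratio test reduces to a scaled difference of spike counts.

    ```python
    # Hedged sketch of the SPRT on Poisson spike-count evidence; firing rates,
    # bin width and error target are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(6)
    r_hi, r_lo, dt = 40.0, 30.0, 0.01                 # firing rates (spikes/s) and bin width (s)
    bound = np.log(0.99 / 0.01)                       # SPRT bound for roughly 1% errors
    scale = np.log(r_hi / r_lo)                       # weight applied to each count difference

    def run_trial(true_alt):
        rate_a, rate_b = (r_hi, r_lo) if true_alt == 0 else (r_lo, r_hi)
        llr, t = 0.0, 0.0
        while abs(llr) < bound:
            n_a = rng.poisson(rate_a * dt)            # spikes from population A in this bin
            n_b = rng.poisson(rate_b * dt)            # spikes from population B in this bin
            llr += scale * (n_a - n_b)                # exact LLR: the exp(-rate*dt) terms cancel
            t += dt
        return (0 if llr > 0 else 1), t

    alts = rng.integers(2, size=2000)
    choices, times = zip(*(run_trial(a) for a in alts))
    print("error rate: %.3f   mean decision time: %.2f s"
          % (np.mean(np.array(choices) != alts), np.mean(times)))
    ```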