Abstract
The function of a neuron can be described simultaneously at several levels of abstraction. For instance, a spike train represents the result of a computation done by a single neuron on its inputs, but it also represents the result of a complex function realized by the network in which the neuron is embedded. When models of large parts of the brain are considered, it may be desirable to use computational modules operating at a very abstract level. However, it is shown here that abstract neural functions depend on detailed features of the single-neuron model used in the network reproducing the abstract function. Examples are given of the multiplicative function, motion detection, short-term memory and timing. All these operations rely on one or another feature of the extended Leaky Integrate-and-Fire neuron used in this paper, e.g. probabilistic synapses, post-synaptic currents modelled with alpha functions, or partial reset of the somatic membrane potential. Consequently, it is suggested that neural modelling at an abstract level does not obviate the need for a clear statement of the nature of the underlying model of the biological neuron. In that sense, not many abstract functions are convincingly grounded, not even the standard formal neurons used in most artificial neural networks.
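As an illustration of the single-neuron features named above, the following is a minimal sketch of an extended Leaky Integrate-and-Fire neuron combining probabilistic synaptic transmission, alpha-function post-synaptic currents and partial reset of the membrane potential after a spike. It uses standard textbook forms of these mechanisms; the parameter values, variable names and simulation code are illustrative assumptions, not the equations or values used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not taken from the paper)
dt = 0.1          # time step (ms)
tau_m = 20.0      # membrane time constant (ms)
tau_s = 2.0       # alpha-function time constant (ms)
v_thresh = 1.0    # firing threshold (arbitrary units)
beta = 0.7        # partial-reset factor: v -> beta * v_thresh after a spike
p_release = 0.5   # probabilistic synapse: transmission probability per input spike
w = 3.0           # synaptic weight

def alpha_kernel(t, tau):
    """Alpha function (t/tau) * exp(1 - t/tau), zero for t < 0; peaks at t = tau."""
    return np.where(t >= 0, (t / tau) * np.exp(1 - t / tau), 0.0)

def simulate(input_spike_times, t_max=200.0):
    """Simulate one extended LIF neuron driven by a single probabilistic synapse."""
    n_steps = int(t_max / dt)
    t = np.arange(n_steps) * dt
    # Post-synaptic current: sum of alpha functions, one per transmitted spike.
    i_syn = np.zeros(n_steps)
    for ts in input_spike_times:
        if rng.random() < p_release:          # probabilistic transmission
            i_syn += w * alpha_kernel(t - ts, tau_s)
    v = np.zeros(n_steps)
    out_spikes = []
    for k in range(1, n_steps):
        # Leaky integration of the synaptic current (forward Euler)
        v[k] = v[k - 1] + (-v[k - 1] + i_syn[k - 1]) * dt / tau_m
        if v[k] >= v_thresh:
            out_spikes.append(t[k])
            v[k] = beta * v_thresh            # partial reset, not reset to rest
    return t, v, out_spikes

if __name__ == "__main__":
    inputs = np.arange(5.0, 150.0, 5.0)       # regular presynaptic spike train
    _, _, spikes = simulate(inputs)
    print("output spike times (ms):", np.round(spikes, 1))
```

The line `v[k] = beta * v_thresh` is one common form of partial somatic reset: after firing, the membrane potential is set to a fraction of the threshold rather than back to rest, which is one of the detailed features the abstract argues the abstract-level functions depend on.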
Original language | English |
---|---|
Pages (from-to) | 11-19 |
Journal | BioSystems |
Volume | 40 |
Publication status | Published - 1997 |
Keywords
- Membrane Potentials
- Memory, Short-Term
- Models, Biological
- Motion Perception
- Neurons