Computational model of entorhinal-hippocampal loop derived from a single principle
A. Lõrincz and Gy. Buzsáki
IJCNN, Washington
1999
Abstract
Prediction based on information emerging from a high-dimensional sensory
system becomes less demanding if the processed sensory information can be
separated into components that evolve independently. Networks that develop
independent components (ICs) efficiently can be built from two stages. We
identify these stages with the CA3 and CA1 layers of the hippocampus (HC).
Forming ICs requires non-linear operation, whereas outputting ICs requires
only linear operation. Two-phase operation is thus a consequence of our
initial principle. We identify linear operation with the sharp-wave phase
and non-linear operation with the theta phase.
Prediction can be learned by Hebbian means provided that information
about the past and the present is available concurrently. Such concurrence
can be achieved by delaying structures, e.g., by loops. The loop structure
requires a third layer, which we identify with the entorhinal cortex (EC).
The output of the computational structure should not be modified by the
presence of the loop; thus the loop formed by the three layers ought to
constitute a dynamic reconstructing (generative) network. This means that
the loop is capable of outputting ICs in linear mode provided that the new
layer, the EC, encodes a representation of the ICs. Proper encoding also
requires compensation of the delays, which can be achieved by means of the
predictive structure itself. The dynamical equation suggests two predictive
structures: the EC-to-CA1 connections, which operate during the theta phase,
and the recurrent collateral system of the CA3 field, which operates during
the sharp-wave phase. Proper encoding into the EC is possible during linear
operation in a supervised manner. The reconstruction dynamic network can be
seen as an error-compensating control architecture. The HC part of the
reconstruction network receives as input the error, i.e., the mismatch
between the primary input to the EC-HC loop and the reconstructed input
conveyed by the hippocampus. Errors between primary and reconstructed input
can arise when the information is "novel" and until the predictive
structures are formed. The control architecture promotes the two-phase operation:
it can work in both linear and non-linear modes. Both linear and non-linear
operations are necessary for the statistical tuning of the CA3 and CA1
fields. We assume that the EC can be extended into a distributed and
hierarchical system of reconstruction networks, the long-term memory (LTM).
The novel information provided by the LTM to the HC undergoes statistical
analysis. The output of the HC encodes the novel information into the LTM
in a supervised manner. We assume that the formation of the predictive
structures in the LTM is not supervised by the HC. Therefore, LTM layers
undergoing HC-supervised training may have poor predictive properties, and
the novel information reaching the hippocampus is temporally convolved by
the reconstruction dynamics within the LTM. This convolution must be
counteracted to form ICs. Temporal convolution produced by reconstruction
networks has a special mathematical structure, which allows a simplified
blind source deconvolution (BSD) stage to be used. We identify the BSD
network with the dentate gyrus and show that the dentate gyrus nicely
satisfies the strict requirements of the simplified BSD stage.
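The two-stage IC scheme described in the abstract can be illustrated with a minimal numerical sketch. This is our own toy construction, not the paper's network: the mixing matrix, sample sizes, and the kurtosis-based fixed-point rule (a standard FastICA-style contrast) are illustrative assumptions. It shows the division of labor claimed above: a linear first stage (whitening) and a non-linear learning rule to form an IC, after which outputting the IC is a purely linear operation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000

# Two independent unit-variance sources: one sub-Gaussian, one super-Gaussian.
s1 = rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), n)
s2 = rng.laplace(scale=1.0 / np.sqrt(2.0), size=n)
S = np.vstack([s1, s2])

A = np.array([[1.0, 0.6], [0.4, 1.0]])  # hypothetical mixing matrix
X = A @ S                               # observed mixtures

# Stage 1 (linear): whitening, i.e., decorrelation to unit covariance.
C = np.cov(X)
d, E = np.linalg.eigh(C)
Z = E @ np.diag(d ** -0.5) @ E.T @ X

# Stage 2 (non-linear): kurtosis-based fixed-point rule for one unit.
w = rng.normal(size=2)
w /= np.linalg.norm(w)
for _ in range(30):
    w = (Z * (w @ Z) ** 3).mean(axis=1) - 3.0 * w
    w /= np.linalg.norm(w)

# After learning, emitting the IC is linear: y = w @ Z.
y = w @ Z
corr = [abs(np.corrcoef(y, s)[0, 1]) for s in (s1, s2)]
print(max(corr))  # close to 1: one source recovered up to sign
```

The non-linearity (the cubic term) is needed only while the weight vector is being tuned; once tuned, the output is a linear projection, mirroring the proposed theta-phase/sharp-wave division.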
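The claim that prediction is learnable by Hebbian means once past and present activity are concurrently available can also be sketched. The following toy model is an assumption-laden illustration (linear dynamics `A`, an error-corrected Hebbian outer-product update, and all parameter values are ours, not the paper's): a delay line makes x(t-1) available alongside x(t), and a local outer-product update learns the predictive weights.

```python
import numpy as np

rng = np.random.default_rng(0)

dim, steps, lr = 5, 20000, 0.01
A = rng.normal(scale=0.3, size=(dim, dim))  # unknown stable dynamics: x(t) = A x(t-1) + noise
W = np.zeros((dim, dim))                    # learned predictive weights

x_prev = rng.normal(size=dim)               # delayed copy of the activity
for _ in range(steps):
    x = A @ x_prev + rng.normal(size=dim)   # present activity
    err = x - W @ x_prev                    # prediction mismatch ("novelty" signal)
    W += lr * np.outer(err, x_prev)         # outer product of post- and presynaptic activity
    x_prev = x                              # the loop delay supplies the past concurrently

print(np.max(np.abs(W - A)))                # small: W approximates the unknown dynamics A
```

The update uses only locally available quantities (the delayed presynaptic activity and the postsynaptic mismatch), which is the sense in which the abstract calls the learning Hebbian; the mismatch term also previews the error-driven role assigned to the HC below.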
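Finally, the error-compensating reconstruction loop can be sketched in a few lines. Again a hypothetical linear toy, not the EC-HC circuit itself: a hidden layer (standing in for the loop's internal representation) is driven only by the mismatch between the primary input and its reconstruction, and at equilibrium the mismatch vanishes, as required for non-novel, well-predicted input.

```python
import numpy as np

rng = np.random.default_rng(1)

n_in, n_hid = 4, 32
Q = rng.normal(size=(n_in, n_hid)) / np.sqrt(n_hid)  # generative (reconstruction) weights
x = rng.normal(size=n_in)                            # primary input to the loop

h = np.zeros(n_hid)                                  # internal representation
for _ in range(500):
    e = x - Q @ h          # mismatch between primary and reconstructed input
    h += 0.1 * Q.T @ e     # only the error drives the hidden layer

print(np.linalg.norm(x - Q @ h))  # near zero: the input is fully reconstructed
```

Novel input (an `x` the weights cannot yet reconstruct) would leave a persistent error, which is exactly the signal the abstract routes into the HC for statistical analysis.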