Hebbian Learning of Artificial Grammars

G Ganis, H Schendan

Research output: Contribution to journal › Conference article › Peer-reviewed


Abstract

A connectionist model is presented that uses a Hebbian learning rule to acquire knowledge about an artificial grammar (AG). The validity of the model was evaluated by simulating two classic experiments from the AG learning literature. The first experiment showed that human subjects were significantly better at learning to recall a set of strings generated by an AG than a set generated by a random process. The model shows the same pattern of performance. The second experiment showed that human subjects were able to generalize the knowledge they acquired during AG learning to novel strings generated by the same grammar. The model is also capable of generalization, and the percentages of errors made by human subjects and by the model are qualitatively very similar. Overall, the model suggests that Hebbian learning is a viable candidate for the mechanism by which human subjects become sensitive to the regularities present in AGs. From the perspective of computational neuroscience, the implications of the model for implicit learning theory, as well as what the model may suggest about the relationship between implicit and explicit memory, are discussed.
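The abstract does not specify the network architecture, so the following is only a minimal sketch of the general idea: a Hebbian rule that strengthens connections between co-occurring symbols in AG training strings, whose accumulated weights can then be used to assess the familiarity of novel strings. The alphabet, the bigram encoding, and the familiarity score are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of Hebbian learning applied to artificial-grammar strings.
# The encoding (adjacent-symbol co-activation) and the familiarity score
# are illustrative assumptions, not the model described in the paper.
import numpy as np

ALPHABET = "MTVRX"                      # symbols in the style of classic AG studies
IDX = {c: i for i, c in enumerate(ALPHABET)}
N = len(ALPHABET)

def hebbian_train(strings, eta=0.1):
    """Accumulate Hebbian weights W[i, j] += eta over adjacent (co-active) symbols."""
    W = np.zeros((N, N))
    for s in strings:
        for a, b in zip(s, s[1:]):      # pre/post units = adjacent letters in the string
            W[IDX[a], IDX[b]] += eta    # Hebbian update: strengthen co-occurring pairs
    return W

def familiarity(W, s):
    """Mean learned association along a string; higher = more grammar-like."""
    pairs = list(zip(s, s[1:]))
    return sum(W[IDX[a], IDX[b]] for a, b in pairs) / len(pairs)

# Illustrative training strings (not taken from the paper).
train = ["MTVRX", "MTTVRX", "MVRX", "MTVT", "MVRXRX"]
W = hebbian_train(train)

print(familiarity(W, "MTVRX"))   # string consistent with the training regularities
print(familiarity(W, "XRVTM"))   # reversed string: weaker learned associations
```

In such a sketch, generalization to novel strings falls out of the weights alone: any string built from familiar symbol transitions scores high even if it never appeared during training, which is the qualitative behaviour the abstract attributes to both human subjects and the model.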
Original language: English
Pages (from-to): 838-843
Number of pages: 6
Journal: Proceedings of the Cognitive Science Society
Volume: 14
Publication status: Published - 1992
Event: Fourteenth Annual Conference of the Cognitive Science Society - Bloomington, Indiana
Duration: 1 Jan 1992 → …
