ABSTRACT: In this article I show how three underlying
principles of several models --- including the (generalized) context
model (Medin & Schaffer, 1978; Nosofsky, 1986) and standard back
propagation networks (Rumelhart, Hinton & Williams, 1986) --- have
been synthesized into the ALCOVE model (Kruschke, 1992). I illustrate
the importance of each principle with data from human
category-learning experiments. Whereas models that incorporate the
three principles can fit the human data from various experiments
reasonably well, several other models that lack one or more of the
principles fail to capture human performance.
I discuss the genealogy of ALCOVE and its relations to standard back
propagation, radial basis function networks, and the generalized
context model, among others. Each of the models implements one or
more of the principles of error-driven learning, dimensional attention
shifts, and quasi-local representation.
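
To make the three principles concrete, the following Python sketch shows how an ALCOVE-style learner can combine them: exemplar (radial basis) hidden nodes give a quasi-local representation, learned attention strengths stretch or shrink the stimulus dimensions, and gradient descent on the output error drives learning of both association weights and attention strengths. This is an illustrative sketch under assumed conventions, not Kruschke's published implementation; the class and parameter names (AlcoveSketch, c, lr_assoc, lr_attn) and the specific default values are assumptions introduced here.

    import numpy as np

    class AlcoveSketch:
        def __init__(self, exemplars, n_categories, c=1.0,
                     lr_assoc=0.1, lr_attn=0.01):
            # Hidden-node positions in psychological space (quasi-local representation).
            self.exemplars = np.asarray(exemplars, float)
            n_hidden, n_dims = self.exemplars.shape
            self.alpha = np.ones(n_dims)                 # dimensional attention strengths
            self.w = np.zeros((n_categories, n_hidden))  # association weights
            self.c = c                                   # specificity of hidden nodes
            self.lr_assoc = lr_assoc                     # learning rate for associations
            self.lr_attn = lr_attn                       # learning rate for attention

        def forward(self, x):
            # Hidden activation falls off exponentially with the attention-weighted
            # city-block distance from each stored exemplar.
            x = np.asarray(x, float)
            dist = np.abs(self.exemplars - x)            # (n_hidden, n_dims)
            a_hid = np.exp(-self.c * dist @ self.alpha)  # (n_hidden,)
            a_out = self.w @ a_hid                       # (n_categories,)
            return a_hid, a_out, dist

        def train_trial(self, x, category):
            a_hid, a_out, dist = self.forward(x)
            # "Humble teacher" targets: no error when an output already exceeds its target.
            t = np.where(np.arange(len(a_out)) == category,
                         np.maximum(1.0, a_out), np.minimum(-1.0, a_out))
            err = t - a_out
            # Error-driven learning: gradient descent on 0.5 * sum(err**2)
            # adjusts both the association weights and the attention strengths.
            self.w += self.lr_assoc * np.outer(err, a_hid)
            delta_alpha = -self.c * (a_hid * (err @ self.w)) @ dist
            self.alpha = np.maximum(0.0, self.alpha + self.lr_attn * delta_alpha)
            return a_out

Trained on a sequence of stimulus-category trials, such a learner shifts attention toward diagnostic dimensions and away from irrelevant ones, which is the behavior the attention-shift principle is meant to capture.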