Events

The learning by gradient descent that occurs when deep learning networks are trained by an omniscient supervisor does not resemble the learning that animals do. The most basic process in animal learning is the storing in memory of quantitative facts extracted from sensory experience by problem-specific computations. Because neural nets lack a symbolic memory, they do not store quantitative facts (numbers). The second most basic process is the adoption of an appropriate stochastic model. The models animals adopt rest on strong problem-specific ontological commitments. These models enable the efficient encoding of the facts (minimizing memory load) and enable actions that anticipate future events. I illustrate these aspects of animal learning with three examples: the learning of the solar ephemeris by bees; path integration and the learning of the cognitive map in ants and bees; and the learning of short- and long-trial probabilities in mice.
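Of the three examples, path integration has the most transparent computational core: the animal maintains a running home vector by summing its successive displacement vectors. A minimal sketch of that computation, assuming headings come from a compass sense and distances from an odometer (the function name and input format are illustrative, not taken from the abstract):

```python
import math

def path_integrate(steps):
    """Maintain a home vector by summing displacement vectors.

    steps: iterable of (heading_radians, distance) pairs, as an ant
    might derive from a celestial compass and a stride odometer
    (a modeling assumption for this sketch).
    Returns (bearing_home_radians, distance_home).
    """
    x = y = 0.0
    for heading, dist in steps:
        # Accumulate the net displacement from the nest.
        x += dist * math.cos(heading)
        y += dist * math.sin(heading)
    # The home vector is the opposite of the net displacement.
    bearing_home = math.atan2(-y, -x)
    distance_home = math.hypot(x, y)
    return bearing_home, distance_home

# Outward path: 10 units east, then 10 units north.
bearing, dist = path_integrate([(0.0, 10.0), (math.pi / 2, 10.0)])
```

Note that the result is a pair of numbers carried in memory, which is the point of the abstract's contrast: this is a stored quantitative fact, continually updated, not a weight adjustment.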