Events

Classical theories of cognition are symbolic, relying on rules and complex mental data structures. These approaches are powerful and inherently systematic, but they are often brittle and intolerant of noise, and they do not reflect the low-level structure of human brains. Connectionist or "sub-symbolic" approaches do reflect brain structure and are robust to noise, but they are generally viewed as opaque and often as incapable of representing the complex structures necessary for higher cognition. What if we could combine the strengths of these two approaches while eliminating their respective drawbacks? In this talk I'll describe a method by which neural networks can encode and manipulate complex symbolic structures hidden within their representations, and I'll give a large-scale example of how a network can learn from examples to create internal structures suited to a sentence comprehension task.
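
The abstract does not name the encoding method, but one well-known technique matching this description is the tensor product representation, which binds symbolic fillers (e.g., words) to structural roles (e.g., grammatical positions) via outer products and superposes the bindings in a single numeric array. The sketch below is a rough illustration of that general idea, not necessarily the method of the talk; the role names ("agent", "patient") and fillers ("dog", "cat") are invented for the example.

    import numpy as np

    rng = np.random.default_rng(0)
    dim = 16

    # Illustrative vocabulary: random vectors stand in for learned
    # representations of fillers (words) and structural roles.
    fillers = {w: rng.standard_normal(dim) for w in ("dog", "cat")}
    roles = {r: rng.standard_normal(dim) for r in ("agent", "patient")}

    # Binding: each filler-role pair becomes an outer product; the whole
    # symbolic structure is the superposition (sum) of its bindings.
    T = (np.outer(fillers["dog"], roles["agent"])
         + np.outer(fillers["cat"], roles["patient"]))

    # Unbinding: multiply the structure by a role's dual vector (a row of
    # the pseudoinverse of the role matrix) to recover the bound filler.
    R = np.stack([roles["agent"], roles["patient"]], axis=1)  # dim x 2
    duals = np.linalg.pinv(R)                                 # 2 x dim

    recovered_agent = T @ duals[0]
    print(np.allclose(recovered_agent, fillers["dog"]))  # True

Because the random role vectors are linearly independent, unbinding with the pseudoinverse recovers each filler essentially exactly; with noisy or learned representations, recovery would be only approximate.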