Perception involves a complex interaction between feedforward (bottom-up) sensory-driven inputs and feedback (top-down) attention- and memory-driven processes. A mechanistic understanding of feedforward processing, and of its limitations, is a necessary first step towards elucidating key aspects of perceptual functions and dysfunctions. In this talk, I will review our ongoing effort to develop a large-scale, neurophysiologically accurate computational model of feedforward visual processing in the primate cortex. I will present experimental evidence from a recent electrophysiology study with awake behaving monkeys engaged in a rapid natural scene categorization task. The results suggest that bottom-up processes may provide a satisfactory description of the very first pass of information through the visual cortex. I will then survey recent work extending a feedforward hierarchical model from the processing of 2D shape to motion, depth, and color. I will show that this bio-inspired approach to computer vision performs on par with, or better than, state-of-the-art computer vision systems in several real-world applications. This demonstrates that neuroscience may contribute powerful new ideas and approaches to computer science and artificial intelligence.
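
The core idea of such a feedforward hierarchical model can be illustrated with a minimal NumPy sketch of the alternating "simple"/"complex" stages found in HMAX-style architectures: oriented (Gabor-like) filtering for selectivity, followed by local max pooling for invariance. This is an illustrative toy, not the talk's actual model; the filter parameters, image, and layer names are all assumptions.

```python
import numpy as np

def gabor(size=7, theta=0.0, wavelength=4.0, sigma=2.0):
    """Gabor filter approximating a V1 simple-cell receptive field."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / wavelength)
    return g - g.mean()  # zero-mean, so a uniform patch gives no response

def s_layer(image, filters):
    """'Simple' stage: rectified response of each oriented filter at each position."""
    h, w = image.shape
    k = filters[0].shape[0]
    out = np.empty((len(filters), h - k + 1, w - k + 1))
    for i, f in enumerate(filters):
        for r in range(h - k + 1):
            for c in range(w - k + 1):
                out[i, r, c] = abs(np.sum(image[r:r + k, c:c + k] * f))
    return out

def c_layer(maps, pool=2):
    """'Complex' stage: local max pooling yields tolerance to small shifts."""
    n, h, w = maps.shape
    return maps[:, :h - h % pool, :w - w % pool] \
        .reshape(n, h // pool, pool, w // pool, pool).max(axis=(2, 4))

# Toy input: a vertical bar on a blank background.
img = np.zeros((32, 32))
img[:, 15:17] = 1.0

thetas = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]  # four orientation channels
s1 = s_layer(img, [gabor(theta=t) for t in thetas])
c1 = c_layer(s1)
```

Stacking further S/C pairs on top of `c1` yields units with progressively larger receptive fields and greater position tolerance, which is the sense in which the model is "hierarchical"; here, the vertically tuned channel (`theta = 0`) responds most strongly to the vertical bar.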