The standard framework of cognitive science – the computational-representational theory of mind (CRT) – holds that mental processes result from brains implementing computations that range over representations. These representations have standardly been thought of as intentional states – that is, states with contents, states that are about, or represent, particular things. For example, one standard interpretation of generative linguistics is that the mind performs computations over states that are representations of, inter alia, noun phrases, phonemes, and theta roles. Recently, however, Chomsky (2000), Burge (2010), and others generally supposed to be adherents of the CRT have argued that at least some of the so-called representations posited under the framework are not intentional: they are not about anything at all. Jones and Love (2011) assert, in particular, that Bayesian accounts of perception rely on computations that range over non-intentional states. This claim is especially remarkable because Bayesian inference is often construed as a variety of hypothesis testing – a process notoriously difficult to characterize in non-intentional terms.

Thus far, the arguments for these claims have been unsatisfying. To rectify the situation, I’ll examine a particular Bayesian account of color perception given by Allred (2012) and Brainard et al. (1997; 2008). Even though their account is couched in the intentional idiom of hypothesis testing, I’ll argue that we can preserve its explanatory power without attributing intentional states to the early perceptual system it describes. The conclusion of this analysis is not that intentionality can be eliminated wholesale from cognitive science, as Stich, Dennett, and the early behaviorists have advocated. Rather, analyzing why intentional explanation proves unnecessary in this particular case sheds light on the conditions under which intentional explanation is explanatorily fruitful.