A general architecture of language production proceeds from message encoding to lemma selection, lexeme retrieval, segmental retrieval, syllable construction, and finally articulation (Ferreira, 2010). The proximate unit is the first selectable phonological unit after lexeme retrieval in spoken word production, and previous research has suggested that the proximate unit differs across languages (O'Seaghdha et al., 2010). The phoneme is considered to be the proximate unit in English, whereas the syllable is the proximate unit in Mandarin Chinese. One may argue, however, that this difference is related to the orthographic features of the languages. Previous studies have shown that orthographic information exerts an important influence on spoken word recognition across languages (e.g., Ziegler & Ferrand, 1998; Zou et al., 2012), possibly because orthographic knowledge changes phonological representations. It remains unclear, however, whether a similar effect arises in spoken word production. In this talk, I will propose a set of experiments that examine (1) whether exposure to orthographic information influences the proximate unit in spoken word production, and (2) whether orthographic strategies are used in spoken word production even when no explicit orthographic information is presented. Our preliminary results suggest that orthographic information indeed has an effect, and that orthographic strategies are utilized in a picture-pair association task even when no explicit orthographic information is presented.