Since Santelmann and Jusczyk's (1998) investigation of English-learning infants' sensitivity to discontinuous morphosyntactic dependencies, there has been considerable research into the mechanisms of non-adjacent dependency learning. Many of the ensuing studies have used artificial languages to investigate dependencies at the level of words, syllables, and segments. While these studies preserved the characteristics of natural language structure they were designed to examine, other characteristics were sacrificed, opening the studies to questions about their relevance to theories of language acquisition. I will present a series of experiments from my lab in which we attempted to make the artificial languages somewhat more natural, as well as experiments exploring how non-adjacent patterns may be leveraged for other types of learning. From the successes and failures, I conclude that findings about non-adjacent dependency learning in artificial languages might have fairly limited relevance for the acquisition of dependencies in natural languages. However, they may be relevant for understanding more broadly how learners detect structure in their input.