Events

Most computer scientists would say that computational linguistics underwent something of a revolution in the early 1990s, driven by the development and subsequent popularization of statistical approaches to problem-solving. These approaches, embraced wholeheartedly by computer scientists, notoriously eschew traditional linguistic characterization in favor of predictive performance. This shift mirrors a broader trend in science, though natural language processing is arguably its clearest example.

Machine translation (MT) was at the heart of this shift in computer science. After rule-based approaches to MT largely failed prior to 1990, the field has been almost totally dominated by statistical techniques. We are now attempting to push the boundaries of machine translation by creating systems that translate (or interpret) incrementally from SOV to SVO languages, starting with Japanese to English. Verbs and other items that appear early in an SVO sentence but late in its SOV source are essential to the output; they must therefore be predicted before they are observed if the system is to function incrementally. In addition, the system, like a human interpreter, must be able to judge when it has sufficient information to commit to a partial translation, balancing speed and accuracy: it must be able to "think on its feet." To do this, we turn to machine learning approaches.
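To make the commit decision concrete, here is a minimal Python sketch of the idea, not the system described in the talk: predict the sentence-final verb from the words observed so far, and emit a partial translation only once the prediction clears a confidence threshold. Everything here (the toy corpus, train_verb_model, predict_verb, COMMIT_THRESHOLD) is an illustrative assumption.

    # Toy sketch: predict the final verb of an SOV sentence from its prefix,
    # and commit once prediction confidence exceeds a threshold.
    # All names and data are hypothetical illustrations.
    from collections import Counter

    # Miniature "training data": Japanese-style (SOV) word sequences
    # paired with the sentence-final verb we want to predict.
    CORPUS = [
        (("watashi", "wa", "ringo", "o"), "taberu"),  # "I eat an apple"
        (("watashi", "wa", "mizu", "o"), "nomu"),     # "I drink water"
        (("kare", "wa", "ringo", "o"), "taberu"),     # "he eats an apple"
        (("kare", "wa", "hon", "o"), "yomu"),         # "he reads a book"
    ]

    def train_verb_model(corpus):
        """Count which sentence-final verbs follow each observed word."""
        counts = {}
        for prefix, verb in corpus:
            for word in prefix:
                counts.setdefault(word, Counter())[verb] += 1
        return counts

    def predict_verb(model, prefix):
        """Return (best_verb, confidence) given the words seen so far."""
        votes = Counter()
        for word in prefix:
            votes.update(model.get(word, {}))
        if not votes:
            return None, 0.0
        verb, count = votes.most_common(1)[0]
        return verb, count / sum(votes.values())

    COMMIT_THRESHOLD = 0.6  # commit to a partial translation above this

    def translate_incrementally(model, stream):
        """Consume words one at a time; stop as soon as we can commit."""
        prefix = []
        for word in stream:
            prefix.append(word)
            verb, conf = predict_verb(model, prefix)
            if verb is not None and conf >= COMMIT_THRESHOLD:
                # A real system would emit the SVO prefix here, placing
                # the predicted verb before the remaining arguments.
                return verb, conf, len(prefix)
        return None, 0.0, len(prefix)

    if __name__ == "__main__":
        model = train_verb_model(CORPUS)
        verb, conf, seen = translate_incrementally(
            model, ["kare", "wa", "ringo", "o"])
        print(f"predicted {verb!r} with confidence {conf:.2f} "
              f"after {seen} words")

On this toy data the loop commits to "taberu" after only three source words, before the verb itself appears. A real system would replace the word-vote counter with a statistical translation model, but the structure of the decision, predict early and commit only above a learned confidence threshold, is the same.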

In this talk, I will discuss the history of machine translation and how we are attempting to build on its successes to create self-optimizing incremental translation systems, with particular focus on predicting items that occur late in Japanese sentences before they are observed.