On May 12th, Google announced the release of SyntaxNet, an open-source neural network framework that provides a foundation for Natural Language Understanding (NLU) systems. Natural language understanding has proven to be one of the hardest problems for machines to crack, so this release represents a significant step forward in machine learning: both in the ability to learn and decode language and in the potential to apply that learning framework to new contexts.

The release includes all the code needed to train new SyntaxNet models. It also introduces us to Parsey McParseface, a pre-trained parser for English, ready to install and use. Parsey McParseface is an advanced algorithm that diagrams, deciphers and otherwise decodes written English, identifying the grammatical structure of each sentence. According to Google’s official announcement, released yesterday, “Parsey McParseface is built on powerful machine learning algorithms that learn to analyze the linguistic structure of language, and that can explain the functional role of each word in a given sentence.” Applications range from speech recognition to machine writing to other AI and machine learning systems. In releasing the code, Google challenged developers to improve the parser’s accuracy in English and to apply the approach to other languages.
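To make the parsing idea concrete, here is a minimal sketch of what a dependency parser produces. It uses spaCy, a different open-source parser, as a stand-in for Parsey McParseface (which itself shipped with a shell demo script rather than a Python API); the sentence, model name, and output formatting below are illustrative assumptions, not Google’s code.

```python
# Illustrative sketch only: spaCy standing in for Parsey McParseface
# to show what a dependency parse looks like.
# Assumed setup: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")  # small pre-trained English model
doc = nlp("Bob brought the pizza to Alice.")

# Each word gets a part-of-speech tag, a dependency label (its functional
# role), and a head word it attaches to; together these form a parse tree.
for token in doc:
    print(f"{token.text:<8} {token.tag_:<5} {token.dep_:<7} head: {token.head.text}")
```

For this sentence, the parser marks “brought” as the root verb, “Bob” as its subject (nsubj), and “pizza” as its direct object (dobj). Labels like these are what Google means by explaining “the functional role of each word in a given sentence.”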

As a Spanish teacher and instructional technologist, my two “yo-es” (my two selves, as Jorge Luis Borges would say) are at odds. Deep neural networks like this could potentially bring health services to millions around the world. They could provide guidance to many who need it: career training, crisis management, counseling, companionship. At the same time, could these AI systems impose a post-postcolonial system of priorities or power? Short of geopolitical transformation, could this technology simply make people linguistically lazy? Follow me on this one: what soda and sloth were to physical beings in WALL-E, sloppy syntax could become to speaking beings. It could actually happen.

Am I fascinated that language is so complex as to be beyond even the combined computational power at Google? Sí, señor. Am I curious to learn what we can learn from the research that goes into machine language learning? Sí, señor. Am I nervous about what this means for language teachers and the role we currently define for ourselves? Un poquito. Am I equally nervous about linguistic hegemony as engineered into these systems? Sí, señor. On the flipside, could these systems and technologies actually help us identify and preserve language communities? A lo mejor, sí.

This raises some fundamental questions about languages and language teaching. As modern language teachers, we are grappling with these questions as part of our departmental review.

  • What role does the study of grammar and syntax have in language study? Do students need to know their parts of speech? Do they need to know how to parse language themselves if Parsey McParseface can do it for them?
  • If Google can parse and translate everything for us, what need exists to study a foreign language?
  • What are the unique language abilities of man that no machine can replicate? Parsing is not poetic, nor is it persuasive. How important is this?
  • Bots are already writing reports and news stories; is fiction next?
  • It has now been demonstrated that machines can master an artist’s style, and I recently read that machines can replicate an artist’s unique style across genres. Are we too far removed from this in the realm of literature?
  • According to reports, the team at Google Brain has already begun feeding unpublished novels into its supercomputers with the goal of producing comprehensible creative AI writing. (One might ask whether romance novels, unpublished ones at that, are the best source of comprehensible input, but we’ll ignore that for now.) What might this mean for readers, writers and teachers?
  • If Google is learning to make sense of ambiguous utterances, does this democratize or downgrade persuasive, well-parsed prose?
  • Will bots replace teachers? What if they could be harnessed as our virtual, very qualified TAs?

All of these are fundamental questions about learning, language and literature in our times. The dawn of a new era, the Age of Affective Computing, makes it imperative that we humans answer these questions before intelligent machines answer them for us.
