MASSIVE dataset and Massively Multilingual NLU (MMNLU-22) competition and workshop will help researchers scale natural-language-understanding technology to every language on Earth
Imagine that all people around the world could use voice AI systems such as Alexa in their native tongues. One promising approach to realizing this vision is massively multilingual natural-language understanding (MMNLU), a paradigm in which a single machine learning model can parse and understand inputs from many typologically diverse languages. By learning a shared data representation that spans languages, the model can transfer knowledge from languages with abundant training data to those in which training data is scarce.
Today we are pleased to make three announcements related to MMNLU.
First, we are releasing a new dataset called MASSIVE, composed of one million labeled utterances spanning 51 languages, along with open-source code that demonstrates how to perform massively multilingual NLU modeling and lets practitioners re-create the intent-classification and slot-filling baselines reported in our paper.
Second, we are launching a new competition based on the MASSIVE dataset: Massively Multilingual NLU 2022 (MMNLU-22).
And third, we will cohost a workshop at EMNLP 2022, in Abu Dhabi and online, also called Massively Multilingual NLU 2022, which will highlight results from the competition and include presentations from invited speakers as well as oral and poster sessions for submitted papers on multilingual natural-language processing (NLP).
“We are very excited to share this large multilingual dataset with the worldwide language research community,” says Prem Natarajan, vice president of Alexa AI Natural Understanding. “We hope that this dataset will enable researchers across the world to drive new advances in multilingual language understanding that expand the availability and reach of conversational-AI technologies.”
MASSIVE is a parallel dataset, meaning that every utterance is given in all 51 languages. This enables models to learn shared representations of utterances with the same intents, regardless of language, facilitating cross-linguistic training on natural-language-understanding (NLU) tasks. It also allows for adaptation to other NLP tasks such as machine translation, multilingual paraphrasing, new linguistic analyses of imperative morphologies, and more.
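As a minimal sketch of what this parallel structure enables, the snippet below loads two language subsets of MASSIVE and pairs translations of the same underlying utterance by their shared example IDs. It assumes the dataset is published on the Hugging Face Hub as "AmazonScience/massive" with per-locale configurations and `id`, `utt`, and `intent` fields; consult the MASSIVE release for the authoritative identifier and schema.

```python
# A minimal sketch, assuming a Hugging Face Hub release named
# "AmazonScience/massive" with per-locale configs and id/utt/intent fields;
# check the official MASSIVE repository for the authoritative schema.
from datasets import load_dataset

en = load_dataset("AmazonScience/massive", "en-US", split="train")
fr = load_dataset("AmazonScience/massive", "fr-FR", split="train")

# Because the dataset is parallel, the same example ID indexes translations
# of the same utterance, annotated with the same intent label.
fr_by_id = {ex["id"]: ex for ex in fr}
for ex in en.select(range(3)):
    pair = fr_by_id[ex["id"]]
    print(ex["intent"], "|", ex["utt"], "->", pair["utt"])
```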
NLU — a subdiscipline of NLP — is a machine’s ability to understand the meaning of a text and identify the relevant entities. For instance, given the utterance “What is the temperature in New York?”, an NLU model might classify the intent as “weather_query” and recognize relevant entities as “weather_descriptor: temperature” and “place_name: new york.”
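To make that concrete, here is a hypothetical illustration of the kind of structured output an NLU model produces for that utterance; the dataclass and field names are ours, for illustration only, and are not part of the MASSIVE codebase.

```python
# A hypothetical representation of an NLU parse; the class and field names
# are illustrative, not from the MASSIVE codebase.
from dataclasses import dataclass

@dataclass
class Slot:
    slot_type: str  # e.g., "weather_descriptor" or "place_name"
    value: str      # the span of the utterance that fills the slot

@dataclass
class NluParse:
    intent: str
    slots: list[Slot]

# The parse for "What is the temperature in New York?"
parse = NluParse(
    intent="weather_query",
    slots=[
        Slot("weather_descriptor", "temperature"),
        Slot("place_name", "new york"),
    ],
)
print(parse)
```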
Further details can be found in our paper and the accompanying open-source code release.