IndoNLU: Benchmark and Resources for Evaluating Indonesian Natural Language Understanding

Bryan Wilie1, Karissa Vincentio2, Genta Indra Winata3, Samuel Cahyawijaya4, Xiaohong Li5, Zhi Yuan Lim5, Sidik Soleman6, Rahmad Mahendra7, Pascale Fung8, Syafri Bahar5, Ayu Purwarianti9
1Artificial Intelligence Center - ITB, 2Universitas Multimedia Nusantara (Comp.Eng.), 3The Hong Kong University of Science and Technology, 4The Hong Kong University of Science and Technology, 5Gojek, 6PT Prosa Solusi Cerdas, 7Universitas Indonesia, 8The Hong Kong University of Science and Technology, 9Bandung Institute of Technology


Although Indonesian is the fourth most frequently used language on the internet, research progress in natural language processing (NLP) for this language has been slow due to a lack of available resources. In response, we introduce IndoNLU, the first large-scale resource for training, evaluating, and benchmarking Indonesian natural language understanding tasks. IndoNLU includes twelve tasks of varying complexity, ranging from single-sentence classification to sentence-pair sequence labeling. The datasets for the tasks span different domains and styles to ensure task diversity. We also provide a set of Indonesian pre-trained models (IndoBERT) trained on a large and clean Indonesian corpus (Indo4B) collected from publicly available sources such as social media texts, blogs, news, and websites. We release baseline models for all twelve tasks, as well as a framework for benchmark evaluation, enabling everyone to benchmark their systems' performance.