Two-Step Classification using Recasted Data for Low Resource Settings

Shagun Uppal1, Vivek Gupta2, Avinash Swaminathan3, Haimin Zhang4, Debanjan Mahata4, Rakesh Gosangi5, Rajiv Ratn Shah6, Amanda Stent4
1MIDAS IIITD, 2School of Computing, University of Utah, 3NSIT Delhi, 4Bloomberg, 5Bloomberg LP, 6IIIT Delhi


Abstract

An NLP model's ability to reason should be independent of language. Previous work has used Natural Language Inference (NLI) to probe the reasoning abilities of models, mostly focusing on high-resource languages such as English. To address the scarcity of data in low-resource languages such as Hindi, we use data recasting to create NLI datasets from four existing text classification datasets. Through experiments, we show that our recasted datasets are devoid of statistical irregularities and spurious patterns. We further study the consistency of predictions made by textual entailment models and propose a consistency regulariser to remove pairwise inconsistencies in their predictions. We then propose a novel two-step classification method that uses textual-entailment predictions for the classification task, and further improve performance with a joint objective for classification and textual entailment. Supported by experimental results, we highlight the benefits of data recasting and the resulting improvements in classification performance.