IJCOPE Journal


International Journal of Creative and Open Research in Engineering and Management

A Peer-Reviewed, Open-Access International Journal Supporting Multidisciplinary Research, Digital Publishing Standards, DOI Registration, and Academic Indexing.
Journal Information
ISSN: 3108-1754 (Online)
Crossref DOI: Available
ISO Certification: 9001:2015
Publication Fee: 599/- INR
Compliance: UGC Journal Norms
License: CC BY 4.0
Peer Review: Double Blind
Volume 02, Issue 04

Published on: April 2026

FAIRNESS IN MULTILINGUAL LARGE LANGUAGE MODELS: ADDRESSING THE LANGUAGE DISPARITY GAP IN AI SYSTEMS

Upadhyay Awanish Dilipbhai, Durgesh Yadav, Lal Bahadur Lohar

Department of Computer Science and Engineering, Parul Institute of Technology,

Parul University, Gujarat, India

 

Article Status

Plagiarism Check: Passed · Peer Reviewed · Open Access


Abstract

Current Large Language Models (LLMs) exhibit significant performance disparities across languages, with English and other high-resource languages receiving disproportionate model capacity and training data, while speakers of African, Southeast Asian, and Indigenous languages face substantially degraded service quality. This research addresses the critical challenge of fairness in multilingual LLMs by surveying recent developments (2023–2026), analyzing underserved language groups, and proposing methodological approaches to close the language fairness gap. We identify three primary dimensions of unfairness: data scarcity in low-resource languages, suboptimal model architectures for multilingual transfer, and inadequate fairness evaluation metrics. Through analysis of existing benchmarks (XGLUE, Masakhane, FLORES-200, NLLB), we demonstrate that performance parity across language families requires integrated approaches combining data augmentation, architectural innovations, and culturally informed fairness metrics. Our work introduces the Cross-Lingual Fairness Index (CLFI), a novel metric extending the PEER (Probability of Equal Expected Rank) framework to LLM generation tasks, enabling quantitative assessment of language equity. Case studies from initiatives including Masakhane, IndicNLP, and Meta's No Language Left Behind (NLLB) demonstrate the feasibility of targeted interventions. We conclude that achieving fairness in multilingual LLMs requires sustained investment in low-resource languages, participatory involvement of native speakers, and adoption of language-aware evaluation protocols throughout the model development lifecycle.
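The abstract does not give the CLFI formula itself, so the following is only an illustrative, hypothetical sketch of the general idea of a cross-lingual parity score: given per-language scores on a shared benchmark, compare the worst-served language against the average. The function name, the parity definition, and the example accuracies are all assumptions for illustration, not the paper's actual metric.

```python
# Hypothetical sketch only: the paper's CLFI definition is not reproduced
# in this abstract. This shows one simple way a cross-lingual parity
# score could be computed from per-language benchmark results.
from statistics import mean


def language_parity_score(scores: dict[str, float]) -> float:
    """Ratio of the worst-served language's score to the mean score.

    1.0 means perfect parity across languages; values near 0 mean at
    least one language is badly underserved relative to the average.
    """
    if not scores:
        raise ValueError("no per-language scores provided")
    avg = mean(scores.values())
    return min(scores.values()) / avg if avg > 0 else 0.0


# Example with made-up accuracies on a shared benchmark
results = {"en": 0.91, "hi": 0.74, "sw": 0.52, "yo": 0.41}
print(round(language_parity_score(results), 3))  # → 0.636
```

A min-over-mean ratio is deliberately pessimistic: unlike a plain average, it cannot be inflated by strong English performance, which matches the abstract's emphasis on the worst-served language communities.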

Index Terms—Algorithmic Bias, AI Localization, Cross-Lingual Transfer, Fairness Metrics, Language Equity, Language Fairness, Low-Resource Languages, Multilingual LLMs

How to Cite this Paper

Dilipbhai, U. A., Yadav, D., & Lohar, L. B. (2026). Fairness in Multilingual Large Language Models: Addressing the Language Disparity Gap in AI Systems. International Journal of Creative and Open Research in Engineering and Management, 02(04). https://doi.org/10.55041/ijcope.v2i4.338

Dilipbhai, Upadhyay, et al. "Fairness in Multilingual Large Language Models: Addressing the Language Disparity Gap in AI Systems." International Journal of Creative and Open Research in Engineering and Management, vol. 02, no. 04, 2026. https://doi.org/10.55041/ijcope.v2i4.338.

Dilipbhai, Upadhyay, Durgesh Yadav, and Lal Lohar. "Fairness in Multilingual Large Language Models: Addressing the Language Disparity Gap in AI Systems." International Journal of Creative and Open Research in Engineering and Management 02, no. 04 (2026). https://doi.org/10.55041/ijcope.v2i4.338.



Ethical Compliance & Review Process

  • All submissions are screened for plagiarism before review.
  • Review follows the journal's editorial policy.
  • Authors retain copyright.
  • Peer Review Type: Double-Blind Peer Review
  • Published on: Apr 13 2026

This article is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License. You are free to share and adapt this work for non-commercial purposes with proper attribution.
