On the Impact of Language Selection for Training and Evaluating Programming Language Models


Abstract

The recent advancements in Transformer-based Language Models have demonstrated significant potential in enhancing the multilingual capabilities of these models. This remarkable progress applies not only to natural language tasks but also to programming languages. Despite the ability of these models to learn from multiple languages, evaluations typically focus on particular combinations of the same languages. In this study, we evaluate the similarity of programming languages by analyzing their representations using a CodeBERT-based model. Our experiments reveal that token representations in languages such as C++, Python, and Java exhibit proximity to one another, whereas the same tokens in languages such as Mathematica and R display significant dissimilarity. Our findings suggest that this phenomenon can result in performance challenges when dealing with diverse languages. Thus, we recommend using our similarity measure to select a diverse set of programming languages when training and evaluating future models.
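To make the idea of comparing representations concrete, the sketch below shows one way to embed code snippets with a CodeBERT-style encoder and compare them with cosine similarity. The checkpoint (`microsoft/codebert-base`), the mean-pooling step, and the example snippets are illustrative assumptions; the paper's actual model and similarity measure may differ.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Illustrative CodeBERT checkpoint; the paper's exact model may differ.
tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModel.from_pretrained("microsoft/codebert-base")
model.eval()

def embed(snippet: str) -> torch.Tensor:
    """Mean-pool the last hidden states of a code snippet into one vector."""
    inputs = tokenizer(snippet, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # shape: (1, seq_len, 768)
    return hidden.mean(dim=1).squeeze(0)

# Hypothetical snippets implementing the same function in two languages.
python_code = "def add(a, b):\n    return a + b"
r_code = "add <- function(a, b) {\n  a + b\n}"

similarity = torch.nn.functional.cosine_similarity(
    embed(python_code), embed(r_code), dim=0
)
print(f"Cosine similarity: {similarity.item():.3f}")
```

Running this kind of comparison over many snippet pairs per language pair gives a rough picture of which languages sit close together in the model's representation space and which, like R or Mathematica in the paper's findings, sit further apart.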

Publication
In the 23rd IEEE International Working Conference on Source Code Analysis and Manipulation (SCAM)
Jonathan Katzy
PhD candidate, Multi-lingual Language Models for Software Engineering

My research focuses on the multi-lingual performance of Large Language Models when applied to Software Engineering tasks.