Automating Bias Testing of LLMs
Morales S., Clarisó R., Cabot J.
Proceedings - 2023 38th IEEE/ACM International Conference on Automated Software Engineering, ASE 2023, pp. 1705-1707, 2023
Large Language Models (LLMs) are being quickly integrated in a myriad of software applications. This may introduce a number of biases, such as gender, age or ethnicity, in the behavior of such applications. To face this challenge, we explore the automatic generation of tests suites to assess the potential biases of an LLM. Each test is defined as a prompt used as input to the LLM and a test oracle that analyses the LLM output to detect the presence of biases.
doi:10.1109/ASE56229.2023.00018