Language models like ChatGPT are not neutral. Without our realizing it, they can absorb all kinds of bias—for example, around gender and ethnicity—which then become increasingly embedded in the model. According to AI researcher Oskar van der Wal, we need different kinds of measurements to detect these biases so that they can be removed from the models. In his doctoral thesis, he shows how this can be done. On 29 April, he will defend his thesis at the University of Amsterdam.