In the ‘Wild West’ of AI Chatbots, Subtle Biases Related to Race and Caste Often Go Unchecked
University of Washington researchers developed a system for detecting subtle biases in AI models. Seven of the eight popular AI models they tested generated significant amounts of biased text in conversations about race and caste, particularly when discussing caste. Open-source models fared far worse than the two proprietary ChatGPT models.