
Generative artificial intelligence models have been found to produce biased responses that can lead to discrimination. Recent studies highlight that large language models such as Llama 2 and GPT-2 tend to exhibit bias against women. Similar bias appears in the visual representations of professions generated by Ernie Bot, Chinese tech company Baidu’s answer to ChatGPT.

Bias in AI models arises from the data they are trained on, which is created by humans and reflects human prejudices. To avoid biased or misleading responses from genAI models, data sets must be carefully selected and curated. Obtaining appropriate data, however, has been a challenge for both the private and public sectors in India.
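The kind of data-set curation described above often starts with a simple audit. The sketch below is a hypothetical illustration, not any organization's actual pipeline: it counts male- and female-coded terms in a toy corpus, a crude first signal of the representation imbalance a curator would then address.

```python
from collections import Counter

# Illustrative term lists; a real audit would use far richer lexicons
# and account for context, not just surface tokens.
MALE_TERMS = {"he", "him", "his", "man", "men"}
FEMALE_TERMS = {"she", "her", "hers", "woman", "women"}

def gender_term_counts(corpus):
    """Count male- and female-coded terms across a list of documents."""
    counts = Counter()
    for doc in corpus:
        for token in doc.lower().split():
            word = token.strip(".,!?;:")
            if word in MALE_TERMS:
                counts["male"] += 1
            elif word in FEMALE_TERMS:
                counts["female"] += 1
    return counts["male"], counts["female"]

# Invented sample documents for demonstration only.
corpus = [
    "He is a doctor and his colleague is an engineer.",
    "She is a nurse.",
]
male, female = gender_term_counts(corpus)
print(male, female)  # → 2 1
```

A large skew in such counts would flag the corpus for rebalancing or trimming before training.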

Ivana Bartoletti, Wipro’s Global Chief Privacy and AI Governance Officer, emphasizes the importance of addressing bias in AI models. She points out that genAI automates human bias, producing problems such as advertisements for higher-paying jobs, or offers of higher credit limits, being shown more often to men than to women. This automation of bias reflects historical pay disparities between genders.
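The disparity Bartoletti describes can be quantified with a standard fairness metric, demographic parity: the gap between groups in the rate of receiving a favorable outcome. The sketch below uses invented credit-limit decisions purely for illustration; the group labels and data are not from any real system.

```python
def demographic_parity_diff(decisions):
    """decisions: list of (group, offered) pairs, where offered is a bool
    indicating a favorable outcome (e.g. a higher credit limit).
    Returns P(offered | "men") - P(offered | "women")."""
    totals = {}
    for group, offered in decisions:
        total, positives = totals.get(group, (0, 0))
        totals[group] = (total + 1, positives + (1 if offered else 0))
    rates = {g: p / t for g, (t, p) in totals.items()}
    return rates["men"] - rates["women"]

# Toy data: men receive the favorable outcome at 0.75, women at 0.25.
toy = [
    ("men", True), ("men", True), ("men", False), ("men", True),
    ("women", True), ("women", False), ("women", False), ("women", False),
]
print(demographic_parity_diff(toy))  # → 0.5
```

A difference near zero indicates parity; the large gap here is exactly the pattern that automated systems can reproduce at scale when trained on historically biased data.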
