Recent Study Shows That ChatGPT Can Give Biased Answers About Environmental Justice Issues

A recent study from Virginia Tech, a university in the US, has highlighted bias on environmental issues in OpenAI’s artificial intelligence chatbot, ChatGPT. The study shows that when researchers asked ChatGPT about environmental justice issues in different US counties, its responses varied depending on the location. The researchers also found that ChatGPT has limitations that prevent it from delivering location-specific information for many counties.

According to the report, the information ChatGPT provides is skewed toward larger, more densely populated states. In states such as Delaware and California, less than 1% of residents lived in counties for which ChatGPT could not provide location-specific information about environmental justice issues; these are largely urban areas. In contrast, in rural states such as Idaho and New Hampshire, more than 90% of the population lived in counties without local access to such information.

Kim, a lecturer in Virginia Tech’s Department of Geography, says there is a strong need for further research to address these geographic biases in ChatGPT’s environmental information. The report also includes a map of the USA showing, by location, the populations without access to information on environmental justice issues.

This study offers useful insight into ChatGPT’s biases on political and environmental issues around the world. Researchers from the UK and Brazil have already published a study documenting errors and biases in ChatGPT’s answers. These findings are cause for concern because such errors can mislead ChatGPT users and anyone who reads its responses.

In conclusion, the report is a strong indicator that, even though ChatGPT is an impressive AI chatbot, it still cannot provide unbiased information on certain political and environmental issues. As technology progresses rapidly and public opinion shifts along with it, AI models need to be as accurate and unbiased as possible in their responses. AI developers should keep working to make these models as reliable as they can be.

