
AI Tools Show Bias and Inaccuracy: Study Insights
A recent analysis highlights a pressing issue with generative AI tools. A study by Salesforce AI Research found that roughly one-third of the answers produced by these tools, including popular AI search engines and deep research agents, contain biased or unsupported claims. This is particularly concerning for Indian users, who increasingly rely on these technologies for accurate information.
The research assessed several AI engines, including OpenAI’s GPT-4.5 and GPT-5, alongside Bing Chat, You.com, and Perplexity. The findings were striking: about 47% of the claims in GPT-4.5’s answers were unsupported. This raises serious questions about the reliability of AI-generated content, especially in areas such as health, finance, and education, where misinformation can have severe consequences.
To evaluate these tools, researchers used a framework of metrics called DeepTRACE, which examines the depth and reliability of AI answers. The analysis split questions into two groups: contentious issues, used to detect one-sided framing, and expertise-based queries, used to assess factual accuracy. The results indicated that many AI models gave one-sided answers, and unsupported claims were common: about 23% of Bing Chat's claims were unsupported, while the figure was roughly 31% for You.com and Perplexity.
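To make figures like these concrete, here is a minimal sketch of how an unsupported-claim rate could be computed once the claims in an answer have been extracted and each has been checked against the sources the answer cites. The Claim structure and the labelling step are illustrative assumptions for this example, not the actual DeepTRACE pipeline.

```python
# Illustrative sketch only: assumes claims have already been extracted from an
# AI answer and each checked against its cited sources. Not the real DeepTRACE code.
from dataclasses import dataclass

@dataclass
class Claim:
    text: str
    supported_by_citation: bool  # did any cited source actually back this claim?

def unsupported_rate(claims: list[Claim]) -> float:
    """Fraction of claims in an answer that no cited source supports."""
    if not claims:
        return 0.0
    unsupported = sum(1 for c in claims if not c.supported_by_citation)
    return unsupported / len(claims)

# Hypothetical example: 2 of 3 extracted claims lack supporting citations.
answer_claims = [
    Claim("Statement backed by a cited source.", True),
    Claim("Statement with no citation that supports it.", False),
    Claim("Statement whose citation says something different.", False),
]
print(f"Unsupported-claim rate: {unsupported_rate(answer_claims):.0%}")  # 67%
```

On a measure of this kind, a 47% rate would mean that nearly half of the claims in an answer had no backing in the sources that answer cited.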
These findings matter for Indian users, who are witnessing rapid integration of AI across sectors. With AI tools increasingly used in education, healthcare, and even governance, it is vital that users receive accurate and balanced information. The study underscores how much these systems still need to improve in reliability.
Experts such as Felix Simon of the University of Oxford acknowledge that these systems, despite their improvements, continue to face such problems, and there is concern that users may take the information they provide at face value, helping misinformation spread. Aleksandra Urman of the University of Zurich, meanwhile, raised doubts about the study's methodology and emphasized that further research is needed.
As the use of generative AI expands, addressing these biases and inaccuracies becomes essential. In a diverse country like India, where misinformation can lead to significant harm, ensuring that AI-generated responses are accurate and well-sourced is not just a technological challenge but a societal imperative.