New Study Reveals AI-Powered Search Engines are ‘Flawed Answer Machines’

AI-powered search engines like ChatGPT and Google’s AI Overviews continue to revolutionize how we access information—but at what cost? As reliance on artificial intelligence grows, so does the risk of misinformation, bias, and inaccurate citations. A new study reveals that these tools often act as flawed answer machines, giving users false confidence in the accuracy of their results.

From inaccurate citations to fabricated sources, misplaced confidence in AI search engines is damaging the online information ecosystem. Although users can develop media literacy skills to navigate misinformation, many everyday users remain unaware of the risks posed by algorithmic inaccuracies. As a result, they may lack the tools to critically evaluate sources and will accept AI-generated answers at face value. Quick answers to your questions may seem efficient, but it’s important to understand the inner workings of the algorithms that power them. Continue reading for some pointers to keep in mind when using AI-generated search.

1) Algorithmic Bias in AI Search Engines Reinforces Misinformation

Because AI systems are built and trained by humans, their results tend to reflect unconscious prejudices. Machine learning models are trained on datasets scraped from online content that may unintentionally encode bias, exclude certain groups, or reinforce stereotypes. Search results that repeat these societal biases amplify information that disproportionately harms marginalized groups. Decisions based on systematically biased output can disadvantage underrepresented communities and produce unfair real-world outcomes. Left unchecked, AI search engines will repeat these systematic errors and perpetuate falsehoods.

2) AI Search Engines Prioritize Relevance Over Accuracy

Because search engines optimize for engagement, they rank content largely by relevance and popularity rather than accuracy. As a result, heavily trafficked websites containing inaccurate information can outrank less-trafficked websites with accurate information. Generative search tools and AI Overviews then parse these high-ranking pages and repackage their content into summaries for users. In doing so, they divert organic traffic away from the original publishers, whereas traditional search engines would guide users directly to them. This conundrum calls into question how these AI-powered tools choose which websites to present to users.

3) The Rise of Inaccurate Citations and Fabricated Sources

Research has shown that AI search engines cite sources incorrectly an alarming share of the time, when they don’t fabricate them altogether. Audits of citation accuracy call the general reliability of AI search engines into question. The combination of common accuracy problems and the rise of incorrect citations ultimately does a disservice to legitimate web publishers. Incorrect citations delivered with confidence create a cycle of misinformation that dilutes the quality of information online. Examine the legitimacy of a chatbot’s sources instead of accepting every output outright!

4) The Danger of Misleading Confidence

When faced with a question it cannot answer, a chatbot will often offer a speculative response. Such answers may include hedging language like “it appears,” “might,” or “I think,” which signals uncertainty, yet the conversational tone can still make weak guesses sound like reliable information. Studies have also examined how AI’s inability to answer a question leads to “hallucinations” (known technically as confabulations), in which a chatbot generates a response despite lacking the information needed to answer. Rather than acknowledging their limitations, AI search engines will confidently suggest answers in a misleading manner, potentially spreading misinformation in the process.

5) Holding AI Search Engines Accountable

To crack down on these developing issues, technology companies need to be held accountable for the AI models they deploy to the public. Regulation of emerging technologies depends on public pressure for transparency about the inner workings of algorithms. AI governance advocates call for disclosure of how these algorithms function, including how search engines prioritize results when displaying rankings. The real-world consequences of AI inaccuracies must be addressed in order to rectify systemic wrongs and assign responsibility for the actions of AI-powered tools.

Fact-Check AI Search Results and Strengthen Your Media Literacy Skills

It is easy to accept AI search engine results as given, but make sure to fact-check the validity of the information. Awareness of the dangers of algorithmic overconfidence can improve your sense of AI’s actual capabilities. With more people turning to AI-powered chatbots instead of traditional search engines, auditing these models remains vital. Verifying search results against reliable sources has become a necessary precaution when seeking quality information online. Strengthening your media literacy skills will help you combat misinformation and spot incorrect AI suggestions. Use AI-powered tools as simply that: a tool, not a final decision-maker!
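For readers comfortable with a little code, one simple fact-check is verifying that a quote a chatbot attributes to a source actually appears in that source. The sketch below is purely illustrative (the function names and example text are made up for this post, and it assumes you have already copied the cited page’s text); it normalizes punctuation, case, and whitespace so cosmetic differences don’t block a match.

```python
import re

def normalize(text: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace
    so cosmetic differences don't prevent a match."""
    text = re.sub(r"[^\w\s]", "", text.lower())
    return re.sub(r"\s+", " ", text).strip()

def quote_appears(quote: str, source_text: str) -> bool:
    """Return True if the normalized quote occurs verbatim
    in the normalized source text."""
    return normalize(quote) in normalize(source_text)

# Illustrative source text (not a real citation)
source = "AI search engines often miscite their sources, the audit found."

print(quote_appears("often miscite their sources", source))  # True
print(quote_appears("rarely miscite their sources", source))  # False
```

A match doesn’t prove the claim is true, of course, and a miss doesn’t prove fabrication; it is just a quick first filter before reading the source yourself.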

Want to stay informed about the impact of AI on media and information? Subscribe for updates on AI literacy, responsible tech use, and more.