How AI Leverages Our Confirmation Bias

All humans seek validation of their thoughts; even when we are confident, we still crave external confirmation. At a cognitive level, humans optimize for coherence, not truth.


AI has become that external validation source, and it plays the role well. When you feed it a prompt, it is not just answering a question; it is analyzing your language, your framing, and your implicit assumptions. It then projects back an answer that is statistically likely to be satisfying given the prompt. It is a mirror that reflects a polished, confident version of the user's own query. This is why it plays the validation role so well: it is designed to complete your thought, not challenge it.


From the prompt alone, it infers what the human wants to hear. If you press with deeper questions, it can pull up references (which you can find for almost any position these days) to validate both its stance and yours. And most of us want to believe that story, even though it is never truly validated.


The same thing happened in the early days of search engines. Google used to have an "I'm Feeling Lucky" button that took you straight to the top result. Since then, humans have adapted: we learned to read beyond the first link and compare results across pages. Remember how other digital technologies of that era, such as social media and instant messaging, were used the same way in their early days.


I know a few smart people who use AI differently. They look for answers that contradict their beliefs, asking the AI to find evidence for the opposite of what they think. Another approach is to use multiple LLMs as filters: take an answer from one and pose it as a question to another, drilling down toward the truth. But AI is also learning to tell the difference between a human-written prompt and an AI-generated one, so the first approach seems to be the one that works for now.
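The two approaches above can be sketched in a few lines of code. This is a minimal illustration, not a real implementation: `ask_llm` is a hypothetical stand-in for an actual LLM API call, stubbed here with a canned reply so the sketch runs standalone, and the model names are placeholders.

```python
def ask_llm(model: str, prompt: str) -> str:
    """Hypothetical LLM call; stubbed with a canned echo so this runs offline."""
    return f"[{model}] response to: {prompt}"

def steelman_the_opposite(belief: str) -> str:
    """Approach 1: ask the model to argue against your own belief."""
    prompt = (
        f"I believe: {belief}\n"
        "Give the strongest evidence and arguments AGAINST this belief."
    )
    return ask_llm("model-a", prompt)

def cross_examine(question: str, models: list[str]) -> str:
    """Approach 2: chain LLMs, feeding each answer to the next as a claim to critique."""
    answer = ask_llm(models[0], question)
    for model in models[1:]:
        answer = ask_llm(model, f"Critique this claim and point out what it gets wrong: {answer}")
    return answer

print(steelman_the_opposite("remote work is always more productive"))
print(cross_examine("Is remote work more productive?", ["model-a", "model-b", "model-c"]))
```

The point of both sketches is the same: the prompt deliberately asks for disagreement rather than completion, which is the one thing a validation-seeking prompt never does.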


But we have to evolve and adapt to this external heuristic we keep reaching for; otherwise we will be bulldozed by AI, it will become the final authority, and that authority is exactly the avenue companies may cash in on.


I hope we come out of this cycle quickly, because this time the tech is learning and evolving along with us.


For validation (not confirmation bias, but as a proof point), I pasted this blog into the top three LLMs. All of them gave a similar opinion: control rests with us. They are just tools that help us find content confirming our opinions, based on our language, tone, and assumptions. It is not their duty to correct us; if one's tone and language are firm, they will go along with it. That is the danger.

