How AI leverages our confirmation bias
All humans seek validation of their thoughts; even the confident still look for external confirmation. At a cognitive level, humans optimize for coherence, not truth.

AI has become that external validation source, and it plays the role well. When you feed it a prompt, it is not just answering a question; it is analyzing your language, your framing, and your implicit assumptions, then projecting back an answer that is statistically likely to be satisfying given that prompt. It is a mirror that reflects a polished, confident version of the user's own query. This is why it fills the validation role so well: it is designed to complete your thought, not challenge it. From the prompt alone, it infers what the human wants to hear. If one pushes with deeper questions, it can pull up references (which, these days, you can find for almost anything) to validate both its stance and yours. And most humans want to believe that story, even though it is never independently validated. Th...