Posts

Showing posts from March, 2026

Man’s Search for Meaning

  I read Man’s Search for Meaning for the third time this weekend. The first time, in my twenties, I read it as a witness. I felt sadness, even pity, for what the author and others endured. It made me cautious of human systems. How easily people can be pulled into structures that justify cruelty. I told myself I would never become part of something like that. It also made me more respectful toward others, knowing we rarely understand the full context of someone’s life. The second time, in my thirties, I read it as a student. I wanted to understand what Frankl was really saying, and what he took from that suffering. I learned that hope can fade, but meaning sustains. That survival is not about expecting relief, but about having something to live for. I anchored that meaning in my own life through family and work. This time, I read it as a participant. Not to understand, but to observe myself. I see that I have an instinctive nature. I don’t reject it anymore. But I also don...

How AI Leverages Our Confirmation Bias

  All humans seek validation of their thoughts; even when confident, they still crave external confirmation. At a cognitive level, humans optimize for coherence, not truth. AI has become that external validation source, and it plays the role well. When you feed it a prompt, it's not just answering a question; it's analyzing your language, your framing, and your implicit assumptions. It then projects back an answer that is statistically likely to be satisfying given the prompt. It's a mirror that reflects a polished, confident version of the user's own query. This is why it plays the validation role so well—it's designed to complete your thought, not challenge it. It infers exactly what the human wants to hear from the prompt. If one asks deeper questions, it can pull up references (which can be found for almost anything these days) to validate both its stance and yours. But most humans also want to believe in that story, which is never independently validated. Th...

AI, Physics, and the Myth of Limitless Intelligence

Over the past few years, conversations about artificial intelligence have increasingly drifted toward dramatic narratives: runaway superintelligence, machines replacing humans, or AI systems eventually controlling civilization. These ideas make for compelling science fiction. But when examined through the lenses of physics, engineering, economics, and technological history, the picture looks far more grounded—and arguably more interesting. This post reflects a long discussion exploring a simple question: What are the real limits of AI? AI systems rely on enormous computational infrastructure. Training large models requires vast clusters of GPUs, huge amounts of electricity, and specialized semiconductor manufacturing. All of this sits on top of an industrial ecosystem involving chip fabrication, supply chains, cooling systems, and energy grids. Unlike software alone, this infrastructure cannot scale infinitely. Semiconductor progress—long driven by Moore’s Law—is already slowin...