My grandpa joked that calculators made us bad at math, GPS ruined our sense of direction, and AI is out to take over our brains. It was funny until I realized it might not be a joke. In a world where ChatGPT can write your essay in seconds, DALL-E can paint any picture, and 70% of teens reported using generative AI in 2024, a question arises: are we still thinking for ourselves?
To understand how our ability to think is changing, we need to define what “thinking” means. “Thinking” isn’t just knowing facts or solving math problems. It’s being able to wrestle with ambiguity, form connections, make decisions, ask questions, create new things, and understand why we think the way we do. In simpler terms, thinking doesn’t mean knowing everything; it means being actively engaged in the process of knowing.
At first glance, AI seems to threaten that skill. After all, why bother learning to write a five-paragraph essay when a language model can do it faster? Why struggle through understanding physics when a chatbot can summarize it? We are increasingly tempted to skip the struggle, to trust the machine.
However, the machine has limitations. Critical thinking is a slow and uncomfortable process. It’s not efficient. It involves failure, contradiction, and reevaluation. As Economist Education notes, machines aren’t designed for critical thinking the way humans are; they see things in black and white, lacking nuance and emotion. AI processes information, but as Duke Corporate Education puts it, it “does not think”. They generate; we contemplate.
They predict; we ponder. The danger arises only when we forget our ability to question what’s put in front of us. If we look closer, the relationship between AI and human thinking isn’t subtraction; it’s transformation. Just as the printing press didn’t destroy memory but expanded what we could preserve beyond it, AI pushes us to move past rote tasks and into deeper, more abstract thought.
For example, a student who uses AI to summarize The Odyssey might initially lose an opportunity to struggle through the text, but if that summary sparks a unique interpretation, then the AI has served as a springboard, not a substitute. In this way, AI can amplify thinking instead of replacing it. Generative machines should not be seen as a threat. In fact, Forbes believes they are “emerging as a tool that can take creativity to the next level”.
The key is to use AI with intention, as a tool rather than a crutch. In other words, AI is changing not whether we think, but how we think. It shifts our cognitive energy away from information gathering and toward interpretation, synthesis, and reflection. Instead of memorizing Shakespeare’s plays, we might ask: What would Othello think in the age of modern politics? Instead of calculating compound interest, we might wonder: How do financial algorithms shape economic inequality? AI is an inevitable presence in our world, and our unique value is not in what we know, but in how we use what we know.
This understanding of AI doesn’t happen automatically. It requires awareness and agency, two qualities that AI doesn’t possess but humans often forget to exercise. As AI becomes more embedded in daily life, we may feel tempted to trust whatever the algorithm says, especially when it’s right most of the time. But thinking for ourselves means daring to ask, “Wait. Is that true?”
This becomes even more important when we consider that AI is not neutral. Its outputs reflect the data it’s trained on, which means it can carry the biases, blind spots, and historical inaccuracies in that data. Fielding Graduate University states that AI can strengthen confirmation bias, creating an “echo chamber”. Independent thought becomes not just a skill, but a moral responsibility.
But for all its risks and distortions, AI offers incredible potential. The National Virtual Teacher Association explains that for neurodivergent students, AI can offer tools to communicate and learn in new ways. The World Economic Forum adds that for those without access to education, it can provide explanations and mentorship once limited to privileged circles. For artists, scientists, and dreamers, it can offer new mediums of experimentation. It’s clear: AI doesn’t limit human thinking; it expands it.
So, where does that leave us? Are we doomed to become machine-dependent thinkers? Or are we on the brink of a new kind of thought? CNA recalls that Socrates was worried that writing would ruin memory and kill genuine dialogue. He thought students would become lazy, dependent on symbols instead of deep discussion. And yet, writing gave birth to literature, philosophy, and science. New tools always reshape thought, but whether they sharpen or dull us depends on how we employ them. We are being asked, more than ever, to know ourselves: our values, our judgments, our questions. We are being challenged to stay curious in a world full of easy answers. We are being called not just to think faster, but to think better. And maybe that’s the paradox of AI: the smarter it gets, the more human we have to become.



