Why Using AI Can Make Us Feel Smarter Than We Really Are
The Strange Boost of Confidence AI Gives Us
There’s this odd thing that happens when people lean on AI: we start feeling a little too good about our abilities. Not because we’re suddenly any better at the task, but because the machine’s polished responses rub off on us. It’s like standing next to a really competent coworker and convincing yourself some of their talent has magically transferred to you.
Researchers have noticed this creeping confidence and decided to dig into it. What they found isn’t just interesting; it’s a bit unsettling. It turns out that generative AI doesn’t just help us solve problems. It also quietly rewires how we judge our own competence, almost flattening a well-known psychological pattern that normally keeps us somewhat grounded.
If you’ve ever heard of the Dunning-Kruger effect (the idea that people who are bad at something tend to overrate themselves, while the truly skilled often underestimate their abilities), then picture that whole curve smoothed out by AI. Not only smoothed out, actually. In some cases, nudged in the completely opposite direction.
That’s the gist of what a team from Aalto University in Finland, working alongside researchers in Germany and Canada, discovered. And honestly, the more you think about it, the more you realize how easy it is to fall into the trap.
A Quick Refresher: What the Dunning-Kruger Effect Is Really About
The Dunning-Kruger effect isn’t a fancy insult people throw around online when someone acts overconfident. It’s a well-documented psychological pattern: those with low ability often overestimate how good they are, while highly skilled people tend to second-guess themselves. It shows up in judgment, language, reasoning, basically anywhere humans have to think clearly.
Usually, this gap shows up when people take cognitive tests. You solve a set of problems and then estimate your own performance. That’s where the magical (and sometimes painful) mismatch comes out.
But slap an AI into the mix, and the whole dynamic shifts.
AI Doesn’t Just Help: It Changes How We Judge Ourselves
One of the surprising takeaways from the study is that AI doesn’t discriminate. Whether someone is top-tier at solving logic problems or barely scraping by, everyone tends to trust the machine too much.
And here’s the twist: the people who use AI the most confidently (the ones you’d assume are savvier) are actually the most likely to overestimate how well they performed with its help. It’s as if the more familiar we are with AI tools, the more we forget to check whether the answer actually makes sense.
Robin Welsch, one of the researchers from Aalto University, mentioned in a statement that people were just… bad at judging their own performance when AI was involved. Not bad in a “haha, look at these clueless humans” way, but in a universal, across-the-board sense.
Which raises an uncomfortable thought: maybe we're outsourcing part of our judgment to the machine without noticing.
Inside the Experiment: 500 People and a Set of Logic Problems
To see how this plays out under pressure, the scientists recruited 500 volunteers and handed them logical reasoning problems from the LSAT, those brain-twisting puzzles that law schools use to filter for quick, structured thinking.
Half the group got to use ChatGPT; the other half had to rely on their own mental machinery.
Then everyone, AI users and non-AI users alike, had to say how well they thought they did. They were even offered extra money if their self-assessment matched their actual performance. So people had a reason to be honest and careful.
And yet, the AI group still overshot their abilities.
The researchers suspected this had less to do with the difficulty of the questions and more with how people interacted with the AI. A lot of users asked just one question, accepted the first answer, and moved on. No follow-ups. No double-checking. No “Wait… does this even make sense?”
If you’ve ever used ChatGPT to help with a task late at night and caught yourself thinking, “That looks right, I guess,” then you already know what this feels like.
Cognitive Offloading: When We Let the AI Carry Too Much Weight
One of the biggest factors researchers identified is something called “cognitive offloading.” It’s basically the tendency to let the machine do the heavy lifting so we don’t have to wrestle with the reasoning ourselves.
To be fair, humans have always offloaded cognitive work: GPS navigation, calculators, even spell check. But generative AI introduces a different flavor because it doesn’t just give you an answer; it gives you an answer that sounds confident, polished, even authoritative.
When you accept that answer too quickly, your brain doesn’t get the usual workout. You skip those internal checkpoints where you’d normally ask, “Is this right?” or “What’s the logic here?” That reflective pause is something psychologists call metacognitive monitoring, and it’s a crucial part of evaluating our own abilities.
When the pause disappears, our sense of competence inflates even though our actual performance doesn’t.
Why This Matters (Beyond Academic Curiosity)
It’s tempting to shrug this off as harmless. After all, who cares if you think you’re a little smarter when the machine is helping?
But here’s the catch: as AI becomes woven into everything (work emails, school assignments, creative projects, coding), it shapes how we think about our own skills. If you’re constantly drawing from a bucket of answers that look flawless on the surface, you start assuming your skills produced them.
That illusion can become a problem. In fields like law, medicine, engineering, or finance, overconfidence paired with blind trust in AI can lead to real-world mistakes. Even in everyday life, it can make us complacent. If the machine “knows,” why bother understanding the task deeply?
And on the flip side, if you’re actually good at something, you might underestimate your abilities because you assume AI is doing all the heavy lifting. So the usual Dunning-Kruger “curve” doesn’t just flatten; it gets weirdly inverted.
A Final Thought: AI Isn’t Making Us Stupid, But It Is Making Us Less Aware
None of this means AI is bad or dangerous. The real issue is more subtle. Using AI without engaging your own reasoning is like doing bench presses with someone silently lifting the bar for you. Sure, the weight goes up, but you don’t build any strength.
AI can be an incredible tool. It can also be a mirror that reflects a slightly distorted version of our competence back at us. The challenge is recognizing when that reflection is getting too flattering.
Open Your Mind!
Source: LiveScience