I fear the day that technology will surpass our human interaction. The world will have a generation of idiots. (attributed to Albert Einstein)
He said it before he could have imagined what he was actually predicting, and here we are.
AI is, at its core, a very expensive party trick. It knows things. It knows an extraordinary, almost incomprehensible amount of things, in fact — and it can reassemble those things in ways that look like thought. But looking like thought and actually thinking are two very different things.
AI fakes reasoning through pattern-matched pseudo-logic. It matches. It mirrors. It predicts the next likely word. What it doesn’t do, and genuinely cannot do, is reason.
There’s a human equivalent to this that most of us have met: the highly credentialed person who cannot solve a basic real-world problem; the person with an impressive resume but zero common sense. Knowing and thinking are not the same skill, and never have been.
Think of it this way: A library full of books doesn’t think. Neither does the thing that has read and committed to memory every book in every library.
The danger isn’t that AI is stupid. The danger is that it’s just smart enough to make you feel like you don’t need to be. That’s the trap.
Every time you outsource a decision, a question, a creative problem (especially a philosophical or theological one), you weaken a muscle. Every time you let the machine connect the dots your brain should be connecting, you dumb yourself down.
This isn’t alarmism. This is basic cognitive science. Attention and reasoning are skills. Even disagreeing clearly and well is a skill. Use it or lose it. The internet softened this in us gradually. AI will accelerate it.
If you think well, you’ll spend the majority of your time with AI arguing with it, correcting it, rephrasing your question because it missed the actual point entirely. That’s not a tool working for you. That’s you working around a tool. There’s a difference.
Think for yourself. Research for yourself. Reach your own conclusions and be willing to revise them when evidence demands it — not because a chatbot told you to.
A Note on the Now “Evil” Em Dash
For a brief time, em dashes became the bane of my existence. Several of my articles were rejected as unoriginal because I naturally write “like AI.” The main culprit behind this assumption was my use of em dashes, which I have used regularly — proficiently, if I do say so myself, and likely because my mother was an English teacher whom I and all of my friends nicknamed “the grammar Nazi” throughout high school — for over three decades.
Initially I was flattered, because for a long time I was under the impression that AI was smarter than me. It is, if we’re comparing the amount of information stored in our respective memories. I soon came to understand that this was not a compliment, but an insult.
I have thus taken a sort of personal “L” and refused to rewrite or reword those articles. Instead, I have created my own platform (this site) to share my thoughts and perspectives. I care nothing about profiting at this point in my life, or even my work being seen by massive audiences. What I care about is the work itself.
The fact remains: the internet has decided that an em dash is proof of AI. So have certain editors, apparently. The logic is that only a machine would use one — or use a word longer than three syllables, or construct a sentence with more than one clause. I implore those with any intelligence of their own left to consider what that actually implies.
It implies that genuine human intelligence, expressing itself clearly and with some care for craft and passion for grammar and language, is now indistinguishable from algorithmic output — and therefore suspect. We’ve reached the point where writing well raises a red flag. Where thinking in complete, layered thoughts looks like cheating.
That’s not a problem with em dashes. That’s a problem with what we’ve normalized as the baseline for human expression. That’s a problem with mediocrity becoming the measuring stick, and with a culture that has made peace with its own decline.
The solution isn’t to write worse to prove you’re human (I did that for about a week, and couldn’t bear the shame of it). The solution is to actually be human — to think, to question, to form a real opinion and articulate it, imperfectly and honestly, in your own voice. That can’t be faked. Not really. Not yet.
And when it can be? That’s when Einstein’s nightmare fully arrives.
Closing Thoughts
We are living in a moment that will either be a turning point or a point of no return. Which one it becomes depends entirely on whether people choose to keep thinking.
AI is a tool, and like every tool humans have ever built, it can be used well or used as a crutch. The problem isn’t the tool. The problem is what people are willing to surrender to it.
What are you surrendering, if anything, to AI rather than owning for yourself? Reasoning? Discernment? Your voice? Your ability to sit with a hard question and work through it yourself rather than hand it off to something that will give you a confident-sounding answer whether or not it’s actually right?
Knowing things is not, by itself, intelligence. Retrieving information is not wisdom. A machine that can predict your next sentence is not your intellectual equal; it is a very sophisticated mirror, reflecting back what has already been said collectively. Don’t mistake the reflection for the thing itself.
Write your em dashes. Use your words, big and small. Think your thoughts all the way through, even when it’s uncomfortable, even when it costs you an audience, even when it gets your work rejected. The work is the point. The thinking is the point. You are the point.
Einstein saw it coming. The question is whether we’ll prove him right.
