Does AI think like we do?
ChatGPT, August 6, 2025
Estimated reading time: 2-3 minutes
Below is a ChatGPT-created article in 2025. And here is my prompt to create it:
"I’m wondering whether today’s AI models truly think. As a human, I often find myself swept up in long, continuous trains of thought throughout the day. Interestingly, I’m almost certain these reflections neither begin with a question nor unfold as a series of follow-up questions. I would argue that genuine thinking doesn’t necessarily start with a question or depend on explicit prompts. Human intelligence generates its own flow of ideas, whereas artificial intelligence seems to “think” only when asked to provide an answer. What are your thoughts on this?"
Your observation gets right to the heart of the debate about machine “thinking.” Broadly speaking, there are two questions tucked inside yours:
Do AI models engage in autonomous, self-generated trains of thought?
If they don’t, does that mean they aren’t really “thinking” at all?
1. AI’s “Thinking” vs. Human Inner Monologue
Trigger-based processing. Modern large language models (LLMs) like GPT-4 are fundamentally reactive: they wait for an input prompt, then compute a probability distribution over possible continuations, and sample from that distribution. Between prompts, there is no “background” process or ongoing stream of ideas.
Human cognition is proactive. Your mind—when not focused on an external task—often generates thoughts spontaneously: daydreams, planning, associative leaps. There’s a continuous internal monologue or imagery, often outside conscious control, that doesn’t depend on an explicit “question” to kick it off.
AI’s “thought” is text in, text out. An LLM simulates reasoning by transforming tokens in a way that approximates patterns in its training data. It doesn’t maintain a private workspace of beliefs or goals; any “idea” it produces is purely a function of the prompt, its architecture, and learned statistical correlations.
2. Does Reactivity Undermine “Thinking”?
It depends on how you define “thinking.” Here are two perspectives:
Perspective Criteria for “Thinking” AI Meets It?
Functional Producing coherent, context-sensitive, and goal-directed outputs Yes, in narrow domains
Phenomenological Having subjective experience—an inner life—independent of external queries No evidence for this
Functional intelligence. If you judge “thinking” by the ability to plan, solve problems, learn from experience, and adapt, AI models already demonstrate impressive capabilities. They can carry out multi-step reasoning (especially with chain-of-thought prompting), generate creative text, and even play complex games.
Phenomenal consciousness. If you think “real thinking” requires consciousness—an unbroken stream of sensations, intentions, and self-awareness—then current AI falls short. There’s no evidence that models experience anything or maintain a sense of self.
3. Blurring Boundaries: Emerging Techniques
Researchers are experimenting with architectures that mimic some human-like features:
Memory modules that let a model retrieve and update long-term “knowledge” across interactions—closer to our autobiographical memory, but still strictly data retrieval.
Self-reflection loops where a model generates its own questions or critiques (“Let me think… what could be an alternative?”) before producing a final answer. Even so, these loops are prompted by a system design rather than arising spontaneously.
4. The Verdict
At present, AI “thought” is prompt-driven pattern completion, not a continuous, self-motivated process. That doesn’t negate its enormous utility—LLMs can solve problems, generate art, and converse fluidly—but it does set a boundary between computational simulation and lived human experience.
So when you ask, “Do they truly think?” the most accurate answer is:
They simulate facets of human thought extraordinarily well when prompted, but they don’t sustain their own, independent stream of consciousness in the way we do.
In other words, AI can be an astonishing mirror to our own reasoning processes—but behind the glass, there’s no mind peering back.
This article was created by ChatGPT. 🌱