As generative AI tools like ChatGPT and Copilot rapidly integrate into knowledge work, they’re streamlining how we write, brainstorm, and retrieve information. But there’s a growing concern among researchers, educators, and business leaders: What’s happening to our critical thinking in the process?

A recent study titled “The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects from a Survey of Knowledge Workers” offers valuable insights into this question. Surveying 319 knowledge workers, the authors explored when and how professionals perceive themselves to be thinking critically while using generative AI, and how the technology affects the perceived effort of doing so.

Here are the key takeaways from the study and why they matter for the future of work.

When Trust in AI Replaces Thoughtful Engagement

The study found that when workers trust generative AI tools, they are less likely to use critical thinking. Trust leads to reliance, and reliance can diminish cognitive engagement. When AI output is accepted without questioning its assumptions or verifying its facts, flawed logic, bias, or missing context can slip through unnoticed—and in high-stakes fields like finance, healthcare, or law, those oversights can have serious consequences.

But Confidence Encourages Critical Review

Interestingly, confidence in one’s own abilities has the opposite effect. Workers who feel capable of performing a task themselves are more likely to question, evaluate, and improve AI-generated responses. This highlights a key distinction: it isn’t AI use itself that undermines critical thinking; it’s lacking the knowledge and confidence to scrutinize AI output.

What Drives Critical Thinking

The survey uncovered three main motivators that push workers to think critically when using generative AI:

  1. To increase the quality of their work
  2. To avoid potential harm from incorrect AI outputs in high-stakes scenarios
  3. To develop their knowledge and skills and learn best practices

What Gets in the Way

On the flip side, inhibitors to critical thinking include:

  • Lack of awareness of the risks of incorrect AI responses
  • Low motivation or engagement with their work
  • Insufficient domain knowledge to properly assess or challenge AI outputs

These insights suggest that strengthening critical thinking in the AI era isn’t just about tool design; it’s also about employee engagement, training, and awareness.

Reduced Effort in Some Areas, Increased Effort in Others

One of the most nuanced findings concerned perceived effort. Workers felt that retrieving information and drafting documents became easier because generative AI automates those tasks. However, they also reported increased effort in:

  • Verifying AI outputs, which may be incomplete or inaccurate
  • Translating their intentions into clear AI prompts
  • Applying generalized AI responses to their specific context

This reflects a shift from effort spent on content creation to effort spent on prompting, contextualizing, and validating: a new cognitive skill set that workers must develop.

The Path Forward: Designing AI That Supports Thinking, Not Replaces It

The authors argue that AI tools must be intentionally designed to support critical thinking, not just productivity. This includes:

  • Feedback mechanisms that help users assess the reliability of AI outputs
  • Prompts and alerts that guide users when to trust AI and when to apply greater judgment
  • Features aligned with the principles of explainable AI (XAI) to make the reasoning behind outputs more transparent

By making AI more accountable and interactive, developers can create tools that enhance human reasoning, problem-solving, and engagement.

Final Thoughts: A Call for Balance

Generative AI is not inherently detrimental to critical thinking, but how we use it matters. The study reminds us that while AI can reduce routine cognitive effort, it can also introduce new demands for judgment, context, and verification.

For individuals and organizations, the challenge is to balance trust in AI with confidence in human insight. Critical thinking shouldn’t be optional in the age of automation; it should be built into the design of our tools and the culture of our workplaces.