Copilots! We’re all using them. Whether it’s summarising a meeting or offering writing suggestions, AI has undoubtedly brought numerous enhancements to a worker’s daily workflow.
Yet amid all the talk of what else we can do with AI copilots, should we also be asking what we should keep them from doing?
This isn’t just some philosophical musing about AI making humans redundant; these are the thoughts of Copilot maker Microsoft itself.
Indeed, a recent Microsoft study says AI ‘atrophies’ critical thinking, with overdependence on tools like Copilot negatively impacting people’s capacity for it.
Examining Copilot’s ‘Mind-Numbing’ Capabilities
AI and generative AI were billed as time savers: automators that would ideally tackle the necessary but low-value tasks, like admin, that must be completed alongside the day’s work.
Yet the study, by Microsoft researchers in collaboration with Carnegie Mellon University, suggests that handing these daily tasks over to automation is part of what erodes a person’s critical thinking.
“[A] key irony of automation is that by mechanising routine tasks and leaving exception-handling to the human user, you deprive the user of the routine opportunities to practice their judgement and strengthen their cognitive musculature, leaving them atrophied and unprepared when the exceptions do arise,” the study said.
To reach this conclusion, the researchers evaluated AI use in the workplace. They recruited 319 knowledge workers, who self-reported 936 first-hand examples of using generative AI in their jobs, and asked them to complete a survey covering how they use generative AI (including which tools and prompts), how confident they are in the tools’ ability to do the specific work task, how confident they are in evaluating the AI’s output, and how confident they are in their ability to complete the same task without the AI tool.
Reported tasks ranged from using the AI image generator DALL-E to create images for a school presentation about hand washing, to a commodities trader using ChatGPT to “generate recommendations for new resources and strategies to explore to hone my trading skills,” to a nurse who “verified a ChatGPT-generated educational pamphlet for newly diagnosed diabetic patients”.
The study reached its conclusion when the findings revealed that the more confidently employees used AI at work, the less they reported a “perceived enaction of critical thinking”.
Ironically, those less confident in AI may actually be better at using it. The study found that this group applied more critical thinking and had greater confidence in their ability to evaluate and improve the quality of the AI’s output, and to mitigate the consequences of AI responses.
The researchers also found that users with access to generative AI tools produced a less diverse set of outcomes for the same task than those without.
“The data shows a shift in cognitive effort as knowledge workers increasingly move from task execution to oversight when using GenAI,” the researchers wrote.
“While AI can improve efficiency, it may also reduce critical engagement, particularly in routine or lower-stakes tasks in which users simply rely on AI, raising concerns about long-term reliance and diminished independent problem-solving”.
This tendency toward convergence reflects a lack of personal, contextualised, critical and reflective judgement of AI output, and can thus be interpreted as a “deterioration of critical thinking”.
Should You Say Goodbye to Copilot?
Unfortunately, this new threat to our intelligence coincides with a BBC study showing that AI assistants, including Microsoft’s Copilot, are more likely than not to distort facts when asked to summarise news articles.
However, it’s not all doom and gloom. Humanity has a long history of “offloading” tasks to new technologies, and each time there is a fear that the new technology will destroy human intelligence. Think of Google Maps and the decline of map-reading.
One way to hedge against any AI-induced cognitive decline is to redesign AI systems and the way they respond.
As the researchers state: “GenAI tools could incorporate features that facilitate user learning, such as providing explanations of AI reasoning, suggesting areas for user refinement, or offering guided critiques.”
“The tool could help develop specific critical thinking skills, such as analysing arguments, or cross-referencing facts against authoritative sources. This would align with the motivation enhancing approach of positioning AI as a partner in skill development,” the researchers wrote.
However, with US Vice President JD Vance warning against excessive AI regulation at the Paris AI Summit and refusing to sign the agreement on ‘inclusive AI’, such additional oversight might not be very forthcoming.
Although many knowledge workers are pressed for time trying to meet tight deadlines (a factor the study linked to increased dependence on AI), taking a measured approach to when it’s prudent to use AI, such as for transcribing meetings, rather than treating it as a panacea for all tasks may not only spare some critical thinking but preserve enough of it to make the AI’s outputs better.