
Purdue University has officially drawn a line in the sand. Starting this fall, the Indiana school will become the first U.S. college to require an “AI working competency” for graduation. The initiative, dubbed “AI@Purdue,” mandates that more than 44,000 students across its West Lafayette and Indianapolis campuses master the tools that are currently reshaping the global workforce.
The university’s strategy is comprehensive, built on five pillars: Learning with AI, Learning about AI, Researching AI, Using AI, and Partnering in AI. Backed by partnerships with tech companies like Google and touting projected savings of $3.5 billion and 127,000 work hours, Purdue is betting that the future belongs to those who can work alongside algorithms.
But as with any institutional shift of this magnitude, the move invites scrutiny. While proponents hail this as a necessary evolution for workforce readiness, some warn of unintended consequences for student cognition and learning. Here are five questions worth asking.
1. The “Cognitive Offloading” Paradox: Are We Outsourcing the Struggle to Learn?
The most immediate question concerns the architecture of the human mind. Education has traditionally been about the process, not just the output. We teach long division not because we lack calculators, but because the struggle to understand numbers builds a neural framework for logic. Purdue’s move to mandate AI competency raises an important question: Are we teaching students to bypass the very struggle that makes them smart?
Psychologists call this “cognitive offloading,” or the act of using physical actions or external tools to alter the information processing requirements of a task. When done correctly, it frees up brain power. When done excessively, it can lead to skill degradation. If a freshman engineering student uses AI to debug their code, structure their essays, and summarize their reading lists for four years, they may graduate with strong prompt engineering skills but a deficit in foundational critical thinking.
The danger lies in the “illusion of competence.” A student who can generate a brilliant essay using ChatGPT has not necessarily learned to write a brilliant essay. Writing is thinking; it is the process of organizing a chaotic mind into a coherent argument. If we automate the writing, we may inadvertently automate the thinking. Purdue’s curriculum will need to demonstrate that it is producing creators who can build from a blank slate, not just editors who polish what the machine generates.
2. Can a Syllabus Outpace a Speedboat?
There is a fundamental mismatch between the speed of artificial intelligence development and the speed of higher education. Technology evolves in weeks; universities evolve in semesters. By the time Purdue’s incoming class of 2030 graduates, the “AI working competency” they learned as freshmen may be obsolete.
One critic quoted in The Washington Times’ report compared teaching AI today to “teaching a class on TikTok.” That is, by the time the curriculum is approved, everyone in the room has already moved on. The generative AI landscape is shifting so rapidly that specific tool-based instruction risks becoming a depreciating asset. By the time a curriculum committee approves a new course, the model that course is based on may have been replaced by an agentic system that requires no prompting at all.
To avoid this trap, Purdue will need to ensure its curriculum teaches the principles behind these systems, including ethics, probability, hallucination, and data bias, rather than just the application of current software. Otherwise, the university risks graduating students who are experts in the tools of 2026 but unprepared for the autonomous systems of 2030.
3. The “Partner” Problem: Where Does the User End and the Tool Begin?
Purdue’s choice of language is revealing. One of its five graduation requirements is “Partnering in AI.” This phrasing is distinct from “Using AI” or “Managing AI.” It implies a relationship of equality. But we do not “partner” with a hammer, and we do not “partner” with a spreadsheet. We use them. By elevating software to the status of a “partner,” Purdue is subtly shifting the definition of human agency in the workplace.
This raises questions for the future workforce. A partner is someone you trust. A tool is something you verify. If students are conditioned to view AI as a collaborator, they may be less likely to scrutinize its outputs for bias, hallucinations, or errors. The “partner” framework could create a psychological dependency where the human defers to the machine’s “judgment” because it is perceived as an equal (or perhaps even superior) intelligence.
Furthermore, if the human’s primary role is merely to guide the AI partner, who is actually doing the work? As these tools become more capable, the line between “collaboration” and “supervision” blurs. Students will need to understand exactly where their contribution ends and the algorithm’s begins.
4. Will AI “Standardize” Creativity?
Universities are meant to be incubators of divergent thinking. They are places where unique, unconventional, and progressive ideas are born. But generative AI is, by design, an engine of averages. It works by predicting the most statistically probable next word or pixel based on a massive dataset of existing human knowledge. It represents the wisdom of the crowd flattened into a single, highly probable output.
If Purdue mandates that 44,000 students integrate this engine into their creative and analytical workflows, we may see a “regression to the mean.” If an engineering student and a philosophy student both ask an AI to help them solve a complex problem, the AI may guide them toward a similar, safe, middle ground.
True innovation often comes from improbable connections, the very paths a statistical model is least likely to suggest. If every student is using the same “partner” to brainstorm, outline, and problem-solve, we may see a decline in outlier thinking. The question is whether this requirement will produce a workforce that is highly competent at generating standard, acceptable work but less capable of the radical, divergent thinking that drives true breakthroughs.
5. The $3.5 Billion Question: What Do the Savings Represent?
Perhaps the most striking figure in the reporting is the estimate that these new AI requirements will save “$3.5 billion and more than 127,000 work hours.” In the context of a university, this number is astronomical, and it invites clarification.
The precise nature of these projected savings, whether administrative, instructional, research-related, or operational, remains unspecified in the university’s public statements. If these figures represent automation of tasks previously performed by staff, that raises questions about workforce implications. If they represent efficiency gains that free up human workers for higher-value tasks, that’s a different story. Transparency about the methodology behind this projection would help stakeholders understand what Purdue is actually measuring.
Additionally, if the university has discovered a way to achieve significant savings through AI implementation, will those savings be passed on to students in the form of lower tuition or enhanced services? Or will “efficiency” function primarily as a mechanism to improve institutional margins while adding new competency requirements to the student body? These are fair questions for any public university to address.
Of course, Purdue deserves credit for moving first on an issue that every university will eventually have to confront. But being first also means navigating uncharted territory. The answers to these questions will determine whether AI@Purdue becomes a model for higher education or a cautionary tale about a disruption that schools aren’t ready to face.
This article is written with the assistance of generative artificial intelligence based solely on Washington Times original reporting and wire services. For more information, please read our AI policy or contact Ann Wog, Managing Editor for Digital, at awog@washingtontimes.com
The Washington Times AI Ethics Newsroom Committee can be reached at aispotlight@washingtontimes.com.