The Case Against Optimizing Our Education to Death
Columbia is embracing AI. Students are paying the price.
Last semester, I took Calculus III. I submitted my first problem set without using AI, got a 57 percent, and was flummoxed by the class average of 97 percent. I asked a friend in the class for help, and he responded incredulously: “You didn’t run it through Chat?”
On February 11, I attended a dinner-table discussion sponsored by the Undergraduate Community Initiative and the CC-SEAS Integrity Advisory Board. The theme of the dinner was generative AI: How, why, and by whom it was being used. A good mix of students was in attendance, as well as Dr. Victoria Malaney-Brown and Dean Jonathon Kahn, director of academic integrity and dean of community and culture, respectively. As the goal of the discussion was to talk openly about the subject, I admitted that I had succumbed to the pressure. It was as if a dam had broken: Around the table, everyone admitted to using AI in their more technically oriented classes, if not in all of them.
My own use of AI had started as a way to catch up with the crowd of existing users—it wasn’t ill-intentioned, and it certainly did not begin as a substitute for doing my own work. After the Calc III incident, I ran all my problem sets through AI before submitting them. The line between using it to catch mistakes and using it to solve difficult problems soon blurred: The nature of AI was such that it quickly made me feel incompetent—it was just so fast, so good at spitting out exactly the right solution given the right prodding. Eventually, I was using it as a crutch for every problem I didn’t understand, rather than doing the hard work of untangling the solution myself.
It was a bit depressing, we all agreed, that so many bright students were outsourcing their thinking to large language models (LLMs)—despite our best intentions, not a single one of us had managed to escape the sheer gravitational pull of their usefulness.
There’s a certain disaffection reflected in how ubiquitous AI has become on Columbia’s campus. It’s not that we students don’t know how to think for ourselves—it’s that so many of us choose not to.
The wholehearted embrace of AI in many Columbia circles is not surprising. The process of getting into an institution like this one incentivizes certain behaviors—upholding academic standards while excelling in extracurriculars while contributing positively to society while showing some creativity while, purportedly, trying to live a normal life as a teenager. These demands form a perfect storm of misaligned incentives: Many students aiming for the ivory tower begin to see life as a series of boxes to be checked and qualifications to be attained rather than a fluid and vibrant path to be experienced.
At Columbia, this manifests itself in the careerist pressures to figure out what you want in life as early as possible and to ‘get a head start’ on achieving those goals. If you don’t know what you want to do with “your one wild and precious life,” the game becomes juggling as many hats as humanly possible. The cost of ‘keeping your options open,’ of course, is time. Columbia students’ embrace of AI shortcuts reflects the time compression we experience on a campus saturated with ambient pressure to be good at everything. The first time you take the easy way out of a difficult assignment may feel like crossing a threshold—except the line was drawn in chalk, and with each crossing it becomes less and less defined.
We’d like to believe that building guardrails around AI usage in academia is merely a matter of willpower. Some students might claim they “never use AI” as a form of virtue signaling, a way to mark their stance against the rapid encroachment of the technology on everyday life, but the truth is more complicated than that. When everyone around you is using it to get ahead, holding out might mean scoring one or two standard deviations below the mean on every problem set, or spending twice as much time on an assignment as your peers.
The efficiency conundrum is not limited to STEM classes—even in the humanities, where close reading and deep reflection should be the norm, reading an AI-generated summary instead of the text can save students hours of work per class. These instances of ‘losing the race’ eventually add up to a penalty on non-users under grading schemes that privilege asynchronous work over synchronous learning. Such a sacrifice might be deemed ‘worth it’ on ethical grounds, but if we maintain that education is meant to develop curious, agile, and principled citizens, we would all be better off if our institutional metrics of success were redefined to meet the demands of living in an age of ubiquitous cognitive outsourcing.
Columbia evidently doesn’t think so—the first sentence of the Office of the Provost’s Generative AI Policy reads, “Columbia University is dedicated to advancing knowledge and learning, and embraces generative AI tools.” Columbia’s 2026 Teaching and Learning Awards’ requests for proposals are “designed to support faculty looking to integrate new educational approaches and technologies into their teaching and learning practices.” AI has been so successful at selling itself as a learning tool that, instead of asking whether it belongs in “teaching and learning,” the metric for excellence seems reserved for professors figuring out how it belongs.
Columbia is implementing all of this even as researchers have shown that outsourcing writing tasks to AI decreases neural connectivity—leading to consistent cognitive underperformance—and that outsourcing human interaction to sycophantic AI models decreases prosocial and independent behavior. In a compelling op-ed for the Columbia Daily Spectator, Grace Kaste wrote that the University’s “unthinking embrace of AI” is “premised on a conception of the University as a corporation: demand for AI is growing, and if we don’t invest now, we will fall behind.”
As we students are forced to play the ‘use-AI-or-not’ game on an individual level out of fear of falling behind, Columbia is doing the same on an institutional level. Corporate influence aside, it would be irresponsible of Columbia to recklessly embrace generative AI as a shortcut to some kind of ‘enhanced learning.’ Elite universities sell a vision of what conventional success looks like: With their power to shape societal narratives around what is ‘worth pursuing’ comes the responsibility to shape those narratives for society’s good. As the world becomes increasingly digitized and it becomes harder to distinguish human work from machine-generated work, it has never been more imperative to center our narrative of success on a vision of human flourishing.
New institutional standards of what constitutes ‘good work’ would realign incentives earlier in the education system so that children are taught to value curiosity, creativity, and actual learning—not just the mindless completion of certain benchmarks to distinguish themselves as worthy of entering the ivory tower.
In the fall semester, my assigned Contemporary Civilization professor was, in effect, indifferent to AI. He proclaimed on the first day of class that this was going to be a lecture, not a seminar; said vaguely that he “could tell when something was AI”; and then lectured for an hour and a half on the Pentateuch. Discussion with peers was barely even an afterthought, and all five of our exams would be take-home essays. I promptly switched into a different CC section after the second class proved that his methodology was not, in fact, a practical joke.
The section I switched into had a radically different policy: Participation accounted for half of the course grade, and handwritten exams and reflections accounted for the rest. In the course syllabus, my professor wrote: “After a quarter century of assigning traditional essays, your cohort’s...enthusiastic embrace of?...rapid addiction to?...unthinking submission to?... gAI has forced me to change gears. So no traditional papers.” He assigned us in-class journaling and at-home reflections instead: “writing helps you think and understand the challenging ideas that you are grappling with in a course like this. I’m experimenting with this new approach; we’ll see together how it goes.”
Immediately, I could feel that my new classmates were more excited about the work we were doing than those in my previous section: During our class intermissions, discussions would continue because everyone involved actually cared. There were no complaints if class ran past schedule; people would only start leaving mid-discussion when the professor who had the room next came in and kicked us out. This exercise in communal learning was living proof of Mill’s claim that a “livelier impression of truth” is “produced by its collision with error.”
This experience made it clear to me that a university education should not be received in a vacuum. The promises that have been made about AI’s ability to democratize education by offering highly individualized experiences are empty ones. The knowledge that AI has to offer—of the encyclopedic or formulaic kind—is what we as humans will, in the future, have the easiest time outsourcing. Truly inimitable wisdom must be earned: It comes from engaging with other people, staying intellectually humble, and observing the world through a lens of perpetual curiosity.
Higher education of the kind Columbia and its ilk stand for was never meant to be frictionless or optimized for efficiency. The classes that have most shaped my worldview have all been discussion-dominant seminars in which I learned as much from my peers as from my professors. Even among larger lectures, the classes in which my professors interacted most with their students—like my Intermediate Microeconomics class, in which we were often asked to do group work on chalkboards—were the ones I was least likely to skip.
Staying faithful to this Socratic mode of education requires Columbia to uphold a commitment of care to its students—one that it is actively undermining by increasing class sizes when resources are already strained, firing researchers, and embracing AI usage in its classes. Instead of encouraging students to outsource their learning to AI, the University ought to decrease class sizes, make learning a more interactive process, and incentivize professors to grade in ways that let us succeed only by genuinely valuing our work.
In her 1954 essay “The Crisis in Education,” Hannah Arendt wrote that “We are always educating for a world that is or is becoming out of joint, for this is the basic human situation, in which the world is created by mortal hands to serve mortals for a limited time as home” and “must be constantly set right anew.” The problem of education “is simply to educate in such a way that a setting-right remains actually possible.”
The rapid adoption of AI in society is a test of what exactly this “setting right” entails. The U.S. has been struggling with a numeracy and literacy crisis for almost a decade now. The convenience and easy accessibility of AI simply mean that it has never been more viable to get away with ignoring the decline. As more parts of our education and employment systems integrate AI into their everyday functions (or outsource tasks to AI altogether), a significant portion of children will begin to believe that they don’t need to acquire any actual skills to ‘succeed’ in life. After all, it’s hard to commit to something difficult with no immediate payoff when you know that AI could do it better.
The point, then, should be to show students why they ought to care about their education. The old adage is that education is about “teaching you how to think, not what to think.” In the age of AI, I propose that the most salient task of education—the “setting-right” with which it is charged—lies in proving its own utility by teaching us “why thinking matters.” The solution is for professors to design syllabi and grading standards that reflect the value of human creativity, ingenuity, and critical thinking. Turning a blind eye to the ways in which using AI to “offload” cognition might harm learning would be a betrayal of the purpose of education itself; the consequences of this apathy paint a bleak picture of the meaning(lessness) of human life, even supposing a benevolent, abundance-generating artificial general intelligence sometime in the future.
Arendt wrote that “education is the point at which we decide whether we love the world enough to assume responsibility for it and by the same token save it from ruin.” Reading it now feels like glancing into an eerily accurate trick mirror from the past, perhaps because the natural endpoint of an overreliance on AI is a total alienation from our own education, from our modes of knowledge production, from our very ability to think for ourselves—a total abnegation of our responsibility to understand.
Universities like Columbia have a responsibility to safeguard the future of education. There is immense institutional leverage to be found in bottom-up policies written into syllabi. We ought to set incentives that better manifest a world we can love—not one in which students are simply efficient cogs in the machine, but one that affirms our capacity as humans to understand what a meaningful life looks like.
Ms. Chen is a sophomore at Columbia College studying linguistics, economics and East Asian languages and cultures. She is the deputy editor of Sundial.
The opinions expressed in this article are solely those of the author and do not necessarily reflect the views of the Sundial editorial board as a whole or any other members of the staff.