Columbia Students Aren’t Talking About AI. That Should Concern You.
Failure to renew the AI debate will amplify our community’s shortcomings
As a senior, I am no longer confused by much at Columbia. Since coming here, I’ve made peace with her quirks and peculiarities. Along the way, I let go of many of my early preconceptions. But one belief I continue to hold—my entire raison d’être at this institution—is that the liberal arts are the soul of Columbia. It’s the idea that undergraduates here, in some capacity, willingly embrace the Core’s ethos for the sake of living a more contemplative life.
With that assumption in mind, I cannot accept students’ silence—neither in defense nor in opposition—when it comes to the question of AI.
It seems that in certain subsections of the University, the debate has already been decided. According to Provost Angela V. Olinto, we are supposedly in “this great rush to develop AI and ensure it is developed responsibly, for the benefit of all of us on this wonderful planet.” Columbia AI, the University’s flagship AI initiative, declares that it “is a University-wide initiative powered by the Data Science Institute to bring artificial intelligence to every field, academic discipline, and domain area.”
These statements use language so sterile and unobtrusive that, if I didn’t know better, I would read it as a threat. They leave no room for debate on whether some departments—particularly those within the liberal arts—are simply incompatible with AI. The question, at some point, quietly shifted from if to how. The inquiry is no longer whether the use of AI in certain fields is ethical, but how we can use it “ethically.”
When concerns are raised about AI, such as its potential to replace human ingenuity, they are often postponed or reframed as problems to solve later. For instance, Dean of Columbia Engineering Shih-Fu Chang stated, “The key question is: How do we use AI not to hinder or replace collaboration, but to enhance creativity, cooperation, and efficiency between humans and machines?” While it’s admirable that he acknowledges faculty concerns, this “key question” is treated as a minor nuisance to be addressed only after AI is fully embedded in University life. This radical acceleration creates the illusion that a broad consensus has already been reached on AI’s positive attributes.
And yet, dialogue on the merits and limits of AI has barely begun. Generative AI is still relatively new—ChatGPT, the most widely known and used artificial intelligence platform, launched only in late 2022. Columbia students took a brief moment to reflect on whether AI was beneficial for higher education in a special 2023 Spectator column titled “Opinions on AI.” Those conversations were largely concerned with the theoretical question of AI. Now, just as AI has moved from the theoretical to being imposed on us at every turn, our collective conversation has halted. Students preemptively surrendered the intellectual debate to our overlords, who insist that progress and efficiency are the supreme goals of the modern university.
My aim is not to be overly critical of AI at Columbia. Instead, it is to reflect on how the normalization of AI in our community, if left unchallenged, paints a rather sordid picture of the average Columbia student. To that end, I offer three reasons why Columbia undergraduates seem far less critical of AI than I anticipated.
The first is our raging careerism.
For many Columbians, college is simply an intermediary step before attaining their desired job. The content of classroom material, or even the gratifying process of learning, is treated as an obstacle to higher pursuits: internship applications, job hunting, or networking emails.
The University does not make it any easier on the student. Columbia itself has embraced this pre-professional model. One only has to look at the language the Columbia communications team uses to market itself, the variety of accreditations offered by the University, or even our current governance by trustees.
The onus is not just on the students for their tendency to favor efficiency. Many arrived here under the naive belief that they should simply continue the rat race that began in high school. It was the University that failed them—failed to thwart this misplaced desire.
Thus, having long sacrificed the part of themselves meant to be students in favor of their professional alter egos, students turn to AI to lighten the burden.
The second reason is our commitment to social justice.
AI as a tool is often described as “neutral.” That is, there are no ethics or morals inherent in the use of AI itself; what matters is solely how we use it. And thus, Columbia’s AI initiatives, like many others, rush to reassure us that they deploy it only in ways “responsible and reflective of the University’s commitment to societal impact.” We are told that the Data Science Institute’s projects are committed to “climate change, education, energy, environment, healthcare, inequality, and social justice,” like everyone else these days.
This language should strike the reader as ironic. Columbia’s AI initiative partners with OpenAI, whose CEO Sam Altman has recently been courting the Trump administration. I don’t mean to suggest a grand conspiracy in which right-wing elites are colluding against the liberal elites (us). But I do believe most Columbia students, faculty, and staff would struggle to call this partnership “responsible” or aligned with “societal impact”—at least if we take that term in the positive sense.
In a way, the phrasing tells on itself. If these data scientists and AI researchers all feel compelled to constantly stress their “commitment to societal impact,” they must implicitly know that this technology is not honorable. It must be continually defended so the public doesn’t start asking moral questions. So while Columbia’s AI initiatives profess to uphold the campus majority’s values, the claim is spurious at best.
More broadly, this is the same marketing strategy that pervades much of the profit-driven world: the tendency to align one's brand with the language of “social justice” or “community impact” in order to make a controversial product appear virtuous.
For many students, this language feels familiar. It mirrors the rhetoric of corporate advertising, NGO campaigns, and mass-culture activism. It uses terms like “societal impact” and “social justice” as a shield against accountability. And it draws on the 21st-century habit of giving technology anthropomorphic qualities. This framing makes AI feel like an extension of the world we already know and have, to varying degrees, willingly embraced.
Steeped in the mores of the 21st century, the language of AI conceals its extraordinary nature. In turn, that familiarity makes it easy for AI to slip into the background. The result is a troubling complacency, where students subconsciously accept AI as just another feature of the institutional and cultural landscape they inhabit.
However, AI should not be accepted as just another routine technological advancement of our lifetime. Other technologies sought to replace aging ones with an elevated experience—the cellphone enhancing communication, or social media connecting people. AI’s gimmick, on the other hand, is to replicate human thought.
The third reason is our reverence for institutional prestige.
Columbia students—though they rarely admit it these days—still take pride in belonging to a well-known and accomplished institution. They have come to relish Columbia’s status as a research powerhouse, a pride that has taken on a more pitifully protective tone in light of Trump’s attacks on research funding.
Now, Big Tech and academia are locked in an “AI arms race,” both collaborating with and competing against each other. Columbia, as a research university, is capitalizing on and contributing to this race, producing advances in AI—one of the few things it still does well. Students treat this as one of the last vestiges of our crumbling prestige. They may keep a long list of grievances against the administration, but Columbia’s ability to produce groundbreaking research and attract brilliant minds lets them cling to that fragment of Columbia they imagined when they applied.
While I regard the AI Initiatives and Columbia’s role in the Empire AI consortium with apprehension and distrust, many of my peers see these efforts as the purest expression of our institutional identity.
The Columbia I am describing—driven only by career advancement, accepting of social norms with dubious morals, and blindly loyal to prestige in whatever form it takes—should alarm any student. But it should also strike us as conspicuously incomplete. I do not mean to suggest that this is all Columbia students can amount to. Rather, it is precisely the opposite. Artificial intelligence seizes on the weakest parts of ourselves—vices we may tolerate for the sake of other ends, like securing an economically stable life—and weaponizes them to corrupt us, our community, and what it means to be a Columbian. To put it another way, it is the little leaven which leavens the whole lump.
So to answer my own confusion: it’s not that students have or have not forgone the liberal arts. That itself is a misdirected inquiry. It’s that AI illuminates our existing vulnerabilities while saying nothing of our strengths or convictions.
Our collective complacency may be troubling, but it is not irreversible. We can and should push back on the hegemony of AI optimism by expressing our concerns and collectively inquiring into both the positives and negatives of AI. This would send a clear message that the critical conversation is not over. It would also allow us to keep sharpening our skills in dialogue and discussion—which is essential if one ever wants to truly “live the Core.”
No single corner of Columbia should monopolize the conversation on AI. Generative AI is, in some sense, our generation’s television, computer, and social media. We have the right to meet it not with blind deference, but with skepticism.
Ms. Chaudhry is a senior at Columbia College studying history. She is a deputy editor for Sundial.
The opinions expressed in this article are solely those of the author and do not necessarily reflect the views of the Sundial editorial board or any other members of the staff.
For those interested in submitting a response to this article, please contact us at columbia.sundial@gmail.com.