The First Principles Approach
A reflection on the value of college
What if I’m wrong?
I’ve been asking myself this question a lot recently. My time at Columbia has led me to challenge one of my strongest-held beliefs: the indubitable importance of higher education.
My argument is not that college is useless; it’s a time of significant growth in which young adults learn how to be independent and begin to envision the life they want to live. I am also fully aware that the value of college is often realized outside of the four years we spend on campus, in the form of a rich alumni network and name-brand recognition. However, while a college education has been a prerequisite for a “successful” life for decades, I no longer believe this is true. Accumulating more information through the university system does not seem to prepare students for a world in which virtually all information is available on the internet.
This wasn’t a conclusion I reached overnight. I, too, was once an eager student, anticipating with great excitement the things I would learn within the ivory towers of academia. However, much to my chagrin, these idealistic images faded rather quickly as I progressed further into my college career.
My skepticism began with the challenges I faced in college golf. From the age of 12, I was taught the McCord-O’Grady Research and Development (MORAD) system, a method of swinging the club developed by former PGA Tour player Mac O’Grady that addresses the perceived shortcomings of Homer Kelley’s The Golfing Machine, a highly technical, physics-based analysis of the golf swing that emphasizes the geometry and alignment of movement patterns. Little is known about the MORAD project, either on the internet or in mainstream golf instruction, and for good reason: Students of the MORAD school of golf are said to have the best-looking and most biomechanically efficient swings, so those in possession of this proprietary knowledge are highly protective of it. Unless one has the good fortune to receive instruction from O’Grady or, as I did, from someone who studied under him directly, there is no way to learn the MORAD swing in its entirety.
Throughout my junior golf days, I worked tirelessly to perfect my swing alignments, spending hours on the range meticulously following the drills my coach prescribed. However, as every economics major knows, few things escape the effects of diminishing marginal utility; by the end of high school, the 100th minor change that brought me even closer to the MORAD ideal did not improve my ball striking or score by much, if at all.
Thus, when my coach insisted that I “correct” part of my swing—despite the fact that I was striping the ball—I asked him, “Why am I making changes if I am hitting and playing well?”
His response was jarring: “We need to get the look of your swing right before you can learn how to score.”
I was confused. Yes, the MORAD system seems to produce a distinctive look among its adherents (see O’Grady himself, Robert Rock, and Grant Waite as examples). However, I didn’t realize there was an exact formula to shooting lower scores; if that were true, how did players notorious for their unorthodox swings like Scottie Scheffler, Matthew Wolff, and Jim Furyk succeed?
Despite these nagging doubts, I decided to trust my coach. After all, he was the expert, the authority figure, and knew what was best. If he believed that having a flat left wrist at P4 (the top of the swing) or maintaining a 6° head tilt at setup was the key to shooting lower scores because it achieved the “proper look,” then I would as well.
This mindset would prove to be disastrous. By the time I entered college, I had ingrained the habit of solving problems in my game by doing what had worked in the past or by relying on my coach’s instructions because he was the “expert.”
I continued to prioritize improving my swing alignments because I believed that was the way to get better; I read Columbia Business School professor Mark Broadie’s book Every Shot Counts—which revolutionized the way golf statistics are calculated—and, through my own research and efforts, gained 10-15 yards on all my clubs during the winter of my freshman year. In every sense, I was doing what was “right”: I followed instructions, made decisions based on well-researched data, and worked to achieve a certain goal.
However, the results suggested otherwise: seven of my eleven rounds in my freshman spring season were in the 80s, marking the worst stretch of golf I had ever played. I felt an incredible amount of bitterness and resentment over these results, but, as is the nature of golf, I had no one to blame but myself. What went wrong? Not only did I do as I was told, but I also went above and beyond to improve my game. Was it wrong to expect progress from such efforts?
Up until that point, I had believed that hard work would always pay off. When that fundamental paradigm was disrupted, it caused me to question everything I had ever been taught.
This experience was my introduction to first principles thinking.
First principles thinking is a reasoning method that involves breaking down complex ideas, problems, or processes into their most basic, fundamental truths (their “first principles”) and then building solutions from the ground up. Instead of approaching a problem through reasoning by analogy—which relies on assumptions, traditions, best practices, or comparisons to what already exists—you start with what you know to be undeniably true and reason up from there.
Using this approach, I began with the most basic question: What is golf?
Golf is a game in which you try to hit a little white ball into the hole in as few strokes as possible. There. In one question, I had completely undermined the way I had been taught the game. Nowhere is the goal of attaining the “best” or “prettiest” swing a requirement to shoot low scores: It is simply a means to an end, which may or may not work for different players. Similarly, when it came to gaining distance, I had overlooked the most basic building block of research: the data. Simply put, the majority of the data used in Every Shot Counts is based on men’s golf, not women’s golf. Because of the vast differences between how men’s and women’s golf is set up, the insights in the book may not be directly applicable to the women’s game.
The realization that my approach to golf was fundamentally flawed—after years of practicing a certain way—was terribly unsettling. How could I have let myself spend so much time chasing things that were irrelevant to the actual goal? Naturally, I began to wonder how many other assumptions I held were categorically misguided.
My musings led me back to education.
The nature of school is such that we are taught what is known and what has worked in the past. We are rewarded for being “right,” as defined by our textbooks, curricula, and teachers. But who is to say they know definitively what “right” is? Our tests require us to know the way things are, but little opportunity is granted to us to consider why they are. It is implied that the material we’re being taught is the best and most complete idea in each respective field—but what if it isn’t? Through all this, we are constantly conditioned to think in terms of analogies, to trust what has worked in the past, rather than starting from the most basic principles of a problem and reasoning up from there.
This is not an indictment of elementary, middle, or high school. We are incentivized to follow instructions and do as we’re told from a very young age because some structure must exist for kids to learn how to read and do basic arithmetic. However, this system fosters a tendency toward blind trust, a trait that manifests most clearly at the university level in the form of our faith in the institution to provide us with something of value.
As I freed myself from the chains of total adherence to authority and reasoning by analogy in my golf game, I realized the same had to be done for my education.
In Principles of Economics, we learn that the Federal Reserve controls inflation by changing interest rates—but were we ever taught why the Federal Reserve exists in the first place?
In Intermediate Macroeconomics, we learn about the Solow-Swan model to explain long-run macroeconomic growth, and in Intermediate Microeconomics, we learn the foundational utility maximization problem. Putting aside the many assumptions made in these models, my question is this: Is math truly the appropriate language to express the study of economics? I cannot be alone in thinking that much is omitted when we try to apply an objective medium like math to an extremely complex social science like economics.
In Frontiers of Science, we learn the scientific method—hypothesis, experiment, and falsification—and treat it as “the path” to advancement; it is a stable, objective procedure that rises above cultural bias and personal opinion. But consider Thomas Kuhn’s The Structure of Scientific Revolutions, in which he argues that science does not progress linearly. Instead, it periodically undergoes sudden, disruptive shifts in which entire frameworks for understanding the world are discarded and replaced. If the most consequential moments in scientific history were not the product of careful hypothesis-testing but of entire paradigm shifts, what exactly are we being trained to do when we learn the scientific method?
By no means am I an expert in any of these fields; however, I don’t think a lack of expertise should bar anyone from challenging widely held ideas. My issue, then, with the “education” we are receiving is not that there is an active agenda to stifle questions, but that there is no incentive or desire to ask them. Education, by definition, is “the knowledge and development resulting from the process of learning or being taught.” But is the acquisition of knowledge a productive activity when computers already house more knowledge than any human being could retain? It may have been in the past, but continuing to emphasize it is a mistake when the differentiating factor it provides is no longer clear. A more worthwhile pursuit would be to develop the ability to reason through the practice of asking questions; that way, with information readily available at our fingertips, we are equipped to tackle even the most uncertain situations.
It might seem bold to critique findings that transformed the entire golf industry or question the foundations upon which an entire discipline is built. However, when you play a sport like golf, where you lose and fail constantly—even Tiger Woods has lost more times than he has won—it teaches you to leave no stone unturned in the pursuit of improvement.
Regardless of ideological affiliations, it is essential to maintain a constant, healthy skepticism, not only of the world around us but of ourselves. Institutions remain stagnant amid a world that changes far faster than we realize. To think that adding an AI minor will genuinely prepare students for a society where AI improves at blinding speed shows that Columbia is still applying slow, rigid systems to new, increasingly complex problems. Why wait for the University to teach us when we can learn anything right at our fingertips?
Ultimately, the value of our education comes down to whether we choose to ask questions and get to the truth of a matter, regardless of who and what we are questioning. Sundial has allowed me to put this into practice over the last two and a half years, shaping me into the person I am today. For that, I am grateful.
Ms. Shen is a senior at Columbia College studying financial economics and computer science. She is an editor-at-large for Sundial.
The opinions expressed in this article are solely those of the author and do not necessarily reflect the views of the Sundial editorial board as a whole or any other members of the staff.