In Patrick Maloney’s classroom at Strath Haven High School in Wallingford, Pennsylvania, students can now write a for loop in seconds. Just type the request into ChatGPT, and the AI will spit out working code. Which raises the question: If the machine can do the assignment, what’s the point of teaching the basics?
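For the curious, the kind of answer a chatbot returns for a prompt like “write a for loop that sums the numbers 1 through 10” looks something like this (the snippet below is an illustration, not a transcript of any particular ChatGPT session):

```python
# Illustrative chatbot-style answer to "sum the numbers 1 through 10 with a for loop"
total = 0
for n in range(1, 11):  # range(1, 11) yields 1, 2, ..., 10; the stop value is exclusive
    total += n
print(total)  # 55
```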
Mr Maloney has an answer, reported by Lavanya Dixit in the school’s student newspaper, that echoes across classrooms and corporate offices alike: “Anything that I teach, which is all very introductory in the grand scheme of computer science, AI could probably do. However, as someone in computer science, it’s not whether or not you can do it; it’s whether or not you can actually interpret it, because AI is not perfect.”
He compares the moment to math education a generation ago. “You spend your first years of school mentally learning how to multiply, how to divide, and then you can use a calculator to save you from doing that,” he explains. “Maybe all of the stuff that we’re learning in this class will be pushed to having AI do it, but if you don’t actually understand what it’s doing, then you’re not going to be a computer scientist.”
The High School View: Opportunity and Temptation
Students agree that AI is both helpful and risky. One junior at the school, who took AP Computer Science Principles, warns against using AI as a shortcut. “Just trying to get the quick and easy answer from ChatGPT instead of using it to learn [about coding] is useless,” he says. “Use it to your benefit by actually trying to understand the code rather than just getting an easy, quick answer.”
That distinction between outsourcing the work and using AI as a learning partner is already shaping policy higher up the education ladder.
Universities Move from “Ban or Allow” to “How and When”
At elite universities, the conversation has shifted away from blanket bans. Stanford University’s Teaching Commons now provides strategies for integrating AI responsibly, encouraging professors to clarify when AI tools can be used and how to cite them.
Harvard University’s Faculty of Arts and Sciences has advised instructors to establish their own policies on AI, while encouraging them to specify precisely what aspects of learning AI should and should not support.
The University of California, Berkeley, has even published example syllabus statements that range from outright prohibitions to conditional permissions.
Harvard’s flagship introductory course, CS50, has gone further by building an AI assistant into the class. According to a paper presented at an Association for Computing Machinery conference, the assistant can provide explanations, hints, and style suggestions, like an always-available teaching aide. It’s designed not to give away full solutions but to help students think through their problems.
Underlying all of these policies is the same principle Mr Maloney voiced at Strath Haven: AI can take over the mechanics of syntax, but students still need fluency in algorithms, data structures, debugging, and systems.
These skills predict long-term success in computer science and allow students to check and correct AI output when it inevitably makes mistakes.
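What does checking AI output look like in practice? A hypothetical example, invented for illustration rather than taken from any real assistant, shows how a plausible snippet can hide a mistake that only someone who understands iteration will catch:

```python
# Hypothetical AI-suggested helper: it runs without error,
# but the loop bound quietly skips the last element of the list.
def find_index(items, target):
    for i in range(len(items) - 1):  # bug: len(items) - 1 leaves out the final index
        if items[i] == target:
            return i
    return -1

# A student fluent in iteration spots the off-by-one and fixes it.
def find_index_fixed(items, target):
    for i in range(len(items)):  # covers every index, 0 through len(items) - 1
        if items[i] == target:
            return i
    return -1

print(find_index([3, 7, 9], 9))        # -1: the buggy version silently misses the match
print(find_index_fixed([3, 7, 9], 9))  # 2: the corrected version finds it
```

The broken version runs, raises no errors, and looks right, which is exactly why syntax alone isn’t enough.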
The Workplace: AI as Colleague, Not Replacement
The professional world is going through the same balancing act. A 2023 randomized controlled trial run by GitHub and researchers at MIT found that developers with access to GitHub Copilot completed a coding task nearly 56 percent faster than those without the tool.
Meanwhile, a 2024 Google Cloud report based on a survey of 5,000 IT and business professionals found that developers now spend about two hours a day using AI coding tools. The report noted that while AI reduces time spent on rote coding, it increases the need for skills in architecture, testing, and integration (the kinds of tasks that require human judgment and collaboration).
Yet companies like Microsoft and Meta have not simply replaced coders with AI. Microsoft’s layoffs in 2024 and 2025 were tied largely to restructuring after the acquisition of Activision and to refocusing resources toward artificial intelligence research, not to AI displacing programmers wholesale. Meta’s 2023 “Year of Efficiency” reduced more than 10,000 jobs, but at the same time, the company continued to hire aggressively for top-tier AI research teams.
Broader economic data supports this interpretation. In September 2025, the Federal Reserve Bank of New York published a survey of service firms in which only about 1 percent reported layoffs directly caused by AI in the prior six months.
That number was down from 10 percent in an earlier survey, though 13 percent of firms expected AI-related layoffs in the next six months. In other words, AI is beginning to reshape workflows, but so far it hasn’t eliminated the need for programmers.
The Fundamentals Still Matter
Industry leaders and faculty agree that the fundamentals remain predictive of success. Google’s report emphasized that skills in debugging, testing, and system architecture are more valuable than ever.
University syllabi stress that students who understand core computer science concepts are better able to direct AI effectively, correct its errors, and use it responsibly.
That aligns with Mr Maloney’s classroom experience. AI can find missing semicolons and generate boilerplate code. But students who never learn to reason about their own code risk becoming button-pushers rather than computer scientists.
A New Kind of Literacy
The shift suggests that tomorrow’s students need two literacies:
- Classical computer science literacy — understanding how code works under the hood.
- AI literacy — knowing when and how to use copilots, how to prompt effectively, and how to evaluate machine-generated work, as in the sketch below.
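As a minimal sketch of that second literacy (both the prompt and the function here are hypothetical, standing in for whatever an assistant might return):

```python
# Hypothetical assistant output for the prompt "slugify a blog post title".
def ai_suggested_slugify(title):
    return title.lower().replace(" ", "-")

# The human half of the collaboration: probing inputs the prompt never mentioned.
print(ai_suggested_slugify("Hello World"))    # "hello-world" -- looks fine
print(ai_suggested_slugify("Hello, World!"))  # "hello,-world!" -- punctuation leaks through
```

The code “works,” but only a reader who evaluates the output, rather than just running it, notices the gap.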
For high schoolers, the advice is already clear: use AI to learn, not just to finish. For universities, it means designing policies and assignments that build both literacies. And for employers, it means hiring programmers who can collaborate with machines, not be replaced by them.
The calculator analogy remains the most powerful frame. In math, calculators didn’t end arithmetic; they made higher-level work more accessible. In computer science, AI won’t end coding. But like calculators, it will divide those who understand the foundations from those who simply press the buttons.