As of March 2026, the “AI gap” in schools is rapidly closing, but the transition from “eerie” headlines to concrete classroom policy remains a work in progress.
The video from Education Week provides a timely look at how the “AI glow” is giving way to formal policy, and at the tension between innovation and caution.
The list of AI tools in use in our schools is long and growing: AI tutors built into literacy and math programs, AI teaching assistants built into popular education technology platforms, and AI that helps administrators plan bus routes and class schedules, Education Week reports.
But educators and experts are raising serious concerns that schools may be embracing AI too quickly, given the potential harms of using AI to help educate students and manage schools.
While federal agencies have historically focused on broad guidance, the heavy lifting of policy development has shifted to state departments of education and national educational consortia.
For schools looking to provide parents and the public with a clear, safe, and effective AI policy, the following frameworks represent the current gold standard in the field.
The Maryland Framework: A State-Level Model
The Maryland State Department of Education (MSDE) introduced a comprehensive K–12 AI Guidance framework on March 11. This model is particularly useful because it breaks down AI use into specific “risk tiers,” similar to how a school might classify physical safety protocols.
- Human-in-the-Loop: The policy mandates that AI never replaces human judgment, especially in student assessments or mental health support.
- The “Intake” Process: It requires schools to vet every AI tool for compliance with data privacy laws (FERPA and COPPA) before it reaches a student’s device.
- Transparency: Schools are encouraged to maintain an “approved tool list” so parents know exactly which AI systems (like Gemini or Copilot) are being used and how their child’s data is being protected.
National Resources for Policy Development
If a school district is starting from scratch, these two organizations provide the “blueprints” currently being adopted across the country:
- CoSN (Consortium for School Networking), a professional association of district technology leaders, has published The K-12 Gen AI Maturity Tool, which helps districts assess their readiness across leadership, legal, and ethical domains.
- TeachAI.org, a coalition of education organizations, provides a “foundational policy template” that can be customized to reflect a school’s specific values and community needs.
In addition, UNESCO offers a “Human-Centered” framework that emphasizes protecting teacher autonomy and preventing algorithmic bias.
Three Pillars of a Strong School AI Policy
To move beyond the “eerie” and toward the “educational,” many districts are moving past simple “ban” or “allow” rules and instead structuring their public-facing policies around three specific pillars. These pillars, as seen in the MSDE and TeachAI frameworks, aim to balance the power of innovation with the necessity of human oversight.
Academic Integrity vs. AI Literacy
This pillar focuses on the “Green, Yellow, and Red” zones of classroom work, distinguishing between using AI as a crutch versus using it as a sophisticated tutor. Academic Integrity ensures that students are still engaging in the “productive struggle” necessary for cognitive growth, while AI Literacy teaches them how to critically evaluate the machine’s output for hallucinations or bias. For example, the St. Paul’s Schools in Maryland explicitly state in their student contracts that AI should support learning rather than replace original thought, requiring students to cite not just the tool used, but the specific prompts that led to the final product.
Data Sovereignty
Data sovereignty is the commitment to protecting a student’s “digital shadow” from being exploited for commercial gain. A strong policy must ensure that any AI tool used in the classroom complies with FERPA and COPPA standards, meaning student inputs are not used to train future versions of the global AI model. As seen in the Baltimore City Public Schools guidance, this pillar requires a rigorous vetting process for every “intake” of new software. By maintaining an approved tool list, schools ensure that a student’s early academic experiments or mistakes don’t become part of a permanent, searchable data profile that could follow them into adulthood.
The “Deepfake” Defense
As AI becomes capable of creating “eerie” and indistinguishable deepfakes of public figures or peers, schools are integrating a “Deepfake Defense” into their digital citizenship curricula. This pillar moves beyond technology and into the realm of ethics and safety, teaching students how to identify AI-generated misinformation and the legal consequences of creating unauthorized likenesses. Frameworks like those from TeachAI and CoSN encourage schools to use these “uncanny” technological moments as teachable points about consent and the importance of verifying information through “lateral reading” across multiple reputable news sources.