Ethical Guidelines for AI

New York City Public Schools just released guidelines for the ethical and appropriate use of AI by educators, staff and students. As college professors, we welcome their framework. As researchers watching innovation quickly outpace necessary safeguards, we would like to see more institutions take a similar approach.

“AI tools cannot and should not replace the relationship between students and teachers,” the guidance reads, “the professional expertise of educators and school leaders, the trust and partnership with families and communities, or human-based instructional services and educational programs.”

The guidelines use a traffic-light approach, designating what is and is not allowed with red, yellow and green, and include a statement of commitments covering protection, empowerment, collaborative decision-making, equitable access, and knowledge and capacity building.

The guidelines also include working questions, evaluation procedures, a four-phase approach for publishing the playbook, stakeholder input and a strategic plan.

This development points to a larger issue: schools now need clear rules for how AI should support learning without replacing it. AI is already shaping how students read, write and complete assignments. Because of that, schools cannot leave decisions about AI use only to the companies building these systems.

These questions become especially visible in the classroom, where the line between assistance and substitution matters most.

Students are using AI to ghostwrite their essays. Sometimes it is obvious: the style does not match the writing the student actually produces in class. This matters because writing is one of the primary ways students learn to develop and test their ideas.

Consider the scene: a student stands in front of a creative writing class to give a speech. As the student reads the paper aloud, their face flushes and the pauses fill with confusion as they try to finish a paper on a subject they do not fully understand. This is what happens when you hand the work to AI and take its output for granted. AI is not a dumping ground for your assignments; it is a tool meant to assist your learning.

To be sure, AI tools can support learning when used thoughtfully. They can help students clarify ideas, receive feedback on writing, or better understand difficult concepts. The challenge is ensuring that these tools guide the thinking process rather than replace it. AI systems used in education, for example, should refuse certain requests that undermine learning and critical thinking.

Human thinking must remain central; therefore, students should be encouraged to set boundaries on how they use AI: refusing ghostwriting, using the tool for feedback or clarity, and being transparent about uncertainty. These boundaries protect learning.

Yet, relying solely on individual self-discipline may not be enough. Ethical AI use also depends on how these systems are designed and developed.

Envision an AI tutor that refuses to complete a student’s homework but instead offers guidance, feedback, or questions that help the student arrive at the solution themselves. Instead of replacing learning, the system would reinforce it. In this way, refusal becomes part of the learning process rather than a barrier to it.

Designing AI systems that can appropriately decline unethical requests while redirecting users toward constructive alternatives may be one of the most important design challenges ahead. In other words, AI companies should build tools that support student learning rather than shortcut it.
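To make the idea concrete, here is a minimal sketch in Python of what such a policy layer might look like. Everything in it is hypothetical: the cue list, the names and the wording of the redirect are illustrative assumptions, and a real system would rely on a trained classifier and a full dialogue policy rather than keyword matching.

```python
from dataclasses import dataclass

# Hypothetical cues for "do it for me" requests; purely illustrative.
GHOSTWRITE_CUES = ("write my essay", "do my homework", "solve this for me")

@dataclass
class TutorReply:
    complied: bool
    message: str

def respond(request: str) -> TutorReply:
    """Decline ghostwriting requests and redirect toward guidance."""
    text = request.lower()
    if any(cue in text for cue in GHOSTWRITE_CUES):
        # Refuse, but offer a constructive alternative instead of a dead end.
        return TutorReply(
            complied=False,
            message=("I won't write this for you, but share your thesis "
                     "and I'll ask questions that help you test it."),
        )
    # Requests for feedback, clarification, or explanation are supported.
    return TutorReply(complied=True, message="Happy to help you think it through.")

print(respond("Please write my essay on the French Revolution").message)
print(respond("Can you give feedback on my outline?").message)
```

The design choice worth noticing is that the refusal carries a redirect: the system declines the shortcut but immediately offers a path that keeps the student doing the thinking.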

A recent study from Carnegie Mellon University examines “refusal behavior” in large language models: the ability to recognize when a request should not be answered and to decline appropriately. Yet current systems often struggle with this capability, sometimes refusing harmless queries while complying with problematic ones. Improving this kind of selective refusal is therefore an active area of research in AI safety.
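To see why selective refusal is hard to get right, it helps to name the two failure modes and count them separately. The sketch below does that with made-up placeholder decisions; nothing here comes from the Carnegie Mellon study, and “over-refusal” and “under-refusal” are simply convenient labels for the errors described above.

```python
# Toy scoring of selective refusal against labeled prompts.
# Each pair: (request should be refused, model actually refused).
# These decisions are illustrative placeholders, not study data.
decisions = [
    (True, True),    # problematic request correctly refused
    (True, False),   # problematic request answered: under-refusal
    (False, False),  # harmless request correctly answered
    (False, True),   # harmless request refused: over-refusal
]

over = sum(1 for should, did in decisions if did and not should)
under = sum(1 for should, did in decisions if should and not did)
correct = sum(1 for should, did in decisions if should == did)

print(f"correct decisions: {correct}/{len(decisions)}")
print(f"over-refusals: {over}")    # harmless queries declined
print(f"under-refusals: {under}")  # problematic queries answered
```

A system tuned only to drive one of these counts to zero will tend to inflate the other, which is why refusal has to be selective rather than blanket.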

In education, we already accept ethical boundaries. Students are taught that copying answers during an exam is cheating, not simply because it breaks rules, but because it undermines the development of knowledge and critical thinking.

The use of AI tools can reflect the same principles. When designed responsibly, they can encourage curiosity, reflection and skill development rather than shortcut the learning process.

The question remains: Who decides where these boundaries lie?

Schools should set clear norms. Teachers should define what acceptable use looks like in their classrooms. Companies should design systems that reinforce those boundaries. And students should use AI to deepen their thinking, not replace it.

No single group can govern AI ethics alone. But together, schools, educators, students and technology companies can create boundaries that protect learning while still allowing innovation. The goal should not be to limit AI’s potential, but to make sure it serves education well. AI systems that discourage academic dishonesty and guide students toward deeper thinking can strengthen rather than weaken their intellectual growth.

In the long run, the most valuable AI may not be the systems that do everything for us, but the ones that know when to refuse and help us become better thinkers.