February 1, 2025 | Vol. 82, No. 5

AI and Ethics: What School Leaders Need to Know

Five must-follow recommendations for navigating the complexities of AI in your building.


Leadership | Technology
Illustration: a woman leader walks through a maze of symbols related to digital technology and security. (Credit: Shutterstock AI / Shutterstock)
As generative artificial intelligence (GenAI) emerges in education, the ethical significance of actions—not just words—has become increasingly vital. As activist and reformer Jane Addams asserted more than a century ago, “Action indeed is the sole medium of the expression of ethics” (1907, p. 273). Addams reminds us that while written principles defining right and wrong, fair and unfair, and acceptable and unacceptable are important, it is ultimately our actions as educators and school leaders that matter most. These actions will shape how ethically AI is used in schools and beyond.

Shaping AI in Schools

The actions of school leaders will help decide whether GenAI promotes productive and just outcomes in education or harmful and destructive results. As teacher educators and technology researchers, we have spent more than a decade critically examining, researching, and using current and emerging technologies, including GenAI tools. Drawing on that work, we offer five action-based recommendations for addressing key ethical issues related to GenAI in K–12 settings while engaging teachers and students in thinking critically and making informed decisions about using GenAI for teaching and learning.

1. Read the technology’s privacy policy to uncover how it collects, uses, shares, and sells user data.

GenAI technologies collect a substantial amount of user data, from user interactions to device information to geolocation data. What is collected and how it is used differs for every GenAI tool, so it is essential that you as an educational leader, along with teachers and students, read the privacy policy of any digital tool, app, or technology before using it. When reading a privacy policy, there are several things you can do to spot red flags (the short script after this list sketches one way to automate the scan):
  • Search (e.g., Ctrl+F) for the word “children” to see if the tool bans certain age groups. For instance, OpenAI’s privacy policy states that children under 13 are not allowed to use ChatGPT, because knowingly collecting data from this age group would violate the Children’s Online Privacy Protection Act (COPPA).
  • Check the date of the privacy policy. A policy updated within the past few years suggests the company actively maintains its privacy practices.
  • Look for terms like “sell,” “affiliates,” “partners,” and “third-party” to see how user data is sold and shared with others (Hunter, 2022).
  • Find any mention of “control” to see what control users are given over their data (The Verge, 2018).
  • Search for hedging terms that allow for exceptions, like “such as” or “may,” which give companies significant flexibility in data collection and sharing practices (Hunter, 2022; The Verge, 2018).
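To speed up this kind of review, the keyword scan can be partially automated. Below is a minimal sketch in Python, assuming the policy has been saved as a local text file; the term list, file name, and function are our illustrative choices, not part of any official tool, and no script substitutes for actually reading the policy.

```python
# A minimal sketch: count red-flag terms in a privacy policy saved as
# plain text. The term list and file name are illustrative assumptions.
import re

RED_FLAG_TERMS = [
    "children",                                       # age restrictions (e.g., COPPA)
    "sell", "affiliates", "partners", "third-party",  # data sharing and sales
    "control",                                        # user control over data
    "such as", "may",                                 # hedging language that allows exceptions
]

def flag_terms(policy_text: str) -> dict[str, int]:
    """Count case-insensitive, whole-word occurrences of each red-flag term."""
    counts = {}
    for term in RED_FLAG_TERMS:
        # Word boundaries keep "may" from matching inside "maybe."
        pattern = re.compile(r"\b" + re.escape(term) + r"\b", re.IGNORECASE)
        counts[term] = len(pattern.findall(policy_text))
    return counts

if __name__ == "__main__":
    with open("privacy_policy.txt", encoding="utf-8") as f:
        text = f.read()
    for term, count in flag_terms(text).items():
        print(f"{term!r}: {count} occurrence(s)")
```

A high count for any term is not proof of a problem; it simply points you to the passages worth reading closely.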
If you find that a privacy policy has a lot of red flags, it may be worth paying for an upgraded license (Microsoft Copilot’s business license, for example, offers more privacy protections than the free version) or carefully weighing the pros and cons of using the tool: do its benefits outweigh the cost of giving up your students’ data?
It is also important to examine how the company collects, sells, and shares user content data, meaning any data that a user uploads, inputs, or creates with the GenAI tool. This type of data collection can be especially problematic in educational settings. If a teacher uploads a spreadsheet of student names and grades to a GenAI chatbot and prompts it to write feedback emails to each student, this could violate the Family Educational Rights and Privacy Act (FERPA), because the teacher would be handing students’ educational records to the GenAI company without permission. Or if a student inputs sensitive information (such as text about a personal trauma they experienced) into ChatGPT, that data is also collected by OpenAI.
As a school leader, you can emphasize the importance of reading privacy policies before using any GenAI technologies and encourage teachers and students to collaboratively discuss whether the use of a GenAI tool is worth giving up their personal data.

2. Do not accept what GenAI tools produce at face value.

GenAI chatbots are not new and improved search engines. Rather than searching the web and returning links to sources based on keywords, GenAI technologies derive their responses by predicting which letters and words go together to make the most plausible, human-sounding response (Furze, 2024). As a result, GenAI chatbots can and do make errors, producing misinformation and, more often than many people realize, fabricating information outright in what are known as “hallucinations.”
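To see the difference between plausible and accurate, consider the deliberately tiny sketch below: a toy model that continues text with whatever word most often follows in its training data. It is nothing like the scale or architecture of a real GenAI chatbot, but it shows the same failure mode in miniature: statistically likely output that is fluent and wrong.

```python
# A toy "language model" (illustrative only): it continues a prompt by
# choosing the word that most often follows in its training data.
# It optimizes for plausibility, not truth.
from collections import Counter, defaultdict

training_text = (
    "the capital of france is paris . "
    "the capital of france is paris . "
    "the capital of australia is canberra ."
)

# Build a bigram table: which word follows which, and how often.
follows = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def continue_text(prompt: str, num_words: int = 2) -> str:
    out = prompt.lower().split()
    for _ in range(num_words):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        # Pick the statistically most common next word -- plausible, not verified.
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

print(continue_text("the capital of australia"))
# Prints "the capital of australia is paris" -- fluent and wrong, because
# "paris" follows "is" more often than "canberra" in the training data.
```

Real models are vastly more sophisticated, but they, too, generate plausible continuations rather than verified facts.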
OpenAI (2024) recently launched a new benchmark, SimpleQA, to measure the accuracy of large language models; the company found that even its best model, o1-preview, correctly answered fact-seeking questions only 42.7 percent of the time, and its smaller model, GPT-4o mini, was correct only 8.6 percent of the time. OpenAI’s (2024) Terms of Use even state that “you should not rely on Output from our Services as a sole source of truth or factual information, or as a substitute for professional advice” (para. 20).
Given that GenAI technologies are known for making up information, you might encourage school community members to work together to establish shared practices for when it might be helpful to use these tools (e.g., for brainstorming ideas) and when it might be harmful to use these tools (e.g., searching for factually accurate information). You could also support teachers and students in developing their information literacy and critical media literacy skills so they can assess and critique the output of GenAI tools.
As a school leader, you can help your school community understand how these tools work and encourage students and teachers to become critical consumers of the content GenAI tools produce.

3. Critically examine the biased input and output of GenAI technologies.

GenAI technologies consistently reproduce cultural, racial, gender, language, and ability/disability biases. Technologies like ChatGPT and DALL-E have been trained on data scraped from the internet, and when AI tools like this are trained without attention to embedded biases, you get what Alex Hanna, director of research at the Distributed AI Research Institute, describes as a “deeply flawed process with harmful results for marginalized individuals and groups” (Lazaro, 2023, para. 7).
Gender roles and sexism are woven into the fabric of GenAI technologies. When asked to write a story about a boy and a girl choosing college majors, ChatGPT-4 had the boy pursuing engineering while the girl chose teaching, in part because she doubted she could handle the math and science demands of the other program of study (Equality Now, 2023). A recent UNESCO (2024) report indicated that the most popular GenAI chatbots “can reinforce stereotypes, biases, and violence against women and girls” (p. 5), which can amplify gender discrimination and threaten human rights.
Racial bias is deeply entwined in GenAI systems, resulting in unfair and harmful representations of the histories, stories, and experiences of Black Americans, Native Americans, Latinos/Latinas, Asian American/Pacific Islanders, and other marginalized groups. A recent study found that when GenAI chatbots make hypothetical decisions about people based only on how they speak, they exhibit “covert stereotypes that are more negative than any human stereotypes about African Americans ever experimentally recorded” (Hofmann et al., 2024).
There is also language and culture bias within GenAI technologies. Some 7,000 languages are spoken in the world today, but AI systems are trained almost exclusively on materials written in Standard English and just a few other languages, including Spanish, French, Mandarin, and German (Ta & Lee, 2023). This means that GenAI technologies produce responses based on Standard English representations of the world, leaving out diverse cultural narratives, heritage, and content (Rettberg, 2024). Rettberg (2022) noted that ChatGPT is “multilingual, but monocultural.”
As a school leader, you can guide teachers and students in understanding how biases come not just from the information in datasets and the way algorithms function, but also from the inequalities built into the organizational systems and cultures that create AI technologies. You could do this by offering professional development workshops that engage school community members in uncovering the racial, gender, language, and other biases in AI systems and society; by engaging school librarians in critically examining materials generated by AI; and by supporting teachers and students in forming critical analysis teams to uncover the diverse histories that AI systems fail to present.

4. Develop specific and transparent guidelines about GenAI use.

GenAI technologies have sparked concerns that students will use these tools to do their schoolwork for them. Even though researchers from Stanford University found that, at least so far, GenAI chatbots have not created any marked increases in cheating among students (Spector, 2023), there remains a widespread fear that these tools could lead to a decline in students’ critical thinking, communication, and problem-solving skills.
Students can use any GenAI chatbot (e.g., ChatGPT, Gemini, Copilot, Claude) to write text for them, and it is hard for teachers to reliably tell whether something was written by a student or by GenAI. AI-powered writing tools like Grammarly can autocomplete sentences, rewrite entire paragraphs, and even write a full draft. Microsoft and Google are embedding AI into their Office and Workspace suites, which will allow students to generate text in a Word document with the click of a button.
Some educators have turned to GenAI text detectors to distinguish between human- and AI-generated text, but these tools have proven to be notoriously unreliable, and especially problematic for non-native English speakers (Liang et al., 2023). Text detector tools often have disclaimers about their use. For example, the Sapling AI Content Detector homepage states: “No current AI content detector (including Sapling’s) should be used as a standalone check to determine whether text is AI-generated or written by a human. False positives and false negatives will regularly occur” (2024).
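A quick back-of-the-envelope calculation shows why those false positives matter in practice. The detection rates below are illustrative assumptions, not measurements of any real product; the point is the base-rate effect: when most student work is human-written, even a seemingly accurate detector wrongly flags many honest students.

```python
# A Bayes'-rule sketch with assumed, illustrative rates -- not measurements
# of any real detector. It asks: if the detector flags a submission, what
# is the chance the submission really is AI-written?

def prob_ai_given_flag(p_ai: float, true_positive_rate: float,
                       false_positive_rate: float) -> float:
    """P(text is AI-written | detector flags it), via Bayes' rule."""
    flagged_ai = true_positive_rate * p_ai          # AI text correctly flagged
    flagged_human = false_positive_rate * (1 - p_ai)  # human text wrongly flagged
    return flagged_ai / (flagged_ai + flagged_human)

# Suppose (illustratively) 10% of submissions are AI-written, and the
# detector catches 90% of AI text while wrongly flagging 5% of human text.
p = prob_ai_given_flag(p_ai=0.10, true_positive_rate=0.90,
                       false_positive_rate=0.05)
print(f"Chance a flagged submission is actually AI-written: {p:.0%}")  # ~67%
```

Under these assumed rates, roughly one in three flagged submissions would be a false accusation, which is why a detector score should never be treated as standalone evidence.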
While nearly every school and district has an academic integrity or honesty policy, very few of these policies have been updated for the age of AI. (However, some early-adopter schools can serve as models; CRPE has curated a database of these schools at https://bit.ly/earlyaiadoptersdatabase.)
Teachers and students need classroom- and school-specific GenAI policies that indicate what technologies are allowed, when they can be used, and why these technologies can be used or not used (Trust, 2024). The “why” is very important to students: Simply writing, “GenAI chatbots cannot be used for summarizing readings,” is not nearly as influential or informative as, “Using GenAI chatbots to summarize readings can negatively impact your learning by taking away opportunities to critically examine the ideas of others before you formulate your own.”
When you draft your policies, consider how to create specific yet flexible guidelines that encourage ethical, responsible, and legal use of GenAI technologies. Flexibility is key: outright bans of GenAI technologies in schools can disproportionately affect students who rely on these tools for communication (such as disabled students and non-native English speakers) and students who have no opportunities to use these tools at home (thus exacerbating the digital divide).

5. Investigate the societal and environmental impacts of GenAI.

GenAI tools present complex ethical questions around the societal and environmental impacts of their use in educational settings. Many popular GenAI technologies were trained on copyrighted text and media without permission from, or compensation to, artists and authors (Chesterman, 2024), and they were built with exploited labor from the Global South (Perrigo, 2023) and the free labor of users. These tools are also energy-guzzling machines that place heavy demands on the power grid in the United States.
Considering the environmental impact and intellectual property concerns associated with GenAI image creators, should students and teachers be using these tools to generate visuals when millions of free Creative Commons and public domain images are already available? Do students feel comfortable using a GenAI tool that profits from their free labor and from exploited labor in other countries? How might students reduce the environmental impact of using GenAI tools?
Each of these societal and environmental considerations offers opportunities for school leaders, teachers, and students, working together, to critically engage with broader issues, consider the impact of GenAI tools on the world, and make informed decisions about the use of GenAI technologies.

Moving Forward with GenAI

The five action-based recommendations shared here can aid school leaders in moving from abstract ethical considerations to practical approaches to GenAI use in their schools. These approaches allow educators and students to do what they do well (teach and learn) and GenAI systems to do what they do well (support teaching and learning). Our AI and Ethics slide deck and sample AI Syllabus Policy Statement can serve as resources to propel deeper thinking and learning about the ethics of AI. As all of us create, analyze, invent, understand, express, and imagine ethically with GenAI, we can propel education forward in the digital age.
Editor’s note: The prompt for the AI-generated illustration that accompanies this article was: “A Black woman school principal walks through a stylized maze. The maze is constructed from symbols—lock icons, data symbols, key motifs, circuits, and other symbols representing AI security, privacy, and ethics. Each wall and pathway has a textured, grainy look with gradients of bright greens, soft blues, and warm yellows, giving a contemporary yet friendly feel. The figure appears thoughtful but determined, navigating with ease toward an exit lit in a gentle, welcoming light. The piece captures the challenge of ethical decision-making in AI, framed as an approachable, engaging adventure. Art style is a vector illustration with warm textures, rich gradients, and bright, approachable colors.”
References

Addams, J. (1907). Democracy and social ethics. The Macmillan Company.

Chesterman, S. (2024). Good models borrow, great models steal: Intellectual property rights and generative AI. Policy and Society, 1–15.

Equality Now. (2023, March 23). ChatGPT-4 reinforces sexist stereotypes by stating a girl cannot “handle technicalities and numbers” in engineering.

Furze, L. (2024, May 27). Don’t use GenAI to grade student work. Leonfurze.com.

Hofmann, V., Kalluri, P. R., Jurafsky, D., & King, S. (2024, August). AI generates covertly racist decisions about people based on their dialect. Nature, 633.

Hunter, T. (2022, July 1). How to skim a privacy policy to spot red flags. The Washington Post.

Lazaro, G. (2023, May 17). Understanding gender and racial bias in AI. Harvard Leadership Initiative Social Impact Review.

Liang, W., Yuksekgonul, M., Mao, Y., Wu, E., & Zou, J. (2023). GPT detectors are biased against non-native English writers. Patterns, 4(7).

OpenAI. (2024, October 23). Terms of use. Retrieved from https://openai.com/policies/row-terms-of-use/

OpenAI. (2024, October 30). Introducing SimpleQA. Retrieved from https://openai.com

Perrigo, B. (2023, January 18). Exclusive: OpenAI used Kenyan workers on less than $2 per hour to make ChatGPT less toxic. TIME.

Rettberg, J. W. (2022, December 6). ChatGPT is multilingual but monocultural, and it’s learning your values. jill/txt.

Rettberg, J. W. (2024, January 16). How generative AI endangers cultural narratives. Issues in Science and Technology, 40(2), 77–79.

Sapling. (2024). Sapling AI detector. Retrieved from https://sapling.ai/ai-content-detector

Spector, C. (2023, October 31). What do AI chatbots really mean for students and cheating? Research Stories, Stanford Graduate School of Education.

Ta, R., & Lee, N. T. (2023, October 24). How language gaps constrain generative AI development. Brookings.

The Verge. (2018, June 25). How to read privacy policies like a lawyer [Video]. YouTube. https://www.youtube.com/watch?v=zZkY3MLBGh8

Trust, T. (2024). Five tips for writing academic integrity statements in the age of AI. Faculty Focus.

UNESCO. (2024). Challenging systematic prejudices: An investigation into bias against women and girls in large language models.

Torrey Trust, Ph.D., is a professor of learning technology in the College of Education at the University of Massachusetts Amherst. Her work centers on the critical examination of the relationship between teaching, learning, and technology, and on how technology can support teachers in designing contexts that enhance student learning. Dr. Trust has received the University of Massachusetts Amherst Distinguished Teaching Award (2023), the College of Education Outstanding Teaching Award (2020), and the ISTE Making IT Happen Award (2018), which “honors outstanding educators and leaders who demonstrate extraordinary commitment, leadership, courage and persistence in improving digital learning opportunities for students.”
