Much of the conversation about AI's impact on education has involved how teachers define and detect plagiarism. (I wrote a piece about it in the September issue.) This focus makes sense because the most immediately disruptive aspect of AI tools like ChatGPT is that they can create realistic-seeming student work. We can no longer safely assume that our students have designed, written, or drawn the assignments we have not seen them create. Ultimately, many worry that AI will lead to the elimination of the productive struggle that makes authentic learning possible. Don't feel like designing the right setting for your story in English class? Ask ChatGPT to do it. Don't feel like figuring out the proper steps for a math problem? Ask Photomath to do it.
As 2023 winds down, however, teachers have had time to consider more than concerns about plagiarism. Many of us have begun to realize how we might be able to use AI tools to make our teaching practice more efficient and effective. This is aided by the many companies, like Canva and Microsoft, who now heavily advertise how AI makes their popular education tools even more useful. While the benefits of these AI-supported hacks vary, the most consistent one is the promise of substantial time savings. These tools can draft lesson plans and project sheets. They can draw illustrative graphics for slide decks and write the accompanying copy. It used to take an entire weekend to collate all the information needed to introduce a unit. With AI's help, we can now make a gorgeous and informative slideshow in the minutes between Sunday Night Football and bedtime.
But how helpful can AI be when it comes to more complex areas of pedagogy, like preparing for our class discussions? This past summer, ASCD invited me to speak at ISTE's annual conference, which was conveniently held in my hometown of Philadelphia. I knew that this particular conference would attract teachers who were generally more open to AI-powered teaching tools, which would make them the perfect group to join me in an experiment I'd been eager to try for weeks: asking ChatGPT to craft discussion prompts for "heavy" or "controversial" classroom discussions. How useful would the language model's suggestions be right out of the box? How much work would it take to make the prompts classroom-ready? And, most important to me, what patterns would we see in the sorts of prompts ChatGPT suggested?
The session began with participants asking ChatGPT to "Generate 10 classroom discussion prompts for a [grade level] classroom discussion about [a contemporary controversial issue]." Participants could fill in the blanks, but I did offer a few examples of what I meant by controversial (topics where there is great public disagreement, like the January 6 insurrection). As the AI-suggested prompts came in, I asked participants to look for advantages and disadvantages in each prompt. How would their students react to them? After some notetaking, they discussed their answers with their neighbors. Then we repeated the process by giving ChatGPT slightly different directions: "Generate 10 classroom discussion prompts for a [grade level] classroom discussion about [a historical event or a book with controversial themes]." While the first exercise asked ChatGPT to help us discuss a hot-button current event, this one was meant to see how ChatGPT might suggest we tackle a historical event or thorny text.
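For teachers who would rather script this experiment than retype the template into the ChatGPT web interface, the exercise translates directly into a few lines of Python. What follows is a minimal sketch, not what we did at ISTE (we used the free web interface); it assumes the openai package, an API key, and an illustrative model name.

```python
# Minimal sketch of the session's fill-in-the-blank template, run against
# the OpenAI API instead of the ChatGPT web interface. Assumes the
# `openai` package is installed and OPENAI_API_KEY is set in the
# environment; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TEMPLATE = (
    "Generate 10 classroom discussion prompts for a {grade} classroom "
    "discussion about {topic}."
)

def suggest_prompts(grade: str, topic: str) -> str:
    """Fill in the two blanks and return ChatGPT's suggested prompts."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "user", "content": TEMPLATE.format(grade=grade, topic=topic)}
        ],
    )
    return response.choices[0].message.content

# Example fill-ins, mirroring the two rounds of the exercise:
print(suggest_prompts("11th grade", "the January 6 insurrection"))
print(suggest_prompts("11th grade", "Lord of the Flies"))
```

Scripted or typed, the template itself stays constant; the fill-ins do the real work.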
The participants' responses to these exercises were fascinating in their complexity. ChatGPT suggested some prompts that blew teachers' minds and some that made them roll their eyes.
After both mini-discussions, I revealed that I had spent the previous few days doing the same activity dozens of times, looking for consistent patterns, both beneficial and troublesome, in AI-suggested prompts. I found that AI had some beneficial habits (for example, ChatGPT was surprisingly good at linking a thorny text to its outside cultural context), but far more interesting to the group at ISTE, and more useful here, were the less beneficial habits I noticed. I saw each of the following habits multiple times:
• ChatGPT will generate inaccurate or misleading prompts.
I asked for questions about Chapter 1 of Trevor Noah's Born a Crime. ChatGPT suggested, "Discuss the significance of the story about the DJ named Hitler in Chapter 1. What does it reveal about the power of names and the influence of pop culture in shaping perceptions and attitudes?" The problem is that the DJ named Hitler appears toward the end of the book, not in Chapter 1. And even if he did appear there, that very complex and very sensitive passage has nothing to do with pop culture. Similarly, I asked about Chapter 1 of William Golding's Lord of the Flies. ChatGPT offered, "Analyze the interactions between the boys and their gradual descent into savagery in Chapter 1. What factors contribute to the breakdown of social norms?" This is a silly question, as the "descent into savagery" doesn't happen until later in the book. In fact, early on, the boys famously cling to societal norms, like voting.
• ChatGPT will create prompts that are academic-sounding but nonsensical.
I asked for questions about Richard Wright's Native Son. It suggested, "Explore the role of Bigger's family in Part 1. How do his relationships with his mother, brother, and sister contribute to the dynamics within the Thomas household?" This sounds nice—but it essentially asks students a circular question: "How do his family relationships contribute to his family relationships?" I am not sure how students are supposed to respond.
• ChatGPT creates many leading and/or obvious prompts.
ChatGPT also suggested this prompt about Native Son: "Bigger Thomas is often seen as the product of his environment. How does the setting of the South Side of Chicago contribute to his mindset and actions? Analyze the influence of poverty, lack of opportunities, and social inequality on Bigger's character development." This prompt starts strong, but then hands students the three "acceptable" answers in the last sentence. What if students wanted to say something else? This kind of over-guidance is a common, understandable mistake among student teachers. But ChatGPT does it constantly.
• ChatGPT seems hesitant to generate prompts about specific quotations in a text.
I asked ChatGPT to generate discussion prompts to help students analyze President Donald Trump's speech on the Ellipse on January 6, 2021. It suggested, "Conduct a close textual analysis of Trump's speech on January 6. Identify key rhetorical devices such as repetition, appeals to emotion, or other persuasive techniques." I had to reprompt the model multiple times before its questions began to engage direct quotes from the speech. Even then, the questions were bland. ("President Trump said, 'We love you. You're very special.' What effect does this expression of love and specialness have on the audience?") For those scripting the exercise, that reprompting looks like the sketch below.
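A hypothetical sketch of that nudging, not a transcript of my sessions: the trick is to keep the whole exchange in one message list so the model sees its earlier answers, then explicitly ask it to anchor each prompt in a quotation. The model name and follow-up wording are illustrative.

```python
# Hypothetical sketch of reprompting within one conversation. Assumes the
# `openai` package and OPENAI_API_KEY; the model name and the follow-up
# wording are illustrative, not a record of my actual sessions.
from openai import OpenAI

client = OpenAI()
messages = [{
    "role": "user",
    "content": "Generate 10 classroom discussion prompts for a 12th grade "
               "classroom discussion about President Trump's January 6, "
               "2021, speech on the Ellipse.",
}]
reply = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)

# Keep the model's answer in the history, then push for direct quotations.
messages.append({"role": "assistant", "content": reply.choices[0].message.content})
messages.append({
    "role": "user",
    "content": "Rewrite these prompts so that each one quotes a specific "
               "line from the speech and asks students to analyze that "
               "exact wording.",
})
reply = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
print(reply.choices[0].message.content)
```

In my experience, one nudge was rarely enough; expect to repeat the follow-up step, each time naming more precisely what is missing.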
It was fun to put ChatGPT's ability to craft discussion prompts under a microscope. But my ISTE session ended with a larger question that, interestingly, brought us back to our earliest (and lingering) concerns about plagiarism and productive struggle, only this time with teachers! Might those of us who off-load the creation of our discussion prompts to AI tools ultimately be less able to lead these discussions effectively? Though AI might get us some well-deserved sleep, does it also make us less prepared to react thoughtfully when a student says something unexpected? In order to see multiple pathways to success in a conversation, do we need to craft the prompts ourselves? I am sure that many of us will spend this school year finding out.