November 1, 2024 • Vol. 82 • No. 3

"Hey Alexa, Show Us How You Work"

Credit: Alexey Yaremenko / iStock
Decades ago, a now-famous study demonstrated how readily people stop thinking for themselves and conform to what others say. When shown lines of clearly differing lengths, one-third of test subjects accepted that the lines were the same length when other members of their group unanimously said they were. Researchers recently replicated these findings with children surrounded by a new kind of peer group: humanoid robots. When a robot sitting with a child gave the wrong answer about line lengths, the child was swayed to give the wrong answer, too.
Such findings suggest kids may be ill-equipped to resist erroneous information coming from the artificial intelligence (AI) that grows ever more human-like and ubiquitous (in toys, wearables, smartphones, and search engines) every day.
For students to more thoughtfully and skeptically engage with AI, some research suggests they need to better understand its mechanics. A study presented by Druga and Ko suggests that after students learn programming and coding skills, they become more adept users of smart devices. The researchers observed students interacting with “smart agents” (in this case, robots and Amazon’s Alexa) in after-school programs. Initially, the children appeared to anthropomorphize these devices, agreeing that they “will remember me,” “like me,” and are “smarter than me”—and making such comments as, “I think he [the robot] cares about me because when I ask him something he listens . . .” and conversely, “she did not listen” when Alexa didn’t play a requested song.
Next, the children learned to program AI for themselves using a platform called Cognimates, through which they programmed computers to learn the difference between funny and serious messages; images of hands showing rock, paper, or scissors; and drawings of unicorns versus narwhals. Each task required children to work together to test and retrain AI to generate correct responses.
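The train-and-test loop the children worked through can be sketched in a few lines of code. Cognimates itself is a block-based visual platform, so the following is only an illustrative stand-in, not its actual implementation: a toy word-count classifier, with hypothetical training messages in the spirit of the funny-versus-serious task, showing that the program only "knows" what its training examples teach it.

```python
from collections import Counter

def train(examples):
    """Count how often each word appears under each label."""
    counts = {}
    for text, label in examples:
        counts.setdefault(label, Counter()).update(text.lower().split())
    return counts

def classify(counts, text):
    """Pick the label whose training words best match the message."""
    words = text.lower().split()
    scores = {label: sum(c[w] for w in words) for label, c in counts.items()}
    return max(scores, key=scores.get)

# Hypothetical training data; the real task used messages the children chose.
training = [
    ("knock knock joke haha", "funny"),
    ("that clown is so silly haha", "funny"),
    ("please submit your homework today", "serious"),
    ("the meeting starts at noon", "serious"),
]

model = train(training)
print(classify(model, "another silly joke"))     # overlaps "funny" examples
print(classify(model, "homework is due today"))  # overlaps "serious" examples
```

A message unlike anything in the training set gets an arbitrary answer, which is exactly the lesson the children drew: the system is only as smart as what it was taught.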
Afterward, the researchers tested students again on their perceptions of smart agents and found a marked shift, including a new awareness that these devices are only as smart as their programming. In the words of a 6-year-old, “At first, I didn’t really know that computers got taught. I thought computers, once they were invented, knew stuff.”
“Engaging children in programming with AI leads many children to replace conceptions of smart agents as intelligent with new conceptions of smart agents as fallible but helpful,” explain the researchers. Just because AI says something is true doesn’t make it so. And that’s a worthwhile lesson to learn—for children and adults alike.
End Notes
1. Asch, S. E. (1951). Effects of group pressure upon the modification and distortion of judgments. In H. Guetzkow (Ed.), Groups, leadership, and men (pp. 177–190). Carnegie Press.
2. Vollmer, A.-L., Read, R., Trippas, D., & Belpaeme, T. (2018). Children conform, adults resist: A robot group induced peer pressure on normative social conformity. Science Robotics, 3(21).
3. Druga, S., & Ko, A. J. (2021, June). How do children's perceptions of machine intelligence change when training and coding smart programs? In Proceedings of the 20th Annual ACM Interaction Design and Children Conference (pp. 49–61).