
How We Talk About Large Language Models (And Why I'm Paying Attention)

I once overheard two elderly women chatting (over their New Year’s Day collard greens and black-eyed peas at the K&W #IYKYK) when one of them said, “you can’t swing a dead cat around here without hitting a hot take,” which I thought was incredible as I associate the phrase “hot take” with influencer culture. In fact, I was so taken aback by this statement that I found myself checking my ageism, as I definitely did not expect either of these Southern ladies to be fluent in influencer speak. However, as the conversation went on, I realized that I had misheard her. Instead of hot takes, what she had actually been lamenting was an inability to swing a deceased feline “…without hitting a hotcake,” which, frankly, raised a whole different set of questions.


If you're not familiar with the Southern expression, “you can’t swing a dead cat without hitting _____,” it's an absolutely absurd way to express an overabundance of something. And not only do I absolutely love it, I use it all the time. Indeed, I’ve been thinking about it a lot lately in the context of Large Language Models (LLMs), or generative AI. For a while now, it’s been tough to have a conversation, both in and out of education circles, without hitting a chatbot squarely in its em dash. While, a couple of years ago, those conversations would have centered on things like cheating and academic integrity, more recently these discussions have focused as much on current events as they do on ethical quandaries (not that those two things aren't deeply connected).


In one recent conversation, a friend asked me about the way Twitter users have been asking Grok (Twitter’s embedded chatbot) to generate harmful, non-consensual altered images of women and girls. If you haven’t seen this story, the short version is simply that users on Twitter were prompting the bot to first “undress” photos of real people, including minors, and then share those images publicly. My friend, who is an educator but who doesn’t use Twitter or Grok, wondered if LLMs were really capable of “undressing” people in photos.


Another recent conversation, with a friend outside of education, revolved around the tragic killing of Renée Good in Minneapolis by a masked ICE agent. Again, if you’re not familiar with the story, here’s the short version: after the shooting, social media was flooded with LLM-generated attempts to “unmask” the ICE agent involved. In this case, the resulting images varied so widely that people online were quick to point out that LLMs are not capable of unmasking, or indeed revealing anything, in the images they generate. During this conversation, my friend wondered why a tool like Grok seemed to be able to undress someone but not unmask someone else.


The answer, of course, is that these tools can’t do either of those things. And the fact that some of us think they can points to (at least) two problems, one being a misunderstanding of how these tools work. So, let’s break that down first:


Despite the fact that we often talk about what Large Language Models produce as “creations,” that’s not accurate. Generative AI cannot “create” anything. Only humans can create things. What it can do is use data to make predictions. These systems are trained on enormous amounts of data, much of it scraped from the internet, often without consent. When a human prompts a Large Language Model to generate text, an image or even a video, the technology isn’t creating something new or, in the case of the requests above, revealing a hidden truth. Rather, it is making a statistical prediction about what the requested thing should look like, based on patterns it has identified across its training data. (Although a bit dense, this paper does a good job of explaining this in greater detail).
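
If it helps to see the "prediction, not creation" idea in miniature, here is a deliberately tiny sketch in Python. To be clear, this is not how production LLMs are built (they use neural networks trained on billions of examples, not a word-counting loop), and the little training text and the `predict_next` helper are made up purely for illustration. But the core move is the same one described above: find patterns in training data, then generate output by predicting what is statistically likely to come next.

```python
# A toy illustration of "prediction from patterns" (NOT how real LLMs work).
# We count which word tends to follow which in some training text, then
# "generate" a continuation by repeatedly picking the most frequent next word.
from collections import defaultdict, Counter

# Hypothetical, tiny stand-in for "data scraped from the internet."
training_text = (
    "the cat sat on the mat the cat chased the dog "
    "the dog sat on the porch"
)

# Count how often each word follows each other word.
next_word_counts = defaultdict(Counter)
words = training_text.split()
for current, following in zip(words, words[1:]):
    next_word_counts[current][following] += 1

def predict_next(word: str) -> str:
    """Return the word that most often followed `word` in the training data."""
    if word not in next_word_counts:
        return "<unknown>"
    return next_word_counts[word].most_common(1)[0][0]

# "Generate" a short continuation, one predicted word at a time.
word = "the"
generated = [word]
for _ in range(5):
    word = predict_next(word)
    generated.append(word)

print(" ".join(generated))  # e.g. "the cat sat on the cat"
```

Even this toy version makes the point: nothing is being imagined, understood, or revealed. The output is stitched together from patterns that already exist in whatever data the system was trained on.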


In the case of women and children being “undressed” by Grok, the system analyzes visible details in the image, things like body shape, skin tone, age, and hair color. It then predicts what that person might look like without clothing, drawing on training data from the internet. What’s more, because the internet is chock-full of pornographic images, those predictions can appear realistic. And, depending on the emotions we attach to them, they can feel real, too. To be very clear, these images cause serious harm. Still, it's important to remember that they are not actual photographs of those people without clothes. They are fabricated predictions.


The same is true of the so-called unmasked ICE agent images. Those are also predictions. The difference is that the Large Language Model has far less relevant data to draw from in this case. There are fewer examples of unmasked ICE agents online and even less reliable information about the specific individual under that mask. That’s why those images are inconsistent and contradictory.


In both of those conversations, something else happened that made me curious. Once we’d covered the particulars of each incident, both of my friends began chatting about the ways they use LLMs in their own lives and work. As they did so, I noticed some patterns. Both of my friends started referring to their chosen model as him or her. One even gave their chatbot a name. They described having “conversations” with the LLM, even arguing with it or apologizing for having to cut conversations off. Both of my friends wondered at the efficiency of these tools, describing the speed at which they “created” things as magical.


Not unlike Carrie Bradshaw, I couldn’t help but wonder: was this habit of projecting human traits onto systems that don’t think, don’t understand, and don’t have goals or intentions of their own something I’d noticed before? (Spoiler: yes.) And, if so, how does anthropomorphizing technology in this way change how we use it, how we trust it, how we recommend it, and, in some cases, how we cede our power to it?


Category Is: Chatbot Realness

I’m guessing most people reading this already know that anthropomorphism is the habit of assigning human traits, emotions, goals, or agency to nonhuman things. This personification is nothing new. Many of us, at one time or another, have named our cars or other appliances that we rely on frequently. I often blame tech glitches on the tool having simply given up (I mean... who could blame it?). And of course, literature (for kids and adults!) is bursting with examples of talking animals, sentient machines, and other self-actualized technology. This very human inclination to think about nonhuman things as being more like us makes sense. We know that humans (even those of us who identify as introverts!) are social creatures who make sense of the world through story and connection. When we encounter something complex or unfamiliar, especially something that appears to respond to us when prompted or that helps us accomplish specific goals, our brains apply familiar framing to help us better understand it.


Large Language Models are especially good at triggering this instinct. (And I'm not the only one who thinks so. This paper more deeply explores this phenomenon). Not only do LLMs respond in natural language, they also remember context within a conversation. They adjust tone. They apologize. They explain themselves. They engage in sycophancy, complimenting and even agreeing with us, all while adapting their language to ours and mirroring our own phrasing. And we respond in kind. We include please and thank you in our prompts. We praise products that meet our expectations and we express our disappointment with those that don’t. Shoot! Even referring to these tools as “intelligent” or “smart” shifts our focus away from their technical function and toward an attribute that we value.


All of that makes our interactions with these tools feel like connection, despite the fact that there is no understanding, no belief, no goal, and no awareness behind the chatbot. There is only pattern recognition and prediction. Still, the pull toward anthropomorphism is strong, and this is by design, because the more we trust the tech, the more likely we are to use it and recommend it. And the less likely we are to question or criticize it.


My friend, John Spencer, theorizes that gender plays a significant role here as well. (And he's not the only one! This paper thoroughly examines these ideas). In a recent conversation, John reminded me that many of the most visible AI tools were introduced and sold to the public as virtual assistants. John went on to point out that, historically, assistant roles (both in life and in the media) have been filled by women. It’s no accident that the first versions of Siri and Alexa were programmed to be helpful and responsive while speaking in soft, accommodating voices. Again, this matters because gendered framing shapes trust. It shapes expectations. It shapes how comfortable people feel issuing commands, relying on outputs, or even excusing errors. A coded system framed as an assistant feels less like infrastructure and more like a subordinate helper or even a partner, even though it is neither of those things.


This urge to anthropomorphize also makes erasing the labor of the individuals whose creative works were scraped to train Large Language Models much easier. When we say that an LLM “created” text, images, or videos, we aren’t simply making an inaccurate statement. We’re also sidestepping the human creators whose work made those predictions possible.


Moving from Human to Super Human

If you’ve spent any time online lately, you’ve probably noticed another emerging shorthand for Large Language Models: the sparkle emoji. ✨ Across social media, people use it both to flag the use of a Large Language Model in content and to describe a tool's features or abilities. With that in mind, at the risk of sounding like a librarian curmudgeon, bemoaning the way these young whippersnappers speak, I find this trend to be both interesting and, just a little, worrisome.


Sparkles carry cultural meaning. They signal magic and delight. They suggest that something special is happening behind the scenes, something a little mysterious and maybe even beyond human capabilities or understanding. Functionally, I believe this has the same effect as anthropomorphism.


When a tool feels magical, we are less likely to approach it with curiosity and ask how it works. We stop caring about how the proverbial sausage is made, focusing only on the outcome and its perceived benefits. We marvel at what it produces instead of questioning how it got there, what data it relied on, or whose labor made the product possible. Magic, like personhood, shifts our attention away from process and toward production. (And maybe that's the whole point, y'all? After all, in a capitalistic society what we produce is always seen as more valuable than how, or even who, produced it).


Why This Matters, Especially for the Learners Watching Us

At this point, it might be tempting to dismiss all of this as pedantic. And maybe it is, especially in the face of the other concerns people have about Large Language Models. But there’s also this: kids learn how to think about technology by watching how adults talk about it. So for those educators who are choosing to use these tools, or even for those who aren’t but know that their students are, here are some things I’ve been doing to be more intentional about my own language related to technology, and LLMs specifically.


I try to name the tool accurately.

You may have noticed that I’ve intentionally avoided the term Artificial Intelligence throughout this post. I’ve noticed that even this small change has had a profound effect on how I view these tools. They are not smart. They are not magical. They are prediction models. Naming them in that way helps signal both an understanding of what they are and of what they are not.


I try to use verbs that reflect the technical process rather than human-like agency.

Instead of saying a model “decided,” “thought,” or “created,” I try to describe what the LLM actually does. It predicts. It mimics. It generates. Again, this is a small shift, but it helps establish that we value human creativity.


I try to model curiosity.

For me, this looks like avoiding the urge to demonize the technology (because that, too, is a type of anthropomorphism) and instead approaching it with curiosity. I try to ask questions about how chatbot results are produced, whose labor might have been used, and whose voices might be left out.


I try to emphasize process over product.

As I work to better understand this technology, so that I can help others do the same, one thing I keep wondering about is the devaluing of creativity in the name of efficiency. That said, let’s be real, folks. Education has long had a problem with emphasizing product over process. But at a time when so much can be produced for us, the need to create feels more urgent than ever. And perhaps this is the most important idea for educators to chew on when considering how, or if, we engage with Large Language Models in the context of our work. In an educational system that lives and dies by assessment data, allowing young people to grapple with messy, unpredictable creativity is hard. But I wonder about the cost of not making time to help kids understand the value of ideation, iteration, failure, and of viewing that process as a marker of success, rather than whatever is produced as a result.


I'm holding onto my power.

I'm determined to remind myself that we are not living in Terminator 2: Judgment Day, the ’90s cinematic masterpiece that explores what happens when a self-aware technology tries to wipe humans off the planet, leaving a leather-clad Arnold Schwarzenegger to step in and (somehow) save humanity from its own technological apocalypse. Large Language Models are not sentient. They cannot make decisions or, indeed, do anything without a human prompting them to do so. Humans are still in control, and whatever happens next with this technology, we will be responsible for it.


Acknowledgements

My thinking about the relationship between Large Language Models, language, and power continues to evolve. I believe some folks call this learning. And I am, if nothing else, a learner. That said, the ideas I’ve shared in this post were shaped by many conversations, but there are a few people I want to shout out specifically.


Casey Fiesler doesn’t know me, but her social media posts consistently help me think carefully about the ethical implications of emerging technologies, especially as they intersect with consent, labor, and online harm. Her public scholarship has been invaluable in giving context to what can sometimes feel like abstract discussions.


Darren Hudgins has also played an important role in shaping my thinking about this work. I'm not sure anyone has done more to remind me that technology is a human-centered discipline. Human behavior drives how these tools are designed and marketed. And human behavior must be the driver of how we help learners (of all ages) engage with them.


Ken Shelton continues to challenge me to view all of this work through an equity lens. His insistence that technology decisions are never neutral, and that they always reflect values and power structures, has deeply informed how I approach conversations about technology and how we use it to support kids.


John Spencer has been especially influential in helping me think about how design choices shape behavior, particularly around anthropomorphism and the gendered framing of technology. Our conversations continue to push me to notice not just what tools do, but how they are positioned and sold to the public.


And finally, I’m writing this post from 30,000+ feet in the air as I return from working with teacher librarians in Rockwood County, Missouri. Although our work focused more on mindful media habits in an algorithm-driven information ecosystem, there is really no way to discuss information literacy in 2026 without touching on AI. I appreciate their willingness to be honest, challenge my thinking, and grapple with uncertainty. We could all learn from their example.
