1002. This week, Mignon talks with author Martha Brockenbrough about the good and bad sides of using artificial intelligence for writing and education, including ethical concerns about using AI-generated content, strategies for teaching writing in the age of AI, and the potential effects on teachers' jobs.
Martha's new book, "Future Tense": https://us.macmillan.com/books/9781250765925/futuretense
Martha's website: https://martha-brockenbrough.squarespace.com/
Martha on Instagram: https://www.instagram.com/marthabee/
| Edited transcript with links: https://grammar-girl.simplecast.com/episodes/martha/transcript
| Please take our advertising survey. It helps! https://podsurvey.com/GRAMMAR
| Grammarpalooza (Get texts from Mignon!): https://joinsubtext.com/grammar or text "hello" to (917) 540-0876.
| Subscribe to the newsletter for regular updates.
| Watch my LinkedIn Learning writing courses.
| HOST: Mignon Fogarty
| VOICEMAIL: 833-214-GIRL (833-214-4475).
| Grammar Girl is part of the Quick and Dirty Tips podcast network.
| Theme music by Catherine Rannus.
| Grammar Girl Social Media Links: YouTube. TikTok. Facebook. Instagram. LinkedIn. Threads. Bluesky. Mastodon.
MIGNON: Hey, it's Mignon. Just a quick note before we get started. The audio version of this interview is edited because there was a lot of cat noise, but you can find the complete interview on YouTube because cat noise is fine when you can see the cat, but not so much when you can't. Enjoy the show.
MIGNON: Grammar Girl here. I'm Mignon Fogarty, and today we are going to talk about what I think is one of the most important issues of our time when it comes to writing, which is artificial intelligence.
And we are so lucky to have Martha Brockenbrough here with us today because she is … you'll recognize her name because she was the founder of National Grammar Day.
We mention her, you know, every year, but she has gone on to do so many other things. She's written more than 20 books — fiction, nonfiction, picture books, you know, for young adults and children. And she has a new book out called "Future Tense: How We Made Artificial Intelligence and How It Will Change Everything." And it's fabulous. It's a book for young adults, but I read the whole thing.
I thought it was great for such a complicated topic.
I appreciated the level that it was at.
So, Martha, thank you so much for being here today.
MARTHA: Thanks so much for having me. It's great seeing you again.
MIGNON: I know you too. It's been a while. I mean, I see you online all the time. You know, I know writers who use and love AI.
And I know other writers who will, you know, virtually spit on you if you mention using AI.
MARTHA: Yes, so let's start with how those tools are trained. The ones that generate text or generate images, those have all been trained on stolen material.
There is no way to use those to generate anything and have it be ethical. And I'm sorry that that's the case. But that's just what it is.
There are other kinds of AI that writers can use that are not trained on stolen material and that are not developed with stolen material so that people don't have to pay for those skills.
And it's funny because Marc Andreessen, who co-founded Netscape, one of the first internet browsers, said, "Well, if we have to pay for that, then, you know, we will make much less profit." And yes, boo-hoo, Marc Andreessen, if you have to pay for things, you make less profit.
And that is how those of us who live in the real world and have to pay for our pens and pencils and all of our internet service providers, you know, that's just how business works. You don't get to say, "Well, we could make a lot of profit with the stolen stuff."
So if you are using it to generate text that you plan to go on using, just know that you are putting other writers and yourself out of business. Now, for AI that doesn't do this, there's a tool that I think is very interesting called Authors.ai. You can upload a completed novel manuscript, and it will look at the underlying mathematical relationships in your text.
So novels are patterns, you know — there's pacing, there's the overall shape of the story, there's the sentence length, there's the word choices, all of these extremely complex patterns that go into the creation of a novel — and you can look at the underlying math. And so you can see the chart of the pacing of your novel. And I think that's interesting.
It doesn't fix the pacing for you. You have to do it yourself. But it's like having a friend who's really good at pacing say, "Hey, this part at around page, you know, 30 is very saggy." And you can fix that.
I do think there are some ethical ways to use the large language model generators. So ChatGPT and Bard and that. And that is when, for example, let's say you're working on a novel. And you type a synopsis of the novel and say, help me find the "Save the Cat" beats or the hero's journey beats. And those are sections of a novel or points in a story that are meaningful. And it can help you do that in a not, you know, it's pretty good at it. Because it knows what the hero's journey is, having stolen it. It knows what "Save the Cat" is, having stolen it.
And so it can help you, you know, be insightful that way. There's another ethical use. My daughter is dyslexic. And so when she writes something — one of the things that can be hard for people with dyslexia is all of the proofreading and the mechanics, the things that make the fans of Grammar Girl go wild.
She puts her text into the large language model and asks it to correct for grammar and usage. And it will do that for her. And it's pretty good. If you don't have a problem with spell check or grammar check in your word processing software, then, you know, this is essentially the same thing. But for people like my daughter, it's a little bit better.
MIGNON: Yeah, so I'm just a little confused. So, you know, you talk about it being unethical to use it for writing because it's built on stolen material, but it's all built on stolen material.
So if it's ethical to use it for, you know, fancy spell check, grammar check, you know, why wouldn't it be ethical to use it for writing?
MARTHA: Okay. So generating writing is different from asking someone to proofread your paper. It's like it's the difference between buying a paper.
Remember when you could do that in college, buy an essay?
MIGNON: I actually heard that ChatGPT is putting all those companies that sell essays out of business.
MARTHA: Right, right. You're not going to be able to sell essays anymore. And so that was never ethical. That was never okay. And, I suppose, if you say, "Hey, I used ChatGPT to write this sonnet for you," or to, you know, write this marketing copy — if you've disclosed it, okay, it's ethical.
In terms of stealing — so the copies of the text that Authors.ai used, those are all legally obtained.
Because, you know, if I go to the library, if I type up something, I can do that. It's legal. It's not prohibited in the use of the book.
What's different about the generative tools is that they are taking the training and they are generating new material from it.
So you could say, all right, Julia Quinn, author of the Bridgertons — brilliant romance novelist, fan of grammar. You could train something strictly on her style and be able to generate in the style of Julia Quinn. That's not what her books are sold to do. They're, you know, they are sold to be read and enjoyed. You can study them as a writer, but to then be able to take that and create something in place of a person, that feels pretty sketchy to me.
MIGNON: Yeah, I hate that idea, like whether it's AI or not.
MARTHA: It's one thing, you know, writers learn to write by reading very often.
And that's where, you know, many of us have internalized rules of language and grammar from reading because that's how our brains work. Our brains are extraordinary pattern recognition engines. And, you know, what is grammar but a description of the patterns of language? And so we're really good at recognizing these patterns. All artists and writers learn by copying the patterns that they've seen, but they're also putting in their own human effort there.
And, you know, we know that plagiarism is bad. You know, certainly you can't be a Harvard professor, or president, rather, with even the mildest versions of unattributed work when the mob comes for you. But, you know, there's a difference between training yourself and having the output, you know, inform just your work versus, you know, being able to produce something without the labor, because machines did it. It's just a very different kind of process.
MIGNON: I mean, I know editors who are worried about the kind of thing that your daughter is doing. That that sort of tool is going to put editors out of work.
But actually, talking about plagiarism is a super transition into another question that we had from a Grammarpaloozian — the people who support the show are called Grammarpaloozians.
A Grammarpaloozian named Linda said that she works at a school, and a lot of the teachers at her school are very concerned that students will just use AI — ChatGPT and the like — to produce their work instead of writing it themselves. And I know that the AI detectors that a lot of schools are using are terrible and are flagging things that aren't AI.
You know, what are your thoughts on how to teach students writing when they have this very tempting tool right there that can do the work for them?
MARTHA: It's such a great question. And teachers are so good at figuring out solutions to things.
You know, if I were still teaching high school, I would have my students write in class and hand it in at the end. Also, one of the amazing things that ChatGPT does is generate nonsense and hogwash that seems real. And so let the students, you know, go through something that's been written by one of these tools and find the errors in it. Find the hallucinations.
I love this name for it. So when something is entirely made up, like a court case citation, several lawyers have tried to use ChatGPT to generate documents to present to judges and have found oops, there was no such case that ChatGPT mentioned.
And so I would absolutely have students read these things and correct them and figure out, like, how do they know that the large language model has given them good information because, you know, there's lots of false stuff that is on the internet to begin with. And then, you know, as long as this court case sounds like the pattern of other court cases, it's going to be perfectly acceptable in the non-sentient mind of ChatGPT, but in the real world, absolutely not.
And this is one of the most powerful things and one of the most important things teachers can be working on is showing students you cannot trust what comes from these sources.
MIGNON: Right. So teachers can have their students write in class a lot, obviously, but they can't always do that. So do you have any advice on how to motivate students? I guess you have to motivate them to want to do the work themselves, because they're always going to be able to turn to this tool.
And we're probably never going to be able to detect if they have. So what's the message to give to students to motivate them to do their own work?
MARTHA: It's a really good question, and each student is different, and many of them are very motivated.
One of the problems that we have is putting so much pressure on students that they feel that they can get better results if they don't actually do the work themselves. So parents, reduce the pressure on your kids, you know, teachers too. Nobody likes to perform when they're under that kind of pressure all the time.
Second thing is how to make it interesting to students. You know, what's not interesting is — well, this is my own pet peeve about how we teach history and how we teach, you know, the art of reading fiction. Nobody writes books … nobody writes stories so that someday some poor child might have to write a five-paragraph essay about them.
And likewise, you know, nonfiction is meant to, you know, inform you, to fill you with a sense of wonder. Those are really good emotional experiences for kids. And if we can make the response as interesting emotionally as the source material is, then, I think we're going to be in really good shape.
And so have students write letters to characters or have them come up with advertising slogans for technology that was invented in the past that calls out, you know, that … things that demonstrate their understanding, but don't feel like the grind that we so often make our kids endure.
MIGNON: And what about using AI as a tool for writing, as a partner to, you know, evaluate points you've missed or to brainstorm topic ideas? Or, you know, I mean, I think everyone agrees it's not good for them to just generate their essays out of whole cloth.
But I think that this, whether you like it or not, this technology is here. It's going to be in the workplace. It's going to be in our lives, like you said, like electricity.
So what about teaching them to sort of use it in a way that isn't plagiarism or unethical? Like, however you want to describe it, we can agree that it's bad, but it's still going to be there, and they're going to have to use it in the future, I think.
MARTHA: Oh, absolutely. In the same way, remember, you know, going to the library to research things and using the card catalog or going to, what was it, like there was that, I can't even remember what it was, like this, um, there were…
MIGNON: Microfiche?
MARTHA: There was microfiche, but also there was a, um, there were these books that had, you know, summaries of everything that had been in magazines and you could go see, oh, oh, you know, anyway, it used to be a lot harder to get information.
And we've certainly, the internet has made it way easier. And we've taught students how to evaluate the sources. So really what we're wanting to do is scaffold these human beings, as they develop skills that they're going to depend on for their lives. Writing and critical thinking, those are the most important academic skills there are. And so if they're using it to refine their points or what did I miss, that's totally appropriate scaffolding. Where it's not is when students stop thinking.
And so the whole goal of all of these things is to get our students to use their minds and to be able to focus and make connections and from their observations and the connections they've made, make powerful arguments that will tilt whatever corner of the world they're in.
MIGNON: Yeah. And then I think, you know, to finish up — this gets to Linda's question too. A line that I highlighted as I was reading "Future Tense" was about teachers losing their jobs to AI. And you seem especially concerned about that. Can you talk about, sort of, what you think the risks are?
MARTHA: I'm absolutely concerned about that. We have teachers who are wildly underpaid and grossly overworked. And so, you know, what is the solution that capitalism is going to come up with for that? Well, it's to reduce their numbers. And, you know, every sentence in this book was written in the context of how does capitalism plus AI affect humanity? It's a very, very bad combination.
Capitalism says profit is good. And the number one way to boost profits is to cut your workforce or make people do more with less. And so this is absolutely going to affect teachers. It's going to affect our students.
There are going to be ways that AI could be useful in classrooms, you know, for quick assessments of understanding. You know, things like math — there are steps to math problems, and if you don't do the steps in the right order, you're not going to get them. And so if you have little tools that help kids, you know, nudge them until they have internalized the patterns of those steps and are proficient, sure.
But I am extremely concerned that the very important job of teaching is going to be hard hit by these technologies that people say will make everything so much better. And they really don't. Human beings learn when there are human connections being made, when they feel safe, when they feel valued, when they feel interested. And I don't see that putting our children in front of a screen is going to accomplish any of that.
MIGNON: Right. Yeah. You know, I posted something the other day about … there was an AI tutor that was launched, and it gave kids the wrong answers. Not all the time. It was 98, 99%, you know, like up there in accuracy, but it occasionally gave wrong answers, and everyone was making fun of it. I was joining in on the making fun of it.
But some people came back and said, "Well, okay, teachers make mistakes too." So, neither of them are perfect.
And wouldn't the world be great if every student had a dedicated tutor that was always there to answer questions, you know, that these AI tutors would actually make things so much better? And it was an interesting point.
I'm just not sure, you know. Honestly, I'm not sure how I feel about that. Like, what do you think about that?
MARTHA: I would flip it. I would hate to have a tutor who answers questions. I used to be a high school journalism teacher. And one of the reasons that I quit was that the people who ran the school wanted me as the advisor to have the last say over what went in the newspaper. And I said, as soon as you give me the last say, the students are going to stop thinking. And I am here to support their thinking and to then help them through the consequences of their thinking and their writing.
So, if you've got a tutor and a student is asking a question and the tutor answers it, what you have taught the student is that the tutor is the source of answers. And we definitely don't want that. Imagine — I mean, let's, you know, whip out our tools of fiction. You've got a tutor that is deliberately misinforming a student, teaching them things like there was no moon landing, you know, the shadows are wrong. You know, it could happen.
We also have a large population of people who think that doing your research means googling it. Research, especially scientific research.
MIGNON: Google's got a lot worse lately too.
MARTHA: Oh, absolutely. Absolutely.
But, you know, it's, that's not research. And so if you have a tutor who's asking questions, then it becomes more interesting. All of a sudden we've got a little Socratic mentor, you know, our digital Socrates, iSocrates, who's there. And that could be interesting.
But again, you know, do kids need instant feedback, or is part of the struggle in not knowing and having to wrestle with it and having to deal with the emotional discomfort of uncertainty? We cannot optimize every human process because in the end, we are all made of meat. We are all these cells, and they work in a certain way.
And the best way to make a new neural connection is through play. It takes repetition. And neural connections are made more quickly through play than other forms. And so if we really want to help kids learn, and if we really want to help ourselves have more joy in life, the question is how can we be more playful? So if these tools are gamifying things, I'm interested. If they are just answering questions, I am horrified.
MIGNON: You know, I think that that sums up how I feel about AI. I am both very interested and very horrified at times.
So I think that's a good place to end.
Thank you so much, Martha, for being here.
The book again is "Future Tense: How We Made Artificial Intelligence and How It Will Change Everything." And I will tell you, it is a great book. If you want to get up to speed on both the history and where things currently are with AI, this book is clear and expansive. So I highly recommend it if you're looking to get up to speed on AI.
Martha, you also do school and library visits and host writers retreats. I saw that on your website. So where's the best place for people to find you?
MARTHA: MarthaBrockenbrough.com is a good place. But if you want to see cat pictures, I'm on Twitter, sorry, on Instagram as marthabee. Marthabee.
MIGNON: Well, thanks so much again for being here, marthabee.
MARTHA: Thank you. It's so good to see you again and have a great day.
MIGNON: You too.