"Robots doing Storytime?!" Conversation Transcript

Speakers: TSLAC CEC consultants Henry Stokes and BW with Julia Sufrin

 

Special guest Julia Sufrin is currently a master’s student at the University of Texas at Austin School of Information, studying, among other things, artificial intelligence and the ethics of artificial intelligence. She also has a background in English, with an undergraduate degree in narrative theory and literary theory – both relevant, as you will see, in our conversation.

 

Henry (00:00):

Hello and welcome. This is Henry Stokes, Library Technology Consultant at the Texas State Library and Archives Commission. We're having a conversation today about an emerging technology - artificial intelligence, or AI - its potential for storytelling, and its relevance to youth services in libraries. I've got two special guests who are going to be joining me in conversation today. First is our friend BW, TSLAC’s Youth Services Consultant. Bethany is joined by our special guest, Julia Sufrin. Hi Julia. How are you doing today?

Julia (00:31):

Hey, I'm great. Thank you for having me.

 

Henry (01:00):

Great. So our topic today is artificial intelligence, which is when computer systems are able to perform tasks that normally require human intelligence, and can often think and learn as they do. We [at the Texas State Library's CEC team] heard about a special project that you took part in last summer. Can you tell us a little bit about that? Because I think there are some interesting implications to it.

 

Julia (01:21):

Sure. So, last summer I had the opportunity to intern at an artificial intelligence research company, and they were doing a specific project. Normally they do what you would expect an artificial intelligence research company to do, which is work with the military - strategizing decisions, war games, and such. But they had this passion project, which was generating children's stories using AI. They had this idea that they wanted to bring in someone with a literary background to join their group of software developers and computer engineers. So the intern team was three of us: two experienced artificial intelligence PhD students… and me. [chuckles] And it was quite an interesting activity to start from nothing and build, at the end of the 10 weeks, a machine that could generate children's stories using its artificial intelligence, based on some preferences that you gave at the beginning.

 

Henry (02:23):

So you actually brought in your narrative training, right? Talk to me about that - how your studies were used in this application.

 

Julia (02:33):

So there are a couple of different principles of storytelling that are easily translated to a machine. A lot of children's stories are similar in structure, a little repetitive, and that's something that has been analyzed for years and years by literary theorists. There's a concept called Freytag's Pyramid, named for Gustav Freytag, a German literary theorist. He made that triangle shape that you might've seen in early English class, where there is Exposition - you set the scene in the story, say who the characters are, show what's at stake. Then the Inciting Action, or initial conflict, where something goes wrong. Rising Action is the left side of the pyramid, where the stakes are getting higher and the conflicts are developing. Then there's the Climax at the peak of the triangle, which is the most suspenseful part of the story, a turning point for the main character. Then the Falling Action on the right side. And then the Denouement, which means tying up loose ends at the end. By teaching that structure, we were able to identify story beats that we wanted to hit, which became the organizing elements for the project.

Then we went a little deeper, thinking about narrative structures that are repeatable. There are different theorists who believe that all stories could be described by one story - an Ur-story. Joseph Campbell is a famous example of that. He is a historian, a religious studies professor; he's brilliant. And he came up with the Hero's Journey, which has been used to describe plenty of mainstream media that we know, like Harry Potter, Star Wars, The Matrix. There are tons of stories where we see this Hero's Journey: there's a Call to Adventure; the hero begins down a road to the new world where things are strange; there's a guardian or a mentor who helps them along the way. There are these story beats that you hit, and you know there's the ‘turning point’, the ‘rebirth’, and all that. So, taking those principles and then teaching them to a computer - you can see how it makes a little more sense how a computer could tell a story, because we think of telling stories as such a human thing. But by using these principles, and things like the story of star-crossed lovers, the story of rags-to-riches, the story of overcoming-a-monster - all of these are repeatable, and they have different functions that occur that you can actually teach to a system.
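
[Editor's note: to make "teaching that structure to a computer" concrete, here is a minimal sketch in Python of one way the pyramid could become an organizing element: the beats as an ordered schema, plus a check that a candidate outline hits every beat in order. The beat names follow Julia's description; the sample outline, labels, and checking logic are illustrative assumptions, not the project's actual code.]

# The Freytag beats as an ordered schema.
FREYTAG_BEATS = [
    "exposition",       # set the scene: characters, stakes
    "inciting_action",  # something goes wrong
    "rising_action",    # stakes get higher, conflicts develop
    "climax",           # the most suspenseful part, the turning point
    "falling_action",   # consequences play out
    "denouement",       # loose ends are tied up
]

def hits_all_beats(outline):
    """True if the outline's labeled events hit every beat, in order."""
    expected = iter(FREYTAG_BEATS)
    beat = next(expected)
    for label, _event in outline:
        if label == beat:
            beat = next(expected, None)
            if beat is None:
                return True   # every beat was hit in sequence
    return False

outline = [
    ("exposition", "A fox lives at the edge of the woods."),
    ("inciting_action", "A storm scatters his family."),
    ("rising_action", "He searches farther and farther from home."),
    ("climax", "He must cross the flooded river to reach them."),
    ("falling_action", "Together they rebuild the den."),
    ("denouement", "The fox is home, and wiser."),
]
print(hits_all_beats(outline))  # True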

 

Henry (05:12):

Okay, I see that. So the idea here is that you kind of crack the code on stories and then teach that code to the computer and it can generate a story. I can see that this company saw it as a viable product. [Thinking ahead,] I can imagine children wanting to use [this AI] on Peppa Pig or some of their favorite characters, and saying, “tell me a story about that, Alexa!” (or whoever). And then they would start to hear [the story]. We have similar types of things [currently with digital assistants], like Choose Your Own Adventure style stories. But what you're talking about here is totally auto-generated, on the fly - based on the child’s interests or preferences, right?

 

Julia (05:56):

Yeah, exactly. So it was a little difficult at first to decide which preferences we wanted to leave open as variables. First, we just wanted to create a space where the computer could understand and interpret, or at least make decisions about, different elements in the story. And then we thought, “Okay, well, then we'll decide what to take out afterwards.”

One of the interesting things about the way we did our project was that we used an Artificial Intelligence Planner, which is a very specific kind of artificial intelligence. It's different from some of the kinds you might've heard of, like machine learning, deep learning, or neural networks. A Planner is used for planning things: something as seemingly simple as airline schedules or pilot schedules - when someone's on and someone's off. Those are actually very automated systems; that's how something so massive and complex can be made easy to use.

The way we used the Planner: the first thing you do is define a Domain. So we established that there is a world in which things happen, and there are elements within this world that exist and have relationships with each other. Very bare-bones beginnings. There are things called Places, things called Characters, and things called Objects. Then you set Predicates, which are the relationships between those things. So a Character can care about an Object, and then you create ‘if-then’ statements: if the Character cares about an Object, then they will want to have it, and they can have an action - that Character gets the Object. So it's all about establishing these relationships, this network of relationships, and then you can understand how it would get more complex and have things like revenge, for example: if a Character is wronged by another Character, they will want to get Revenge. And what is Revenge? It's when a Character who has been wronged gets back at the person who wronged them. So we were able to fire what are called Axioms within the Planner: if a certain sequence was hit by the AI, it would say, “Boom! Revenge occurred.”

From there, the storytelling is a lot less about trying to mimic children's stories that we fed [it]. That's what a machine learning tool might do: if you fed it a million children's stories, it would learn how to mimic children's stories. What we wanted was different. We wanted an intelligent agent that could strategize, make decisions, and form a plan for you that would be different each time depending on what you chose. So, ultimately, we were able to get three different characteristics: there was a Sneaky Hero, a Clever Hero, and a Brave Hero. And then there were different plots that you could use: there was a revenge plot, there was a love plot – like a romance plot – and there was an Overcoming-the-Beast plot, different versions of that. It was only 10 weeks, but you can imagine how much more complex you could get if you start modeling other characteristics beyond just brave and clever.
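
[Editor's note: here is a minimal sketch in Python of the Planner idea Julia describes - a state made of Predicate facts, if-then actions, an Axiom that fires when revenge has occurred, and a simple search for a sequence of actions that gets there. The characters, predicates, and search strategy are all illustrative assumptions, not the project's actual code; real planning systems typically use a dedicated language such as PDDL.]

from collections import deque
from itertools import product

# --- Domain: the kinds of things that exist ---
CHARACTERS = ["hero", "witch"]
OBJECTS = ["sword"]

# --- State: a set of Predicate facts, e.g. ("has", "witch", "sword") ---
initial_state = frozenset({
    ("has", "witch", "sword"),
    ("cares_about", "hero", "sword"),
    ("wronged", "witch", "hero"),   # the witch wronged the hero
})

# --- Actions: if-then rules with preconditions and effects ---
def take(state, char, owner, obj):
    """If char cares about obj and owner has it, char can take it."""
    if ("cares_about", char, obj) in state and ("has", owner, obj) in state:
        new = set(state) - {("has", owner, obj)} | {("has", char, obj)}
        return frozenset(new), f"The {char} takes the {obj} from the {owner}."
    return None

def confront(state, char, villain):
    """If char has a sword and was wronged by villain, char can confront them."""
    if ("has", char, "sword") in state and ("wronged", villain, char) in state:
        return frozenset(state | {("defeated", villain)}), \
               f"The {char} confronts the {villain}."
    return None

def applicable_actions(state):
    """Enumerate every grounded action that can fire in this state."""
    for char, owner, obj in product(CHARACTERS, CHARACTERS, OBJECTS):
        if (result := take(state, char, owner, obj)):
            yield result
    for char, villain in product(CHARACTERS, CHARACTERS):
        if (result := confront(state, char, villain)):
            yield result

# --- Axiom: revenge occurred if a wronged character defeated the wrongdoer ---
def revenge_axiom(state):
    return any(("wronged", v, c) in state and ("defeated", v) in state
               for c, v in product(CHARACTERS, CHARACTERS))

def plan(start):
    """Breadth-first search for a sequence of actions that fires the axiom."""
    queue, seen = deque([(start, [])]), {start}
    while queue:
        state, beats = queue.popleft()
        if revenge_axiom(state):
            return beats + ["Boom! Revenge occurred."]
        for new_state, beat in applicable_actions(state):
            if new_state not in seen:
                seen.add(new_state)
                queue.append((new_state, beats + [beat]))
    return None

for line in plan(initial_state):
    print(line)
# The hero takes the sword from the witch.
# The hero confronts the witch.
# Boom! Revenge occurred.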

 

Henry (09:28):

I guess that's what really sparks my imagination. If this continues to develop, or if it uses machine learning, I'm trying to imagine the potential for this technology as far as it being used by kids. For example, I could see how you might have kids using this at home to have stories read to them. If there are no grownups with a talent for storytelling around them, maybe they could use this tool to have the computer tell them a story, and [how about] the fact that they could offer their own input and interact with it? So I'm seeing Bethany shake her head.

 

Bethany (10:07):

Yeah, I do have some concerns with having artificial intelligence read stories to children, because it takes out the interactivity piece between the child and the parent or adult. There's no opportunity for dialogic reading, which allows the adult to prompt, evaluate, expand, and repeat what the child is saying - to prompt them with questions, to interact with them so that they're understanding how a conversation works, the back and forth of it. And [the child is] being asked questions and prompted to speak during the story. I don't know how you would program the AI to do that. Even seeing the person's mouth, how the words are formed, is missing as well if you don't have another person involved. And that's important when you're building those early literacy skills for children before they begin the process of learning to read. So that’s what I have to say about that! [chuckling]

 

Julia (11:13):

That's really interesting. I hadn’t thought about seeing the mouth form words, because that's so embodied. When you were talking about prompting questions and things like that, I was imagining, “Oh, sure, we can teach the computer to do that. We could write an algorithm for engaging and interacting with the child.” Its storytelling probably wouldn't be very good at this point in time, but yeah - the human aspect. And I do agree with you: I don't think that humans will ever be phased out of the storytelling relationship with children. It's just so vital to our species to spend time together as humans with children and to help them through those periods. But I do think that we are seeing children interacting with technology from such an early age now that it's inevitable that there will be some screen time. Unless you make a very serious effort to keep those things from your children, they'll interact with intelligent agents just by picking up, you know, mom or dad's phone - because Siri is pretty sophisticated artificial intelligence. I actually just found out today that Siri was developed [in U.S. military-funded research] and then Apple acquired it. That'll give you a sense of how sophisticated that technology is. So kids are interacting no matter what.

And the way I think about it is, sometimes a kid gets parked in front of a TV screen while the parent has to do something else. In those instances, wouldn't it be nice, as an alternative to TV or as a supplement (there are interactive TV shows now; Netflix is experimenting with kids' content especially), to have some kind of intelligent agent that's stimulating the child, like you said - prompting the child with questions? Some of the stuff that's on the market right now, for example, is all about the interaction; it will put a pause in the story. Alexa, on Amazon's Echo, is a popular example. Alexa has a ton of different apps for storytelling. You could say, “I want a spooky story,” or “I want a funny story.” And then she'll start the story and pause to ask: “What's your name? What's your best friend's name? Name a favorite food.” And then those will be incorporated into the story later. It's a little bit like Mad Libs. That was something we were explicitly trying to avoid when we were doing our project over the summer, because it's easy to create a template with a fill-in-the-blank sort of model, and that felt a little stale. We wanted that dynamic aspect of having an intelligent agent moving through the space.

But machine learning, for example, is where I think a lot of the technology that's available right now comes from, because it will look good. I think something really attractive about machine learning for storytelling is the natural language processing factor. The sentences will be sentences that look like sentences in a story. If you teach a machine learning algorithm how to tell Aesop’s Fables, for example, the sentence structure that it generates will look like Aesop’s Fables. So it will have that structure, the dialogue - it won't understand it, but you can add tags to it (this is called sentiment analysis): this is sad, this is happy, this is this, this is that.
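
[Editor's note: "sentiment analysis" here just means attaching labels like sad or happy to story text. Below is a minimal sketch in Python using a toy word list, which is an assumption for illustration only; production systems use trained models rather than a hand-made lexicon like this.]

# Toy sentiment lexicon - an assumption for illustration only.
SENTIMENT_LEXICON = {
    "happy": {"joy", "laughed", "smiled", "friend", "treasure"},
    "sad": {"cried", "lost", "alone", "wept", "gloomy"},
}

def tag_sentence(sentence):
    """Attach a coarse sentiment tag to one sentence of a generated story."""
    words = set(sentence.lower().rstrip(".!?").split())
    scores = {tag: len(words & vocab) for tag, vocab in SENTIMENT_LEXICON.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "neutral"

story = [
    "The fox wept because he was alone in the gloomy forest.",
    "Then he met a friend and they laughed all day.",
]
for sentence in story:
    print(f"[{tag_sentence(sentence)}] {sentence}")
# [sad] The fox wept because he was alone in the gloomy forest.
# [happy] Then he met a friend and they laughed all day.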

 

Bethany (14:25):

I have a question, though. Is there an ability for the AI to elaborate on some of those concepts? Like when the child asks, “What does sad mean?”, will the AI have the ability to explain it? Because I know one of the early literacy concepts is taking a new concept or a new word and likening it to something that the child has experienced. A parent or a caregiver would be able to do that better than an AI would. And then there's the opportunity for new vocabulary words in books. You're going to hear dozens more words than you would hear in daily conversation - new words, complex words. And that's part of the story-writing process for authors – to incorporate as many new words and concepts into those storybooks as possible. I think you could probably program the AI to do that easily. It's the background-knowledge piece that I'm wondering about. Like if the kid had a question…

 

Julia (15:25):

Exactly. And a parent would have those examples that are contextualized for the kid, right? So for ‘sadness’: "How did you feel when your puppy got sick?" or something like that. Context is important there for structuring the learning, definitely. And I imagine that something on the market right now, like an Alexa telling stories, if it did have that capability built in, like definitions, would be relying on some database that exists - Wikipedia, or, I don't know, an Amazon proprietary dictionary. It would have a training data set that it's learning from. Whereas an actual human's data set, of course, is all of their lived experience, which the child is enmeshed in.

So the idea of deep neural networks, which is sort of the deepest and most sophisticated and most mysterious part of AI research right now - the idea is that by modeling artificial intelligence after the way a brain works, actually setting up relationships between neurons that fire, we're doing two things. We're creating artificial intelligence that is way more sophisticated, able to answer dynamic questions and do things that are much more complex than, say, a Planner. But at the same time, we're making it that much more difficult to understand how it's making its decisions. So there's always that 'black box' element with artificial intelligence: something goes in, something magical happens inside the black box, and something comes out, and we don't exactly know how. In the stories that we were writing over the summer, if the hero got a sword from the witch in the swamp, I knew exactly why it did that, because I programmed it - I wrote the code that taught it that it can go get a sword from the witch in the swamp. With other algorithms that are more sophisticated and taking in a lot more knowledge, what's happening with the deep neural network is that it's observing and learning and teaching itself. And so, for some really sophisticated decisions, it becomes near impossible for anyone to explain why it did what it did. Coders can't tell just by looking at it; the engineers who wrote the code can't tell you why it made those decisions.

 

Bethany (17:44):

Can you ask it why it made those decisions?

 

Julia (17:45):

Sure. And that's actually a huge initiative right now, [it’s called] Explainable AI: the idea of trying to create something that can explain why it made its decisions. But it's very complicated, and actually pretty existentially hard, to even consider that we've made machines that can make decisions we can't understand. Some people liken it to asking a human why they made a decision. A human will say, you know, some rational things, and you can infer some emotional reasons, but ultimately we don't know why a human made that decision. It was just a human acting from its experiences, its life, its context, everything it knows. And so it's strange. You see the similarities and the differences.

 

Henry (18:28):

I know there's been some AI that, when set loose, ended up generating racist or misogynistic content. Is that kind of what you're talking about, or is that a little different?

 

Julia (18:38):

That is a little different - equally troubling. That is actually because the machine is just learning from the data set that we provide. So, for example, the one that was on Twitter [Microsoft's Tay chatbot] became super racist and white supremacist immediately, within like 24 hours, just because… whose voice gets amplified on the internet? Like trolls, people with those tendencies…

 

Henry (19:04):

That’s the data we gave it so…

 

Julia (19:05):

Yeah, that's the data we gave it. ‘The internet is dark and full of trolls.’ And so when we set AI loose and say, “Hey, learn how to be like a human on the internet,” it learns how to be like a human on the internet with our content. Twitter shut it down pretty immediately. There's also that interesting story from Facebook: they created a chatbot that apparently created its own language with another chatbot, which we couldn't understand. So they were communicating with each other in their own little made-up language. Facebook shut it down. And I imagine if it was, like, that terrifying, we wouldn't know about it, right? This is only the research the public gets to learn about - but even so, pretty strange. Those decisions are more explainable. Why did the Twitter bot start talking about Hitler? Because it learned from Twitter.

But when artificial intelligence makes decisions we can't understand, going back to look at the code is almost like taking an image of part of your brain with electrical signals and asking, "What's happening here when these signals are firing?” It's pretty difficult to explain what's happening there based on that information, and we're doing something similar with neural networks. And so, if those were able to start telling stories, I imagine they would be much more interesting. They'd also maybe be troubling - look at the art. Google had a tool that was released, I'm not sure if it's still up, called DeepDream. It's a deep learning visual sort of tool where you could see what images the algorithm was picking up on and then using to identify things. And they're these weird, nightmarish renderings - if it's a bird, beaks and feathers pooling and pulling out of each other. There's something weirdly uncanny about them.

Which is also a huge element of artificial intelligence: the idea of the Uncanny Valley, a theory or phenomenon about what happens when we see something that is like a human, but not. Especially children's characters, avatars [from video games/movies]: big eyes, tiny mouth - strange non-human things. We can think they’re adorable; we like them. [But] there's this sort of valley that we reach where it looks almost like a human, almost like a human, almost like a human, then TOO much like a human, and then it just reverses. We are not attracted to it. We are repulsed by it. We become scared of it. There's something strange and uncanny about robots that look too much like humans. And so I imagine any kind of robots that come on the market - which will start happening definitely very soon, especially for children - they'll be like animals, like a teddy bear. They won't look like little people, because people don't like robots that look too much like people.

And so as far as storytime goes, I can imagine how something like a teddy bear that can tell stories would be interesting. But there's so much that we don't understand. I think about the way technology gets released and compare it to how new medication gets released: medication goes through several rounds of double-blind testing before it ever goes on the market, while Apple invents a new watch and suddenly we're putting it on our wrists with no long-term research. We don't understand what is actually happening. And so when it comes to such a vulnerable group like children, and in a space that's so special, like the library, I imagine we would want to exercise a lot of caution.

 

Bethany (22:57):

A lot of caution. I think you're going to run into a lot of issues with AI as a storyteller. I just touched on a couple of them, and I'm not an expert by any means. The information that I have is from Supercharged Storytimes, which of course you can take for free through WebJunction - it teaches you how to weave the early literacy concepts into your storytimes or into your storytelling. And I'm seeing issues with trying to do that with AI. Interactivity is one of the pillars of Supercharged Storytimes, and the interactivity that we're looking for is also about building a relationship between a parent or caregiver and a child. That wouldn't happen with an AI. But then, on the flip side, you're talking about the programming pieces, and I see a lot of opportunity for older children: maybe they don't want to write the story themselves, but they're really interested in coding. And if they could create a story that way, they'd be totally on board. So is this something that an older child could do?

 

Julia (24:01):

Sure, yeah. That's actually a really good question. I imagine there's a lot of potential for teaching kids to code through highly developed artificially intelligent tools. Something like a story builder would be really interesting, because you could show, through the components of a story, how to organize a logical system - the if-thens I was talking about. It might help a kid frame [the story]. But at the same time, backend coding is so different from the stuff that you do once you have a graphical user interface, once you have tools to manipulate. I'm speaking from my experience over the summer: I would watch this sort of magic that my wizard computer-engineer colleagues would do, and I would be amazed. And I could do a lot using the Visual Studio editor, where a lot of the code is already behind the scenes. So as things get better, and the things behind the scenes become easier to use, I'm sure an older child or young adult could do something interesting to learn to code. I'm also thinking about young adult literature as something that's highly mechanized, you know? If you look at The Hunger Games or any such really popular tween media, those have repeatable elements as well. And that could be a fun way to get a young adult interested in storytelling, in character development, because it could become more like a modular exercise: develop a setting, develop a character, develop a hero, develop a villain, and then put it in our system and watch the story be created for you. They create the elements.
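
[Editor's note: a minimal sketch in Python of the "modular exercise" idea - the young coder supplies the elements, and the program assembles them along hero's-journey-style beats. The beat list and phrasing are assumptions for illustration; a planner-based system like the one described earlier would compose beats dynamically rather than fill a fixed sequence like this.]

from dataclasses import dataclass

@dataclass
class StoryElements:
    setting: str
    hero: str
    villain: str
    goal: str

# Ordered beats, loosely following the hero's-journey shape.
BEATS = [
    "In {setting}, there lived {hero}.",
    "One day, {hero} heard a call to adventure: {goal}.",
    "But {villain} stood in the way.",
    "Things got harder and harder for {hero}.",
    "At the darkest moment, {hero} faced {villain} and prevailed.",
    "{hero} returned home, changed by the journey.",
]

def build_story(e: StoryElements) -> str:
    """Walk the beats in order, filling in the coder's elements."""
    return " ".join(
        beat.format(setting=e.setting, hero=e.hero,
                    villain=e.villain, goal=e.goal)
        for beat in BEATS
    )

print(build_story(StoryElements(
    setting="a flooded city of glass towers",
    hero="Maya the map-maker",
    villain="the Fog King",
    goal="to chart a safe path home",
)))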

 

Bethany (25:51):

I have a question on that as well. Once they reach middle school and high school, they really start working on social-emotional learning and the development of those skills. So is there a way to teach AI about social-emotional skills and feelings and emotions? If they're going to build a character - build something very complex - it would have to have those feelings and emotions. That could also be a learning tool for teaching those skills to middle school and high school students, since that's a big emphasis in schools right now. So, is that a possibility?

 

Julia (26:22):

Certainly a possibility. But I do think that some of the things we're talking about are pretty far in the future. There's a lot to do with artificial intelligence and understanding 'what is intelligence?'. I read something recently [from Jeff Clune, University of Wyoming] suggesting it might be part of the nature of intelligence that only part of it is accessible to rational explanation - some of it is just instinctual. I do think it is programmable. And that's actually kind of a philosophical question. There are some AI engineers who will say, “No, it's not. These are dumb systems. They are computers. They’re zeros and ones. They will never, ever achieve something like human intelligence.”

So it could tell you what sadness is, what social-emotional learning is, based on connections that we show it - metadata, tags, ways that we relate things to each other, like those Predicate relationships I was describing. Like: if you care about someone, you'll want to protect them; if they are your family, you [will] X, Y, Z. But I think that's only so repeatable or reliable. There's an aspect of trust that I think we'll have to develop once we get close to something like artificial consciousness. That's how I kind of distinguish it: intelligence can mean these different things - it can make tools for us to use. Consciousness is a whole different realm.

And the potential for companionship - that’s what I think is the most emotionally interesting part of this whole paradigm: children and adults creating real, substantial emotional relationships and companionship with artificially intelligent agents. There's actually a lot of really interesting work going on in the elder care industry. It's kind of funny how child care and elder care are the spaces where these kinds of tools are really popular. With elder care, there's the potential to stimulate the mind and provide that important oxytocin hit that can come from social interaction. There's a lot more that can come from a human touch, but there are people trying to mimic human touch in robotics. I think of it as like a physiological Turing Test: if you give someone a robot with AI and they have the same physiological reaction - the same neurons firing in their brains, the same dopamine and oxytocin release - does it really matter? Is there really a difference if the effects are the same? Those things are pretty interesting to think about.

 

Bethany (29:03):

That is interesting. So you said the decisions are made in the black box, and you're not really sure how those are made. But is it safe to say that they're free of emotion?

 

Julia (29:15):

Yes, exactly. That is definitely…

 

Bethany (29:19):

… scary. That could probably help you reason out a little bit how a decision was made, though - if it was all based on logic and not emotions or experiences, I guess.

 

Julia (29:28):

Well, that's why there's this whole field now that's finally being taken seriously - I think computer engineers and computer scientists have been saying this for years, and it's only just being taken seriously - which is the need for ethicists, for ethical programming. Can we program an ethical machine? Is that something we can even do? And then, let’s really look at ethics, right? There are so many systems of ethical thought. There's consequentialism. There's social contract theory. There's Buddhist ethics. There’s Confucianism… There are so many different systems. And why is that? Because humans can't figure out how to program ethics. We have a million different programs and none of them work. It's sort of intuition, the lived experience - what we think is right and wrong. And so it's pretty scary if you think about, you know, weapons, surveillance tools, things the military is developing that will maybe be making life-or-death decisions... It’s strange that we got here from children's stories… [laughs]

 

Bethany (30:35):

But let’s go back to children’s stories. I mean the ethics piece would be really important before you would let any child interact with an AI. You want to make sure that those ethical principles are in place.

 

Julia (30:44):

Yeah. One thing that's really interesting and compelling is this particular example: AI Buddy. You can look it up online. It's a project that's developing AI companions, friendship tools. It’s a little avatar on a device, and AI Buddy is developed for children whose parents are in the military and deployed; it's supposed to help them through some of the emotional challenges and developmental milestones that happen in a child's life without the parent. There's some telepresence aspect - you can connect with your loved one through it - but it's also just, like, your buddy. It's somebody who travels with you. At first I thought it was strange, but then the person who told me about it said, “Well, I actually had a student who was the child of military personnel, and they said that they traveled around so much, they started new schools all the time, always had to pick up and find new friends. And that's really, really hard.” And to have some kind of continuity that comes with you… (It’s like a teddy bear that comes with you everywhere. But this one can talk back and remember things about you, remember facts about your family, and be there for you in a kind of different, more emotionally complex way.) And so this young woman was saying, “I would've loved to have that. It would've been great for me.” So, you know, there are so many dark, scary things you can think about, and the ethics of it: what if a child expresses real concern? What if there's something traumatic or dangerous that a child is expressing? What protocol do we have in place for a system that's recording, keeping, and analyzing this data? Does it have to alert someone? What are the ethical obligations in that situation?

 

Henry (32:27):

Of course, privacy and security are huge with data being collected about children. We've already seen that happening with children's toys that record your voice and then play it back to you - but were actually recording everything that happened in the child's bedroom and uploading it to the cloud without any kind of password protection, so anybody could just go to the cloud and get people's private recordings of their bedrooms.

 

Julia (32:52):

Yeah, there is a lot of interesting information coming out about the way these sorts of always-on technology tools work, because it's easy to forget: in order for Alexa, for example, to know her name - so that when you say “Alexa,” she lights up and says, “Yes, Julia?” - she has to hear everything else that isn't “Alexa” in order to know what is “Alexa.” And so Alexa, if you don't disable the setting, is always listening. That's why these tools are increasingly being called upon to be witnesses in criminal cases. There was a Fitbit that was used in a murder trial: when the Fitbit stopped recording the person's heart rate, it helped situate them in time. But again, in that situation and in others, the technology isn't perfect. It's flawed. A Fitbit sometimes thinks you're cycling when you're just walking, or something. So how do we call upon these devices to be objective representations of truth or fact? With kids, the stakes are, of course, always higher, because you want to protect them. You want to protect their information more than anything. And [the companies making] devices that are designed for kids had better be thinking about that.
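
[Editor's note: a minimal sketch in Python of the always-listening pattern Julia describes, with audio simulated as text chunks. The device checks every chunk against the wake word, which is exactly why it must hear everything that isn't "Alexa" too. This illustrates the general pattern only; it is not Amazon's actual implementation.]

WAKE_WORD = "alexa"

def handle_request(chunk):
    """Only after a wake-word match is anything handled (or sent upstream)."""
    print(f"-> waking up, handling: {chunk!r}")

def listen(stream):
    for chunk in stream:                  # the microphone hears everything...
        if WAKE_WORD in chunk.lower():    # ...but acts only on a match
            handle_request(chunk)
        # non-matching audio is still heard and checked, then discarded -
        # unless settings (or bugs) cause it to be retained or uploaded

living_room_audio = [
    "What should we have for dinner?",
    "Alexa, tell me a spooky story.",
    "That movie was great.",
]
listen(living_room_audio)
# -> waking up, handling: 'Alexa, tell me a spooky story.'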

 

Bethany (34:12):

Absolutely. This conversation really went all over the place! [laughs]

 

Henry (34:18):

There's one aspect of this project and its potential that I wanted to come back to. I think we agree that there's a sacredness to storytime and to grownup caregivers reading stories - picture books - to children. [With regard to the AI project at the company of Julia’s internship:] Besides being able to generate the words for the stories, could it perhaps also generate illustrations in the future? You can imagine some kind of picture book - and of course, if they invent digital paper - a kind of blank picture book that generates the story in front of you as you turn the pages, with all of its illustrations. So, of course, this is a cool concept. We wouldn't want it to replace anything going on in libraries or the early literacy work of caregivers. But I do feel that, as a fun thing kids can use, it has some merit. Do you have any other thoughts about the potential of that concept of creating a picture book on the fly? Any other warning flags about that - as long as it doesn’t replace storytime, that is?

 

Bethany (35:38):

You'd want the person there. And this is for littles, so this is for, like, ages 0 to 5, before they're learning to read. Once they're able to read, the game changes a little bit. But when you're trying to build the scaffolding - the early literacy concepts that they're going to use when they begin sounding out words, when they begin actually going through the process of learning to read - [then] you want that human contact, that person there, to prompt with questions and to point out certain things on the page. I think if you're talking about an interactive storybook with the digital paper, that'd be super cool, especially if you could change things or have actions happen, fully customized and personalized, on the page. I think we could certainly incorporate something like that in a storytime, most definitely. You could certainly tie that to some of the early literacy concepts and utilize it.

 

Henry (36:39):

 Especially if the words were printed and read aloud.

 

Bethany (36:41):

Well, part of learning how a book works is to follow along - put your finger under the words so that kids understand how text works. So even to have the words start appearing under the finger, like [the caregiver could say] “it's going this way,” so [the child] can see the text visually appearing on the page and in that direction - that could be something that could be used. Or to mimic phonetics, the sounds: sometimes the words are printed bigger so you make your voice bigger, or they make the word ‘bounce’ look like something that's bouncing so kids can understand the concept of bounce, for example. I think an interactive storybook like that would be really cool.

 

Julia (37:25):

What you're describing also opens up something else that's going to be increasingly valuable in teaching children about how the world works: digital literacy and computer literacy. Because how do you explain that to a child? This image is appearing out of nowhere; the words are coming out of nowhere. It seems like magic. You can say “magic!” - it is magic, depending on your definition of magic. But it's also about teaching them to separate the two, and if they ask, “Who's writing that?”, for example, that's an interesting opportunity to teach them about subjectivity, about identity. We’re so quick as humans to anthropomorphize everything. Even in this conversation, I've probably talked about Alexa like a person a hundred times. It's so easy to, you know, gender them, give them thoughts, say it's thinking 'this,' those kinds of things. And so, for a child, that is a strange and interesting opportunity to talk to them, you know: “Oh, no one's writing it. It's a computer.”

 

Bethany (38:32):

Yes, that interactive piece. And it ties in with background knowledge, which is one of the literacy concepts, like teaching them about the world around them. So the more opportunities you have to do that in a storytime, the better. The more they know going in, the more words they know going in, the more concepts and experiences they have going into the learning-to-read process, the easier it is for them to sound out the words and recognize them once they've sounded them out. And then the comprehension piece, which is an issue right now in some places.  Yeah, it would definitely provide opportunities for the background knowledge.

 

Henry (39:07):

So developers should be working with libraries, for sure. This is our domain. I feel like it may make us appreciate human stories even more, too, because we often see new things happening in stories that humans create. So far, AI tends to look back more than it looks forward, because it's looking back at what's already been written, taking only the data that we give it - which means [we get] types of narrative structures that are already tried and true. Been there, done that. I think the really exciting things are going to come from humans. Would you agree with that?

 

Julia (39:49):

Absolutely. Over the summer I kept waiting for the system to surprise me; that was always what I was most excited about: “Oh, when is it going to surprise me and do something new and unexpected?” And unfortunately, with only 10 weeks and a Planner, I wasn't ever really surprised. There were, like, two or three things where it made a choice and I went, “Huh, where did that come from?” But, you know, as these systems get more sophisticated, those are the exciting elements. Ultimately, though, you're right: it's coming from the past. It's coming from the data we gave it before. Unless it's a neural network, in which case it's observing and learning in the moment.

 

Henry (40:24):

Although [a neural network] is still making predictions based on the past, right?

 

Julia (40:31):

Yeah, I will say that after spending the summer doing this project and thinking really hard about ‘What are stories? What do they do? How do we make them good? What is a good story? What's a satisfying story?’, I left wanting to, you know, spend more time writing. It stimulated me creatively, in my own sort of storytelling capacity. And the entire internship was such a good story: all the people I met and the tools we used. But yeah, I didn't walk away from it fearing AI would corner the publishing industry and start generating all the new stories. I think it's possible to generate stories the way that ‘Baby-Sitters Club’ books are generated by ghostwriters and such, because there's a formula. [Hey, you want] formulaic stuff? Sure, AI can do it. But I think that kids are a little bit more clever. I think kids want new things. They want things that speak to their context and their moment. And children's authors do that already, with picture books and the stuff they include. So I think that AI will increasingly be used as another tool or medium for very talented humans to express their creativity.

 

Bethany (41:50):

I think you just nailed it with that. It’s not about the AI; it's about how it can be used. So it is the tool. It's the vehicle - the vehicle for creating.

 

Henry (42:05):

 I think that's a good place to end it.

 

Bethany (42:10):

This was great, Julia. Thank you so much. I learned so much today.

 

Julia (42:15):

Of course. Yeah, it's always fun to have these conversations. I think it’s fascinating stuff. Thanks for having me. Thank you. Bye.
