HHH: Robots doing Storytime?!

Logo for Henry's Hightech Highlights

One of the most crucial and, IMHO, sacred services provided by a school or public library is storytime. I know I don’t need to elaborate on this further for my readership. 🙂

With artificial intelligence (AI) rapidly becoming more and more sophisticated, it’s no surprise that an AI’s abilities would start to approximate aspects of a library worker’s or storyteller’s talents.

We know the huge value of reading picture books frequently to, and most importantly *with*, preschool children. But what would it mean if AI helped perform this function? Is it an adequate replacement for a parent/caregiver or library staff member reading them a story?

Let’s dig into the topic! Today’s Highlight is AI and Library Storytime.


I recently learned that local UT iSchool student Julia Sufrin participated in an unusual internship last summer where she helped create an AI that could tell a customizable story to a child. I thought it would be great to moderate a conversation with Julia and ask her some of my questions. Seeing as this is an area where emerging technology and youth services intersect, I asked my fabulous co-worker Bethany Wilson, our Youth Services Consultant here at TSLAC, to join us.

You can play the video (embedded below) to hear our full, unedited 40-minute conversation, but I’ve also summarized our discussion for today’s Highlight (see below), or if you’re not into the whole brevity thing, you can even read the full transcript. *

* processed into text, I might add, with voice recognition brought to you by an AI using machine learning


Highlights from our Conversation:

Julia has an undergraduate degree in narrative theory and literary theory, and it was this particular background that got her the summer internship with the AI company. She joined a group of software developers and computer engineers for ten weeks to build a system that could use artificial intelligence to generate children’s stories based on preferences you provide at the start.

Julia explained to Bethany and me that a lot of children’s stories are formulaic in their structure, which might make it possible to teach a computer system how to generate them.

(Julia offered a lot more info about narrative theory and how this all worked with the AI tools she used, so if this interests you, please check out the transcript to read more).
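To give a rough, concrete feel for what “formulaic structure” can mean to a computer, here’s a minimal sketch in Python of a template-based generator that fills a fixed story shape with a child’s preferences. This is my own hypothetical illustration – the preference names and templates are invented – and not Julia’s actual internship code, which used far more sophisticated AI tooling.

```python
import random

# Hypothetical preferences a child might pick at the start (illustrative only).
preferences = {"hero": "a brave raccoon",
               "setting": "a mossy swamp",
               "wish": "to find a lost song"}

# A tiny hand-written "story grammar": each sentence template has slots
# that get filled from the child's preferences plus a few random picks.
TEMPLATES = [
    "Once upon a time, {hero} lived in {setting}.",
    "More than anything, {hero} wished {wish}.",
    "One day, {hero} met {helper}, who offered {gift}.",
    "With {gift} in hand, {hero} finally managed {wish}.",
    "Then {hero} went home to {setting}, happy at last. The end.",
]

HELPERS = ["a kindly witch", "a talking turtle", "a sleepy owl"]
GIFTS = ["a silver whistle", "a glowing map", "a tiny lantern"]

def tell_story(prefs):
    """Fill every template with the preferences and a few randomly chosen story elements."""
    slots = dict(prefs, helper=random.choice(HELPERS), gift=random.choice(GIFTS))
    return "\n".join(template.format(**slots) for template in TEMPLATES)

print(tell_story(preferences))
```

Even this toy version shows why the formula matters: once the shape of the story is fixed, “generating” a new one is mostly a matter of filling in the blanks.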

Although still in its infancy, the potential for such a tool might mean children could use an AI to have personalized stories read to them, which could be especially useful when there are no grownups with a talent for storytelling around.

At this, a skeptical Bethany shook her head and voiced her concerns:

“It takes out the interactivity piece between parents and children. There’s no opportunity for dialogic reading, which really allows the adult to prompt, evaluate, expand, and repeat what the child is saying  – to prompt them with questions, to interact with them so that they’re understanding how a conversation works, the back and forth of the conversation. And [the child is] being asked questions and prompted to speak during the story. I don’t know how you would program the AI to do that. Even the lack of the person’s mouth to see how words are formed is missing as well, if you don’t have another person involved. And that’s important when you’re building those early literacy skills for children before they begin the process of learning to read.”

Photo of a caregiver reading a picture book to an infant.

Julia agreed with Bethany, but didn’t think we should avoid the idea altogether:

“I don’t think that humans will ever be phased out of the storytelling relationship with children. I think it’s just so vital to our species to spend time together… and to help [children] through those periods of time. But I do think that we are seeing children interacting with technology from such an early age now that it’s inevitable that there will be some screen time. Unless you make a very serious effort to keep those things from your children, they’ll interact with intelligent agents just by picking up, you know, mom or dad’s phone… And the way I think about it is, sometimes a kid gets parked in front of a TV screen while the parent has to do something else. And in those instances, wouldn’t it be nice as an alternative [or a supplement] to TV… to have some kind of intelligent agent that’s stimulating the child like, [as] you said, ask questions?”

Julia then pointed out that this interactivity is already being incorporated into existing products such as Alexa’s storytelling apps – which prompt you for terms, MadLibs-style, and incorporate them into the story for you.

Bethany then asked if the AI would have the ability to answer a question from a child such as, “What does sad mean?”, because, as Bethany put it:

“…one of the early literacy concepts is taking a new concept or a new word and likening it to something that the child has experienced. So a parent or a caregiver would be able to do that better than an AI would. And then there’s the opportunity for new vocabulary words in books. You’re going to hear dozens more words than you would hear in a daily conversation, like new words, like complex words. And that’s part of the story writing process for authors – it’s to incorporate as many new words and concepts into those storybooks as possible.  I think you could probably program the AI to do that easily. It’s the background knowledge piece that I’m wondering about.”

Julia agreed that it would indeed be difficult, as humans have the advantage of drawing on a shared lived experience that the child is a part of. This turned the conversation to sophisticated deep neural networks and the black-box aspect of how they work. At this deeper level of artificial intelligence, Julia explained:

“…Something goes in, something magical happens inside the black box, and something comes out, and we don’t exactly know how. And so in the stories that we were writing over the summer, if the hero got a sword from the witch in the swamp, I knew exactly why it did that because I programmed it. I wrote the code that taught it that it can go get a sword from the witch in the swamp. With other algorithms that are more sophisticated and they’re taking in a lot more knowledge, what’s happening with the deep neural network is actually it’s observing and learning and teaching itself. And so it becomes near-impossible… for anyone to explain why it did what it did. Coders can’t even look at it. The engineers who wrote the code can’t tell you why it made those decisions.”
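To make that contrast concrete, here’s a rough, hypothetical sketch (my own illustration, not the internship’s code) of what a hand-authored rule like “the hero can get a sword from the witch in the swamp” might look like: every allowed action is written down explicitly, so the system’s choice can always be traced back to a rule a human wrote – exactly the traceability a deep neural network’s learned weights don’t offer.

```python
# Hypothetical hand-authored rule table, in the spirit of Julia's sword-from-the-witch
# example. Because a person wrote every rule, we can always say exactly why the
# story engine did what it did – unlike a deep neural network, whose "rules" live
# in learned weights that even its engineers can't read off directly.

RULES = {
    # (location, item the hero already holds) -> (what happens, item gained)
    ("swamp", None): ("The witch in the swamp gives the hero a sword.", "sword"),
    ("castle", "sword"): ("The hero uses the sword to open the castle gate.", None),
}

def next_event(location, held_item):
    """Look up the one explicit rule that applies and explain why it fired."""
    rule = RULES.get((location, held_item))
    if rule is None:
        return "No hand-written rule covers this situation.", held_item
    event, gained = rule
    reason = f"(fired because a human wrote the rule for location={location!r}, item={held_item!r})"
    return f"{event} {reason}", gained or held_item

story_line, item = next_event("swamp", None)
print(story_line)
print(next_event("castle", item)[0])
```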

We discussed the Explainable AI initiative, which seeks to create AI that can explain why it made its decisions. We then took a brief detour into another, equally troubling aspect of AI: its current problem of unexpectedly generating misogynistic or racist content, because the only data it’s given to learn from comes from Twitter or the Internet at large – where the most amplified voices are often misogynistic and racist. The AI is only as good as the data it is given. We humans are the ones to blame.

Julia then wondered what would happen if deep neural networks were able to start telling stories themselves. She imagined they would be interesting but also troubling. She mentioned that Google had a visual tool called DeepDream that rendered imagery, but what it produced came out as weird, nightmarish renderings of animals pooling out of each other. Would their stories be equally alien and bizarre?

A comparison that shows a photo after run through a DeepDream filter. Weird animal shapes appear.
Example of DeepDream: A photo I took on the left, and a photo after it’s gone through the DeepDream filter on the right
Photo of realistic human robot to illustrate the concept of Uncanny Valley

She also touched upon the concept of the Uncanny Valley, a phenomenon where humans can sense something is wrong with a simulated human. It’s why she thinks we won’t really allow humanlike robots to be around our kids; it’s more likely we’ll make them look like teddy bears – something as far removed from human-like as we can get.


Julia thought we should exercise a lot of caution:

“There’s so much that we don’t understand. I think about the way technology gets released and I compare it to how… new medication gets released, and medication goes through several rounds of double-blind testing before it ever goes on the market. And Apple invents a new watch and suddenly we’re putting it on our wrists and there’s no long-term research. We don’t understand what is actually happening. And so when it comes to such a vulnerable group like children, and in a space that’s so special, like the library, I imagine we would want to exercise a lot of caution.”

Bethany brought us back to the topic of AI as a storyteller:

“I think you’re going to run into a lot of issues with AI as a storyteller. I mean, I just touched on a couple of them and I’m not an expert by any means. and the information that I have is from Supercharged Storytimes, which is of course you can take here free through WebJunction –  but it teaches you how to weave the early literacy concepts into your story times or into your storytelling. And I’m seeing issues with trying to do that with AI. Interactivity is one of the pillars of Supercharged Storytimes. And the interactivity that we’re looking for is also related to building a relationship with a parent/caregiver on a child. And that wouldn’t happen with an AI.  But then, on the flip side, you’re talking about the programming pieces, and I see a lot of opportunity for older children, maybe they don’t want to write the story themselves, but they’re really interested in coding. And if they could create a story that way, they’d be totally on board.”

Logo for" Supercharged Storytimes"

Julia agreed that coding would be a fun way to get a young adult interested in storytelling and in character development.

We then talked about the capability of AI to learn social-emotional skills, feelings, and emotions – which led us to a philosophical discussion of the nature of intelligence, whether it’s instinctual and programmable, and how ethics could be taught to an AI, considering that humans already have so many systems of ethical thought themselves.

Julia mentioned an app called AI Buddy – a set of animated characters powered by AI that support the kids of military personnel deployed to war. Because these children often have to move around so much, start new schools, and make new friends, the consistency of AI Buddy provides a level of continuity: it talks to the child, remembers things about her and her family, and forms something of a friendship. This possibility of AI providing companionship was of particular interest to Julia.

Talk then turned to the ethical obligations to protect children’s privacy. I mentioned that we’ve already seen horrifying incidents such as CloudPets, a children’s toy that automatically uploaded children’s voice recordings and personal photos to an online location accessible to anyone, without even password protection. We also discussed persistent recognition systems and how they’re being used as witnesses in criminal cases. Bethany joked: “This conversation really went all over the place!”

I then envisioned the possibility of a picture book that could not only generate and read a personalized story to a child, but also fashion illustrations to accompany the text – all completely on-the-fly, with AI and digital paper.

Photo of an infant reading a book with illustrations.

Bethany liked the idea and thought it could perhaps be incorporated into a storytime while still tying in and utilizing some of the early literacy concepts. She continued:

“Part of learning about how a book works is to follow along, put your finger under the words so that the kids understand how text works. So even to have the words starting to appear under the finger, [the caregiver could say] “It’s going this way” –  so [the child] can see it visually appearing on the page and [in] that direction… Or to mimic phonetics, the sounds. So sometimes the words are bigger so you make your voice bigger, or sometimes they make the word ‘bounce’ look like something that’s bouncing, [which would help] kids… understand the concept of bounce, for example.”

Julia also thought it would make for a great opportunity to teach children digital literacy – about who exactly is writing and reading them a story, the differences between an AI and a human, and about identity and subjectivity – providing background knowledge, which, as Bethany repeated, is a big part of early literacy best practices.

As an aside, I made the point that AI developers should really be working directly with library staff. So much of this is our domain.

Furthermore, I argued, the best stories will always come from humans anyway.

Julia agreed:

“Over the summer I kept waiting for the system to surprise me… and unfortunately… I wasn’t ever really surprised. It’s coming from the past. It’s coming from the data we gave it before.

I will say that after spending the summer doing this project and thinking really hard about: ‘What are stories? What do they do?’ How do we make them good? What is a good story? What’s a satisfying story?’,  I left wanting to, you know, spend more time writing. It stimulated me creatively, in my own sort of storytelling capacity. And because the entire internship was such a good story: All the people I met and the tools we used. But… I didn’t walk away from it fearing AI would corner the publishing industry and start generating all the new stories. I think it’s possible to generate the way that Babysitter’s Club books are generated – by ghost writers and stuff. Because there’s a formula. ‘[Hey, do you need] formulaic stuff? Sure, AI can do it.’ I think that kids are a little bit more clever. I think kids want new things. They want things that speak to their context and their moment. And children’s authors do that already with picture books and the stuff they include. So I think that AI will increasingly be used as another tool or medium for very talented humans to express their creativity.”

Bethany gave the final thought:

“I think you just nailed it with that. It’s not about the AI, it’s how it can be used to further some things. So it is the tool. It’s the vehicle, the vehicle for creating.”

A big thank you to our special guest Julia for the fascinating talk!
