Generative Artificial Intelligence is slowly entering children’s lives

A leak from Amazon gave us a glimpse of what the company has planned for youngsters – and their parents. How much of our children’s personal data do we (or should we) accept handing over?


25 September 2023

#aiact #education

You may have talked to ChatGPT in your browser and perhaps even found it quite fun to casually interact with an AI chatbot. Presumably, you were fully aware that the entity you were interacting with was not human.

But would you have the same enthusiasm if preschool children were able to use a device powered by large language models (LLMs) to ask for things? This scenario is not far from becoming reality. In the wake of ChatGPT and Bard, Amazon plans to equip its Echo devices with an in-house LLM and to add a feature that turns an Amazon Echo smart speaker into a storytelling companion.

Amazon was the first company to release a smart speaker with a home voice assistant. The latest Amazon Echo can connect to a variety of smart home devices, make calls, and create shopping lists. The device can also read books aloud or tell preexisting stories to children.

Bedtime stories

The upgraded Alexa should have, according to a leaked document, features focused on home entertainment under the working name “Alexa LLM Entertainment Use Cases.” Creating bedtime stories from scratch – for instance about “Mittens, the first cat to ever go to the moon,” based on a child’s wish to hear a fairy tale about “a cat and a moon” – is expected to be one of the upcoming options. Furthermore, if a child is playing with an Olaf toy (the character from the animated movie Frozen), a built-in camera in the Amazon Echo Show would be able to spot that activity and add the character to the made-up story.

“Alexa generates a novel, five-minute story with the provided context and configured defaults using its LLM, and saves the context of the story for future interactions,” the document states.

This is not the first attempt to integrate elements of generative AI into Alexa. Amazon launched “Create with Alexa” in November 2022, allowing children to co-create stories with the Amazon Echo by selecting themes, protagonists, and adjectives like “silly” or “mysterious” for their stories. Building on the child’s prompts, an AI engine creates a five-scene scenario complemented by artwork and background music. The animated tale is then displayed on the Echo Show’s screen.

The current focus on developing Amazon’s in-house LLM and implementing it in the Amazon Echo is very likely driven by Amazon’s effort to catch up with its competitors and replicate the success of OpenAI’s ChatGPT.

In fact, Amazon’s “Alexa division” – responsible for Echo smart speakers as well as other devices – failed to deliver the expected results and has been struggling financially. In recent years, it has lost more than $5 billion annually. In late 2022, around 2,000 employees from this division were laid off.

Moreover, Alexa has been unable to compete with Google and Apple, which have invested more in voice-assistant technology. Google Assistant currently has around 81.5 million US users, followed by Apple’s Siri with 77.6 million. Alexa ranks third with 71.6 million users.

Privacy concerns

In the United States, the Federal Trade Commission charged Amazon earlier this year with illegally collecting children’s data without parental consent through the Alexa voice assistant, in violation of the Children’s Online Privacy Protection Act (COPPA). At the end of May 2023, Amazon agreed to pay a $25 million fine to settle the charges.

Amazon Echo has a long history of privacy issues, ranging from sending voice recordings to the wrong user to sharing data with law enforcement – not to mention paying its employees and external contractors across the world to listen to recordings captured by Alexa in order to improve the device’s accuracy.

In spite of the controversies, Amazon’s move to introduce a storytelling tool for children should not come as a surprise. Companies like YouTube, TikTok, and Snapchat have previously introduced offerings specifically geared towards children.

The datafication of children's lives 

“If we start to understand what technology is really capable of, when it comes in the form of the connected toy, it is by no means innocent,” says Katriina Heljakka, a toy researcher from the University of Turku in Finland. “We've come to an era of digital connectedness where toys are so much more than they used to be. They used to be the play partners with which we shared our secrets, but now they might be the ones who are telling our secrets onward because of this technological development.”

According to Giovanna Mascheroni, an Associate Professor of Sociology of Media and Communication at Università Cattolica del Sacro Cuore in Milan and author of the book Datafied Childhoods, one reason why parents consent to kids’ excessive use of algorithmic devices is that “they don't see a value in children's data.” Many parents, she says, have told her: “Well, I don't think they even collect data from my child, because what is it worth?” She adds: “They are unable to see that companies are collecting data from the moment the child is born, or even sooner.”

Moreover, Mascheroni says, “[parents] tend to see the benefits of being profiled. And there’s a lot about the convenience of not having to search for anything because what you need is suggested to you.” 

Parents’ algorithmic educational burden

The extensive processing of children’s personal data usually serves a quite simple purpose: “Much of this processing today is actually underpinned by commercial interests,” says Ingrida Milkaite, a postdoctoral researcher at Ghent University in Belgium and an expert in human rights law, privacy, data protection, and children’s rights in the digital environment.

But it is not just the convenience that makes parents ambivalent about their children’s digital use. “We are often talking about the need to educate parents and children, increase data literacy,” says Milkaite. “But we should not lose sight of the need to ensure that the actors who process children’s personal data are held accountable. Many of the technical issues behind data processing are actually quite impossible for the general public to understand.”

Unknowingly giving companies the green light to gather sensitive data may have inadvertent and unforeseeable effects: “One of the most important potential consequences of extensive data processing of children's personal data essentially relates to profiling and targeted advertising. And this can be quite problematic, because in the long run it might lead to, for example, discrimination of children, labeling of children and sorting them into groups that a certain algorithm decides they belong to,” Milkaite adds.

Communicating those far-reaching impacts is certainly one way to go, but according to Gilad Rosner, a privacy and information policy researcher and founder of the non-profit Internet of Things Privacy Forum, this alone will not be sufficient: “I have limited faith in the value of informing people about a product’s potential harms.”

“The theory behind such a notice is that people will then rationally decide to purchase or not based on recognition of the long-term harms of buying the toy for the child,” he adds. “This often doesn’t happen because people don’t read the notice, understand the harms, think long-term, or are under various pressures to buy the toy irrespective of what it says. When products are harmful the government has a special role in steering people away from them, preventing the harms, or banning the product.”

Legislative push for regulation

Currently, lawmakers inside and outside the EU are working on introducing future-proof requirements for general AI technologies. Some of these requirements might also have an impact on technologies pitched primarily at kids.

In 2021, the United Nations Committee on the Rights of the Child published General Comment No. 25 on children’s rights in relation to the digital environment. Last year, the Council of Europe adopted its Strategy for the Rights of the Child (2022–2027), which is based on the United Nations Convention on the Rights of the Child.

Outside of the EU, there are the UK and California Age-Appropriate Design Codes. Both require companies to take the best interests of children into account when designing their online services. The US also has the Children’s Online Privacy Protection Rule (the COPPA Rule), which places certain restrictions on operators of websites or online services that target children under the age of 13 and collect their personal information.

The Artificial Intelligence Act (AI Act) recognizes children as a vulnerable group and acknowledges that they can be manipulated and exploited. The Act largely builds on the Precautionary Principle, which originates in environmental law. “What it means is that we shouldn't wait for harm to occur in the absence of hard scientific proof of harm. That absence should not prevent us from regulating, we should regulate in a precautionary way,” adds Rosner. Basically: put fire safety rules in place instead of waiting for the house to burn down.

The amended version of the AI Act, adopted by the European Parliament in June 2023, bans remote biometric identification systems. As a result, the potential Olaf toy feature in the Amazon Echo – as well as other AI toys that use face and voice recognition – could be prohibited from being marketed in the EU.

Nathalie Koubayová (she/her)

Former Fellow Algorithmic Accountability Reporting

Nathalie is a PhD student with an academic interest in chatbots. She holds a research master’s degree in Communication Science from the University of Amsterdam. Her current research revolves around how users respond to different ways of disclosing the identity of customer care chatbots. During her fellowship at AlgorithmWatch, she looked into the use of chatbots in mental health, automated fact-checking, and the digitization of the agricultural sector.
