Full title: Research Summary: Laughter is Scary, but Farting is Cute: A Conceptual Model of Children's Perspectives of Creepy Technologies. Originally published on the Montreal AI Ethics Blog in the summer of 2020.
Mini-summary: Designing AI technologies for children often focuses on adult concerns for children rather than examining the problem from a child’s perspective. In this recent paper from researchers at the University of Washington, in-depth user research led to the development of a conceptual model that provides important considerations and insights for parents and designers. By focusing solely on how children view technology as creepy or trustworthy, the research team pointed out problematic designs in apps and toys made for children. By using dark patterns, designers create technologies that children trust when they shouldn’t, and applications that frighten children in ways adults do not anticipate. Using this model as a basis, designers, parents, and advocacy groups can more formally define ethical considerations in technological designs for children.
Full summary:
Much of HCI design for children’s technologies focuses on adult concerns for children rather than examining children’s own perceptions. A team from the University of Washington chose to research why children view some AI technologies as “creepy”, a common term children use to describe their negative feelings toward technology. The researchers sought to answer the following questions:
What do children consider disturbing or unsettling about the technologies they encounter?
What properties signal to a child that a technology does not deserve their trust?
When children find a technology creepy, what is it that they are worried about?
The researchers grounded the commonly used term “creepiness” in prior research, where it is defined as “the anxiety aroused specifically by ambiguity surrounding a potential, but uncertain threat,” in contrast to “scary”, which implies a threat that is immediate and certain. Creepiness in this sense has been studied in depth in adults, particularly with regard to the Uncanny Valley phenomenon and privacy concerns.
Over the course of four sessions, the researchers worked with a group of eleven children between the ages of 7 and 11 who had previously participated in UW research studies. The four sessions consisted of the following:
Participants and researchers prototyping a “creepy” technology together with low-fidelity materials such as crafting supplies
A discussion of potentially creepy technology scenarios with the children
A survey of the creepiness of ten different technologies, ranging from cuddly robots to the Amazon Alexa Kids Edition
Adults and children prototyping trustworthy technologies and acting out a scenario to show how these technologies evoked trust for users
Finally, the researchers interviewed eight of the children, asking direct questions about creepiness in the technologies they encounter daily and about their families’ use of technology.
Results
The researchers identified eight themes in their work: two fears, five signals of creepiness, and one mediating factor. The two fears were Physical Harm and Loss of Attachment. While physical harm is self-explanatory, loss of attachment is a more nuanced fear. The children reported being afraid that a technology would try to take them from their parents, a child’s ultimate refuge of safety. Another facet of this fear was that a technology would take over a child’s life, evoking a fear of mortality. Finally, children were afraid that a technology would try to imitate their parents.
Central to the research, the team delved into what signals creepiness to a child. They found the following five high-level signals:
Deception versus Transparency
Ominous Physical Appearance
Lack of Control
Unpredictability
Mimicry
Several anecdotes illustrate children’s perceptions of technology concretely. Children were not comfortable with the ambiguous answers voice assistants such as Siri or Alexa give. One child asked a digital voice assistant if it would kill them as they slept. The assistant replied that it couldn’t answer, leading the child to believe that the assistant COULD kill them as they slept.
A creepy appearance is also a signal for children. While the K2-SO robot toy was seen as threatening to a child’s safety, the LuvaBella doll was too cute (and possibly too human, suggesting the Uncanny Valley), insinuating it could be hiding something sinister. However, many of the children wanted to own a Woobo because it was neither too cute or human, nor threatening.
Children were also uncomfortable when they felt a technology was out of their control. Amazon Echos often activate when someone nearby says a keyword or refers to “Alexa”, which made some children fear the device had a mind of its own. Children also worried about apps that report information about the child to their parents, violating the child’s personal sense of privacy.
Children, like adults, have expectations for how technologies behave. When a voice assistant laughs in response to a joke, children find it threatening, as laughter can have multiple motivations, some of them cruel. However, the children did not find the sound of flatulence threatening, as a deliberate fart noise has only one motivation: humor. This suggests that children find a fixed, predictable response less disturbing than the more unpredictable output of a machine learning system.
Finally, the researchers discovered that children’s perceptions of technology are mediated by their parents’ perceptions. If a parent uses a piece of technology (a laptop, iPad, or smartphone), a child will trust it, even if it possesses some of the creepy design properties noted earlier.
Implications
Since children base their trust in and attitudes toward a particular technology on what their parents say and do, parents should discuss a range of topics regarding technology and trust with their children. They should talk about the technologies they use daily, how they judge whether those technologies are trustworthy, and the associated tradeoffs. By understanding their own children’s fears, parents can reconsider how their own passive use of technology models behavior. Furthermore, adults should explain how technologies work and gather data, as well as what safeguards their children should take when using technology.
Most importantly for designers, multiple ethical considerations arise from this research. Designers should create a trustworthy interface for children only if the design IS trustworthy, building technology with no hidden cameras or other recording devices. Additionally, some current interfaces are too effective at suppressing children’s fear, such as the cuddly Woobo toy, which has an embedded microphone, Internet connectivity, and data collection, violating the privacy expectations that children share with their parents. As another example of the disconnect between front-end appearance and back-end deception, apps designed for children on smartphones and tablets collect data even when closed; many children are unaware of this, believing the app to be “off”. The researchers hope the conceptual model presented in this paper can serve as a baseline for creating ethical AI technologies for children and for judging creepiness from the perspective of the young users for whom they design.
Original paper by Jason C. Yip, Kiley Sobel, Xin Gao, Allison Marie Hishikawa, Alexis Lim, Laura Meng, Romaine Flor Ofiana, Justin Park, Alexis Hiniker: http://bigyipper.com/wp-content/uploads/2019/02/Yip-et-al-2019-paper073.pdf