Illusion Theatre: Derailing the Sense of Agency
Using an interactive storytelling machine to deceive a user’s sense of agency.
This project was developed at Cross Labs, a research institution located in Tokyo that pushes fundamental research towards a better understanding of natural and artificial environments.
For this year’s ALIFE 2020 Conference, held virtually from Montreal, we created an art piece to showcase our work on interactive storytelling and the role agency plays within interactive interfaces. Our research focuses on manipulating an agent’s sense of agency, whether by enhancing, augmenting, deceiving, or controlling it.
Sense of agency can be defined as the feeling of control we have over our actions and the resulting consequences of those actions. To play with this sense of agency, we use emerging media technologies to alter the interaction between humans and machines.
In this interactive storytelling piece, the content is not predetermined at the beginning of the experience; instead, it emerges from the agents’ interactions within the experience itself.
These human-computer interactions depend on the user’s emotions, which a facial expression recognition model (Arriaga et al., 2019) classifies into happiness, surprise, sadness, anger, and a neutral baseline. The deception and manipulation of the user’s sense of agency are carried out by reflecting the users’ emotional states back to them.
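The classification step can be sketched as follows. This is a minimal illustration, not the installation’s actual code: it assumes a pretrained CNN (such as the one described by Arriaga et al., 2019) outputs one raw logit per emotion, and shows only how a single label is selected from those logits.

```python
import numpy as np

# The five emotion classes used by the installation: happiness, surprise,
# sadness, anger, and a neutral baseline.
EMOTIONS = ["happiness", "surprise", "sadness", "anger", "neutral"]

def classify_emotion(logits):
    """Map raw classifier logits to a single emotion label via softmax."""
    logits = np.asarray(logits, dtype=float)
    probs = np.exp(logits - logits.max())  # numerically stable softmax
    probs /= probs.sum()
    return EMOTIONS[int(np.argmax(probs))]

# e.g. a frame where the "happiness" logit dominates:
classify_emotion([4.2, 0.3, -1.0, 0.1, 1.5])  # "happiness"
```

In practice such a classifier runs per video frame, so the interface continuously tracks the user’s current emotional state.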
Experiment 1: The first experiment investigated whether it was possible to create a positive emotional loop between the machine and the user. For that purpose, we created an interface in which a baby responded to the user’s happy emotions with laughter and to all other emotions with an idle animation. The baby was chosen as the interactive agent because of the positive reaction adults show when hearing a baby laugh (Cekaite and Andrén, 2019).
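The first experiment’s response rule is simple enough to sketch directly (the function and state names below are ours, chosen for illustration):

```python
def baby_response(emotion):
    """Experiment 1 rule: the baby laughs only at the user's happiness
    and stays idle for every other detected emotion."""
    return "laughing" if emotion == "happiness" else "idle"

# One step of the intended emotional loop: the user's smile triggers the
# baby's laugh, which in turn tends to make the user smile again.
baby_response("happiness")  # "laughing"
baby_response("neutral")    # "idle"
```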
Experiment 2: The second experiment is closely linked to operant conditioning as studied by B. F. Skinner (1938), a learning method in which behavior is controlled through the manipulation of rewards and punishments in the environment. In our experiment, we studied the behavioral response of the user to an unconditioned stimulus. The interaction took place through an anime character with four different poses. Three of those poses were linked to specific user emotions, while the fourth was generated at random. Participants were asked to make the character produce this fourth pose at least 5 times in 5 minutes; however, they were unaware that the pose was randomly generated.
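One possible reading of this pose-selection logic is sketched below. The emotion-to-pose mapping, the pose names, and the 25% random-override probability are all our assumptions for illustration; the source only states that three poses follow specific emotions while the fourth is random.

```python
import random

# Hypothetical mapping: three poses are tied to specific detected emotions.
EMOTION_TO_POSE = {
    "happiness": "pose_a",
    "surprise": "pose_b",
    "anger": "pose_c",
}

def select_pose(emotion, rng=random):
    """Return the character's pose for a detected emotion.

    Mapped emotions usually show their linked pose, but the fourth pose
    ("pose_d") appears at random (assumed 25% of the time here), regardless
    of the user's actual expression -- so participants cannot reliably
    trigger it, even though they were asked to.
    """
    if emotion in EMOTION_TO_POSE and rng.random() > 0.25:
        return EMOTION_TO_POSE[emotion]
    return "pose_d"
```

Because `pose_d` never depends on the user’s expression, any facial expression a participant believes “causes” it is a spurious association, which is exactly the effect the experiment probes.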
The first experiment’s results show that participants innately react with a smile or laugh to the baby’s laughing state, generating an emotional loop between the machine and the participant that sometimes lasted several seconds. Even though the participant controlled the first decisions of the interface, these results indicate that the machine can exploit our innate reactions to specific stimuli to lead users towards a particular emotional state.
The second experiment’s results showed that people tend to associate specific facial expressions with the randomly generated pose of the character. This experiment and its results are just one example of the range of applications in which a machine can deceive participants’ sense of agency and induce a new behavior.
In conclusion, both experiments are examples of how machines can use the self-reflection of emotions detected from facial expressions to deceive the user’s sense of agency, which, in the context of interactive storytelling, refers to spectators’ awareness of controlling the character’s actions and their impact on the unfolding events of the story (Hammond et al., 2007).
Content creators can take advantage of this manipulation to lead spectators towards a specific emotional state or behavior while still maintaining the spectators’ sense of agency. Moreover, the same approach can be extended to other fields such as artificial intelligence, creating machines that appear to be fully controlled by humans or other artificial agents when, in fact, the machines are the ones manipulating the other agents’ sense of agency.
Arriaga, O., Valdenegro-Toro, M., and Plöger, P. G. (2019). Real-time convolutional neural networks for emotion and gender classification. ESANN 2019 Proceedings, 27th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, pages 221–226.
Cekaite, A. and Andrén, M. (2019). Children’s laughter and emotion sharing with peers and adults in preschool. Frontiers in Psychology, 10:852.
Hammond, S., Pain, H., and Smith, T. J. (2007). Player agency in interactive narrative: Audience, actor & author. AISB’07: Artificial and Ambient Intelligence, pages 386–393.
Skinner, B. F. (1938). The Behavior of Organisms: An Experimental Analysis. New York: Appleton-Century-Crofts.