This project aims to create “synthetic dreams” and VR compassion mind training based on data captured by a wearable device that detects and films individuals’ emotionally salient moments during daily life.
People have been fascinated and puzzled by dreams for centuries, and many artists have tried to visualize the bizarre dream world through various media. However, no artwork or research has yet explored the possibility of creating highly customized dreams for individuals.
This research project addresses that void by building a wearable device that monitors an individual’s emotional variation during the day and video-records the stimulating events whenever emotional arousal is detected. The captured stimuli are then uploaded to a web server in real time, where the audience can view them at any time. Moreover, we will train an AI video-editing system that allows the audience to feed it different videos from the web server and generate a highly customized dream. This dream-simulation project builds on the widely acknowledged theory that dreams consist of recent emotionally salient memories, past memories, current concerns, and socio-cultural elements. In the first stage, we will explore how to simulate dreams based on individuals’ recent emotionally salient memories and current concerns.
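The capture pipeline described above can be sketched as a simple threshold trigger: when a normalized arousal signal (e.g., from an electrodermal-activity sensor) crosses a threshold, the device records a clip and queues it for upload. The threshold value, clip length, and the `upload` stand-in below are illustrative assumptions, not part of the actual device design.

```python
from dataclasses import dataclass

AROUSAL_THRESHOLD = 0.7  # hypothetical normalized arousal level that triggers capture

@dataclass
class Clip:
    start_s: float   # clip start time in seconds
    end_s: float     # clip end time in seconds
    arousal: float   # arousal reading that triggered the capture

def detect_and_capture(samples, sample_rate_hz=4, clip_len_s=10.0):
    """Scan a stream of normalized arousal readings; whenever a reading
    crosses the threshold (and no capture is already in progress),
    record a fixed-length clip starting at that moment."""
    clips = []
    last_end = -1.0
    for i, level in enumerate(samples):
        t = i / sample_rate_hz
        if level >= AROUSAL_THRESHOLD and t >= last_end:
            clip = Clip(start_s=t, end_s=t + clip_len_s, arousal=level)
            clips.append(clip)
            last_end = clip.end_s  # suppress overlapping captures
    return clips

def upload(clip, queue):
    """Stand-in for the real-time upload to the web server."""
    queue.append({"start": clip.start_s, "end": clip.end_s, "arousal": clip.arousal})

# Simulated day at 4 Hz: mostly calm, with two brief emotional spikes.
stream = [0.2] * 40 + [0.9] + [0.3] * 80 + [0.85] + [0.2] * 40
server_queue = []
for c in detect_and_capture(stream):
    upload(c, server_queue)
```

In this simulated run, the two spikes each trigger one clip, so `server_queue` ends up holding two entries; in the real system the queue would be replaced by an HTTP upload to the server.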
Beyond dream simulation, the device will also be used in a study delivering VR compassion-focused therapy to people with excessive self-criticism. The video recorded in the previous stage will serve as personalized stimuli to elicit the audience’s emotions related to self-blame. After this stimulation phase, the audience will be guided to deliver compassion to themselves in a virtual reality narrative.