I & AI: Mirror Data Privacy Statement

What Data is Collected as Part of the I & AI: Mirror Installation

1) Voice Data: Captured using a microphone. The raw microphone audio is not saved; it is transcribed into text (which supports accessibility and is saved with a timestamp) and simultaneously processed in real time by Unreal Engine and ‘Co-STEAM Diffusion’ (a local model developed from an open-source diffusion model), which generates the image, allowing for real-time interaction.
2) Motion Data: Captured during the recording process using a depth sensor, which records facial data points only rather than video with recognisable facial features, thereby anonymising the data at the point of capture. These 2D depth maps are used in real time by Unreal Engine to create the resulting interactive animation, enabling real-time participation and visual feedback for the visitor. Motion data is processed in real time only; no files or data are saved on any of the devices.

Voice Data Processing  

1) Capture: Unreal Engine converts microphone input into text data used for image generation. This transcript is the only data saved locally.
2) Tracking/Recording: Unreal Engine records spoken voice as a transcript only, together with an associated timestamp.
3) Data Generation: The recorded data is converted into text, which is streamed in real time by a combination of Unreal Engine and Co-STEAM Diffusion (illustrated in the sketch below).
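
For illustration only, the following is a minimal Python sketch of this voice pipeline. The names transcribe, generate_image and the transcripts.jsonl file are hypothetical placeholders, not the installation's actual Unreal Engine or Co-STEAM Diffusion interfaces; the sketch only demonstrates the property described above, namely that raw audio is discarded and the transcript with its timestamp is the sole saved record.

```python
import datetime
import json

TRANSCRIPT_LOG = "transcripts.jsonl"  # hypothetical name for the only file written to disk


def transcribe(audio_chunk: bytes) -> str:
    """Placeholder for the real speech-to-text step; the raw audio is never saved."""
    return "placeholder transcript"


def generate_image(prompt: str) -> None:
    """Placeholder for the local diffusion model that renders the image."""


def handle_voice(audio_chunk: bytes) -> None:
    text = transcribe(audio_chunk)  # raw audio is discarded after this call
    stamp = datetime.datetime.now().isoformat(timespec="seconds")
    # The transcript plus its timestamp is the only data saved locally.
    with open(TRANSCRIPT_LOG, "a", encoding="utf-8") as log:
        log.write(json.dumps({"timestamp": stamp, "text": text}) + "\n")
    generate_image(text)  # image generation happens in real time
```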

Motion Data Processing  

1) Motion Capture: The depth sensor captures motion as 2D depth maps, which are then used to generate the animation.
2) Tracking/Recording: These captured 2D depth maps, along with the voice transcript data, are used to generate the interactive experience.
3) Data/File Generation: This motion data is imported into Unreal Engine, which runs locally on this computer (illustrated in the sketch below).
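
Likewise for illustration, a sketch of the motion path under the same caveat: read_depth_frame and drive_animation are hypothetical stand-ins, not the real sensor or Unreal Engine APIs. It shows the key property described above: depth frames contain distances only, with no colour image, and are discarded after use.

```python
from typing import List

DepthMap = List[List[float]]  # 2D grid of distances; no RGB channels exist to identify a face


def read_depth_frame() -> DepthMap:
    """Placeholder for the depth sensor: it returns distances only, so each
    frame is anonymised at the point of capture (no recognisable features)."""
    return [[0.0] * 4 for _ in range(4)]


def drive_animation(frame: DepthMap) -> None:
    """Placeholder for the Unreal Engine animation step."""


def motion_loop(frames: int) -> None:
    for _ in range(frames):
        frame = read_depth_frame()
        drive_animation(frame)
        # No write to disk: the frame goes out of scope here and is
        # garbage-collected, matching "no files or data are saved".
```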

How the Data is Used  

1) Voice Data: In real-time mode, Unreal Engine streams transcript (voice) data directly to the display device, allowing for immediate interaction with the character through sound; the transcript is also displayed as text (subtitles) to enhance access to the work.
2) Motion Data: In real-time mode, Unreal Engine streams image data as animation directly to the display device, allowing for immediate interaction with the character through visuals. 

Both voice and motion data remain on the local device only, and the only identifier logged is a timestamp. The timestamp is not recorded in a way that links it to any individual person, but it enables an individual to request withdrawal of their data by providing the relevant timestamp. All data is therefore aggregated for the purpose of creating a collective, co-created interactive experience for visitors, one that uses this data as a whole rather than as individually identifiable data points, purposefully blending all individually shared data together and making this even lower risk.
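
As an illustration of how a timestamp-based erasure request could be honoured against such a log, the sketch below reuses the hypothetical transcripts.jsonl layout from the voice sketch above; it is an assumption, not the installation's actual storage format.

```python
import json
import os

TRANSCRIPT_LOG = "transcripts.jsonl"  # hypothetical log, as in the voice sketch


def erase_by_timestamp(requested_stamp: str) -> int:
    """Remove every log entry matching a visitor-supplied timestamp.

    Because entries carry no name or other identifier, the timestamp is the
    only key available for locating a contribution, which is why visitors
    are asked to supply it when requesting erasure.
    """
    if not os.path.exists(TRANSCRIPT_LOG):
        return 0
    with open(TRANSCRIPT_LOG, encoding="utf-8") as log:
        entries = [json.loads(line) for line in log if line.strip()]
    kept = [e for e in entries if e["timestamp"] != requested_stamp]
    with open(TRANSCRIPT_LOG, "w", encoding="utf-8") as log:
        for e in kept:
            log.write(json.dumps(e) + "\n")
    return len(entries) - len(kept)  # number of entries deleted
```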

Withdrawal of Consent 

To request data removal from this interactive work, visitors must contact the organiser directly at Jiarong.Yu@ed.ac.uk to exercise their right to data erasure. To do this, you will need to provide the timestamp of your recording and/or information about the content you shared, so that we can identify your specific personal data for extraction and deletion from the dataset, as all captured data is anonymised at the point of collection.

Instructions: Speak to exhibition staff or email the artist at Jiarong.Yu@ed.ac.uk
Clearly state that you are requesting the removal or erasure of your personal data from the I & AI: Mirror exhibition.

Data shared during this exhibition will contribute to the artwork for the duration of the exhibition only, and all contributed data will be deleted after the exhibition. Withdrawal of consent is therefore only relevant for the duration of the exhibition.