Zoom Obscura Artist | Martin Disley

Martin Disley is one of the artists who responded to the expressions of interest call in September 2020.

How They Met Themselves uses methods of visual practice to investigate how algorithmic and human schemas of facial identification, verification and perception differ and how these differences can be leveraged to control how our identity is coded in the images we put online. A bespoke software tool, called Deepfake Doppelgänger, was developed to exploit some of these differences.

The application generates a bespoke avatar from the user’s uploaded portrait: one that preserves the likeness of the face while obscuring the biometric data that would link the avatar back to the user.

The software advances an adversarial approach to countering digital privacy threats by pitting generative machine vision against inferencing machine vision. The software utilises facial verification systems in the production of the avatars to ensure a specific and desired result is produced when that avatar is later fed into a similar system.

In human schemas of identity, perception and verification of the face sit within a larger assemblage of signals used to distinguish between individuals. Machine vision systems have no such rich, adaptive and complex schema of identity; they often rely instead on binary classification algorithms to distinguish identities.

These algorithms necessarily delimit a hard border at the edge of a category: an inflexion point at which subjects under observation are sorted into two distinct categories depending on which side they fall. This software locates the border of a facial verification system’s categorisation. It maps the extremity of a category in order to find the shortest distance between the original image and an image that falls just across that border.
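The hard border described above can be sketched in miniature. Real verification systems compare learned face embeddings; the toy verifier below (all names and the threshold value are hypothetical, for illustration only) binarises a continuous similarity score with a hard threshold, which is exactly where the category border comes from:

```python
import numpy as np

def verify(embedding_a, embedding_b, threshold=0.6):
    """Toy facial verifier: cosine similarity between face embeddings,
    binarised by a hard threshold -- the 'hard border' of the category."""
    a = embedding_a / np.linalg.norm(embedding_a)
    b = embedding_b / np.linalg.norm(embedding_b)
    similarity = float(np.dot(a, b))
    return similarity >= threshold  # same person? True / False

# A barely perturbed embedding stays inside the category;
# an unrelated one falls outside it.
rng = np.random.default_rng(0)
reference = rng.normal(size=128)               # the "input face"
near = reference + 0.01 * rng.normal(size=128)  # nearly identical face
far = rng.normal(size=128)                      # unrelated face

print(verify(reference, near))  # True  -- same side of the border
print(verify(reference, far))   # False -- other side of the border
```

The continuous similarity score carries graded information about resemblance; the threshold collapses it into the binary same/different verdict the article describes.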

The software generates an avatar, or doppelgänger, of the user by “projecting” their uploaded image into the StyleGAN2 FFHQ model, a generative machine learning architecture capable of producing endless images of convincing ‘fake’ faces.

When an image is projected in, the model searches for the closest match it can produce to the input face, tuning the latent vector fed to the generator at each optimisation step. This creates a continuum of difference between the closest match to the input face the network can produce and the median face at the centre of the network’s distribution.
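The projection loop can be sketched with a toy stand-in for the generator. The sketch below is an assumption-laden miniature, not StyleGAN2: a fixed linear map plays the generator, and plain gradient descent tunes the latent vector so the generated output matches the target, which is the same shape as the real projection procedure:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-in for a generator: a fixed linear map from a 16-d latent
# space to a 64-d "image" space. StyleGAN2's generator is far richer,
# but the projection loop has the same shape: optimise z so that
# generate(z) matches the target image.
G = rng.normal(size=(64, 16))

def generate(z):
    return G @ z

z_true = rng.normal(size=16)
target = generate(z_true)        # the uploaded "portrait" (reachable here)

z = np.zeros(16)                 # start from the latent-space mean
lr = 0.005
for step in range(2000):
    residual = generate(z) - target   # "pixel"-space error
    grad = G.T @ residual             # gradient of 0.5 * ||G z - target||^2
    z -= lr * grad                    # step the latent, not the model weights

closest_match = generate(z)
print(np.linalg.norm(closest_match - target))  # residual shrinks toward zero
```

Snapshots of `generate(z)` taken along the way, from the starting mean toward the final match, form exactly the continuum of intermediate faces the article describes.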

Somewhere along this continuum, a facial verification algorithm comparing the face at each stage of the process to the input face will switch between outputting a positive and negative result. In other words, it will switch from classifying these faces as belonging to the same person to classifying these faces as belonging to two different people.

Given a stream of continuous data, such as the morphing faces produced here, a binary classification algorithm reveals the hard border at the inflexion point between the categories of its classification. When we examine the images that surround this border, the location appears arbitrary; the images found on either side of this border are nearly indistinguishable.
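Locating that inflexion point can be sketched as a bisection search over the morph continuum. Everything below is a hypothetical miniature (embeddings as 2-d vectors, the verifier as a hard distance threshold), but the search logic mirrors the idea: narrow the interval around the flip until the faces on either side are nearly indistinguishable:

```python
import numpy as np

# Toy stand-ins: the input face and the network's median face as
# embedding vectors; the morph continuum is linear interpolation.
input_face = np.array([1.0, 0.0])
median_face = np.array([0.0, 1.0])

def morph(t):
    """Face at position t on the continuum: 0 = closest match, 1 = median."""
    return (1 - t) * input_face + t * median_face

def same_person(face, reference, threshold=0.5):
    """Toy verifier: hard threshold on Euclidean distance."""
    return np.linalg.norm(face - reference) < threshold

# Bisection: shrink the interval around the classification border.
lo, hi = 0.0, 1.0   # morph(lo) verifies as the same person; morph(hi) does not
while hi - lo > 1e-6:
    mid = (lo + hi) / 2
    if same_person(morph(mid), input_face):
        lo = mid    # still verified as the same person
    else:
        hi = mid    # now classified as someone else

print(lo, hi)  # two nearly identical faces, two different 'identities'
```

After the loop, `morph(lo)` and `morph(hi)` differ by an imperceptible amount yet receive opposite verdicts, which is the arbitrariness of the border the article points to.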

On one side of this border, there is a face that shares a likeness with the uploaded image and is identified as the same person by a facial verification system. On the other side, there is a nearly identical face but one that the facial verification system identifies as an entirely different person. Once the process is complete, the user can then download the image found on the other side to use as an avatar.

The software also provides a free and open-source method for controlling the doppelgänger with a webcam via Avatarify. Avatarify is too computationally intensive to run on a consumer laptop without a dedicated GPU, so the software spins up a cloud rendering server to handle the computation. The avatar can then be brought into a video conferencing platform with the aid of a virtual camera.

By providing users with a method for decoupling their likeness from their biometric data, Deepfake Doppelgänger has the potential to reassert agency and disrupt the narrative framing of deepfake technology that renders us both helpless and passive.

You can access the Deepfake Doppelgänger source code here: https://github.com/martindisley/DeepFake-Doppelganger

Credit & Thanks

  • Concept, Software, Soundtrack, Production: Martin Disley
  • Voice Over: Alice Carr

Martin Disley is an artist, researcher and creative technologist based in Edinburgh, Scotland. His visual practice centres around an ongoing critical investigation into machine learning. His work has focussed on the machine learning model and the map-territory relation, feedback loops in inference, behavioural conditioning and training and machine learning in states of incoherence. His work seeks to manifest the internal contradictions and logical limitations of artificial intelligence in beguiling images, video and sound.

Martin was recently artist-in-residence at the National Library of Scotland and has received commissions from The Institute for Design Informatics at the University of Edinburgh, The Indeterminacy Research Group at the University of Dundee and Extinction Rebellion, among others. His work has been exhibited at the V&A Museum (Dundee, Scotland), Summerhall (Edinburgh, Scotland), The Centre for Contemporary Arts (Glasgow, Scotland), Güterhallen Gallery (Solingen, Germany), Sala Aranyo (Barcelona, Spain) and Kunstencentrum Vooruit (Ghent, Belgium).

The research that informs his work has also contributed to academic publications including the forthcoming Resonance: Axiologies of Distributed Perception (Routledge 2021), edited by Natasha Lushetich and Iain Campbell.

You can connect with Martin on Twitter.