Faking It

For not very good reasons, I seem to have acquired—along with an embarrassing tabloid headline—a reputation as someone who knows something about deepfakes and crime. Which means I get requests for quotes and background from journos (to buttress their scare stories) and students (for their dissertations). I tend to ignore the hacks, though some of those are students too: what a dilemma.

In any case, here’s my recent response to one student’s queries, which—if there’s ever a more respectable venue in the vicinity—I might eventually expand into an FAQ sort of thing to point random enquirers to, but in the meantime can just gather dust here and stop wasting space in my brain.

1. How do deepfakes work (would you say it’s easy to carry out) and how might they be seen as a criminal threat?

Deepfakes are algorithmically generated synthetic media. They are created with the aid of “Deep Learning” computational models — systems that have been trained on large bodies of real media and have learned to reproduce some of their characteristics.
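To make that less abstract, here’s a toy sketch (in PyTorch, with made-up layer sizes and random stand-in data) of the underlying idea: a small autoencoder network trained to reproduce its input images. Nothing here is a real deepfake pipeline — those are vastly larger and more elaborate — but the principle is the same: show the model lots of real media, and it learns a compressed representation from which it can regenerate media with similar characteristics.

```python
# Toy sketch only: a tiny autoencoder that learns to reproduce images.
# Real deepfake models are far bigger; all sizes/data here are placeholders.
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Squeeze a 64x64 RGB image down to a compact "latent" description...
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        # ...and learn to rebuild the original image from that description.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinyAutoencoder()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
images = torch.rand(8, 3, 64, 64)  # stand-in for a batch of real face crops

for step in range(200):
    reconstruction = model(images)
    # "Make your output look like the training data" is the whole game.
    loss = nn.functional.mse_loss(reconstruction, images)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
```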

They are probably best thought of as an offshoot of movie visual effects technology, and indeed the most prominent examples (Jordan Peele’s fake Barack Obama for Buzzfeed, Channel 4’s alternative Xmas message, the recent Tom Cruise TikToks) are pieces of entertainment produced by teams of media professionals.

At less polished levels, software for rudimentary deepfake production is freely available. Very basic apps like Wombo require no expertise to use and can be run on your phone, producing predefined “on rails” video clips that are cartoonishly entertaining but never going to fool anyone into thinking they are real. For anything beyond that, the requirements in terms of hardware, technical expertise and (especially) source data are at present quite limiting. Internet communities exist to help and advise would-be fakers, share their creations and collect data sets of celebrities for use in deepfake production. Some of these are quite pernicious — it’s hard to get reliable statistics, but it’s generally accepted that the vast majority of deepfakes are pornographic — but for the most part they are unlikely to represent a widespread threat to the general public.

By far the most common faking technique involves swapping one person’s face onto the body of another. In most cases, the products of algorithmic face swapping have enough evident mismatches to be unconvincing and require additional skilled tweaking and retouching to smooth away the incongruities. Completely de novo video creation is currently rare and unthreatening (though fully synthetic still images have become relatively common for things like the social media avatars of bot accounts).
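As I understand it, the recipe popularised by the original face-swapping code pairs one shared encoder with a separate decoder per identity: each decoder learns to reconstruct its own person, and the swap comes from feeding person A’s encoded face through person B’s decoder. A toy sketch in the same spirit (and with the same placeholder caveats) as the one above:

```python
# Toy sketch of the shared-encoder / two-decoder face-swap idea.
import torch
import torch.nn as nn

encoder   = nn.Sequential(nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU())
decoder_a = nn.Sequential(nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid())
decoder_b = nn.Sequential(nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid())

faces_a = torch.rand(8, 3, 64, 64)  # stand-in: aligned face crops of person A
faces_b = torch.rand(8, 3, 64, 64)  # stand-in: aligned face crops of person B

params = (list(encoder.parameters())
          + list(decoder_a.parameters())
          + list(decoder_b.parameters()))
optimiser = torch.optim.Adam(params, lr=1e-3)

for step in range(200):
    # Each decoder learns to reconstruct its own person from the shared code.
    loss = (nn.functional.mse_loss(decoder_a(encoder(faces_a)), faces_a)
            + nn.functional.mse_loss(decoder_b(encoder(faces_b)), faces_b))
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()

# The swap: encode A's expression and pose, render with B's appearance.
swapped = decoder_b(encoder(faces_a))
```

The shared encoder is the trick: it is forced to describe faces in a roughly person-neutral way, so either decoder can dress that description in its own identity. It also illustrates why source data is such a bottleneck — each decoder needs plenty of footage of its target to learn from.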

2. What types of crime do deepfakes empower?

Many crimes involving impersonation — identity theft, fraud, blackmail, defamation, etc — could potentially be augmented or even committed entirely using deepfakes (though obviously they are much less relevant for activities that require in-person physical presence). In practice, the obstacles to deepfake creation are currently high enough that their use will probably be limited to cases where the rewards are correspondingly large. The technological barriers will certainly come down in future, although it’s unclear how susceptible some of the bottlenecks are to technological advancement — notably the need for representative data on the target.

An important factor affecting the viability of deepfake crimes is the social context and the norms of trust within which potential victims operate. Widespread publicity and a degree of media hysteria around deepfakes are likely to make targets more wary. Within this context, it is possible that greater threats may come from the exploitation of such fears — undermining trust in evidence and institutions, motivating people to disbelieve their own eyes and so on.

3. What are some tactics for recognising artificially made media? How does a deepfake differ from the real/normal?

Deepfakes lack the physical coherence of real images. This inauthenticity may manifest in numerous ways. Large-scale mismatches in lighting, skin tone, texture etc may be visible to the naked eye. Frame-to-frame movements can look unreal because they are generated superficially from the flow of pixels, without capturing the underlying mechanical constraints of bones, muscles, skin etc. The deepfake may not perfectly distinguish foreground from background and can wind up doing things like moving fragments of a tree as if they were part of a face. The algorithm might also perform well locally but produce mismatches at greater distances, like one earring being different from the other. (These kinds of more noticeable artefacts are part of what a human cleanup artist would probably fix.)
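To give a flavour of how simple the crudest version of such a check can be, here’s a toy numpy sketch that just compares mean brightness inside a (hypothetical) face region with its immediate surroundings — a pasted-in face often sits in lighting that doesn’t quite match the scene. Real forensic tools model illumination, geometry and texture far more carefully; this is the bare intuition, and every coordinate and number in it is invented.

```python
# Toy numpy sketch: flag a face region whose lighting doesn't match its
# surroundings. All coordinates and thresholds are invented placeholders.
import numpy as np

def lighting_mismatch(frame: np.ndarray, face_box: tuple) -> float:
    """Crude score: difference in mean brightness, face vs bordering strips."""
    top, left, bottom, right = face_box
    face = frame[top:bottom, left:right]
    border = np.concatenate([
        frame[max(0, top - 10):top, left:right].ravel(),  # strip above
        frame[bottom:bottom + 10, left:right].ravel(),    # strip below
    ])
    return abs(float(face.mean()) - float(border.mean()))

frame = np.random.rand(256, 256)  # stand-in greyscale frame
score = lighting_mismatch(frame, (100, 100, 180, 160))
# A large score hints at a composited face; a real tool would need
# calibration, many more cues and a great deal of validation.
```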

In addition, there are low-level cues in images that may be used to distinguish real from fake, such as the noise characteristics of the light detectors in digital cameras. Algorithmically generated images do not have the same kind of (mostly imperceptible) patterns in their pixels, and may have other (also imperceptible) patterns that mark them out as fake. We can expect at least some of these distinguishing cues to become less useful in future, as fakers learn to build them into their generating models.
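For the concrete-minded, the sensor-noise idea (photo-response non-uniformity, or PRNU, in the forensics literature) goes roughly like this: subtract a smoothed copy of the image to leave a high-frequency noise residual, estimate a camera’s “fingerprint” by averaging residuals over many of its images, then check how strongly a suspect image’s residual correlates with that fingerprint. A toy numpy/scipy sketch of the shape of it; real forensic pipelines are considerably more careful:

```python
# Toy sketch of PRNU-style checking; real methods use better denoisers,
# per-channel handling and proper statistics. Everything here is illustrative.
import numpy as np
from scipy.ndimage import median_filter

def noise_residual(image: np.ndarray) -> np.ndarray:
    """High-frequency residual: the image minus a smoothed copy of itself."""
    return image - median_filter(image, size=3)

def fingerprint_correlation(image: np.ndarray, fingerprint: np.ndarray) -> float:
    """Normalised correlation between an image's residual and a fingerprint."""
    r = noise_residual(image).ravel()
    f = fingerprint.ravel()
    r = (r - r.mean()) / (r.std() + 1e-9)
    f = (f - f.mean()) / (f.std() + 1e-9)
    return float(np.mean(r * f))

# Estimate a camera's fingerprint from many images known to come from it.
camera_images = [np.random.rand(128, 128) for _ in range(16)]  # stand-ins
fingerprint = np.mean([noise_residual(im) for im in camera_images], axis=0)

suspect = np.random.rand(128, 128)
score = fingerprint_correlation(suspect, fingerprint)
# Weak correlation suggests the image didn't come from this camera --
# one (spoofable) signal among many.
```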

4. Can AI technology developed to counterattack deepfakes be successful in terms of security and defence?

Certainly for detection. Many existing deepfake detection techniques make use of deep learning. I’m not sure what else might be involved in a “counterattack” — perhaps the term is a bit melodramatic?
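For completeness, the standard deep-learning detection recipe is plain supervised classification: gather labelled real and fake face crops, and train a network to tell them apart. A deliberately tiny PyTorch sketch with random placeholder data; production detectors use large pretrained backbones and enormous datasets:

```python
# Toy sketch: binary real-vs-fake image classifier. Sizes and data are
# placeholders; real detectors are much larger and carefully trained.
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(64 * 16 * 16, 1),  # a single logit: fake or real
)

optimiser = torch.optim.Adam(detector.parameters(), lr=1e-3)
images = torch.rand(16, 3, 64, 64)             # stand-in face crops
labels = torch.randint(0, 2, (16, 1)).float()  # 1 = fake, 0 = real

for step in range(200):
    logits = detector(images)
    loss = nn.functional.binary_cross_entropy_with_logits(logits, labels)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()

prob_fake = torch.sigmoid(detector(images))  # per-image probability of fakery
```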

5. Can the technology behind deepfakes ever be of benefit to all of society to an extent in the present or even near future, or will they always be dangerous?

There is always scope for beneficial uses. The connection to VFX has already been noted. One might debate the social value of movies and TV, but it’s at least good for Disney’s bottom line to be able to put [redacted] in The Mandalorian. Deepfake tech has been used to protect the identities of at-risk witnesses, and there are attempts to use it to generate anonymised patient data for medical research. Vocal fakery offers improved communications to people with disabilities, and visual fakes can help people with various mental health issues overcome barriers to interacting socially online.
