Faking It

For not very good reasons, I seem to have acquired—along with an embarrassing tabloid headline—a reputation as someone who knows something about deepfakes and crime. Which means I get requests for quotes and background from journos (to buttress their scare stories) and students (for their dissertations). I tend to ignore the hacks, though some of those are students too: what a dilemma.

In any case, here’s my recent response to one student’s queries, which—if there’s ever a more respectable venue in the vicinity—I might eventually expand into an FAQ sort of thing to point random enquirers to, but in the meantime can just gather dust here and stop wasting space in my brain.

1. How do deepfakes work (would you say it’s easy to carry out) and how might they be seen as a criminal threat?

Deepfakes are algorithmically generated synthetic media. They are created with the aid of “Deep Learning” computational models — systems that have been trained on large bodies of real media and learned to reproduce some of their characteristics.

They are probably best thought of as an offshoot of movie visual effects technology, and indeed the most prominent examples (Jordan Peele’s fake Barack Obama for Buzzfeed, Channel 4’s alternative Xmas message, the recent Tom Cruise TikToks) are pieces of entertainment produced by teams of media professionals.

At less polished levels, software for rudimentary deepfake production is freely available. Very basic apps like Wombo require no expertise to use and can be run on your phone, producing predefined “on rails” video clips that are cartoonishly entertaining but never going to fool anyone into thinking they are real. For anything beyond that, the requirements in terms of hardware, technical expertise and (especially) source data are at present quite limiting. Internet communities exist to help and advise would-be fakers, to share their creations and to collect data sets of celebrity images for use in deepfake production. Some of these are quite pernicious — it’s hard to get reliable statistics, but it’s generally accepted that the vast majority of deepfakes are pornographic — but for the most part they are unlikely to represent a widespread threat to the general public.

By far the most common faking technique involves swapping one person’s face onto the body of another. In most cases, the products of algorithmic face swapping have enough evident mismatches to be unconvincing and require additional skilled tweaking and retouching to smooth away the incongruities. Completely de novo video creation is currently rare and unthreatening (though fully synthetic still images have become relatively common for things like the social media avatars of bot accounts).
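
For the technically curious, here is a minimal sketch of the shared-encoder, twin-decoder autoencoder idea behind most face-swapping tools (the approach popularised by software like DeepFaceLab and faceswap). All the names, layer sizes and the 64x64 crop size are illustrative assumptions rather than any real tool’s architecture; production systems are much bigger and much fussier about face alignment, masking and blending.

```python
# Sketch of the shared-encoder / twin-decoder face-swap idea.
# Sizes and names are illustrative, not any particular tool's design.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps an aligned 64x64 face crop to a shared latent representation."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face from the latent code, in one identity's likeness."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

# One encoder shared between both identities; one decoder per identity.
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

# Training (sketched): reconstruct each identity through its own decoder, e.g.
# loss = mse(decoder_a(encoder(faces_a)), faces_a)
#      + mse(decoder_b(encoder(faces_b)), faces_b)
# where faces_a, faces_b are batches of aligned crops of persons A and B.

# The swap itself: encode a face of person A, decode with B's decoder.
fake = decoder_b(encoder(torch.rand(1, 3, 64, 64)))  # dummy input
```

The trick is the shared latent space: because a single encoder serves both identities, feeding person A’s expression and pose through person B’s decoder renders B’s likeness performing A’s actions.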

2. What types of crime do deepfakes empower?

Many crimes involving impersonation — identity theft, fraud, blackmail, defamation, etc — could potentially be augmented or even committed entirely using deepfakes (though obviously they are much less relevant for activities that require in-person physical presence). In practice the obstacles to deepfake creation are currently high enough that their use will probably be limited to cases where the rewards are correspondingly large. It is certain that the technological barriers will fall in future, although it’s unclear how susceptible some of the bottlenecks are to technological advancement — notably the need for representative data on the target.

An important factor affecting the viability of deepfake crimes is the social context and the norms of trust within which potential victims operate. Widespread publicity and a degree of media hysteria around deepfakes is likely to make targets more wary. Within this context, it is possible that greater threats may come from the exploitation of such fears — undermining trust in evidence and institutions, motivating people to disbelieve their eyes and so on.

3. What are some tactics for recognising artificially made media? How does a deepfake differ from the real/normal?

Deepfakes lack the physical coherence of real images. This inauthenticity may manifest in numerous ways. Large-scale mismatches in lighting, skin tone, texture etc may be visible to the naked eye. Frame-to-frame movements can look unreal because they are generated superficially from the flow of pixels without capturing the underlying mechanical constraints of bones, muscles, skin etc. The deepfake may not perfectly distinguish foreground from background and may wind up doing things like moving fragments of a tree as if it were part of a face. The algorithm might perform well locally but produce mismatches at greater distances, like one earring being different from the other. (These kinds of more noticeable artefacts are part of what a human cleanup artist would probably fix.)
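
To make the frame-to-frame point slightly more concrete, here is a toy motion-plausibility check using dense optical flow (OpenCV’s Farneback method). The jitter score is a crude heuristic invented purely for illustration, not an established forensic measure, and real detectors are considerably more sophisticated.

```python
# Toy check: does frame-to-frame motion vary smoothly, as real physical
# motion tends to, or does the flow field change erratically?
import cv2
import numpy as np

def flow_jitter(frames):
    """Mean frame-to-frame change in dense optical flow over a clip.

    `frames` is a list of greyscale uint8 images. An erratically changing
    flow field is one (weak, illustrative) hint of synthesis."""
    flows = []
    for prev, nxt in zip(frames, frames[1:]):
        flows.append(cv2.calcOpticalFlowFarneback(
            prev, nxt, None, 0.5, 3, 15, 3, 5, 1.2, 0))
    diffs = [np.abs(b - a).mean() for a, b in zip(flows, flows[1:])]
    return float(np.mean(diffs))

# Toy usage, with random noise standing in for video frames.
clip = [np.random.randint(0, 256, (64, 64), np.uint8) for _ in range(5)]
print(flow_jitter(clip))
```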

In addition, there are low-level cues in images that may be used to distinguish real from fake, such as noise characteristics of the light detectors in digital cameras. Images generated algorithmically do not have the same kind of (mostly imperceptible) patterns in their pixels, and may have other (also imperceptible) patterns that mark them out as fake. We can expect at least some of these kinds of distinguishing cues to become less useful in future, as the fakers add them into the generating models.
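
A toy sketch of the sensor-noise idea, known in the forensics literature as PRNU analysis: strip an image down to its high-frequency residual and correlate that with a camera fingerprint estimated beforehand from known-real images. The median filter standing in for a proper denoiser, and indeed everything else here, is a simplification of the real forensic pipeline.

```python
# Toy sensor-noise (PRNU-style) check: real camera images carry a faint,
# sensor-specific noise pattern that generated images lack.
import numpy as np
from scipy.ndimage import median_filter

def noise_residual(image):
    """High-frequency residual: the image minus a denoised version of itself."""
    denoised = median_filter(image, size=3)
    return image - denoised

def fingerprint_correlation(image, fingerprint):
    """Normalised correlation between an image's residual and a camera
    fingerprint (which in practice is estimated from many known-real images
    from the same camera). A low value weakly hints the image is synthetic."""
    r = noise_residual(image).ravel()
    f = fingerprint.ravel()
    r = (r - r.mean()) / (r.std() + 1e-12)
    f = (f - f.mean()) / (f.std() + 1e-12)
    return float(np.mean(r * f))

# Toy usage with random data standing in for a greyscale image and a
# previously estimated fingerprint of the same shape.
img = np.random.rand(128, 128)
fp = np.random.rand(128, 128)
print(fingerprint_correlation(img, fp))
```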

4. Can AI technology developed to counterattack deepfakes be successful in terms of security and defence?

Certainly for detection. Many existing deepfake detection techniques make use of deep learning. I’m not sure what else might be involved in a “counterattack” — perhaps the term is a bit melodramatic?
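
Concretely, a deep learning deepfake detector is usually just a binary classifier trained on labelled real and fake face crops. The sketch below is deliberately tiny and all its sizes are illustrative; published detectors typically fine-tune much larger pretrained networks on benchmarks such as FaceForensics++.

```python
# Tiny illustrative deepfake detector: a CNN that emits one logit per image,
# trained with binary cross-entropy against real/fake labels.
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 1),  # single logit: fake vs real
)

# One training step, sketched with random stand-in data.
loss_fn = nn.BCEWithLogitsLoss()
batch = torch.rand(8, 3, 64, 64)               # stand-in face crops
labels = torch.randint(0, 2, (8, 1)).float()   # 1 = fake, 0 = real
loss = loss_fn(detector(batch), labels)
loss.backward()
```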

5. Can the technology behind deepfakes ever be of benefit to society in the present or even near future, or will it always be dangerous?

There is always scope for beneficial uses. The connection to VFX has already been noted. One might debate the social value of movies and TV, but it’s at least good for Disney’s bottom line to be able to put [redacted] in The Mandalorian. Deepfake tech has been used to protect the identity of at-risk witnesses and there are attempts to use it for generating anonymised patient data for medical research. Vocal fakery offers improved communications to people with disabilities, and visual fakes can help people with various mental health issues overcome barriers to interacting socially online.

Learning Experience

As the latest in a very long line of marginal career shifts winding up in unintended places, I seem to have agreed to become module lead for the COMP0088 Introduction to Machine Learning course. This is not completely out of the blue—I am already, in some partial & impermanent fashion, counted among the UCL Computer Science teaching staff—but so far I’ve been able to get away with just doing things like project supervision and tutorials and (ugh) marking rather than actual full-blown lecturing. No longer, it seems.

Obviously I am ambivalent about this; I’m ambivalent about everything. On the one hand, AAAARGH! I feel like the absolute last person who should be teaching anyone about anything. On the other, I Have Opinions about how it should be done for this subject in particular, and it seems I can no longer just smugly hold those from the sidelines but must actually put them to the test. Eek.

One consequence of the prospect of having a corpus of actual students is an inchoate but subtly nagging sense that perhaps I should have a less disreputable—and possibly even up to date—web presence, in case the little bastards google me. I don’t know whether that might take the form of a direct successor blog—WT5—or some separate work-specific thing, and of course doing nothing also remains a popular option. So for the moment I’m just noting that the question exists, and while it simmers on the back burner I might try to post a bit more often than once a year, if only to push the notably disreputable previous post off the front page. (Though in point of fact, that post is legitimately work-related in ways that may or may not be clarified soon.)

Proof By Example

More than 10 years ago I wrote this:

It’ll be a dark day for the city if the Outer London anti-Ken contingent propel his gormless, cloth-eared, crypto-fascist mophead buffoon of an opponent into power. We’ll be a laughingstock.

Obviously true, but also: Christ, I had no fucking idea.

The thing that vexes me most about the bowel-voiding embarrassment that is our current Prime Minister—and literally everything about him is vexing—is that he, like his tartrazine twin in the White House, stands as a powerful lesson, written in letters of fire a hundred metres tall for all to see. And that lesson is:

Shitty people get rewarded for doing shitty things.

Not only that, they’re encouraged to think they deserve those rewards. Incentivised to wallow in their shittiness, to proclaim it loudly far and wide, to revel in and celebrate their utter lack of redeeming features. Being a repugnant narcissistic self-serving mendacious shitbag is a fantastic career choice, they prove by example. Sign up at Shitbag University right away! Be the shitbag you truly want to be!

The very existence of these smirking entitled pricks sullies our public discourse, debases society, pollutes the Earth. And the fact that so many people line up to give the shitbags exactly what they want, knowing full well they don’t merit it, that doing so is an abject surrender to the forces of actual evil—well, that debases us more.

I hope to live to see these particular exemplars cast down, despised and rejected and acquainted with grief, stripped of every single trapping of success their rotten hearts ever desired, crawling in the gutter, drowning in sewage, torn apart by angry mobs. But even if that eventually happens—and SPOILER ALERT it won’t—it’s too fucking late. The lesson has been taught, and you can be quite sure it’s been learned. There are plenty more shitbags where these ones came from, and hordes of eager enablers ready to fawn and cringe and grease their paths to power.

So that’s something to look forward to, eh?

Slides

Yes, some time has passed. But I’m currently doing something that entails revisiting this talk content, and that reminded me I said I’d post the slides, so here they are. As PDF, so a few animations and timing gags are missing. What can I say, you had to be there.

For context, this was a ‘briefing’ given at a workshop on the subject of AI and Future Crime, which is why it ends with suggestions of stuff ‘to consider in your groups’. Feel free not to consider anything, in groups or otherwise.