In 2014, a form of machine learning called the Generative Adversarial Network (GAN) was introduced to the world by computer scientist Ian Goodfellow and his colleagues. Goodfellow, nicknamed "the GANfather," was described by MIT Technology Review as the man who has given machines the gift of imagination.

A GAN consists of two programs pitted against each other in a contest: a generator that produces candidate examples and a discriminator that judges them against a dataset of "training inputs." Through this contest, the generator learns to produce new examples that resemble the provided inputs. Training inputs can range from numbers and text in different languages to photographs and even videos. The abuse potential highlighted here involves human image synthesis, where the GAN is trained on a dataset of photographs and videos of real people.
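The adversarial contest described above can be sketched in a toy form. The example below is illustrative only and is not drawn from any system mentioned in this article: the "training inputs" are samples from a one-dimensional Gaussian distribution, the generator is a simple affine map of random noise, and the discriminator is a logistic classifier. The two are trained in alternation, the discriminator learning to tell real samples from generated ones and the generator learning to fool it.

```python
import numpy as np

# Illustrative toy GAN (assumed setup, not from the article): the real
# "training inputs" are samples from a 1-D Gaussian; the generator
# learns, via the adversarial contest, to mimic that distribution.
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Real data distribution the generator must imitate
REAL_MEAN, REAL_STD = 4.0, 1.25

# Generator: g(z) = a*z + b, an affine map of noise z ~ N(0, 1)
a, b = 1.0, 0.0
# Discriminator: D(x) = sigmoid(w*x + c), a logistic classifier
w, c = 0.1, 0.0

lr, batch, steps = 0.05, 64, 3000
for step in range(steps):
    # --- Discriminator update: push D(real) toward 1, D(fake) toward 0 ---
    x_real = rng.normal(REAL_MEAN, REAL_STD, batch)
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    # Analytic gradients of the binary cross-entropy loss
    grad_w = np.mean(-(1 - d_real) * x_real + d_fake * x_fake)
    grad_c = np.mean(-(1 - d_real) + d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # --- Generator update (non-saturating loss): push D(fake) toward 1 ---
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    grad_a = np.mean(-(1 - d_fake) * w * z)
    grad_b = np.mean(-(1 - d_fake) * w)
    a -= lr * grad_a
    b -= lr * grad_b

# After training, generated samples cluster near the real distribution
samples = a * rng.normal(0.0, 1.0, 10000) + b
print(f"generated mean={samples.mean():.2f}, std={samples.std():.2f}")
```

Real image-synthesis GANs replace the affine generator and logistic discriminator with deep convolutional networks and the 1-D Gaussian with millions of photographs, but the adversarial training loop is the same.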

In February 2019, VICE published an article about a website, thispersondoesnotexist.com, that used GANs to generate convincing faces of people who do not exist. It was as simple as clicking the refresh button in the browser: each reload produced a new face. Another website that emerged soon after was These Nudes Do Not Exist (TNDNE, thesenudesdonotexist.com), which algorithmically generated nude photos of women posed as if for a mugshot and sold those images for $1 each. The founders of TNDNE were not open about the datasets used to train their GANs, but the results suggested images of Caucasian women aged roughly 20 to 40. Beyond still images, similar AI techniques have already been used to create realistic videos that show people doing or saying things they never did in real life.

The above-mentioned examples are already malicious and abusive, but below is a hypothetical scenario that combines them in a security context. What if the photographs and videos provided as training inputs to a GAN were of most-wanted criminals or terrorists? The GAN could then be used to generate the faces of the world's next most-wanted insurgents, who could be shown on video issuing threats or claiming responsibility for terrorist attacks across the world. GAN-generated insurgents could also be made to appear as if they belong to a particular target country, forming the basis of a potential false-flag operation implicating that country. And since these individuals do not exist in real life, the blamed country would be helpless to identify them or act against them.

It is therefore necessary for countries, Pakistan in particular, to study the implications of such technologies that could be employed against them, and to develop countermeasures for scenarios in which these technologies are deployed.

Author: Daniel Khan

Edited By: Talha Ahmad (Editor in Chief PSF)

Note: The views expressed in this article are the author’s own and do not necessarily reflect the editorial policy of Pakistan Strategic Forum.

#TeamPakistanStrategicForum
