Update July 9, 7:55 p.m. EST: GitHub removed the DeepNude source code from its website. Read more here.


After 4 days on the market, the creator(s) of DeepNude, the AI that “undressed” women, retired the app following a wave of backlash from individuals, including leaders of the AI community.

Although DeepNude’s algorithm, which constructed a deepfake nude image of a woman (not a person; a woman) from a semi-clothed picture of her, wasn’t sophisticated enough to pass forensic analysis, its output was passable to the human eye once the company’s watermark over the constructed nude (in the free app) or the “FAKE” stamp in the image’s corner (in the $50 version) was removed.
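What might that “forensic analysis” look like in practice? One classic, widely used check is error level analysis (ELA), which re-saves a JPEG at a known quality and highlights regions whose compression error differs from the rest of the frame. The sketch below is a rough, minimal illustration using Pillow, assuming a JPEG input; it is a generic technique, not anything specific to DeepNude:

```python
from PIL import Image, ImageChops, ImageEnhance

def error_level_analysis(path, quality=90):
    """Return an amplified difference image between the original JPEG and a
    re-saved copy. Regions edited after the last save (e.g., a synthesized
    patch pasted over the original) often show a different error level."""
    original = Image.open(path).convert("RGB")

    # Re-save at a known JPEG quality, then compare against the original.
    resaved_path = path + ".ela.jpg"
    original.save(resaved_path, "JPEG", quality=quality)
    resaved = Image.open(resaved_path)

    diff = ImageChops.difference(original, resaved)

    # Amplify the (usually faint) difference so it is visible to the eye.
    extrema = diff.getextrema()
    max_diff = max(channel_max for _, channel_max in extrema) or 1
    return ImageEnhance.Brightness(diff).enhance(255.0 / max_diff)
```

ELA is only a heuristic, and a determined forger can defeat it, but it illustrates the gap between “fools the human eye” and “passes forensic analysis.”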

However, DeepNude isn’t gone. Quite the opposite: it’s back as an open-source project on GitHub, which makes it more dangerous than it was as a standalone app. Anyone can download the source code. For free.

The upside for potential victims is that the algorithm is failing to meet expectations.

The downside of DeepNude becoming open source is that the algorithm can be retrained on a larger dataset of nude images to increase (“improve”) the accuracy of the resulting images.

If technology has long been able to create fake images, including nudes, convincing enough to fool the human eye, why is this significant?

Thanks to applications such as Photoshop and the media’s coverage of deepfakes, if we don’t already question the authenticity of digitally produced images, we’re well on our way to doing so.

Consider a well-known example: Photoshop is used to overlay Katy Perry’s face onto Megan Fox’s (clothed) body.

DeepNude effectively follows the same process. What’s significant is that it does so very quickly, via automation. And because it’s a machine learning algorithm trained on a dataset of more than 10,000 images of nude women, rather than a tool that overlays one person’s face onto one other person’s body, reverse-engineering an output image into its component parts would be nearly impossible.
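For readers curious about the machinery: DeepNude was reportedly built on pix2pix-style image-to-image translation. The toy sketch below is a generic illustration of that class of model, not DeepNude’s actual architecture or code; its only point is that, once trained, a single automated forward pass replaces hours of manual editing.

```python
import torch
import torch.nn as nn

class TinyImage2Image(nn.Module):
    """A toy encoder-decoder in the spirit of pix2pix-style image-to-image
    translation. Generic illustration only; not DeepNude's code."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=4, stride=2, padding=1),   # downsample
            nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm2d(128),
            nn.LeakyReLU(0.2),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),  # upsample
            nn.BatchNorm2d(64),
            nn.ReLU(),
            nn.ConvTranspose2d(64, 3, kernel_size=4, stride=2, padding=1),
            nn.Tanh(),  # outputs an image with values in [-1, 1]
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Once such a model is trained, "editing" is just one forward pass:
img = torch.randn(1, 3, 256, 256)      # one 256x256 RGB input image
out = TinyImage2Image()(img)            # output of the same size, in milliseconds
```

That speed and hands-off automation, combined with training on thousands of images rather than one source photo, is what separates this from a manual Photoshop job.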

All this raises the question: how should we respond? Can we prevent victimization by algorithms like these? If so, how?

What role does Corporate Responsibility play? Should GitHub, or Microsoft (its parent company), be responsible for taking down the DeepNude source code and for implementing controls that keep it from reappearing until victimization can be prevented?

Should our response be social? Is it even possible for us to teach every person on the planet (including curious adolescents whose brains are still maturing and who may be tempted to use DeepNude indiscriminately) that consent must be asked for and given freely?

Should we respond legislatively? Legally, creating a DeepNude of someone who didn’t provide consent could be treated as a felony similar to blackmail (independent of the fake image’s use). The state of Virginia thinks so. Just this month, it passed an amendment expanding its ban on nonconsensual pornography to include deepfakes.

If the response should be legislative, how should different countries and regions account for the global availability of DeepNude’s source code? If it becomes illegal to have or use the algorithm in one country and not another, should the code be subject to smuggling laws?

Given that an AI spurred this ethical debate, what about a technological response? Should DeepNude and other AIs be expected, or even required, to implement something like facial-recognition-based consent from the person whose image will be altered?
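As a thought experiment, such a gate might compare the faces detected in an uploaded photo against face encodings that their owners registered with verified consent. The sketch below uses the open-source face_recognition library; the consent registry itself is hypothetical, and nothing like this exists in DeepNude:

```python
import face_recognition

def consent_given(input_photo_path, consent_registry):
    """Hypothetical gate: proceed only if every face detected in the input
    photo matches an encoding its owner registered as consenting.
    `consent_registry` is an assumed list of 128-d face encodings collected,
    with verified consent, ahead of time."""
    image = face_recognition.load_image_file(input_photo_path)
    detected = face_recognition.face_encodings(image)
    if not detected:
        return False  # no recognizable face: refuse rather than guess
    for encoding in detected:
        matches = face_recognition.compare_faces(consent_registry, encoding)
        if not any(matches):
            return False  # at least one person in the photo has not consented
    return True

# Usage sketch: refuse to run the generator unless consent checks out.
# if not consent_given("upload.jpg", registry):
#     raise PermissionError("No verified consent for the person in this image.")
```

Of course, a check like this is trivially removable from open-source code, which is exactly why the corporate, social, and legislative questions above still matter.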


What do you think? How should we as human beings respond to DeepNude’s return, and to the moral hazards it and similar AIs create? How should we protect potential victims? And who is responsible for doing so? Join the conversation and leave your thoughts below!