

DeepNude-an-Image-to-Image-technology

Chinese Version | English Version

This repository contains the pix2pixHD algorithm (proposed by NVIDIA) used by DeepNude, and more importantly, the general image generation theory and practice behind DeepNude.

This resource includes the TensorFlow2 (Pytorch | PaddlePaddle) implementation of image generation models such as pix2pix, CycleGAN, UGATIT, DCGAN, SinGAN, VAE, ALAE, mGANprior and StarGAN-v2, which can be used to systematically learn Generative Adversarial Networks (GANs).


Content of this resource

  1. What is DeepNude?
  2. Fake Image Generation and Image-to-Image Demo
  3. DeepNude Algorithm: Normal to Pornography Image
  4. NSFW: Pornography to Normal Image, Pornographic Image Detection
  5. GAN Image Generation Theoretical Research
  6. GAN Image Generation Practice Research
  7. DeepNude to DeepFakes
  8. Future

This repository is being sponsored by the following tool; please help to support us by taking a look and signing up to a free trial.

GitAds


What is DeepNude?

DeepNude uses a slightly modified version of the pix2pixHD GAN architecture (quoted from deepnude_official). pix2pixHD is a general-purpose Image2Image technology proposed by NVIDIA. DeepNude is obviously a wrongful application of artificial intelligence, but the Image2Image technology it relies on is valuable for researchers and developers working in other fields such as fashion, film, and visual effects.


Fake Image Generation Demo

This section provides a fake image generation demo that you can use as you wish. The images are generated by StyleGAN and carry no copyright issues. Note: each time you refresh the page, a new fake image is generated, so remember to save any image you want to keep.

Image-to-Image Demo

This section provides Image-to-Image demos: black-and-white stick figures converted to colorful faces, cats, shoes, and handbags. DeepNude software mainly uses Image-to-Image technology, which in theory can convert the images you enter into any image you want. You can experience Image-to-Image technology in your browser by clicking the demo links below.

Try Image-to-faces Demo

Try Image-to-Image Demo

An example of using this demo is as follows:

Draw a cat as you imagine it in the left-hand box, then click the process button; the model will output a generated cat.


🔞 DeepNude Algorithm

DeepNude is pornographic software and is strictly off-limits to minors. If you are not interested in DeepNude itself, you can skip this section and go straight to the general Image-to-Image theory and practice in the following chapters.

DeepNude_software_itself content:

  1. Official DeepNude algorithm (based on PyTorch)
  2. DeepNude software usage process and an evaluation of its advantages and disadvantages.

👍 NSFW

Recognition and conversion of five categories of images [porn, hentai, sexy, natural, drawings] — a correct application of image-to-image technology.

NSFW (Not Safe/Suitable For Work) is a large-scale image dataset containing five categories of images [porn, hentai, sexy, natural, drawings]. Here, CycleGAN is used to convert between categories, such as porn -> natural.

  1. Click to try pornographic image detection Demo
  2. Click Start NSFW Research

Image Generation Theoretical Research

This section covers the AI/deep learning (especially computer vision) theory research related to DeepNude. If you enjoy reading papers and following the latest research, dive in.

  1. Click here to systematically study GANs
  2. Click here for a systematic collection of image-to-image papers

1. Pix2Pix

Result

Image-to-Image Translation with Conditional Adversarial Networks, proposed by UC Berkeley, is a general-purpose solution that uses conditional adversarial networks for image-to-image translation problems.
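The pix2pix objective combines a conditional GAN loss with an L1 reconstruction term, L = L_cGAN + λ·L1 (λ = 100 in the paper). Below is a minimal numpy sketch of the generator-side objective; the scalar discriminator logit stands in for a real discriminator network, and all shapes are illustrative.

```python
import numpy as np

def pix2pix_generator_loss(disc_fake_logits, generated, target, lam=100.0):
    """Generator objective from pix2pix: fool the discriminator (BCE with
    'real' labels on the fake logits) plus an L1 term pulling the generated
    image toward the ground-truth target."""
    # -log(sigmoid(logit)) written in a numerically stable form
    gan_loss = np.mean(np.log1p(np.exp(-disc_fake_logits)))
    l1_loss = np.mean(np.abs(target - generated))
    return gan_loss + lam * l1_loss

# Toy example: a perfect reconstruction and a confident discriminator logit.
gen = np.ones((4, 4, 3))
tgt = np.ones((4, 4, 3))
loss = pix2pix_generator_loss(np.array([5.0]), gen, tgt)
```

With a perfect reconstruction the L1 term vanishes and only the small adversarial term remains; a poor reconstruction is dominated by λ·L1, which is exactly why pix2pix outputs stay close to the paired target.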

View more paper studies (Click the black arrow on the left to expand)

2. Pix2PixHD

DeepNude mainly uses this Image-to-Image(Pix2PixHD) technology.

Result

Generate high-resolution images from semantic maps. A semantic map is a color image whose color blocks represent different kinds of objects, such as pedestrians, cars, traffic signs, and buildings. Pix2PixHD takes a semantic map as input and produces a high-resolution, photorealistic image. Most earlier techniques could only produce rough, low-resolution images that did not look real; this research produces images at 2048x1024 resolution, very close to full-HD photos.
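Semantic maps are typically fed to networks like Pix2PixHD as a one-hot label volume, one channel per object class. A small numpy sketch of that encoding (the class IDs and shapes here are illustrative, not Pix2PixHD's actual label set):

```python
import numpy as np

def one_hot_semantic_map(label_map, num_classes):
    """Convert an HxW map of integer class IDs (e.g. road, car, pedestrian)
    into an HxWxC one-hot volume, the usual network input format."""
    h, w = label_map.shape
    one_hot = np.zeros((h, w, num_classes), dtype=np.float32)
    one_hot[np.arange(h)[:, None], np.arange(w)[None, :], label_map] = 1.0
    return one_hot

# 2x2 toy map with 3 classes: 0 = road, 1 = car, 2 = pedestrian.
labels = np.array([[0, 1], [2, 0]])
volume = one_hot_semantic_map(labels, num_classes=3)
```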

3. CycleGAN

Result

CycleGAN uses a cycle consistency loss to enable training without the need for paired data. In other words, it can translate from one domain to another without a one-to-one mapping between the source and target domain. This opens up the possibility to do a lot of interesting tasks like photo-enhancement, image colorization, style transfer, etc. All you need is the source and the target dataset.
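The cycle-consistency idea can be written as L_cyc = ||F(G(x)) − x||₁ + ||G(F(y)) − y||₁, where G maps domain X→Y and F maps Y→X. A numpy sketch with trivial stand-in "generators" (the real G and F are convolutional networks):

```python
import numpy as np

def cycle_consistency_loss(G, F, x, y):
    """L1 penalty for failing to return to the starting image after a
    round trip through both generators: x -> G(x) -> F(G(x)) ~= x."""
    forward_cycle = np.mean(np.abs(F(G(x)) - x))
    backward_cycle = np.mean(np.abs(G(F(y)) - y))
    return forward_cycle + backward_cycle

# Toy generators: G brightens by 0.1, F darkens by 0.1 (perfect inverses).
G = lambda img: img + 0.1
F = lambda img: img - 0.1
x = np.random.rand(8, 8, 3)
y = np.random.rand(8, 8, 3)
loss = cycle_consistency_loss(G, F, x, y)          # ~0: cycle is closed
loss_broken = cycle_consistency_loss(G, lambda img: img, x, y)
```

When F is not G's inverse the loss is strictly positive, which is the signal that trains both generators without any paired data.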

4. UGATIT

Result

UGATIT is a novel method for unsupervised image-to-image translation that incorporates a new attention module and a new learnable normalization function in an end-to-end manner. UGATIT can handle both image translations requiring holistic changes and those requiring large shape changes. It can be seen as an enhanced version of CycleGAN and a more effective general-purpose image translation framework.

5. StyleGAN

Result

Source A + Source B (Style) = ?

StyleGAN can not only generate fake images from sources A and B, it can also combine the content of source A and source B at different levels of strength, as shown below.

Style mixing results, by which style levels are taken from Source B:

  1. Coarse styles from Source B: all colors (eyes, hair, lighting) and detailed facial features come from Source A; coarse features such as pose, general hair style, face shape, and glasses are inherited from Source B.
  2. Middle styles from Source B: pose, general face shape, and glasses come from Source A; mid-level features such as hair style and open/closed eyes are inherited from Source B.
  3. Fine styles from Source B: the main facial content comes from Source A; fine features such as the color scheme and microstructure are inherited from Source B.
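Mechanically, style mixing amounts to feeding Source A's per-layer latent code to some layers of the synthesis network and Source B's code to the rest; which layers take B's code determines whether coarse, middle, or fine attributes transfer. A numpy sketch of the per-layer swap (the layer count and crossover points are illustrative):

```python
import numpy as np

def mix_styles(w_early, w_late, crossover):
    """Per-layer latent mixing as in StyleGAN: synthesis layers before
    `crossover` use w_early, layers from `crossover` onward use w_late.
    Each w is a (num_layers, latent_dim) broadcast copy of one latent."""
    mixed = w_early.copy()
    mixed[crossover:] = w_late[crossover:]
    return mixed

num_layers, dim = 14, 512
w_a = np.zeros((num_layers, dim))   # stand-in for Source A's latent code
w_b = np.ones((num_layers, dim))    # stand-in for Source B's latent code

# Early (coarse) layers from B -> B controls pose/face shape:
coarse_from_b = mix_styles(w_b, w_a, crossover=4)
# Late (fine) layers from B -> B controls color scheme/microstructure:
fine_from_b = mix_styles(w_a, w_b, crossover=10)
```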

StyleGAN2

Without increasing StyleGAN's computational cost, StyleGAN2 removes the image artifacts StyleGAN produces and obtains higher-quality images with better detail, setting a new state of the art for unconditional image modeling.

6. Image Inpainting

Result

In the Image_Inpainting(NVIDIA_2018).mp4 video, you simply smear over the unwanted content in an image; even if the masked region has a very irregular shape, NVIDIA's model fills the blank with very realistic content, like a one-click photo edit that leaves no visible traces. The work comes from Guilin Liu et al. at NVIDIA, who published a deep learning method that can edit images or reconstruct corrupted images even when pixels are worn away or lost. It was the state-of-the-art approach in 2018.
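The method builds on partial convolutions: the convolution is computed only over valid (unmasked) pixels, renormalized by the number of valid pixels in each window, and the hole in the mask shrinks after every layer. A simplified single-channel numpy sketch of one such step, using a 3x3 mean filter in place of learned weights:

```python
import numpy as np

def partial_conv_step(image, mask, k=3):
    """One mask-aware filtering pass: at each pixel, average only the valid
    neighbors (mask == 1), then mark the pixel valid if its window contained
    at least one valid pixel (so the hole shrinks by one ring per pass)."""
    pad = k // 2
    img_p = np.pad(image * mask, pad)   # zero out and pad the hole pixels
    msk_p = np.pad(mask, pad)
    out = np.zeros_like(image, dtype=float)
    new_mask = np.zeros_like(mask, dtype=float)
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            win_img = img_p[i:i + k, j:j + k]
            win_msk = msk_p[i:i + k, j:j + k]
            valid = win_msk.sum()
            if valid > 0:
                out[i, j] = win_img.sum() / valid   # renormalize by valid count
                new_mask[i, j] = 1.0
    return out, new_mask

# Constant image with a single-pixel hole in the middle.
img = np.full((5, 5), 7.0)
msk = np.ones((5, 5)); msk[2, 2] = 0.0
filled, msk2 = partial_conv_step(img, msk)
```

On this constant image the hole is filled with exactly the surrounding value and the updated mask has no holes left, which is the behavior the renormalization is designed to produce.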

7. SinGAN

ICCV2019 Best paper - Marr prize

Result

We introduce SinGAN, an unconditional generative model that can be learned from a single natural image. Our model is trained to capture the internal distribution of patches within the image, and is then able to generate high quality, diverse samples that carry the same visual content as the image. SinGAN contains a pyramid of fully convolutional GANs, each responsible for learning the patch distribution at a different scale of the image. This allows generating new samples of arbitrary size and aspect ratio, that have significant variability, yet maintain both the global structure and the fine textures of the training image. In contrast to previous single image GAN schemes, our approach is not limited to texture images, and is not conditional (i.e. it generates samples from noise). User studies confirm that the generated samples are commonly confused to be real images. We illustrate the utility of SinGAN in a wide range of image manipulation tasks.
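SinGAN's pyramid can be pictured as a series of progressively downscaled copies of the single training image, with one small patch GAN per scale. A numpy sketch of building such a pyramid; the 0.75 scale factor, nearest-neighbor resizing, and minimum size here are illustrative simplifications, not the paper's exact settings:

```python
import numpy as np

def build_pyramid(image, scale=0.75, min_size=8):
    """Repeatedly downscale an HxWxC image by `scale` (nearest-neighbor
    sampling for simplicity) until the shorter side drops below min_size.
    In SinGAN, each level would train its own small patch GAN."""
    pyramid = [image]
    while True:
        h, w = pyramid[-1].shape[:2]
        nh, nw = int(h * scale), int(w * scale)
        if min(nh, nw) < min_size:
            break
        rows = (np.arange(nh) / scale).astype(int)   # nearest source row
        cols = (np.arange(nw) / scale).astype(int)   # nearest source column
        pyramid.append(pyramid[-1][rows][:, cols])
    return pyramid  # pyramid[0] is the finest scale, pyramid[-1] the coarsest

levels = build_pyramid(np.random.rand(64, 64, 3))
```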

8. ALAE

Result

Although autoencoders have been studied extensively, whether they have the same generative power as GANs, or learn disentangled representations, has not been fully addressed. We introduce an autoencoder that tackles these issues jointly, which we call the Adversarial Latent Autoencoder (ALAE). It is a general architecture that can leverage recent improvements in GAN training procedures.

9. mGANprior

Result

Despite the success of Generative Adversarial Networks (GANs) in image synthesis, applying trained GAN models to real image processing remains challenging. Previous methods typically invert a target image back to the latent space either by back-propagation or by learning an additional encoder. However, the reconstructions from both methods are far from ideal. In this work, we propose a novel approach, called mGANprior, to incorporate well-trained GANs as an effective prior for a variety of image processing tasks.

10. StarGAN v2

Result

A good image-to-image translation model should learn a mapping between different visual domains while satisfying the following properties: 1) diversity of generated images and 2) scalability over multiple domains. Existing methods address either of the issues, having limited diversity or multiple models for all domains. We propose StarGAN v2, a single framework that tackles both and shows significantly improved results over the baselines.

11. DeepFaceDrawing

Result

Recent deep image-to-image translation techniques allow fast generation of face images from freehand sketches. However, existing solutions tend to overfit to sketches, thus requiring professional sketches or even edge maps as input. To address this issue, our key idea is to implicitly model the shape space of plausible face images and synthesize a face image in this space to approximate an input sketch.


Image Generation Practice Research

These models are implemented with the latest TensorFlow 2.

This section walks through DeepNude-related AI/deep learning (especially computer vision) code practice; if you enjoy hands-on experimentation, dig in.

1. Pix2Pix

Use the Pix2Pix model (Conditional Adversarial Networks) to convert black-and-white stick figures into color images, flat house sketches into textured houses, and aerial photos into maps.

Click Start Experience 1

2. Pix2PixHD

Under development... For now, you can use the official implementation.

3. CycleGAN

The CycleGAN neural network model is used to implement four functions: photo style conversion, photo effect enhancement, seasonal change of landscapes, and object conversion.

Click Start Experience 3

4. DCGAN

DCGAN is used for random-noise-to-image generation tasks, such as face generation.

Click Start Experience 4

5. Variational Autoencoder (VAE)

VAE is used for random-noise-to-image generation tasks, such as face generation.
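A VAE generates by decoding a latent vector sampled from a Gaussian; during training, the sample is drawn with the reparameterization trick z = μ + σ·ε so gradients can flow through the encoder. A numpy sketch of that sampling step plus the closed-form KL regularizer (batch and latent sizes are illustrative):

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """z = mu + sigma * eps with eps ~ N(0, I): differentiable sampling."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_divergence(mu, log_var):
    """Closed-form KL(N(mu, sigma^2) || N(0, I)), summed over latent dims."""
    return -0.5 * np.sum(1 + log_var - mu**2 - np.exp(log_var), axis=-1)

rng = np.random.default_rng(0)
mu = np.zeros((4, 16))        # batch of 4 latent means
log_var = np.zeros((4, 16))   # log-variance 0 -> unit variance
z = reparameterize(mu, log_var, rng)
kl = kl_divergence(mu, log_var)   # 0 when the posterior equals the prior
```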

Click Start Experience 5

6. Neural style transfer

Use VGG19 to implement image style transfer, such as turning photos into oil paintings or comics.
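Style transfer compares feature statistics rather than pixels: the style of a VGG19 feature map is summarized by its Gram matrix, and the style loss is the squared difference between the Gram matrices of the stylized image and the style image. A numpy sketch of the Gram computation (the feature shapes are illustrative, standing in for real VGG19 activations):

```python
import numpy as np

def gram_matrix(features):
    """Channel-by-channel correlation of an HxWxC feature map: the style
    descriptor used in neural style transfer, normalized by locations."""
    h, w, c = features.shape
    flat = features.reshape(h * w, c)
    return flat.T @ flat / (h * w)

def style_loss(gen_features, style_features):
    """Mean squared difference between the two Gram matrices."""
    return float(np.mean((gram_matrix(gen_features) - gram_matrix(style_features)) ** 2))

a = np.random.rand(8, 8, 4)
same = style_loss(a, a)                      # identical styles -> zero loss
diff = style_loss(a, np.zeros((8, 8, 4)))    # different styles -> positive
```

Because the Gram matrix discards spatial layout, two images can match in style while differing completely in content, which is exactly what makes photo-to-painting transfer possible.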

Click Start Experience 6

..........................................................................

If you are a PaddlePaddle user, you can refer to PaddleGAN, the PaddlePaddle version of the image generation model library above.


DeepFakes (Promotion of DeepNude)

DeepFakes can be seen as an upgraded version of DeepNude. It refers to a family of deep learning techniques for generating convincing fakes, such as fake images, fake audio, and fake videos.

MyVoiceYourFace

Use deep-fake machine learning to create a video from a single image and a source video. Related paper: First Order Motion Model for Image Animation.

Speaker's Video + Any Image = Fake Video

click to try MyVoiceYourFace Online!

Realistic Speech-Driven Facial Animation with GANs

One photo + One audio = Composite Video

We propose a temporal GAN capable of producing animated faces using only a still image of a person and an audio clip containing speech. The videos generated using this model do not only produce lip movements that are synchronized with the audio but also exhibit characteristic facial expressions such as blinks, brow raises etc. This extends our previous model by separately dealing with audio-visual synchronization and expression generation. Our improved model works on "in-the-wild" unseen faces and is capable of capturing the emotion of the speaker and reflecting it in the facial expression.

Interested in DeepFakes?

Click to start systematic learning DeepFakes


Future

Click read more...
