Generative Adversarial Networks

Part 01 of one

Generative Adversarial Network Repos

Open-source repositories for implementing on-demand, photo-realistic editing of facial attributes include the code repository for Learning Generative Adversarial Networks, recently published by Packt; the Apchenstu facial details synthesis repository; and the run-youngjoo/SC-FEGAN repository.

Code Repository for Learning Generative Adversarial Networks, published by Packt

  • The Generative Adversarial Networks repository compiled by Packt is accessible via this link.

Examples and Results

  • The Learning Generative Adversarial Networks code repository published by Packt contains five chapters of code (Chapter 2 through Chapter 6), each published as a separate "code files" directory.
  • The source code published by Packt is open source and available to the public.
  • An analysis of the source code of the Generative Adversarial Networks repository published by Packt indicates that it inspects the facial attributes of an uploaded image file and returns a response describing the detected face, including attributes such as expressions, each paired with a "Confidence" score.
  • The code uses the call "face_response = rekognition.detect_faces" to detect the current facial attributes, and the assignment "output['Face'] = face['Confidence']" to record the detection confidence for the face in the output.
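The quoted calls match the AWS Rekognition API as exposed by the boto3 SDK. The following is a minimal, hypothetical sketch of that pattern; the helper name `detect_face_confidence` and the response handling shown are illustrative assumptions, not the repository's actual code:

```python
# Hypothetical sketch of the detect_faces pattern quoted above,
# assuming AWS Rekognition is called through the boto3 SDK.
def detect_face_confidence(image_bytes, client=None):
    """Detect faces in an image and record each detection's confidence."""
    if client is None:
        import boto3  # AWS SDK; needs configured credentials to run for real
        client = boto3.client("rekognition")
    face_response = client.detect_faces(
        Image={"Bytes": image_bytes},
        Attributes=["ALL"],  # request emotions and other facial attributes
    )
    output = {}
    for face in face_response.get("FaceDetails", []):
        # 'Confidence' is the detection confidence score (0-100),
        # not an expression added to the image.
        output["Face"] = face["Confidence"]
    return output
```

Note that `detect_faces` only analyzes an image; it does not modify it, so any photo-realistic attribute editing happens elsewhere in the pipeline.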

Pros and Cons

  • As a pro, all datasets used in the repository are open source and available to the public free of charge.
  • As a con, the user must install Python as well as additional Python packages via pip to run the code samples.
  • As a con, the training procedures take considerable time when run on a central processing unit (CPU). Setting up TensorFlow on a graphics processing unit (GPU) and running there substantially reduces training time.

Timing Estimates for Implementation

  • Actual timing estimates are not published; they depend on the processor type and architecture. Training runs take considerable time on a central processing unit (CPU), so it is better to set up TensorFlow on a graphics processing unit (GPU).
  • The keep-alive timeout during implementation is set to 5 seconds in the code via the directive "keepalive_timeout 5." This interval dictates how long an idle keep-alive connection between the app and the server is held open; if no data is transmitted within that window, the server closes the connection.
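"keepalive_timeout" is an nginx directive; a minimal server block shows where the quoted setting would live (the surrounding block is illustrative, not taken from the repository):

```nginx
server {
    listen 80;
    # Close idle keep-alive client connections after 5 seconds.
    keepalive_timeout 5;
}
```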

Apchenstu Facial Details Synthesis Repository

  • The Apchenstu facial details synthesis repository is accessible via this link.

Examples and Results

  • The Apchenstu facial details synthesis repository contains code for synthesizing detailed facial attributes from a single input image. The repository has five individual parts: DFDN, landmark detector, emotionNet, proxyEstimator, and faceRender.
  • The code generates a proxy mesh through the use of an expression/emotion prior. The single-image 3D face synthesis pipeline handles challenging facial expressions, recovers "fine geometric details," and renders "realistic details."
  • The files used to compile the Apchenstu Generative Adversarial Network system are open source (available to the public).
  • The repository includes shader code that sets the eye (camera) position used for rendering. The declaration "vec3 EyePosition = vec3(0, 0, 400)" defines a vector specifying the position of the eye.

Pros and Cons

  • As a pro, the code in the repository can recover challenging facial expressions and restore "fine geometric details" that are absent or lost in an image.
  • As a con, to use the source code, a user needs the "Windows version of Anaconda Python3.7" as well as PyTorch.
  • Users who wish to run the Apchenstu code must also install TensorFlow and Keras to use the emotion prior (emotionNet).

Timing Estimates for Implementation

  • Actual timing estimates are not published for this repository; as with the Packt repository, run time depends on the processor type/architecture, and a GPU-based setup is preferable for training.

Run-youngjoo/SC-FEGAN Repository

  • The run-youngjoo/SC-FEGAN repository is accessible via this link.

Examples and Results

  • The run-youngjoo/SC-FEGAN repository contains Generative Adversarial Network (GAN) resources that can generate eyes and other facial attributes that were closed, veiled, or concealed in the original images.
  • The GAN lets users edit face images with deep neural networks. Through intuitive inputs such as sketching and coloring, the SC-FEGAN network generates high-quality synthetic images.
  • The sketch-and-color SC-FEGAN code can fetch its pretrained model from Google Drive and implement "face restoration," "face editing" (e.g., adding an earring), and similar photo-realistic amendments to an image.
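Per the SC-FEGAN paper, the generator consumes the incomplete image together with user-supplied mask, sketch, and color channels plus a noise channel. A minimal NumPy sketch of how such an input tensor could be assembled (the shapes and variable names here are illustrative assumptions, not the repository's code):

```python
import numpy as np

H, W = 512, 512                    # SC-FEGAN operates on 512x512 images
image  = np.zeros((H, W, 3))       # input photo with the edit region erased
mask   = np.ones((H, W, 1))        # 1 marks the region the user wants edited
sketch = np.zeros((H, W, 1))       # user-drawn strokes describing structure
color  = np.zeros((H, W, 3))       # user-chosen color strokes
noise  = np.random.randn(H, W, 1)  # random channel for output variation

# Channel-wise concatenation yields the 9-channel generator input.
gen_input = np.concatenate([image, mask, sketch, color, noise], axis=-1)
print(gen_input.shape)  # (512, 512, 9)
```

Intuitive "sketching and coloring" thus enters the network literally as extra image channels alongside the photo being edited.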

Pros and Cons

  • As a pro, the program can implement face restoration using "sketch and color" only.
  • As a con, the program has several dependencies, namely: TensorFlow, NumPy, Python3, PyQt5, OpenCV-python, and PyYAML.
  • The software is for educational and "academic research" purposes only. The provided model and sample code are released under a "non-commercial creative commons license" and may not be used for commercial purposes.

Timing Estimates for Implementation

  • The program does not publish estimated time intervals for the SC-FEGAN implementation.
  • The graphical user interface (GUI) has a color button which, when clicked for the first time, requires the user to select a color from an available palette.
  • The program code references "inference time" (time from start to end), a start time ("start_t"), an end time ("end_t"), and other timestamps, but does not provide actual duration estimates.
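The "start_t"/"end_t" pattern referenced above can be sketched as follows; the timed workload here is a placeholder, not the actual SC-FEGAN inference call:

```python
import time

start_t = time.time()
result = sum(i * i for i in range(100_000))  # stand-in for model inference
end_t = time.time()

# Inference time is measured per run, not a published estimate.
inference_time = end_t - start_t
print(f"inference time: {inference_time:.4f} s")
```

This explains why no fixed estimate appears in the repository: the duration is computed at run time and varies with the hardware.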

Research Strategy

The research investigated additional open-source repositories for implementing on-demand, photo-realistic changes to facial attributes. Several code files and directories contained in repositories implementing various Generative Adversarial Network (GAN) systems were reviewed and are included in the study. The study assumes that open-source repositories make their code available to the public free of charge. A review of the uncovered Generative Adversarial Network repositories indicates that they use high-level languages (HLLs). High-level languages are closer to human language and easier to understand; hence, the meanings of the various syntax fragments were inferred in order to describe the included GAN repositories without consulting programming-language references.
Sources