The rapid advancement of Generative Adversarial Networks (GANs) has enabled the creation of highly realistic synthetic human faces that are indistinguishable from real images. This paper presents a web-based system for generating nonexistent human faces from a single reference image using a pretrained style-based generative model. The proposed system leverages NVIDIA StyleGAN2 for latent space inversion and controlled face synthesis, enabling the generation of multiple realistic face variations while preserving structural similarity to the input image. Furthermore, an anime-style transformation module based on a pretrained AnimeGAN model is integrated to provide artistic stylization options. The system is implemented using Python, Flask, PyTorch, and React to ensure scalable backend processing and an interactive user experience. By relying on pretrained models, the proposed framework eliminates the need for large-scale training, making it computationally efficient and practical for real-time applications. Experimental evaluation demonstrates that the system generates high-quality, identity-distinct synthetic faces while maintaining visual coherence. Quantitative and qualitative assessments indicate that latent space manipulation enables controlled diversity without compromising perceptual realism. Additionally, the integration of stylization techniques enhances creative flexibility while preserving core facial attributes. The solution has potential applications in digital media, entertainment, privacy-preserving data augmentation, and creative AI systems.
Published in: International Journal for Research in Applied Science and Engineering Technology
Volume 14, Issue 3, pp. 4092-4101
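The abstract's notion of "controlled diversity" via latent space manipulation can be illustrated with a minimal sketch: after a reference image is inverted to a latent code, nearby codes are sampled and fed back through the generator, with the perturbation scale trading off diversity against structural similarity. The sketch below is a hypothetical illustration only; `w_ref`, `sigma`, and `generate_variations` are assumed names, and the 512-dimensional zero vector stands in for a real GAN-inverted latent code (actual StyleGAN2 inversion and synthesis are not reproduced here).

```python
import numpy as np

def generate_variations(w_ref, n=4, sigma=0.3, seed=0):
    """Sample n latent codes in a Gaussian neighborhood of w_ref.

    A smaller sigma yields faces structurally closer to the reference;
    a larger sigma increases diversity at the cost of similarity.
    Each returned code would be passed through the pretrained generator
    to synthesize one face variation.
    """
    rng = np.random.default_rng(seed)
    return [w_ref + sigma * rng.standard_normal(w_ref.shape) for _ in range(n)]

# Placeholder latent code; in the real system this would come from
# StyleGAN2 inversion of the user's reference image.
w_ref = np.zeros(512)
variants = generate_variations(w_ref, n=4, sigma=0.3)
```

In this framing, `sigma` is the single knob behind the paper's claim of diversity "without compromising perceptual realism": perturbations stay within the well-modeled region of the latent space rather than extrapolating outside it.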