Synthesizing three-dimensional objects from single or multiple two-dimensional views has long been a challenging task. To address this challenge, several techniques involving Convolutional Neural Networks (CNNs), Long Short-Term Memory networks (LSTMs), and Recurrent Neural Networks (RNNs) have been proposed. Since the advent of Generative Adversarial Networks (GANs) in 2014, a tremendous amount of research has been conducted in this area. Among the various applications of GANs, image synthesis has shown great potential: two deep neural networks, a generator and a discriminator, are trained in a competitive manner and are able to produce reasonably realistic images. The formulation of 3D-GANs, which can generate three-dimensional objects from multiple two-dimensional views with impressive accuracy, has emerged as a promising solution to the aforementioned problem. This paper provides a comprehensive analysis of deep learning methods used to generate three-dimensional objects, reviews the different models and frameworks for three-dimensional object generation, and discusses evaluation metrics and future research directions, including the use of GANs as an alternative for simultaneous localization and environment mapping and leveraging the power of GANs to revolutionize the fields of education and medicine.