Generative AI with Python and TensorFlow 2

By: Joseph Babcock, Raghav Bali
Overview of this book

Machines are excelling at creative human skills such as painting, writing, and composing music. Could you be more creative than generative AI? In this book, you'll explore the evolution of generative models, from restricted Boltzmann machines and deep belief networks to VAEs and GANs. You'll learn how to implement models yourself in TensorFlow and get to grips with the latest research on deep neural networks. There's been an explosion in potential use cases for generative models. You'll look at OpenAI's news generator, deepfakes, and training deep learning agents to navigate a simulated environment. Recreate the code that's under the hood and uncover surprising links between text, image, and music generation.

What this book covers

Chapter 1, An Introduction to Generative AI: "Drawing" Data from Models, introduces the field of generative AI, from the underlying probability theory to recent examples of products built with these methods.

Chapter 2, Setting Up a TensorFlow Lab, describes how to set up a computing environment for developing generative AI models with TensorFlow using open source tools – Python, Docker, Kubernetes, and Kubeflow – in order to run a scalable code laboratory in the cloud.

Chapter 3, Building Blocks of Deep Neural Networks, introduces foundational concepts for deep neural networks that will be used throughout the book – how they were inspired by biological research, what challenges researchers overcame in developing ever larger and more sophisticated models, and the building blocks of network architectures, optimizers, and regularizers employed by the generative AI examples in later chapters.

Chapter 4, Teaching Networks to Generate Digits, demonstrates how to implement a deep belief network, a breakthrough neural network architecture that achieved state-of-the-art results in classifying images of handwritten digits through a generative AI approach, which teaches the network to generate images before learning to classify them.

Chapter 5, Painting Pictures with Neural Networks Using VAEs, describes variational autoencoders (VAEs), an advancement over deep belief networks that creates sharper images of complex objects through clever use of an objective function grounded in Bayesian statistics. The reader will implement both a basic VAE and an advanced VAE that utilizes inverse autoregressive flow (IAF), a recursive transformation that can map random numbers to complex data distributions to create striking synthetic images.

Chapter 6, Image Generation with GANs, introduces generative adversarial networks, or GANs, as powerful deep learning architectures for generative modeling. Starting with the building blocks of GANs and other fundamental concepts, this chapter covers a number of GAN architectures and how they are used to generate high-resolution images from random noise.

Chapter 7, Style Transfer with GANs, focuses on a creative application of generative modeling, particularly GANs, called style transfer. Applications such as transforming black-and-white images to color, aerial maps to Google Maps-like outputs, and background removal are all made possible using style transfer. We cover a number of paired and unpaired architectures, such as pix2pix and CycleGAN.

Chapter 8, Deepfakes with GANs, introduces an interesting and controversial application of GANs called deepfakes. The chapter discusses the basic building blocks of deepfakes, such as features and the different modes of operation, along with a number of key architectures. It also includes a number of hands-on examples for generating fake photos and videos based on the key concepts covered, so readers can create their own deepfake pipelines.

Chapter 9, The Rise of Methods for Text Generation, introduces concepts and techniques relating to text generation tasks. We first cover the very basics of language generation using deep learning models, starting with different ways of representing text in vector space. We then progress to different architectural choices and decoding mechanisms for achieving high-quality outputs. This chapter lays the foundation for the more complex text generation methods covered in the subsequent chapter.

Chapter 10, NLP 2.0: Using Transformers to Generate Text, covers the latest developments in the NLP domain, with a primary focus on the text generation capabilities of state-of-the-art transformer-based architectures (such as GPT-x) and how they have revolutionized language generation and NLP in general.

Chapter 11, Composing Music with Generative Models, covers music generation using generative models. This is an interesting yet challenging application of generative models, and it involves understanding a number of nuances and concepts associated with music. This chapter covers a number of different methods for generating music, from basic LSTMs to simple GANs and eventually MuseGAN for polyphonic music generation.

Chapter 12, Play Video Games with Generative AI: GAIL, describes the connection between generative AI and reinforcement learning, a branch of machine learning that teaches "agents" to navigate real or virtual "environments" while performing specified tasks. Through a connection between GANs and reinforcement learning, the reader will teach a hopping figure to navigate a 3D environment by imitating an expert example of this movement.

Chapter 13, Emerging Applications in Generative AI, describes recent research in generative AI, spanning biotechnology, fluid mechanics, video, and text synthesis.