Generative AI with Python and TensorFlow 2

By : Joseph Babcock, Raghav Bali

Overview of this book

Machines are excelling at creative human skills such as painting, writing, and composing music. Could you be more creative than generative AI? In this book, you'll explore the evolution of generative models, from restricted Boltzmann machines and deep belief networks to VAEs and GANs. You'll learn how to implement models yourself in TensorFlow and get to grips with the latest research on deep neural networks. There's been an explosion in potential use cases for generative models. You'll look at OpenAI's news generator, deepfakes, and training deep learning agents to navigate a simulated environment. Recreate the code that's under the hood and uncover surprising links between text, image, and music generation.

Summary

Congratulations on completing a complex chapter involving a large number of concepts. In this chapter, we covered various concepts associated with handling textual data for the task of text generation. We started by building an understanding of different text representation models, covering the most widely used approaches, from Bag of Words to word2vec and FastText.
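As a quick refresher on the simplest of these representations, here is a minimal, framework-free sketch of a Bag of Words encoder. The function name and the toy corpus are illustrative, not from the chapter; real pipelines would typically use a library utility such as TensorFlow's text vectorization layers instead:

```python
from collections import Counter

def bag_of_words(docs):
    """Build a shared vocabulary and a count vector for each document.

    Each vector has one slot per vocabulary word; the value is how many
    times that word occurs in the document. Word order is discarded,
    which is exactly the limitation word2vec-style embeddings address.
    """
    vocab = sorted({tok for doc in docs for tok in doc.lower().split()})
    counts_per_doc = [Counter(doc.lower().split()) for doc in docs]
    vectors = [[counts.get(tok, 0) for tok in vocab] for counts in counts_per_doc]
    return vocab, vectors

# Toy corpus for illustration only
vocab, vecs = bag_of_words(["the cat sat", "the cat and the dog"])
```

Note how "the" appearing twice in the second document yields a count of 2 in its vector, while all positional information is lost; dense embeddings like word2vec and FastText recover semantic structure that raw counts cannot.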

The next section of the chapter focused on developing an understanding of RNN-based text generation models. We briefly discussed what comprises a language model and how we can prepare a dataset for such a task. We then trained a character-based language model to generate synthetic text samples. We touched upon different decoding strategies and used them to understand the different outputs produced by our RNN-based language model. We also delved into a few variants, such as stacked LSTMs and bidirectional LSTM-based language models. Finally, we discussed the usage of convolutional networks in...
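To recap the decoding strategies mentioned above, here is a small, framework-free sketch contrasting greedy decoding with temperature-scaled sampling over a model's output logits. The function names and example logits are illustrative assumptions; in a TensorFlow model the same ideas appear via argmax and categorical sampling over the softmax output:

```python
import math
import random

def softmax_with_temperature(logits, temperature=1.0):
    """Convert logits to probabilities, with temperature controlling sharpness.

    Low temperature concentrates mass on the top token (near-greedy);
    high temperature flattens the distribution (more diverse samples).
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def greedy_decode(logits):
    """Always pick the single most likely next token."""
    return max(range(len(logits)), key=lambda i: logits[i])

def sample_decode(logits, temperature=1.0, rng=random):
    """Draw the next token from the temperature-scaled distribution."""
    probs = softmax_with_temperature(logits, temperature)
    return rng.choices(range(len(probs)), weights=probs, k=1)[0]

# Toy logits over a 3-token vocabulary, for illustration only
logits = [1.0, 3.0, 2.0]
greedy_token = greedy_decode(logits)
sampled_token = sample_decode(logits, temperature=0.8, rng=random.Random(0))
```

Greedy decoding is deterministic and tends to produce repetitive text, while sampling with a well-chosen temperature trades a little likelihood for noticeably more varied generations, which is why the chapter's samples differ so much across strategies.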