TensorFlow Machine Learning Projects

By: Ankit Jain, Amita Kapoor

Overview of this book

TensorFlow has transformed the way machine learning is perceived. TensorFlow Machine Learning Projects teaches you how to exploit the benefits—simplicity, efficiency, and flexibility—of using TensorFlow in various real-world projects. With the help of this book, you’ll not only learn how to build advanced projects using different datasets but also be able to tackle common challenges using a range of libraries from the TensorFlow ecosystem. To start with, you’ll get to grips with using TensorFlow for machine learning projects; you’ll explore a wide range of projects using TensorForest and TensorBoard for detecting exoplanets, TensorFlow.js for sentiment analysis, and TensorFlow Lite for digit classification. As you make your way through the book, you’ll build projects in various real-world domains, incorporating natural language processing (NLP), Gaussian processes, autoencoders, recommender systems, and Bayesian neural networks, along with trending areas such as Generative Adversarial Networks (GANs), capsule networks, and reinforcement learning. You’ll learn how to use the TensorFlow on Spark API and GPU-accelerated computing with TensorFlow to detect objects, followed by how to train and develop a recurrent neural network (RNN) model to generate book scripts. By the end of this book, you’ll have gained the required expertise to build full-fledged machine learning projects at work.
Table of Contents (23 chapters)
Title Page
Copyright and Credits
Dedication
About Packt
Contributors
Preface
Index

Defining and training a text-generating model


  1. Begin by loading the saved text data for pre-processing with the help of the load_data function:

     def load_data():
         """
         Load the saved text data from disk and return it as a single string.
         """
         input_file = os.path.join(TEXT_SAVE_DIR)
         with open(input_file, "r") as f:
             data = f.read()
         return data
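As a quick, runnable check of the step above: the sketch below stands in a temporary file for the saved text (TEXT_SAVE_DIR is set earlier in the chapter; the path and contents used here are placeholders, not the book's actual data):

```python
import os
import tempfile

# Placeholder for the path defined earlier in the chapter; a temporary
# file stands in for the real saved text data.
TEXT_SAVE_DIR = os.path.join(tempfile.gettempdir(), 'book_script.txt')

with open(TEXT_SAVE_DIR, 'w') as f:
    f.write('some saved book script text')

def load_data():
    """Load the saved text data from disk and return it as a single string."""
    input_file = os.path.join(TEXT_SAVE_DIR)
    with open(input_file, 'r') as f:
        data = f.read()
    return data

text = load_data()
print(len(text))
```

Because the whole file is read with a single read() call, the returned string preserves newlines, which matters later when '\n' is tokenized as its own symbol.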
  2. Implement define_tokens, as described in the Pre-processing the data section of this chapter. This will help us create a dictionary of the key words and their corresponding tokens:
     def define_tokens():
         """
         Generate a dict to turn punctuation into a token. Note that Sym before each text denotes Symbol.
         :return: Tokenize dictionary where each key is a punctuation mark and each value is its token
         """
         # Named token_dict to avoid shadowing the built-in dict type
         token_dict = {'.': '_Sym_Period_',
                       ',': '_Sym_Comma_',
                       '"': '_Sym_Quote_',
                       ';': '_Sym_Semicolon_',
                       '!': '_Sym_Exclamation_',
                       '?': '_Sym_Question_',
                       '(': '_Sym_Left_Parentheses_',
                       ')': '_Sym_Right_Parentheses_',
                       '--': '_Sym_Dash_',
                       '\n': '_Sym_Return_',
                       }
         return token_dict

The dictionary that we've created will be used to replace...
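The replacement step that this dictionary enables can be sketched as follows. Note that replace_punctuation is an illustrative helper name, not a function from the book; padding each token with spaces is one common way to make punctuation behave as a separate "word" during tokenization:

```python
def define_tokens():
    """Map punctuation marks to word-like tokens (Sym denotes Symbol)."""
    return {'.': '_Sym_Period_',
            ',': '_Sym_Comma_',
            '"': '_Sym_Quote_',
            ';': '_Sym_Semicolon_',
            '!': '_Sym_Exclamation_',
            '?': '_Sym_Question_',
            '(': '_Sym_Left_Parentheses_',
            ')': '_Sym_Right_Parentheses_',
            '--': '_Sym_Dash_',
            '\n': '_Sym_Return_',
            }

def replace_punctuation(text):
    """Illustrative helper: surround each punctuation mark's token with
    spaces so it is split out as its own word later."""
    for punct, token in define_tokens().items():
        text = text.replace(punct, ' {} '.format(token))
    return text

words = replace_punctuation('Hello, world!').split()
print(words)
```

After this substitution, a simple whitespace split yields a word sequence in which every punctuation mark appears as its own symbol token, which is what the downstream vocabulary-building step expects.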