Finding the optimal hyperparameters

We're now going to optimize the weight initialization that we apply to the neurons in each of these layers (a runnable sketch of the resulting grid search follows the numbered steps):

  1. To do this, we will first copy the code from the cell that we ran in the previous Reducing overfitting using dropout regularization section and paste it into a new cell. In this section, we won't be changing the general structure of the code; instead, we will modify some parameters and optimize the search.
  2. We now know the best learn_rate and dropout_rate, so we will hardcode these values and remove them from the parameters being searched. We will also remove the Dropout layers that we added in the previous section, and set the learning rate of the Adam optimizer to 0.001, as this is the best value that we found.
  3. Since we are trying to optimize the activation and init variables, we will define them in the create_model() function...
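The following is a minimal sketch of how the resulting grid search over activation and init might look, using the older keras.wrappers.scikit_learn.KerasClassifier wrapper together with scikit-learn's GridSearchCV. The layer sizes, candidate value lists, epoch and batch settings, and the placeholder data are illustrative assumptions rather than the book's exact code; on recent Keras/TensorFlow versions you would use the scikeras package and pass learning_rate instead of lr to Adam.

    import numpy as np
    from keras.models import Sequential
    from keras.layers import Dense
    from keras.optimizers import Adam
    from keras.wrappers.scikit_learn import KerasClassifier
    from sklearn.model_selection import GridSearchCV

    def create_model(activation='relu', init='uniform'):
        # Build the network with the activation and initializer under test;
        # the Adam learning rate is hardcoded to the best value found earlier.
        model = Sequential()
        model.add(Dense(12, input_dim=8, kernel_initializer=init, activation=activation))
        model.add(Dense(8, kernel_initializer=init, activation=activation))
        model.add(Dense(1, kernel_initializer=init, activation='sigmoid'))
        model.compile(loss='binary_crossentropy', optimizer=Adam(lr=0.001),
                      metrics=['accuracy'])
        return model

    # Placeholder data standing in for the prepared patient features and labels.
    X = np.random.rand(768, 8)
    y = np.random.randint(0, 2, 768)

    model = KerasClassifier(build_fn=create_model, epochs=50, batch_size=10, verbose=0)

    param_grid = {
        'activation': ['relu', 'tanh', 'sigmoid', 'linear'],
        'init': ['uniform', 'normal', 'glorot_uniform'],
    }

    grid = GridSearchCV(estimator=model, param_grid=param_grid, cv=3)
    grid_result = grid.fit(X, y)

    print('Best: %f using %s' % (grid_result.best_score_, grid_result.best_params_))

Because activation and init are arguments of create_model(), the wrapper forwards each combination from param_grid into the model-building function, so every candidate network is compiled with a different activation/initializer pair while the learning rate stays fixed at 0.001.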