Nonlinearities model


With this background on activation functions, we now understand why we need nonlinearities within a neural network: nonlinearity is essential for modeling the complex data patterns that arise in regression and classification problems. Let's go back once again to our initial example problem, where we established the activity of the hidden layer. We now apply the sigmoid activation function to the activity of each node in the hidden layer. This gives us the second formula in the perceptron model:

  • z(2) = XW(1)
  • a(2) = f(z(2))
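
As a minimal sketch of these two steps in Python with NumPy (assuming five training examples and a hypothetical two input features, since the text does not specify the input dimension; the weights here are random placeholders):

```python
import numpy as np

def sigmoid(z):
    # Sigmoid activation, applied element-wise
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical data: 5 examples, 2 input features, 3 hidden nodes
X = np.random.rand(5, 2)    # input matrix
W1 = np.random.rand(2, 3)   # weights on the synapses into the hidden layer

z2 = X.dot(W1)              # hidden-layer activity, shape (5, 3)
a2 = sigmoid(z2)            # hidden-layer activation, also shape (5, 3)
```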

Once we apply the activation function, f, the resultant matrix, a(2), will be the same size as z(2), that is, 5 x 3. The next step is to multiply the activities of the hidden layer by the weights on the synapses leading to the output layer. Refer to the earlier diagram of ANN notations. Note that we have three weights, one for each link from a node in the hidden layer to the output layer. Let's call these weights W(2). With this...
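
Continuing the sketch above, this multiplication can be expressed as follows (W2 is a hypothetical 3 x 1 matrix of placeholder weights, one per link from a hidden node to the single output node):

```python
W2 = np.random.rand(3, 1)   # weights on the synapses into the output layer

z3 = a2.dot(W2)             # output-layer activity, shape (5, 1)
```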