Neural Network Programming with Java

By Alan M. F. Souza, Fabio M. Soares
Overview of this book

Vast quantities of data are produced every second. In this context, neural networks are a powerful technique for extracting useful knowledge from large amounts of raw, seemingly unrelated data. Java is one of the preferred languages for neural network programming: it is easy to write code in, and many of the most popular neural network packages are already available for it, which makes it a versatile language for this purpose.

This book gives you a complete walkthrough of the process of developing practical examples, from basic to advanced, based on neural networks in Java.

You will first learn the basics of neural networks and how they learn. We then focus on what perceptrons are and their features. Next, you will implement self-organizing maps using the concepts you've learned. Furthermore, you will learn about some of the applications presented in this book, such as weather forecasting, disease diagnosis, customer profiling, and character recognition (OCR). Finally, you will learn methods to optimize and adapt neural networks in real time.

All the examples in the book are provided as illustrative source code that merges object-oriented programming (OOP) concepts with neural network features to enhance your learning experience.

Examples of learning algorithms


Let's now bring the theoretical content presented so far together in simple examples of learning algorithms. In this chapter, we are going to explore two neural architectures: the perceptron and the adaline. Both are very simple, each containing only one layer.

Perceptron

The perceptron learns by taking into account only the error between the target and the output, along with the learning rate. The update rule is as follows:

wi = wi + η ( t[k] - y[k] ) xi[k]

Here, wi is the weight connecting the ith input to the neuron, t[k] is the target output for the kth sample, y[k] is the output of the neural network for the kth sample, xi[k] is the ith input for the kth sample, and η is the learning rate. As can be seen, this rule is very simplistic and does not take into account the nonlinearity introduced by the perceptron's activation function; it just moves the weights in the direction opposite to the error, in the naïve hope that this will take the network closer to the objective.
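To make the rule concrete, the following is a minimal sketch of it in Java. The Perceptron class, its field names, and the step activation chosen here are assumptions made for illustration, not the book's own listing; a bias input is omitted to keep the update rule itself in focus.

// A minimal sketch of the perceptron update rule described above.
// Class, field, and method names are illustrative, not from the book's code.
public class Perceptron {
    private final double[] weights;
    private final double learningRate; // η

    public Perceptron(int inputCount, double learningRate) {
        this.weights = new double[inputCount];
        this.learningRate = learningRate;
    }

    // Step activation: outputs 1.0 when the weighted sum is non-negative
    public double output(double[] x) {
        double sum = 0.0;
        for (int i = 0; i < weights.length; i++) {
            sum += weights[i] * x[i];
        }
        return sum >= 0.0 ? 1.0 : 0.0;
    }

    // One application of the rule: wi = wi + η ( t[k] - y[k] ) xi[k]
    public void train(double[] x, double target) {
        double y = output(x);          // y[k]
        double error = target - y;     // t[k] - y[k]
        for (int i = 0; i < weights.length; i++) {
            weights[i] += learningRate * error * x[i];
        }
    }
}

In use, train would be called repeatedly over all samples until the error vanishes on every one of them; note that when the output already matches the target, the error term is zero and the weights are left unchanged.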

Delta rule

A better algorithm based on gradient descent...