## Perceptron

The perceptron is a simple algorithm which, given an input vector *x* of *m* values (*x₁*, *x₂*, ..., *xₘ*), often called input features or simply features, outputs either *1* (yes) or *0* (no). Mathematically, we define a function:

*f(x) = 1* if *wx + b > 0*, and *f(x) = 0* otherwise

Here, *w* is a vector of weights, *wx* is the dot product *∑ⱼ wⱼxⱼ*, and *b* is a bias. If you remember elementary geometry, *wx + b* defines a boundary hyperplane that changes position according to the values assigned to *w* and *b*. If *x* lies above this boundary, the answer is positive; otherwise, it is negative. A very simple algorithm! The perceptron cannot express a *maybe* answer: it answers *yes* (*1*) or *no* (*0*), provided we know how to define *w* and *b*. That is the training process, which will be discussed in the following paragraphs.
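The decision rule above can be sketched in a few lines of plain NumPy. The weights and bias here are hypothetical values chosen only for illustration, not the result of training:

```python
import numpy as np

def perceptron(x, w, b):
    """Return 1 if x lies on the positive side of the hyperplane wx + b = 0, else 0."""
    return 1 if np.dot(w, x) + b > 0 else 0

# Hypothetical weights and bias, for illustration only
w = np.array([0.5, -0.5])
b = 0.0

print(perceptron(np.array([1.0, 0.0]), w, b))  # wx + b = 0.5 > 0, so prints 1
print(perceptron(np.array([0.0, 1.0]), w, b))  # wx + b = -0.5, so prints 0
```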

### The first example of Keras code

The initial building block of Keras is a model, and the simplest model is called **sequential**. A sequential Keras model is a linear pipeline (a stack) of neural network layers. The following code fragment defines a single layer with `12` artificial neurons, and it expects `8` input variables (also known as features):

```python
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(12, input_dim=8, kernel_initializer='random_uniform'))
```

Each neuron can be initialized with specific weights. Keras provides a few choices, the most common of which are listed as follows:

- `random_uniform`: Weights are initialized to uniformly random small values in (*-0.05*, *0.05*). In other words, any value within the given interval is equally likely to be drawn.
- `random_normal`: Weights are initialized according to a Gaussian, with a zero mean and a small standard deviation of *0.05*. For those of you who are not familiar with a Gaussian, think about a symmetric *bell curve* shape.
- `zero`: All weights are initialized to zero.
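The three schemes above can be sketched directly in NumPy, which makes the difference between them concrete. The shape `(8, 12)` mirrors the layer defined earlier (8 inputs, 12 neurons); the seed is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (8, 12)  # 8 input features feeding 12 neurons, as in the Dense layer above

# Uniform: every value in (-0.05, 0.05) is equally likely
w_uniform = rng.uniform(-0.05, 0.05, size=shape)

# Gaussian: bell curve centered at 0 with standard deviation 0.05
w_normal = rng.normal(loc=0.0, scale=0.05, size=shape)

# Zero: every weight starts at exactly 0
w_zero = np.zeros(shape)
```

Note that Keras performs this initialization for you when the layer is built; the sketch only illustrates the distributions each `kernel_initializer` string selects.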

A full list is available at https://keras.io/initializations/.