If you are viewing this file in preview mode, some links won't work. Find the fully featured Jupyter Notebook file on the website of Prof. Jens Flemming at Zwickau University of Applied Sciences. This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Introduction to artificial neural networks (ANNs)

Supervised learning aims at approximating a function $f:X\rightarrow Y$ by a function $f_{\mathrm{approx}}:X\rightarrow Y$ based on a finite set of examples $(x_1,y_1),\ldots,(x_n,y_n)$ satisfying $f(x_l)=y_l$ or at least $f(x_l)\approx y_l$ for $l=1,\ldots,n$. Almost always we have finite dimensional feature spaces $X=\mathbb{R}^m$ and target spaces $Y=\mathbb{R}$.

A good hypothesis $f_{\mathrm{approx}}$ has to satisfy $f_{\mathrm{approx}}(x)\approx f(x)$ for all $x$ from the example set (good fit on training set) and for all other $x$ expected to appear in the underlying practical problem (good generalization). To construct good hypotheses we have to pose additional assumptions on $f_{\mathrm{approx}}$.

In linear regression we assume that the hypothesis is a linear combination of several prescribed basis functions. The coefficients are chosen to minimize the fitting error on the training set. Artificial neural networks follow the same idea: take some function containing several parameters and choose the parameters such that the fitting error on the training set is small. The only difference from linear regression is the chosen ansatz. Typically one does not write down an explicit formula for $f_{\mathrm{approx}}$, but one provides a graphical scheme containing all information about the hypothesis. In contrast to linear regression, ANNs contain the parameters in a nonlinear fashion, resulting in more difficult minimization procedures. ANNs thus are an example of nonlinear regression.

Motivation from biology

ANNs originated from the wish to simulate human brains. A brain consists of many nerve cells (neurons) interconnected to transmit information (electrical pulses). All nerve cells have similar structure. The strength of interconnections between different nerve cells may vary and it is this varying strength which allows humans to learn new things. Learning, as far as we know, is realized by changing the strength of interconnections between nerve cells and, thus, reducing or improving the flow of information between different cells. A neuron takes all the electrical pulses from connected cells (inputs) and generates an output pulse from the inputs.

Of course human brains are much more complicated than described here and many mechanisms are not well understood. But the idea of many interconnected simple units forming a large powerful machine seems to be a key to artificial intelligence. ANNs try to simulate such networks of neurons on a digital computer.

Next to ANNs there exist several other ideas based on the concept of connecting many simple units. If you are interested have a look at cellular automata and collective intelligence.

Artificial neurons

ANNs are composed of artificial neurons, mimicking biological neurons. An artificial neuron is a function taking $p$ inputs and yielding one output. Inputs and outputs are real numbers. Each input is multiplied by a weight, then all the products are added, and an activation function is applied to the sum. The outcome of the activation function is the neuron's output.

Weights correspond to the strength of interconnections between biological neurons. The activation function simulates the fact that a biological neuron fires (that is, generates an output pulse) only if the level and number of input pulses is high enough.

Activation functions almost always are monotonically increasing. Typical activation functions are shown in the Wikipedia article on activation functions. Which activation function to choose depends on the underlying practical problem and heavily on experience.

Denoting the inputs by $u_1,\ldots,u_p\in\mathbb{R}$, the weights by $w_1,\ldots,w_p\in\mathbb{R}$, the activation function by $g:\mathbb{R}\to\mathbb{R}$, and the output by $v\in\mathbb{R}$, we have \begin{equation*} v=g\left(\sum_{\kappa=1}^p w_\kappa\,u_\kappa\right). \end{equation*} If $u$ is the vector of inputs and $w$ is the vector of weights, we may write \begin{equation*} v=g\bigl(w^{\mathrm{T}}\,u\bigr). \end{equation*}
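As a minimal sketch in Python with NumPy (the logistic sigmoid and all names here are illustrative choices, not fixed by the lecture), a single artificial neuron can be implemented as follows.

```python
import numpy as np

def sigmoid(t):
    """Logistic sigmoid, one common choice of activation function."""
    return 1.0 / (1.0 + np.exp(-t))

def neuron_output(u, w, g=sigmoid):
    """Output of a single artificial neuron: g(w^T u)."""
    return g(np.dot(w, u))

u = np.array([0.5, -1.0, 2.0])   # inputs u_1, ..., u_p
w = np.array([0.1, 0.4, -0.2])   # weights w_1, ..., w_p
print(neuron_output(u, w))
```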

Networks of artificial neurons

The simplest ANN consists of only one neuron. It takes the feature values $x^{(1)},\ldots,x^{(m)}$ of a feature vector $x$ as inputs, that is, $p=m$, and the output is interpreted as prediction for the corresponding target $f(x)$.

We could also take more neurons and feed them with all or some of the feature values. Then the outputs of all neurons may be fed to one or more other neurons and so on. This way we obtain a network of neurons similar to biological neural networks (brains). The output of one of the neurons is interpreted as prediction for the targets.

ANNs can be represented graphically. Each neuron is a circle or rectangle containing information about the activation function used by the neuron. Connections between inputs and outputs are lines and the weights are numbers assigned to the corresponding input's line.

graphical representation of ANNs

The depicted ANN contains 5 neurons. It is a special case of a fully connected two-layered feedforward network. These terms will be introduced below. We may write down the corresponding hypothesis $f_{\mathrm{approx}}$ as a mathematical formula. Denote the weight vectors by $w$, $\hat{w}$, $\mathring{w}$, $\tilde{w}$, $\bar{w}$ and the activation functions by $g$, $\hat{g}$, $\mathring{g}$, $\tilde{g}$, $\bar{g}$. Then we have \begin{equation*} f_{\mathrm{approx}}(x)=\bar{g}\Bigl(\bar{w}_1\,g\bigl(w^{\mathrm{T}}\,x\bigr)+\bar{w}_2\,\hat{g}\bigl(\hat{w}^{\mathrm{T}}\,x\bigr)+\bar{w}_3\,\mathring{g}\bigl(\mathring{w}^{\mathrm{T}}\,x\bigr)+\bar{w}_4\,\tilde{g}\bigl(\tilde{w}^{\mathrm{T}}\,x\bigr)\Bigr) \end{equation*} with 16 parameters (all the weights). Those 16 parameters have to be chosen to solve \begin{equation*} \frac{1}{n}\,\sum_{l=1}^n\bigl(f_{\mathrm{approx}}(x_l)-y_l\bigr)^2\to\min_{\text{weights}} \end{equation*} with training samples $(x_1,y_1),\ldots,(x_n,y_n)$. Below we will discuss how to solve such nonlinear minimization problems numerically.
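To make the formula concrete, here is a minimal NumPy sketch of this hypothesis and its fitting error. It assumes three features per sample (so that there are 16 weights in total), uses the same activation for all four first-layer neurons, and fills the weights with random placeholder values; all names are illustrative.

```python
import numpy as np

def relu(t):
    """Rectified linear unit, used here as a placeholder activation."""
    return np.maximum(t, 0.0)

def identity(t):
    return t

rng = np.random.default_rng(0)
W_hidden = rng.normal(size=(4, 3))   # rows play the roles of w, w-hat, w-ring, w-tilde
w_bar = rng.normal(size=4)           # weights of the output neuron

def f_approx(x, g=relu, g_bar=identity):
    """Hypothesis of the depicted two-layered network."""
    hidden = g(W_hidden @ x)         # outputs of the four first-layer neurons
    return g_bar(w_bar @ hidden)     # output of the fifth neuron

def fitting_error(X, y):
    """Mean squared fitting error on training samples (rows of X, entries of y)."""
    predictions = np.array([f_approx(x) for x in X])
    return np.mean((predictions - y) ** 2)
```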

Feedforward and layered networks

There are many kinds of ANNs and we will meet most of them as we continue studying data science. The simplest and most widely used type of ANN is the feedforward network. Those are networks in which information flows only in one direction. The feature values are fed to a set of neurons. Corresponding outputs are fed to a different set of neurons and so on. The process always ends with a single neuron yielding the prediction. No neuron is used twice. In contrast, there are ANNs which feed a neuron's output back to another neuron involved in generating the neuron's input. Such ANNs contain cycles and it is not straightforward how to compute the ANN's output. It's a dynamic process which may converge or not. Although such ANNs are closer to biological neural networks, they are rarely used because of their computational complexity. Only very special and well structured non-feedforward ANNs appear in practice.

To allow for more efficient computation, feedforward ANNs often are organized in layers. A layer is a set of neurons with no interconnections. Neurons of a layer only have connections to other layers. Layers are organized sequentially. Inputs of the first layer's neurons are connected to the network inputs (feature values). Outputs are connected to the inputs of the second layer's neurons. Outputs from the second layer are connected to the inputs of the third layer and so on. The last layer has only one neuron yielding the ANN's output.

layered network

A layer may be fully connected to previous and next layer or some connections may be missing (corresponding weights are fixed to zero). Networks with all layers fully connected are called dense networks.

Computational efficiency of layered feedforward networks stems from the fact that the outputs of all neurons in a layer can be computed simultaneously by matrix vector multiplication. Matrix vector multiplication is a very fast operation on modern computers, especially if additional GPU capabilities are available.

If $u$ is the vector of inputs of a layer (that is, the vector of outputs of the previous layer), then to get the output of each neuron we have to compute the inner products $w^{\mathrm{T}}\,u$ with $w$ being different for each neuron. Taking all the weight vectors of the neurons in the layer as rows of a matrix $W$ the components of $W\,u$ are exactly those inner products. If all neurons in the layer use the same activation function (which typically is the case), then we simply have to apply the activation function to all components of $W\,u$ to get the layer's outputs.

In a three-layered network with weight matrices $W_1,W_2,W_3$ and (per layer) activation functions $g_1,g_2,g_3$ we would have \begin{equation*} f_{\mathrm{approx}}(x)=g_3\Bigl(W_3\,g_2\bigl(W_2\,g_1(W_1\,x)\bigr)\Bigr), \end{equation*} where the activation functions are applied componentwise.
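A minimal sketch of such a layer-wise forward pass (layer sizes, weights, and activations are arbitrary placeholders):

```python
import numpy as np

def relu(t):
    return np.maximum(t, 0.0)

def identity(t):
    return t

def forward(x, weight_matrices, activations):
    """Forward pass through a layered feedforward network.

    Each layer computes g(W u), where u is the previous layer's output."""
    u = x
    for W, g in zip(weight_matrices, activations):
        u = g(W @ u)
    return u

# three-layered example: 4 features -> 5 neurons -> 3 neurons -> 1 output neuron
rng = np.random.default_rng(0)
Ws = [rng.normal(size=(5, 4)), rng.normal(size=(3, 5)), rng.normal(size=(1, 3))]
gs = [relu, relu, identity]

x = rng.normal(size=4)
print(forward(x, Ws, gs))   # f_approx(x) = g3(W3 g2(W2 g1(W1 x)))
```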

Training ANNs

Training an ANN means solving the minimization problem \begin{equation*} \frac{1}{n}\,\sum_{l=1}^n\bigl(f_{\mathrm{approx}}(x_l)-y_l\bigr)^2\to\min_{\text{weights}}. \end{equation*} The dependence of $f_{\mathrm{approx}}$ on the weights is highly nonlinear. Thus, there is no simple analytical solution. Instead we have to use numerical procedures to find weights which are at least close to minimizing weights.

The basic idea of such numerical algorithms is to start with arbitrary weights and to improve the weights iteratively. Next to several more advanced techniques, there is a class of algorithms known as gradient descent methods. They take the gradient of the objective function to calculate improved weights. The negative gradient is the direction of steepest descent. Thus, it should be a good idea to modify the weights by subtracting a small multiple of the gradient at the current weights. We stop the iteration if the gradient is close to zero, that is, if we have reached a stationary point.
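A minimal sketch of the basic iteration, assuming a generic gradient function and a fixed step size (step size, tolerance, and all names are illustrative choices; the details follow in a later lecture):

```python
import numpy as np

def gradient_descent(grad, w0, step_size=0.1, tol=1e-6, max_iter=10000):
    """Basic gradient descent: repeatedly step against the gradient."""
    w = np.asarray(w0, dtype=float)
    for _ in range(max_iter):
        g = grad(w)
        if np.linalg.norm(g) < tol:   # close to a stationary point
            break
        w = w - step_size * g
    return w

# toy example: minimize (w_1 - 3)^2 + (w_2 + 1)^2 with gradient 2 (w - (3, -1))
grad = lambda w: 2.0 * (w - np.array([3.0, -1.0]))
print(gradient_descent(grad, w0=[0.0, 0.0]))   # approximately [3, -1]
```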

Gradient descent methods suffer from several problems: they may get stuck in local minima or saddle points instead of finding a global minimum, convergence can be slow, and a suitable step size has to be chosen.

Due to their simplicity, gradient descent methods are the standard technique for training ANNs. More involved methods only work for special ANNs, whereas gradient descent is almost always applicable.

We will cover the details of gradient descent in a subsequent lecture.

Overfitting and regularization

Large ANNs tend to overfit the training data. As for linear regression we might add a penalty to the objective function to avoid overfitting. Concerning overfitting and regularization there is no difference between linear regression and ANNs.
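As a minimal sketch (the penalty weight `lam` and all names are illustrative assumptions), an added penalty on the size of the weights could look as follows:

```python
import numpy as np

def penalized_loss(fitting_error, weight_matrices, lam=0.01):
    """Training objective with an added penalty on the squared weights."""
    penalty = sum(np.sum(W ** 2) for W in weight_matrices)
    return fitting_error + lam * penalty

# example: fitting error 0.3 on the training set and two small weight matrices
Ws = [np.array([[0.5, -1.0], [0.2, 0.1]]), np.array([[1.0, -0.5]])]
print(penalized_loss(0.3, Ws))
```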

For ANNs there also exist regularization techniques not applicable to linear regression. One such technique is known as dropout. In each training step the outputs of a randomly selected set of neurons are set to zero, that is, those neurons are temporarily excluded from training. This set changes from step to step and its size is a hyperparameter. The idea is to get more redundancy in the ANN and, thus, more reliable predictions. In particular, dropout can improve generalization.
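A minimal sketch of the idea for one layer during one training step (the drop probability of 0.2 and all names are illustrative; library implementations add further details such as rescaling the remaining outputs):

```python
import numpy as np

def relu(t):
    return np.maximum(t, 0.0)

def layer_with_dropout(u, W, drop_prob=0.2, rng=np.random.default_rng(0)):
    """Layer output where a random subset of neurons is deactivated for this step."""
    v = relu(W @ u)
    keep = rng.random(v.shape) >= drop_prob   # new random mask in every training step
    return v * keep

u = np.array([1.0, -0.5, 2.0])
W = np.random.default_rng(1).normal(size=(4, 3))
print(layer_with_dropout(u, W))
```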

Hyperparameters

ANNs contain two obvious hyperparameters: the number of layers and the number of neurons per layer.

But activation functions may be regarded as hyperparameters, too, since we have to choose them in advance.

There is no essential difference between tuning hyperparameters for ANNs and tuning hyperparameters for linear regression.

Bias neurons

Artificial neurons suffer from a problem with inputs being all zero. If all inputs are zero, then multiplication with the weights yields zero, too. Typical activation functions map zero to zero. Thus, artificial neurons are not able to give a nonzero response to all-zero inputs.

One solution would be to use activation functions with nonzero activation for zero input. But this would contradict the idea of an activation function and we would have to add parameters to activation functions to get variable output for all zero inputs.

A better idea is to add a bias neuron to each layer. A bias neuron takes no inputs and always yields the number one as its output. Neurons in the next layer connected to the bias neuron of the previous layer now always have nonzero input. With the corresponding weight we are able to adjust the size of the input. Even if all regular inputs are zero we are able to yield nonzero neuron output this way.

ANN with bias neurons

Denote the activation function of a neuron by $g$. If $w_0$ is the weight for the input from the bias neuron and if $w$ and $u$ are the vectors of regular weights and inputs, respectively, then the neuron's output is \begin{equation*} g\bigl(w_0+w^{\mathrm{T}}\,u\bigr). \end{equation*} We see that using bias neurons, the activation function is shifted to the left or to the right, depending on the weight $w_0$.

For instance, if we use the activation function \begin{equation*} g(t)=\begin{cases}1,&\text{if }t>0,\\0,&\text{else},\end{cases} \end{equation*} which fires if and only if the weighted inputs add up to a positive number, then introducing a bias neuron, we obtain \begin{equation*} g\bigl(w_0+w^{\mathrm{T}}\,u\bigr)=\begin{cases}1,&\text{if }w^{\mathrm{T}}\,u>-w_0,\\0,&\text{else}.\end{cases} \end{equation*} That is, the neuron fires if the sum of weighted regular inputs lies above $-w_0$. In this special case, bias neurons allow for modifying the threshold for activation without modifying the activation function.
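A small sketch of a neuron with bias input, using the threshold activation from above (the concrete numbers are arbitrary):

```python
import numpy as np

def threshold(t):
    """Fires (returns 1) if and only if the argument is positive."""
    return np.where(t > 0, 1.0, 0.0)

def neuron_with_bias(u, w, w0, g=threshold):
    """Output g(w0 + w^T u); w0 weights the bias neuron's constant output 1."""
    return g(w0 + np.dot(w, u))

u = np.array([0.0, 0.0])                 # all regular inputs are zero
w = np.array([0.5, -0.3])
print(neuron_with_bias(u, w, w0=1.0))    # fires although all regular inputs are zero
print(neuron_with_bias(u, w, w0=-1.0))   # does not fire
```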

From the training point of view, bias neurons do not require special treatment, because they have no inputs and thus no incoming weights to train; the weights of their outgoing connections are trained like all other weights.

Approximation properties

In linear regression it is obvious which types of functions can be represented by the ansatz for $f_{\mathrm{approx}}$ (linear functions, polynomials, and so on). For ANNs we have to look more closely. Representable function classes depend on the activation function, on the number of layers, and on the number of neurons in each layer.

For instance, if we have only one layer and we use threshold activation (zero or one), then $f_{\mathrm{approx}}$ always is a piecewise constant function. With rectified linear units we always would obtain piecewise linear functions.

Considering more than one layer, things become tricky. But an important result in the theory of ANNs states that an ANN with at least one hidden layer is able to approximate arbitrary continuous functions to any desired accuracy. We simply have to use enough neurons. The more neurons, the better the approximation.

The number of neurons required for good approximation in an ANN with a single hidden layer might be very large. Often it is computationally more efficient to have more layers with fewer neurons. There exist many results on approximation properties of ANNs. The keyword is universal approximation theorems.

Vector-valued regression and classification

Up to now we only considered approximating real-valued functions of several variables, that is, the underlying truth has continuous range in $\mathbb{R}$. If we want to approximate functions $f$ taking values in some higher dimensional space $\mathbb{R}^d$, then we could apply linear regression or ANNs to each of the $d$ components of the function.

In contrast to linear regression, ANNs allow for a natural extension to multiple outputs. We simply have to add some neurons to the output layer. This way, the "knowledge" of the ANN can be used by all outputs without training individual nets for each output component.

ANN with three outputs

Squared error loss for vector-valued regression is \begin{equation*} \frac{1}{n}\,\sum_{l=1}^n\bigl|f_{\mathrm{approx}}(x_l)-y_l\bigr|^2, \end{equation*} where $f_{\mathrm{approx}}(x_l)-y_l$ is a vector with $d$ components and $|\cdot|$ denotes the length of a vector.
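This loss can be written in a few lines of NumPy; here `predictions` and `targets` are assumed to be arrays with one row per sample and $d$ columns:

```python
import numpy as np

def squared_error_loss(predictions, targets):
    """Mean of squared Euclidean distances between predicted and true target vectors."""
    diff = predictions - targets                 # shape (n, d)
    return np.mean(np.sum(diff ** 2, axis=1))    # average of |f_approx(x_l) - y_l|^2

predictions = np.array([[0.9, 0.1, 0.0], [0.2, 0.7, 0.1]])
targets = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
print(squared_error_loss(predictions, targets))
```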

An important application of multiple-output ANNs is classification. Classification differs from regression in that the range of the truth $f$ is discrete with only few different values (classes). If we predict for each class the probability that a feature vector belongs to this class, we have a regression problem with multiple outputs. Thus, ANNs can be used for solving classification problems, too. Details on classification problems will be discussed next semester.