

Performance Evaluation of the Proposed Algorithms

In this section, we provide some experimental results obtained in evaluating the proposed algorithms, and we compare them with those obtained for the gradient descent algorithm. For this purpose, we used two problems that have already been used in [1] for validation purposes; in addition, the XOR problem is also considered. The description of these problems is as follows:

We have implemented the proposed algorithms in MATLAB. During their evaluation, we noticed that allowing negative values for the weights gives better results. To deal with this point, we considered three different procedures for handling a negative weight when one is produced. In the first one, whenever a weight takes a negative value, we simply set it to zero. The second one is as proposed in [87]: to ensure that the weights are always positive, the relation $p^2=w_{i,j}$ is used, and the above equations are slightly modified to take the derivatives with respect to $p$ rather than the weights (by the chain rule, $\partial/\partial p = 2p\,\partial/\partial w_{i,j}$). The third case is to allow negative weights. We provide a comparative study of these cases; an illustrative sketch of the three procedures is given after the list of acronyms. In the following analysis, figures and tables, we use the acronyms given below:
GD:
the basic gradient descent training algorithm, as proposed in [1].
LM:
the Levenberg-Marquardt training algorithm; in this case, negative weights are allowed to occur during training.
LM1:
the Levenberg-Marquardt training algorithm, but in this case, no negative weights are allowed: whenever a weight takes a negative value, we simply set it to zero.
LM2:
the Levenberg-Marquardt training algorithm, where no negative weights are allowed: the relation $p^2=w_{i,j}$ is used and the derivatives in the equations are modified accordingly.
AM-LM:
the Levenberg-Marquardt training algorithm with adaptive momentum; negative weights are allowed to occur during training.
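
To make the three procedures concrete, the following MATLAB fragment sketches a single Levenberg-Marquardt weight update under each of them. This is only an illustrative sketch, not the code used in our experiments: the Jacobian J, the error vector e, the weight vector w, the damping factor mu and all variable names are placeholder assumptions, and the adaptive momentum term used by AM-LM is omitted.

% Illustrative dimensions only: 4 error terms, 3 weights.
J    = rand(4, 3);     % Jacobian of the errors w.r.t. the weights (assumed given)
e    = rand(4, 1);     % error vector
w    = rand(3, 1);     % current (non-negative) weight vector
mu   = 0.01;           % Levenberg-Marquardt damping factor
mode = 'LM2';          % one of 'LM', 'LM1', 'LM2'

% Standard Levenberg-Marquardt step on the weights.
dw = -(J' * J + mu * eye(numel(w))) \ (J' * e);

switch mode
    case 'LM'          % negative weights are allowed
        w = w + dw;
    case 'LM1'         % any weight that becomes negative is set to zero
        w = max(w + dw, 0);
    case 'LM2'         % reparameterize w = p^2, so w can never become negative;
                       % chain rule: the derivative w.r.t. p is 2*p times the derivative w.r.t. w
        p  = sqrt(w);
        Jp = J * diag(2 * p);   % Jacobian w.r.t. p instead of w
        dp = -(Jp' * Jp + mu * eye(numel(p))) \ (Jp' * e);
        w  = (p + dp).^2;
end

Under the same assumptions, GD would replace the step dw by a plain gradient step, and AM-LM would add its adaptive momentum term to dw.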

Figure 10.1: The 7-5-2 feedforward RNN network architecture.
\fbox{\includegraphics[width=.95\textwidth]{RnnFigs/rnn-7-5-2.eps}}
  
Figure 10.2: The fully-connected recurrent RNN network architecture.
\fbox{\includegraphics[width=.95\textwidth]{RnnFigs/recurrent.eps}}


