

List of Figures

  1. The overtraining problem when using an ANN.
  2. The performance of an ANN and an RNN at interpolating and extrapolating for different numbers of hidden neurons.
  3. The correlation between actual and predicted MOS values (Arabic language).
  4. The correlation between actual and predicted MOS values (Spanish language).
  5. The correlation between actual and predicted MOS values (French language).
  6. The correlation between actual and predicted MOS values for both databases.
  7. The architecture of the proposed control mechanism.
  8. The rates suggested by TFRC and the savings obtained using control rules when changing the codec in the case of speech.
  9. The MOS values with and without our controller when changing the codec in the case of speech.
  10. The rates suggested by TFRC and those of the sender.
  11. The MOS values when changing FR and those when changing QP.
  12. A black-box representation of our tool to predict future traffic in real time.
  13. The actual traffic against the predicted one for the complete next two weeks.
  14. The actual traffic against the predicted one for the third and fourth days of the third week.
  15. The difference between the actual and predicted traffic for the complete next two weeks.
  16. A histogram of the distribution of the difference between the actual and predicted traffic.
  17. The difference between the actual and predicted traffic for the complete next two weeks once the spiky samples are removed.
  18. The effect of the variables $\zeta$ and $dP$ on the performance of the AM-LM algorithm for an RNN. The results are for the first problem.
  19. The performance of the GD, LM, LM2 and AM-LM training algorithms on the first problem.
  20. The performance of the GD, LM, LM1 and LM2 training algorithms on the second problem.
  21. Comparison between the performance of GD and that of AM-LM when learning the video quality problem presented in Section 1.4.
  22. Architecture of a three-layer feedforward neural network.
  23. The overall architecture of the new method to evaluate speech, audio and/or video quality in real time.
  24. A schematic diagram of our method showing the steps in the design phase.
  25. Eleven-point quality scale.
  26. ITU 5-point impairment scale.
  27. Stimulus presentation timing in the ACR method.
  28. Stimulus presentation timing in the DCR method.
  29. A portion of a quality rating form using continuous scales.
  30. A portion of a quality rating form using continuous scales for the DSCQS method.
  31. The problem of overtraining using ANN.
  32. Performance of ANN and RNN to interpolate and extrapolate for different numbers of hidden neurons.
  33. The run-time mode of our method.
  34. The existing model of objective methods.
  35. Operation mode for the tool in a real-time video system.
  36. Maximum and minimum percentage loss rates as a function of the playback buffer length between Rennes and the other sites.
  37. Minimum rates for one-to-ten consecutively lost packets as a function of the playback buffer length between Rennes and the other sites.
  38. Maximum rates for one-to-ten consecutively lost packets as a function of the playback buffer length between Rennes and the other sites.
  39. Actual vs. Predicted MOS values on the Arabic training database.
  40. Actual vs. Predicted MOS values on the Spanish training database.
  41. Actual vs. Predicted MOS values on the French training database.
  42. Actual vs. Predicted MOS values on the testing databases.
  43. Scatter plots to show the correlation between Actual and Predicted MOS values (Arabic Language).
  44. Scatter plots to show the correlation between Actual and Predicted MOS values (Spanish Language).
  45. Scatter plots to show the correlation between Actual and Predicted MOS values (French Language).
  46. MNB2 and E-model results against the subjective MOS values in evaluating a set of speech samples distorted by both encoding and network impairments. Source is [57].
  47. A screen dump showing an instance during the subjective quality test evaluation.
  48. The 95% confidence intervals before and after removing the rate of two unreliable subjects.
  49. Actual and Predicted MOS scores for the training database.
  50. Actual and Predicted MOS scores for the testing database.
  51. Scatter plots showing the correlation between Actual and Predicted MOS scores.
  52. The performance of the ITS metric to evaluate video quality.
  53. Quality assessment by MPQM as a function of the bit rate and the loss rate.
  54. MPQM quality assessment of MPEG-2 video as a function of the bit rate.
  55. The quality assessment by the ITS model of MPEG-2 video as a function of the bit rate.
  56. CMPQM quality assessment of MPEG-2 video as a function of the bit rate.
  57. NVFM quality assessment of MPEG-2 video as a function of the bit rate.
  58. Comparison of the subjective data against MPQM, CMPQM and NVFM metrics for the video sequence ``Mobile & Calendar''.
  59. Comparison of the subjective data against MPQM, CMPQM, NVFM and ITS metrics for the video sequence ``Basket Ball''.
  60. A screen dump showing the manual mode for Stefan.
  61. A screen dump showing the manual mode for Children.
  62. A screen dump showing the manual mode for Foreman.
  63. A screen dump showing the automatic mode for Stefan.
  64. On the left, we show the impact of LR and CLP on speech quality for the different codecs and PI=20 ms. On the right we show the effect of LR and PI on speech quality for CLP=1.
  65. The impact of CLP and LR on speech quality when LR=5% (left) and when LR=10% (right) for PCM, ADPCM and GSM codecs.
  66. The variations of the quality as a function of the LR and the employed speech codec in both languages for PI=20 ms and CLP=2.
  67. The impact of BR and FR on video quality.
  68. The impact of BR and LR on video quality.
  69. The impact of BR and CLP on video quality.
  70. The impact of BR and RA on video quality.
  71. The impact of FR and LR on video quality.
  72. The impact of FR and CLP on video quality.
  73. The impact of FR and RA on video quality.
  74. The impact of LR and CLP on video quality.
  75. The impact of LR and RA on video quality.
  76. The impact of CLP and RA on video quality.
  77. RA is more beneficial than FR for lower values of BR.
  78. Architecture of the proposed control mechanism.
  79. Rates suggested by TCP-friendly and the saving using control rules when changing the codec in the case of speech.
  80. MOS values with and without our control when changing the codec in the case of speech (CM stands for Control Mechanism).
  81. The supposed rates suggested by TCP-friendly and those of the sender.
  82. MOS values when changing the frame rate and those when changing the quantization parameter to meet the bit rates shown in Figure 8.4.
  83. A black-box representation of our tool to predict in real time the future traffic.
  84. Our best architecture employing both short- and long-range dependencies in traffic prediction for the ENSTB Network.
  85. The actual traffic against the predicted one for the complete next two weeks.
  86. The normalized actual traffic against the predicted one for the third and fourth days of the third week.
  87. The difference between the actual and the predicted traffic for the complete next two weeks.
  88. The histogram of the distribution of the difference between the actual and predicted traffic, with a step of 0.1.
  89. The difference between the actual and the predicted traffic for the complete next two weeks once the spiky samples are removed.
  90. Predicting 2nd step ahead: the difference between the actual traffic and the predicted one for the complete next two weeks, including the spikes.
  91. The traditional NN model that has been widely used to predict network traffic. This Figure is taken from [56, p. 115], where $z^{-1}$ represents a unit-step delay function.
  92. The actual values against the NN prediction when training and testing it on data generated by Eqn. 9.1. This Figure is taken from [56].
  93. The 7-5-2 feedforward RNN architecture.
  94. The fully-connected recurrent RNN architecture.
  95. The RNN architecture used to solve the XOR problem.
  96. The impact of the two variables $\zeta$ and $dP$ on the performance of the adaptive momentum LM training algorithm for RNN. The results are for the first problem.
  97. The performance of the GD, LM, LM2 and AM-LM training algorithms on the first problem.
  98. The performance of the GD, LM, LM1, and LM2 training algorithms on the second problem.
  99. Comparison between the performance of GD and that of AM-LM on the video quality database presented in Chapter 6.

