Training an MLP Neural Network for Image Compression Using the GSA Method
Subject Areas: General
Maryam Dehbashian *, Hamid Zahiri
Abstract:
Image compression is one of the important research areas in image processing. Various compression methods have been proposed to date, among which neural network approaches have attracted considerable attention. The most common training method for neural networks is error backpropagation, whose tendency to converge to and stall in local optima is one of its most serious weaknesses. A newer approach among researchers is to employ heuristic algorithms in the training of neural networks. In this article, a new training method based on the Gravitational Search Algorithm (GSA) is introduced. GSA is one of the most recent swarm-intelligence search and optimization methods. In this method, the candidate solutions in the search space are treated as masses that attract one another through gravitational force and change their positions accordingly; gradually, agents with better fitness acquire greater mass and exert a stronger influence on the other agents. In this research, an MLP neural network is trained with the GSA algorithm for image compression.
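To make the mechanism summarized above concrete, the sketch below trains a small MLP autoencoder on image blocks with a GSA loop. It is a minimal illustration, not the authors' implementation: the 16-8-16 block autoencoder, the population size, the values of G0 and alpha, and the shrinking Kbest schedule are assumptions chosen here for demonstration only.

```python
# Minimal GSA-trained MLP autoencoder for block-based image compression.
# Illustrative sketch only: the 16-8-16 topology, population size, G0, alpha,
# and the Kbest schedule are assumptions, not values taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_HID = 16, 8                                   # 4x4 pixel block -> 8 hidden units
DIM = N_IN * N_HID + N_HID + N_HID * N_IN + N_IN      # total weights + biases

def unpack(w):
    """Split a flat weight vector into layer weights and biases."""
    i = 0
    W1 = w[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = w[i:i + N_HID]; i += N_HID
    W2 = w[i:i + N_HID * N_IN].reshape(N_HID, N_IN); i += N_HID * N_IN
    b2 = w[i:i + N_IN]
    return W1, b1, W2, b2

def fitness(w, blocks):
    """Mean squared reconstruction error of the autoencoder over all blocks."""
    W1, b1, W2, b2 = unpack(w)
    hidden = np.tanh(blocks @ W1 + b1)                # compressed representation
    recon = np.tanh(hidden @ W2 + b2)                 # reconstructed block
    return np.mean((blocks - recon) ** 2)

def gsa_train(blocks, n_agents=30, iters=200, G0=100.0, alpha=20.0):
    """GSA search over the flattened MLP weight vector (lower fitness is better)."""
    X = rng.uniform(-1.0, 1.0, (n_agents, DIM))       # agent positions = candidate weight vectors
    V = np.zeros_like(X)
    for t in range(iters):
        fit = np.array([fitness(x, blocks) for x in X])
        best, worst = fit.min(), fit.max()
        m = (worst - fit) / (worst - best + 1e-12)    # better agents receive larger mass
        M = m / (m.sum() + 1e-12)
        G = G0 * np.exp(-alpha * t / iters)           # gravitational constant decays over time
        k = max(1, int(round(n_agents * (1.0 - t / iters))))  # Kbest: only the k best agents attract
        kbest = np.argsort(fit)[:k]
        A = np.zeros_like(X)                          # acceleration (inertial mass cancels out)
        for i in range(n_agents):
            for j in kbest:
                if i == j:
                    continue
                diff = X[j] - X[i]
                dist = np.linalg.norm(diff) + 1e-12
                A[i] += rng.random() * G * M[j] * diff / dist
        V = rng.random((n_agents, 1)) * V + A         # velocity update with random inertia
        X = X + V                                     # move agents
    return X[np.argmin([fitness(x, blocks) for x in X])]

# Usage with random data standing in for normalized 4x4 image blocks in [-1, 1].
blocks = rng.uniform(-1.0, 1.0, (256, N_IN))
best_w = gsa_train(blocks)
print("reconstruction MSE:", fitness(best_w, blocks))
```

In an actual compression setting, the blocks would be 4x4 tiles taken from the training image and scaled to [-1, 1]; after training, the hidden-layer activations of each block form its compressed representation, and the output layer reconstructs the block at decompression time.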