Volume 10, Issue 1 (9-2013)   JSDP 2013, 10(1): 26-13

New fast pre-training method for deep neural network learning. JSDP 2013; 10(1): 26-13
URL: http://jsdp.rcisp.ac.ir/article-1-112-en.html
Abstract:   (12577 Views)
In this paper, we propose an efficient pre-training method for deep bottleneck neural networks (DBNNs). Pre-training supplies initial values for the network weights: training a DBNN from random initialization converges poorly because of its many local minima, whereas well-chosen initial weights allow some of these local minima to be avoided. The proposed method divides the DBNN into several single-hidden-layer networks and trains each of them separately; the weights of these networks are then used as initial values for the DBNN weights, after which the full network is trained. The proposed network is applied to face-component extraction and evaluated on the Bosphorus database. The comparison shows that the new method converges faster and generalizes better than random initialization. With this training method, at the same training error rate, the pixel reconstruction error is reduced by 13.69% and the recognition rate is increased by 10%.
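The method summarized above is a greedy, layer-wise scheme: the deep bottleneck network is split into single-hidden-layer networks, each is trained on its own, and their weights are stacked to initialize the full DBNN before fine-tuning. The sketch below illustrates that idea in Python; it is not the authors' code, and the NumPy-only implementation, sigmoid activation, learning rate, layer sizes, and names such as train_single_hidden_ae and pretrain_bottleneck are illustrative assumptions rather than details taken from the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def train_single_hidden_ae(X, n_hidden, lr=0.1, epochs=200):
        # Train one single-hidden-layer autoencoder; return encoder and decoder weights.
        n_in = X.shape[1]
        W1 = rng.normal(0.0, 0.1, (n_in, n_hidden)); b1 = np.zeros(n_hidden)
        W2 = rng.normal(0.0, 0.1, (n_hidden, n_in)); b2 = np.zeros(n_in)
        for _ in range(epochs):
            H = sigmoid(X @ W1 + b1)            # hidden code
            R = sigmoid(H @ W2 + b2)            # reconstruction
            dR = (R - X) * R * (1.0 - R)        # squared-error gradient at the output
            dH = (dR @ W2.T) * H * (1.0 - H)    # gradient back-propagated to the hidden layer
            W2 -= lr * (H.T @ dR) / len(X); b2 -= lr * dR.mean(axis=0)
            W1 -= lr * (X.T @ dH) / len(X); b1 -= lr * dH.mean(axis=0)
        return (W1, b1), (W2, b2)

    def pretrain_bottleneck(X, hidden_sizes):
        # Greedy layer-wise pre-training: each hidden layer is trained as a
        # single-hidden-layer autoencoder on the previous layer's code, and the
        # resulting weights initialize the encoder and mirrored decoder of the DBNN.
        encoder, decoder, data = [], [], X
        for n_hidden in hidden_sizes:
            enc, dec = train_single_hidden_ae(data, n_hidden)
            encoder.append(enc)
            decoder.insert(0, dec)                      # decoder mirrors the encoder
            data = sigmoid(data @ enc[0] + enc[1])      # codes become the next layer's input
        return encoder + decoder                        # initial weights, ready for fine-tuning

    if __name__ == "__main__":
        X = rng.random((256, 64))                        # toy stand-in for face image patches
        init_weights = pretrain_bottleneck(X, [32, 8])   # 64-32-8-32-64 bottleneck
        print(len(init_weights), "weight layers initialized")

In the paper, the stacked weights then initialize the full DBNN, which is trained further with ordinary back-propagation; the sketch stops at producing those initial weights.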
Full-Text [PDF 2866 kb]   (2947 Downloads)    
Type of Study: Research | Subject: Paper
Received: 2013/06/08 | Accepted: 2013/08/13 | Published: 2013/12/03 | ePublished: 2013/12/03

Rights and permissions
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
