Article
Compressing arrays of classifiers using Volterra-Neural Network: application to face recognition
Publication date:
07/2013
Publisher:
Springer
Journal:
Neural Computing And Applications
ISSN:
0941-0643
Language:
English
Resource type:
Published article
Abstract
Model compression is required when large models are used, for example, for a classification task, but transmission, space, time, or computing constraints have to be fulfilled. Multilayer Perceptron (MLP) models have traditionally been used as classifiers. Depending on the problem, they may need a large number of parameters (neuron activation functions, weights, and biases) to obtain an acceptable performance. This work proposes a technique to compress an array of MLPs through the weights of a Volterra-Neural Network (Volterra-NN), while maintaining its classification performance. It is shown that several MLP topologies can be effectively compressed into the first-, second-, and third-order Volterra-NN outputs. The results show that these outputs can be used to build an array of Volterra-NNs that needs significantly fewer parameters than the original array of MLPs while achieving the same high accuracy. The Volterra-NN compression capabilities were tested on a face recognition problem, and experimental results are presented on two well-known face databases: ORL and FERET.
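The sketch below is a minimal illustration of the compression idea, assuming a standard truncated discrete Volterra expansion of an MLP's output over an input feature vector. The kernel coefficients (h0, h1, h2, h3), the feature dimension d = 10, and the 10-25-1 MLP topology are illustrative assumptions, not the configurations reported in the paper; in the Volterra-NN approach the kernels are derived from the trained MLP weights, a derivation not reproduced here.

```python
import numpy as np
from itertools import combinations_with_replacement

def volterra_output(x, h0, h1, h2, h3=None):
    """Evaluate a truncated Volterra expansion of an input vector x:
    y = h0 + sum_i h1[i]*x[i]
           + sum_{i<=j} h2[(i,j)]*x[i]*x[j]
           + sum_{i<=j<=k} h3[(i,j,k)]*x[i]*x[j]*x[k]   (if third order is used)
    h2 and h3 are dicts mapping index tuples to kernel coefficients (assumed layout).
    """
    y = h0 + float(np.dot(h1, x))
    for (i, j), w in h2.items():
        y += w * x[i] * x[j]
    if h3 is not None:
        for (i, j, k), w in h3.items():
            y += w * x[i] * x[j] * x[k]
    return y

def volterra_param_count(d, order):
    """Number of symmetric kernel coefficients for input dimension d up to the given order."""
    count = 1  # zero-order term h0
    for p in range(1, order + 1):
        count += len(list(combinations_with_replacement(range(d), p)))
    return count

def mlp_param_count(d, hidden, out):
    """Weights plus biases of a single-hidden-layer MLP (d-hidden-out)."""
    return d * hidden + hidden + hidden * out + out

# Illustrative parameter comparison for a small, hypothetical feature vector.
d = 10
print("2nd-order Volterra:", volterra_param_count(d, 2))  # 66
print("3rd-order Volterra:", volterra_param_count(d, 3))  # 286
print("MLP (10-25-1):     ", mlp_param_count(d, 25, 1))   # 301
```

For small input dimensions the truncated second- or third-order expansion can encode a classifier with fewer coefficients than the MLP's weight matrices; the actual savings reported by the authors depend on the MLP topologies and feature representations used in their experiments.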
Collections
Articulos(CCT - SANTA FE)
Articulos de CTRO.CIENTIFICO TECNOL.CONICET - SANTA FE
Citation
Rubiolo, María Florencia; Stegmayer, Georgina; Milone, Diego Humberto (2013). Compressing arrays of classifiers using Volterra-Neural Network: application to face recognition. Neural Computing and Applications, 23(6), 1687-1701. Springer.