Show simple item record
dc.contributor.author
Terissi, Lucas Daniel
dc.contributor.author
Sad, Gonzalo Daniel
dc.contributor.author
Gómez, Juan Carlos
dc.date.available
2019-11-14T17:59:10Z
dc.date.issued
2018-06
dc.identifier.citation
Terissi, Lucas Daniel; Sad, Gonzalo Daniel; Gómez, Juan Carlos; Robust front-end for audio, visual and audio–visual speech classification; Springer; International Journal of Speech Technology; 21; 2; 6-2018; 293-307
dc.identifier.issn
1381-2416
dc.identifier.uri
http://hdl.handle.net/11336/88897
dc.description.abstract
This paper proposes a robust front-end for speech classification which can be employed interchangeably with acoustic, visual or audio–visual information. Wavelet multiresolution analysis is employed to represent the temporal input data associated with speech information. These wavelet-based features are then used as inputs to a Random Forest classifier to perform the speech classification. The performance of the proposed speech classification scheme is evaluated in different scenarios, namely, considering only acoustic information, only visual information (lip-reading), and fused audio–visual information. These evaluations are carried out over three different audio–visual databases, two of them public and the third compiled by the authors of this paper. Experimental results show that good performance is achieved with the proposed system over the three databases and for the different kinds of input information considered. In addition, the proposed method performs better than other methods reported in the literature over the same two public databases. All the experiments were run with the same configuration parameters. These results also indicate that the proposed method performs satisfactorily without requiring the wavelet decomposition parameters or the Random Forest classifier parameters to be tuned for each particular database and input modality.
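The abstract describes a two-stage pipeline: wavelet multiresolution analysis of the speech-related input signals, followed by Random Forest classification. A minimal sketch of that pipeline is given below, assuming PyWavelets and scikit-learn; the wavelet family (db4), decomposition level, per-sub-band statistics and toy data are illustrative assumptions, not the authors' actual configuration.

# Sketch only: wavelet features + Random Forest, as outlined in the abstract.
# Wavelet choice, level, sub-band statistics and data are assumptions.
import numpy as np
import pywt
from sklearn.ensemble import RandomForestClassifier

def wavelet_features(signal, wavelet="db4", level=3):
    """Decompose a 1-D signal with the discrete wavelet transform and
    summarize each sub-band with simple statistics (assumed feature set)."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    feats = []
    for band in coeffs:  # approximation + detail coefficients per level
        feats.extend([band.mean(), band.std(), np.abs(band).max()])
    return np.array(feats)

# Toy data standing in for acoustic, visual, or fused audio–visual streams.
rng = np.random.default_rng(0)
X = np.stack([wavelet_features(rng.standard_normal(256)) for _ in range(200)])
y = rng.integers(0, 10, size=200)  # e.g. 10 utterance classes

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)
print(clf.predict(X[:5]))

According to the abstract, the same wavelet and classifier settings are reported to work across acoustic, visual and audio–visual inputs; here the random data merely stands in for those feature streams.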
dc.format
application/pdf
dc.language.iso
eng
dc.publisher
Springer
dc.rights
info:eu-repo/semantics/openAccess
dc.rights.uri
https://creativecommons.org/licenses/by-nc-sa/2.5/ar/
dc.subject
AUDIO–VISUAL SPEECH RECOGNITION
dc.subject
RANDOM FORESTS
dc.subject
WAVELET DECOMPOSITION
dc.subject.classification
Electrical and Electronic Engineering
dc.subject.classification
Electrical Engineering, Electronic Engineering and Information Engineering
dc.subject.classification
ENGINEERING AND TECHNOLOGY
dc.title
Robust front-end for audio, visual and audio–visual speech classification
dc.type
info:eu-repo/semantics/article
dc.type
info:ar-repo/semantics/artículo
dc.type
info:eu-repo/semantics/publishedVersion
dc.date.updated
2019-10-17T14:55:18Z
dc.journal.volume
21
dc.journal.number
2
dc.journal.pagination
293-307
dc.journal.pais
Germany
dc.journal.ciudad
Berlin
dc.description.fil
Fil: Terissi, Lucas Daniel. Consejo Nacional de Investigaciones Científicas y Técnicas. Centro Científico Tecnológico Conicet - Rosario. Centro Internacional Franco Argentino de Ciencias de la Información y de Sistemas. Universidad Nacional de Rosario. Centro Internacional Franco Argentino de Ciencias de la Información y de Sistemas; Argentina
dc.description.fil
Fil: Sad, Gonzalo Daniel. Consejo Nacional de Investigaciones Científicas y Técnicas. Centro Científico Tecnológico Conicet - Rosario. Centro Internacional Franco Argentino de Ciencias de la Información y de Sistemas. Universidad Nacional de Rosario. Centro Internacional Franco Argentino de Ciencias de la Información y de Sistemas; Argentina
dc.description.fil
Fil: Gómez, Juan Carlos. Consejo Nacional de Investigaciones Científicas y Técnicas. Centro Científico Tecnológico Conicet - Rosario. Centro Internacional Franco Argentino de Ciencias de la Información y de Sistemas. Universidad Nacional de Rosario. Centro Internacional Franco Argentino de Ciencias de la Información y de Sistemas; Argentina
dc.journal.title
International Journal of Speech Technology
dc.relation.alternativeid
info:eu-repo/semantics/altIdentifier/url/https://link.springer.com/article/10.1007/s10772-018-9504-y
dc.relation.alternativeid
info:eu-repo/semantics/altIdentifier/doi/http://dx.doi.org/10.1007/s10772-018-9504-y
Associated files