Show simple item record
dc.contributor.author
Di Persia, Leandro Ezequiel
dc.contributor.author
Milone, Diego Humberto
dc.contributor.author
Yanagida, M.
dc.date.available
2020-04-06T15:28:36Z
dc.date.issued
2009-02
dc.identifier.citation
Di Persia, Leandro Ezequiel; Milone, Diego Humberto; Yanagida, M.; Indeterminacy Free Frequency-Domain Blind Separation of Reverberant Audio Sources; Institute of Electrical and Electronics Engineers; IEEE Transactions on Audio, Speech, and Language Processing; 17; 2; 2-2009; 299-311
dc.identifier.issn
1558-7916
dc.identifier.uri
http://hdl.handle.net/11336/102035
dc.description.abstract
Blind separation of convolutive mixtures is a very complicated task that has applications in many fields of speech and audio processing, such as hearing aids and man-machine interfaces. One of the proposed solutions is frequency-domain independent component analysis. The main disadvantage of this method is the presence of permutation ambiguities among consecutive frequency bins, a problem that worsens as reverberation time increases. Presented in this paper is a new frequency-domain method that uses a simplified mixing model, in which the impulse responses from one source to each microphone are expressed as scaled and delayed versions of one of these impulse responses. This assumption, based on the similarity among the waveforms of the impulse responses, is valid for small microphone spacings. Under this model, separation is performed without any permutation or amplitude ambiguity among consecutive frequency bins. The new method is aimed mainly at obtaining separation, with a small reduction of reverberation. Nevertheless, as reverberation is included in the model, the new method is capable of performing separation for a wide range of reverberant conditions, at very high speed. The separation quality is evaluated using a perceptually designed objective measure. Also, an automatic speech recognition system is used to test the advantages of the algorithm in a real application. Very good results are obtained for both artificial and real mixtures, significantly better than those of other standard blind source separation algorithms.
dc.format
application/pdf
dc.language.iso
eng
dc.publisher
Institute of Electrical and Electronics Engineers
dc.rights
info:eu-repo/semantics/openAccess
dc.rights.uri
https://creativecommons.org/licenses/by-nc-sa/2.5/ar/
dc.subject
Blind Source Separation
dc.subject
Reverberation
dc.subject
Independent Component Analysis
dc.subject
Speech Enhancement
dc.subject.classification
Automatic Control and Robotics
dc.subject.classification
Electrical Engineering, Electronic Engineering and Information Engineering
dc.subject.classification
ENGINEERING AND TECHNOLOGY
dc.title
Indeterminacy Free Frequency-Domain Blind Separation of Reverberant Audio Sources
dc.type
info:eu-repo/semantics/article
dc.type
info:ar-repo/semantics/artículo
dc.type
info:eu-repo/semantics/publishedVersion
dc.date.updated
2020-04-02T13:21:32Z
dc.journal.volume
17
dc.journal.number
2
dc.journal.pagination
299-311
dc.journal.pais
United States
dc.description.fil
Fil: Di Persia, Leandro Ezequiel. Consejo Nacional de Investigaciones Científicas y Técnicas. Centro Científico Tecnológico Conicet - Santa Fe. Instituto de Investigación en Señales, Sistemas e Inteligencia Computacional. Universidad Nacional del Litoral. Facultad de Ingeniería y Ciencias Hídricas. Instituto de Investigación en Señales, Sistemas e Inteligencia Computacional; Argentina
dc.description.fil
Fil: Milone, Diego Humberto. Consejo Nacional de Investigaciones Científicas y Técnicas. Centro Científico Tecnológico Conicet - Santa Fe. Instituto de Investigación en Señales, Sistemas e Inteligencia Computacional. Universidad Nacional del Litoral. Facultad de Ingeniería y Ciencias Hídricas. Instituto de Investigación en Señales, Sistemas e Inteligencia Computacional; Argentina
dc.description.fil
Fil: Yanagida, M.. Doshisha University; Japan
dc.journal.title
IEEE Transactions on Audio, Speech, and Language Processing
dc.relation.alternativeid
info:eu-repo/semantics/altIdentifier/doi/http://dx.doi.org/10.1109/TASL.2008.2009568
Associated files