Simple item record
dc.contributor.author
Piantanida, Pablo
dc.contributor.author
Rey Vega, Leonardo Javier
dc.contributor.other
Rodrigues, Miguel R. D.
dc.date.available
2022-04-25T12:40:44Z
dc.date.issued
2020
dc.identifier.citation
Piantanida, Pablo; Rey Vega, Leonardo Javier; Information Bottleneck and Representation Learning; Cambridge University Press; 2020; 330-358
dc.identifier.isbn
9781108616799
dc.identifier.uri
http://hdl.handle.net/11336/155667
dc.description.abstract
A grand challenge in representation learning is the development of computational algorithms that learn the different explanatory factors of variation behind high-dimensional data. Representation models (usually referred to as encoders) are often optimized for performance on training data, when the real objective is to generalize well to other (unseen) data. The first part of this chapter is devoted to providing an overview of, and an introduction to, fundamental concepts in statistical learning theory and the Information Bottleneck principle. It serves as a mathematical basis for the technical results given in the second part, in which an upper bound on the generalization gap corresponding to the cross-entropy risk is given. When this penalty term, scaled by a suitable multiplier, and the empirical cross-entropy risk are minimized jointly, the problem is equivalent to optimizing the Information Bottleneck objective with respect to the empirical data distribution. This result provides an interesting connection between mutual information and generalization, and helps to explain why noise injection during the training phase can improve the generalization ability of encoder models and enforce invariances in the resulting representations.
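For context, the Information Bottleneck objective referred to in the abstract is conventionally written as follows (standard notation, not necessarily the chapter's: X is the input, Y the target, T the learned representation induced by the stochastic encoder p(t|x), and β ≥ 0 trades off compression against relevance):

```latex
\min_{p(t \mid x)} \; I(X;T) \;-\; \beta \, I(T;Y)
```

The connection the abstract describes is that jointly minimizing the empirical cross-entropy risk together with a mutual-information penalty term I(X;T), weighted by a suitable multiplier, is equivalent to optimizing this objective with the empirical data distribution in place of the true one.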
dc.format
application/pdf
dc.language.iso
eng
dc.publisher
Cambridge University Press
dc.rights
info:eu-repo/semantics/restrictedAccess
dc.rights.uri
https://creativecommons.org/licenses/by-nc-sa/2.5/ar/
dc.subject
LEARNING
dc.subject
INFORMATION
dc.subject
RATE-DISTORTION
dc.subject
GENERALIZATION
dc.subject.classification
Other Electrical Engineering, Electronic Engineering and Information Engineering
dc.subject.classification
Electrical Engineering, Electronic Engineering and Information Engineering
dc.subject.classification
ENGINEERING AND TECHNOLOGY
dc.title
Information Bottleneck and Representation Learning
dc.type
info:eu-repo/semantics/publishedVersion
dc.type
info:eu-repo/semantics/bookPart
dc.type
info:ar-repo/semantics/parte de libro
dc.date.updated
2021-09-07T14:55:58Z
dc.journal.pagination
330-358
dc.journal.pais
United Kingdom
dc.journal.ciudad
Cambridge
dc.description.fil
Fil: Piantanida, Pablo. Not specified;
dc.description.fil
Fil: Rey Vega, Leonardo Javier. Consejo Nacional de Investigaciones Científicas y Técnicas. Oficina de Coordinación Administrativa Parque Centenario. Centro de Simulación Computacional para Aplicaciones Tecnológicas; Argentina. Universidad de Buenos Aires. Facultad de Ingeniería. Departamento de Electronica; Argentina
dc.relation.alternativeid
info:eu-repo/semantics/altIdentifier/url/https://www.cambridge.org/core/books/informationtheoretic-methods-in-data-science/BC0340683CDB63CCFF73A41FE5E53E4C
dc.conicet.paginas
565
dc.source.titulo
Information-Theoretic Methods in Data Science