CONICET Digital - Institutional Repository
Article

The role of mutual information in variational classifiers

Vera, Matías Alejandro; Rey Vega, Leonardo Javier; Piantanida, Pablo
Publication date: 05/2023
Publisher: Springer
Journal: Machine Learning
ISSN: 0885-6125
Language: English
Resource type: Published article
Subject classification:
Other Computer and Information Sciences

Abstract

Overfitting is a well-known phenomenon related to the generation of a model that mimics a particular instance of data too closely (or exactly) and may therefore fail to predict future observations reliably. In practice, this behaviour is controlled by various regularization techniques, some based on heuristics, which are motivated by upper bounds on the generalization error. In this work, we study the generalization error of classifiers that rely on stochastic encodings and are trained with the cross-entropy loss, which is often used in deep learning for classification problems. We derive bounds on the generalization error showing that there exists a regime in which the generalization error is bounded by the mutual information between the input features and the corresponding representations in the latent space, which are randomly generated according to the encoding distribution. Our bounds provide an information-theoretic understanding of generalization in the so-called class of variational classifiers, which are regularized by a Kullback–Leibler (KL) divergence term. These results give theoretical grounds for the highly popular KL term in variational inference methods, which was already recognized to act effectively as a regularization penalty. We further observe connections with well-studied notions such as Variational Autoencoders, Information Dropout, the Information Bottleneck and Boltzmann Machines. Finally, we perform numerical experiments on MNIST, CIFAR and other datasets and show that mutual information is indeed highly representative of the behaviour of the generalization error.
Keywords: CROSS-ENTROPY LOSS, GENERALIZATION ERROR, INFORMATION BOTTLENECK, INFORMATION THEORY, PAC LEARNING, VARIATIONAL CLASSIFIERS
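
The abstract describes variational classifiers: models with a stochastic encoder whose training objective combines the cross-entropy loss with a Kullback–Leibler (KL) divergence penalty. The following is a rough, hypothetical sketch of such an objective in PyTorch, not code from the paper; the layer sizes, the standard-normal prior and the weight beta are assumptions made only for illustration.

# Minimal sketch (illustrative, not the authors' implementation) of a
# variational classifier: a stochastic encoder q(z|x), a classifier head
# acting on the sampled latent z, and a loss of the form
# cross-entropy + beta * KL(q(z|x) || N(0, I)).
import torch
import torch.nn as nn
import torch.nn.functional as F


class VariationalClassifier(nn.Module):
    def __init__(self, in_dim=784, latent_dim=32, num_classes=10, hidden=256):
        super().__init__()
        # Stochastic encoder: outputs mean and log-variance of q(z|x).
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent_dim)
        self.logvar = nn.Linear(hidden, latent_dim)
        # Classifier head acting on the sampled latent representation z.
        self.classifier = nn.Linear(latent_dim, num_classes)

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.classifier(z), mu, logvar


def variational_loss(logits, targets, mu, logvar, beta=1e-3):
    """Cross-entropy plus beta * KL(q(z|x) || N(0, I)), averaged over the batch."""
    ce = F.cross_entropy(logits, targets)
    kl = 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum(dim=1).mean()
    return ce + beta * kl


if __name__ == "__main__":
    model = VariationalClassifier()
    x = torch.randn(16, 784)          # dummy batch of flattened images
    y = torch.randint(0, 10, (16,))   # dummy class labels
    logits, mu, logvar = model(x)
    print(variational_loss(logits, y, mu, logvar))

In variational information bottleneck-style treatments, this per-input KL term, averaged over the data, upper-bounds the mutual information I(X; Z) between inputs and latent representations, which is the quantity the abstract relates to the generalization error.
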
Associated files
Size: 3.535 MB
Format: PDF
License
info:eu-repo/semantics/restrictedAccess. Except where explicitly stated otherwise, this item is published under the following license: Creative Commons Attribution-NonCommercial-ShareAlike 2.5 Unported (CC BY-NC-SA 2.5)
Identifiers
URI: http://hdl.handle.net/11336/209807
URL: https://link.springer.com/article/10.1007/s10994-023-06337-6
DOI: http://dx.doi.org/10.1007/s10994-023-06337-6
Collections
Articulos(CSC)
Articulos de CENTRO DE SIMULACION COMPUTACIONAL P/APLIC. TECNOLOGICAS
Citation
Vera, Matías Alejandro; Rey Vega, Leonardo Javier; Piantanida, Pablo; The role of mutual information in variational classifiers; Springer; Machine Learning; 112; 9; 5-2023; 3105-3150