Bonheme, Lisa and Grzes, Marek (2023) Be More Active! Understanding the Differences Between Mean and Sampled Representations of Variational Autoencoders. Journal of Machine Learning Research, 24. Article Number 324. ISSN 1532-4435. E-ISSN 1533-7928. (KAR id:104329)
This work is licensed under a Creative Commons Attribution 4.0 International License.
Official URL: https://jmlr.org/papers/v24/21-1145.html
Abstract
The ability of Variational Autoencoders to learn disentangled representations has made them appealing for practical applications. However, their mean representations, which are generally used for downstream tasks, have recently been shown to be more correlated than their sampled counterparts, on which disentanglement is usually measured. In this paper, we refine this observation through the lens of selective posterior collapse, which states that only a subset of the learned representations, the active variables, encodes useful information, while the rest (the passive variables) is discarded. We first extend the existing definition to multiple data examples and show that active variables are equally disentangled in mean and sampled representations. Based on this extension and the pre-trained models from disentanglement_lib, we then isolate the passive variables and show that they are responsible for the discrepancies between mean and sampled representations. Specifically, passive variables exhibit high correlation scores with other variables in mean representations while being fully uncorrelated in sampled ones. We thus conclude that, despite what their higher correlation might suggest, mean representations are still good candidates for downstream tasks. However, it may be beneficial to remove their passive variables, especially when used with models sensitive to correlated features.
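The abstract's closing recommendation — drop passive variables from mean representations before feeding them to downstream models — can be sketched as follows. This is a minimal illustration, not the paper's method: it assumes the standard Gaussian-posterior VAE parameterisation (per-dimension `mu` and `logvar`), and the function name and variance threshold are illustrative. The heuristic is that a passive variable's posterior stays close to the N(0, 1) prior for every input, so its average posterior variance remains near 1, while active variables shrink their variance to encode information.

```python
import numpy as np

def drop_passive_variables(mu, logvar, var_threshold=0.8):
    """Keep only latent dimensions whose posterior has not collapsed to the prior.

    mu, logvar: arrays of shape (n_examples, n_latents) from a VAE encoder.
    A dimension is treated as passive when its average posterior variance
    exp(logvar) stays near the prior's variance of 1; `var_threshold` is an
    illustrative cutoff, not a value from the paper.
    """
    mean_var = np.exp(logvar).mean(axis=0)   # average posterior variance per dimension
    active = mean_var < var_threshold        # active dimensions have shrunk variance
    return mu[:, active], active

# Toy encoder output: 2 active dimensions (small posterior variance)
# and 2 passive dimensions (posterior variance ~ 1, i.e. logvar ~ 0).
rng = np.random.default_rng(0)
n = 100
mu = rng.normal(size=(n, 4))
logvar = np.column_stack([
    np.full(n, np.log(0.05)),   # active
    np.full(n, np.log(0.10)),   # active
    np.full(n, 0.0),            # passive (variance 1)
    np.full(n, 0.0),            # passive
])

filtered, active = drop_passive_variables(mu, logvar)
print(active)          # [ True  True False False]
print(filtered.shape)  # (100, 2)
```

The filtered mean representation can then be passed to a downstream model that is sensitive to correlated features (e.g. a linear probe), which is the scenario where the abstract suggests removal helps most.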
Item Type: Article
Uncontrolled keywords: Representation learning, Disentangled representations, Deep generative models, Variational autoencoders, Posterior collapse
Subjects: Q Science > QA Mathematics (inc Computing science)
Divisions: Divisions > Division of Computing, Engineering and Mathematical Sciences > School of Computing
Funders: University of Kent (https://ror.org/00xkeyj56)
Depositing User: Marek Grzes
Date Deposited: 18 Dec 2023 11:36 UTC
Last Modified: 03 Jan 2024 14:17 UTC
Resource URI: https://kar.kent.ac.uk/id/eprint/104329