The XARES benchmark results and latent trajectory visualizations give us a picture of what JEPA-v0 captures and what it misses. The encoder picks up broad acoustic structure well: timbral qualities, spectral texture, and emotional shifts in speech. The CREMA-D trajectories show the model tracking the pitch and energy changes that correlate with emotional categories, and the GTZAN trajectories show it spreading representations across a space rich enough to distinguish musical textures. But when a task requires mapping audio to linguistic content, the encoder falls short. The LibriSpeech trajectory confirms this visually: most of the embedding variance collapses into a narrow region, suggesting the model treats distinct phonemes as near-identical. Nor does the encoder align meaning across languages: semantically equivalent utterances in different languages occupy separate regions of the embedding space, so cross-lingual mapping will have to come from a downstream model or from changes to the training objective.
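The variance collapse described above can be quantified directly, without a visualization. A minimal sketch, assuming you already have a matrix of pooled utterance embeddings (the `spread` and `collapsed` arrays below are synthetic stand-ins, not JEPA-v0 outputs): compute the per-axis explained-variance profile via SVD and summarize it as an entropy-based effective rank. A collapsed representation concentrates variance in a few axes and scores a low effective rank.

```python
import numpy as np

def explained_variance_profile(embeddings: np.ndarray) -> np.ndarray:
    """Fraction of total variance captured by each principal axis."""
    centered = embeddings - embeddings.mean(axis=0)
    # Singular values of the centered matrix give per-axis variance.
    s = np.linalg.svd(centered, compute_uv=False)
    var = s ** 2
    return var / var.sum()

def effective_rank(embeddings: np.ndarray) -> float:
    """Entropy-based effective dimensionality of the embedding cloud."""
    p = explained_variance_profile(embeddings)
    p = p[p > 0]
    return float(np.exp(-(p * np.log(p)).sum()))

# Toy illustration: a collapsed cloud concentrates variance in few axes.
rng = np.random.default_rng(0)
spread = rng.normal(size=(500, 64))                   # isotropic, high rank
collapsed = rng.normal(size=(500, 2)) @ rng.normal(size=(2, 64))  # rank 2
print(effective_rank(spread) > effective_rank(collapsed))  # True
```

Running this per task would let the collapse claim be stated numerically, e.g. comparing effective rank on LibriSpeech embeddings against GTZAN embeddings from the same encoder.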