Category Taxonomy
2022 arXiv annual report
arXiv submission rate statistics - arXiv info
Two million articles and counting! – arXiv blog
Predicting research trends with semantic and neural networks with an application in quantum physics | Proceedings of the National Academy of Sciences
The vast and growing number of publications in all disciplines of science cannot be
comprehended by a single human researcher. As a consequence, re...
Heat-assisted detection and ranging
Nature - Heat-assisted detection and ranging is experimentally shown to see texture and depth through darkness as if it were day, and also perceives decluttered physical attributes beyond RGB or...
New acoustic attack steals data from keystrokes with 95% accuracy
A team of researchers from British universities has trained a deep learning model that can steal data from keyboard keystrokes recorded using a microphone with an accuracy of 95%.
Record Once, Post Everywhere: Automatic Shortening of Audio Stories for Social Media | Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology
High Fidelity Neural Audio Compression
We introduce a state-of-the-art real-time, high-fidelity, audio codec
leveraging neural networks. It consists of a streaming encoder-decoder
architecture with a quantized latent space trained in an end-to-end fashion. We
simplify and speed up the training by using a single multiscale spectrogram
adversary that efficiently reduces artifacts and produces high-quality samples.
We introduce a novel loss balancer mechanism to stabilize training: the weight
of a loss now defines the fraction of the overall gradient it should represent,
thus decoupling the choice of this hyper-parameter from the typical scale of
the loss. Finally, we study how lightweight Transformer models can be used to
further compress the obtained representation by up to 40%, while staying faster
than real time. We provide a detailed description of the key design choices of
the proposed model including: training objective, architectural changes and a
study of various perceptual loss functions. We present an extensive subjective
evaluation (MUSHRA tests) together with an ablation study for a range of
bandwidths and audio domains, including speech, noisy-reverberant speech, and
music. Our approach is superior to the baseline methods across all evaluated
settings, considering both 24 kHz monophonic and 48 kHz stereophonic audio.
Code and models are available at github.com/facebookresearch/encodec.
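The loss-balancer idea in the abstract above can be sketched as rescaling each loss's gradient to unit norm before applying its weight, so the weight directly sets that loss's share of the combined update regardless of the raw loss scale. This is a minimal toy illustration, not EnCodec's implementation (the function name and the plain per-step normalization, rather than a running average of gradient norms, are assumptions):

```python
import numpy as np

def balance_gradients(grads, weights, eps=1e-12):
    """Combine per-loss gradients so each loss contributes a fixed
    fraction (its weight) of the overall gradient norm.

    grads:   dict name -> gradient vector (np.ndarray), one per loss
    weights: dict name -> desired fraction of the total gradient
    """
    total = sum(weights.values())
    out = np.zeros_like(next(iter(grads.values())))
    for name, g in grads.items():
        # Normalize away this loss's scale, then apply its weight.
        out += (weights[name] / total) * g / (np.linalg.norm(g) + eps)
    return out

# Two gradients with wildly different scales still contribute equally
# once balanced with equal weights:
g = {"recon": np.array([1000.0, 0.0]), "adv": np.array([0.0, 0.001])}
balanced = balance_gradients(g, {"recon": 0.5, "adv": 0.5})
```

Here `balanced` is close to `[0.5, 0.5]`: each loss ends up with exactly half of the update, which is the decoupling of weight choice from loss scale that the abstract describes.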
Southrye - (₳) (@Southrye) / X
Cardano Ambassador. A #Futurist, #Blockchain and #Decentralization fan.
Exploring AI 🤖
10 yrs experience in Enterprise IT solution design. 🇨🇦
Multi-level Temporal-channel Speaker Retrieval for Robust Zero-shot Voice Conversion
Zero-shot voice conversion (VC) converts source speech into the voice of any
desired speaker using only one utterance of the speaker without requiring
additional model updates. Typical methods use a speaker representation from a
pre-trained speaker verification (SV) model or learn speaker representation
during VC training to achieve zero-shot VC. However, existing speaker modeling
methods overlook the variation of speaker information richness in temporal and
frequency channel dimensions of speech. This insufficient speaker modeling
hampers the ability of the VC model to accurately represent unseen speakers who
are not in the training dataset. In this study, we present a robust zero-shot
VC model with multi-level temporal-channel retrieval, referred to as MTCR-VC.
Specifically, to flexibly adapt to the dynamically varying speaker
characteristics along the temporal and channel axes of speech, we propose a novel fine-grained
speaker modeling method, called temporal-channel retrieval (TCR), to find out
when and where speaker information appears in speech. It retrieves
variable-length speaker representation from both temporal and channel
dimensions under the guidance of a pre-trained SV model. In addition, inspired by
the hierarchical process of human speech production, the MTCR speaker module
stacks several TCR blocks to extract speaker representations from
multi-granularity levels. Furthermore, to achieve better speech disentanglement
and reconstruction, we introduce a cycle-based training strategy to simulate
zero-shot inference recurrently. We adopt perceptual constraints on three
aspects, namely content, style, and speaker, to drive this process.
Experiments demonstrate that MTCR-VC is superior to the previous zero-shot VC
methods in modeling speaker timbre while maintaining good speech naturalness.
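The temporal half of the retrieval idea above can be caricatured as attention pooling: a speaker query (e.g. an embedding from a pre-trained SV model) scores each frame for how much speaker information it carries, and the frames are pooled by those scores. Everything below (the function names, the single-head dot-product form) is an illustrative assumption, not the MTCR-VC architecture:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def temporal_retrieval(frames, query):
    """Attention-pool frame-level features where the speaker query
    indicates speaker information is present.

    frames: (T, D) frame-level features
    query:  (D,)   speaker query vector (assumed from an SV model)
    """
    # Scaled dot-product scores: "when" speaker info appears in time.
    scores = frames @ query / np.sqrt(frames.shape[1])
    weights = softmax(scores)
    # Weighted pooling over time -> (D,) speaker representation.
    return weights @ frames

# A frame aligned with the query dominates the pooled representation:
frames = np.eye(4)                       # 4 frames, 4-dim features
query = np.array([10.0, 0.0, 0.0, 0.0])  # query matches frame 0
rep = temporal_retrieval(frames, query)
```

A channel-wise variant would score feature dimensions instead of frames; stacking several such blocks at different granularities is the "multi-level" part of the paper's design.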
Gorilla
Audio Super Resolution