Found 27 bookmarks
Suno
Suno is building a future where anyone can make great music.
·suno.com·
Lamucal : AI-Enhanced Tabs & Chords for Any Song
AI-generated tabs, chords, lyrics, and melodies. Edit, transpose, and separate tracks easily. Explore over 40M songs. Also includes interactive learning and turns any music or song (YouTube, Deezer, SoundCloud, MP3) into chords. Play along with guitar, ukulele, or piano.
·lamucal.ai·
EMO
EMO: Emote Portrait Alive - Generating Expressive Portrait Videos with Audio2Video Diffusion Model under Weak Conditions
·humanaigc.github.io·
GitHub - collabora/WhisperFusion: WhisperFusion builds upon the capabilities of WhisperLive and WhisperSpeech to provide seamless conversations with an AI.
WhisperFusion builds upon the capabilities of WhisperLive and WhisperSpeech to provide seamless conversations with an AI.
·github.com·
Suno AI
We are building a future where anyone can make great music. No instrument needed, just imagination. From your mind to music.
·suno.ai·
yl4579/StyleTTS2: StyleTTS 2: Towards Human-Level Text-to-Speech through Style Diffusion and Adversarial Training with Large Speech Language Models
·github.com·
Transforming the future of music creation
Announcing our most advanced music generation model and two new AI experiments, designed to open a new playground for creativity
·deepmind.google·
facebookresearch/audiocraft: Audiocraft is a library for audio processing and generation with deep learning. It features the state-of-the-art EnCodec audio compressor / tokenizer, along with MusicGen, a simple and controllable music generation LM with textual and melodic conditioning.
·github.com·
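For reference, a minimal usage sketch of Audiocraft's MusicGen text-to-music API, following the pattern shown in the repository's README; the checkpoint name, prompt, and output filename are illustrative, and torch plus the audiocraft package must be installed separately.

    from audiocraft.models import MusicGen
    from audiocraft.data.audio import audio_write

    # Load a pretrained checkpoint; 'facebook/musicgen-small' is the lightest variant.
    model = MusicGen.get_pretrained('facebook/musicgen-small')
    model.set_generation_params(duration=8)  # generate 8 seconds of audio

    # Text-conditioned generation returns one waveform tensor per prompt.
    wav = model.generate(['lo-fi hip hop beat with mellow piano'])

    for idx, one_wav in enumerate(wav):
        # Write a .wav file with loudness normalization.
        audio_write(f'musicgen_out_{idx}', one_wav.cpu(), model.sample_rate, strategy='loudness')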
SevaSk/ecoute: Ecoute is a live transcription tool that provides real-time transcripts for both the user's microphone input (You) and the user's speakers output (Speaker) in a textbox. It also generates a suggested response using OpenAI's GPT-3.5 for the user to say based on the live transcription of the conversation.
·github.com·
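To make that workflow concrete, here is a hedged sketch of the general pattern Ecoute describes (capture speech, transcribe it, then ask an OpenAI chat model for a suggested reply). This is not the project's code: it uses the speech_recognition and openai packages, with Google's free recognizer as a stand-in for the live Whisper transcription Ecoute performs.

    import speech_recognition as sr
    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment
    recognizer = sr.Recognizer()

    # Capture a short utterance from the default microphone.
    with sr.Microphone() as source:
        print("Listening...")
        audio = recognizer.listen(source, phrase_time_limit=10)

    # Transcribe the captured audio (stand-in for Ecoute's live transcription).
    transcript = recognizer.recognize_google(audio)
    print("Heard:", transcript)

    # Ask the model to propose a reply the user could say next.
    suggestion = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "Suggest a concise, natural reply the user could say next."},
            {"role": "user", "content": transcript},
        ],
    )
    print("Suggested response:", suggestion.choices[0].message.content)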
Turn ideas into music with MusicLM
Starting today, you can sign up to try MusicLM, a new experimental AI tool that can turn your text descriptions into music.
·blog.google·