Music and Audio

#audio
MultiFeatureBeatTracking
Matlab implementation of J.R. Zapata, M. Davies and E. Gómez, "Multi-feature beat tracker," IEEE/ACM Transactions on Audio, Speech, and Language Processing, 22(4), pp. 816-825, 2014.
·github.com·
SyncSink
Tool to synchronize different audio captures of the same event.
·github.com·
Olaf
Olaf: Overly Lightweight Acoustic Fingerprinting is a portable acoustic fingerprinting system.
·github.com·
music-structure
A web application that extracts and visualizes music structure. Connects with the Spotify API for audio content data.
·github.com·
media-now
Get media information from YouTube and Vimeo videos, Spotify tracks and Discogs releases.
·github.com·
ScorePerformer
ScorePerformer: Expressive Piano Performance Rendering with Fine-Grained Control (ISMIR 2023)
·github.com·
muscall
Official implementation of "Contrastive Audio-Language Learning for Music" (ISMIR 2022)
·github.com·
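The muscall paper trains paired audio and text encoders with a contrastive objective. Below is a minimal, generic sketch of such a symmetric contrastive (InfoNCE) loss in PyTorch; the shapes, names, and temperature are illustrative assumptions, not the repository's actual code.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(audio_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired audio/caption embeddings:
    each audio clip should match its own caption and vice versa."""
    audio_emb = F.normalize(audio_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = audio_emb @ text_emb.t() / temperature   # (B, B) similarity matrix
    targets = torch.arange(logits.size(0))            # matching pairs are on the diagonal
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

# Toy usage with random embeddings standing in for encoder outputs.
loss = contrastive_loss(torch.randn(16, 512), torch.randn(16, 512))
print(loss.item())
```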
ishkurs-guide-dataset
Structured Data from Ishkur's Guide to Electronic Music. Working Mirror for v2.5 here: https://igorbrigadir.github.io/ishkurs-guide-dataset/
·github.com·
music-recommendation-web-application-based-on-rhythmic-similarity-using-locality-sensitive-hashing
A web application integrated with a music recommendation system that uses a Locality-Sensitive Hashing (LSH) implementation to determine rhythmic similarity over a dataset of 3,415 thirty-second audio files. Built as an assignment for the Fundamental of Big Data Analytics (DS2004) course.
·github.com·
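A minimal sketch of the general technique the entry above describes: random-hyperplane LSH over per-track rhythm feature vectors, so tracks with similar rhythm hash into the same bucket. The feature extraction, dimensions, and names here are illustrative assumptions, not the repository's actual code.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)

def hash_vector(v, planes):
    """Sign of the projection onto each random hyperplane -> bit signature."""
    return tuple((planes @ v > 0).astype(int))

def build_index(features, n_planes=16):
    """features: dict of track_id -> 1-D rhythm feature vector (e.g. a tempogram summary)."""
    dim = len(next(iter(features.values())))
    planes = rng.standard_normal((n_planes, dim))
    buckets = defaultdict(list)
    for track_id, vec in features.items():
        buckets[hash_vector(vec, planes)].append(track_id)
    return planes, buckets

def query(vec, planes, buckets):
    """Candidate tracks whose rhythm features hash to the same bucket."""
    return buckets.get(hash_vector(vec, planes), [])

# Toy usage: random vectors stand in for rhythm features of the 3,415 clips.
features = {f"track_{i}": rng.standard_normal(64) for i in range(3415)}
planes, buckets = build_index(features)
print(query(features["track_0"], planes, buckets))
```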
learnfy
[REPO] Create a music genre dataset (metadata or audio) for classification using the Spotify Web API
·github.com·
ENST-drums-dataset
The dataset used in the paper: https://drive.google.com/file/d/0B4bIMgQlCAuqdGVRbVNNbzJfeUU/view
·github.com·
AlignmentDuration
Lyrics-to-audio alignment system based on machine learning: Hidden Markov Models with Viterbi forced alignment. The alignment is explicitly aware of the durations of musical notes. The phonetic models are classified with an MLP deep neural network.
·github.com·
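As a rough illustration of the forced-alignment step mentioned above, here is a generic left-to-right Viterbi decoder over frame-wise state log-likelihoods. It ignores the repository's explicit duration modelling, and all names and probabilities are hypothetical.

```python
import numpy as np

def viterbi_forced_alignment(log_obs, log_self, log_next):
    """Align frames to a fixed left-to-right state sequence.

    log_obs:  (T, S) log-likelihood of each frame under each state
              (e.g. MLP phoneme posteriors divided by priors).
    log_self: log-probability of staying in the current state.
    log_next: log-probability of advancing to the next state.
    Returns the state index chosen for each frame.
    """
    T, S = log_obs.shape
    delta = np.full((T, S), -np.inf)
    back = np.zeros((T, S), dtype=int)
    delta[0, 0] = log_obs[0, 0]                   # must start in the first state
    for t in range(1, T):
        for s in range(S):
            stay = delta[t - 1, s] + log_self
            move = delta[t - 1, s - 1] + log_next if s > 0 else -np.inf
            if stay >= move:
                delta[t, s], back[t, s] = stay, s
            else:
                delta[t, s], back[t, s] = move, s - 1
            delta[t, s] += log_obs[t, s]
    # Backtrack from the final state so the whole state sequence is consumed.
    path = [S - 1]
    for t in range(T - 1, 0, -1):
        path.append(back[t, path[-1]])
    return np.array(path[::-1])

# Toy usage: 20 frames, 5 phoneme states.
rng = np.random.default_rng(0)
scores = np.log(rng.dirichlet(np.ones(5), size=20))
print(viterbi_forced_alignment(scores, np.log(0.7), np.log(0.3)))
```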
DALI
DALI: a large Dataset of synchronised Audio, LyrIcs and vocal notes.
·github.com·
cunet
Control mechanisms for the U-Net architecture to perform source separation of multiple instruments.
·github.com·
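A common way to add such control is feature-wise conditioning (FiLM-style): a condition vector (for example, which instrument to isolate) scales and shifts the U-Net's feature maps. The PyTorch block below is a minimal sketch under assumed shapes and layer names, not the repository's implementation.

```python
import torch
import torch.nn as nn

class FiLMConditionedBlock(nn.Module):
    """One convolutional block whose feature maps are scaled and shifted
    by a condition vector (e.g. a one-hot instrument selector)."""

    def __init__(self, in_ch, out_ch, cond_dim):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.to_gamma = nn.Linear(cond_dim, out_ch)   # per-channel scale
        self.to_beta = nn.Linear(cond_dim, out_ch)    # per-channel shift
        self.act = nn.ReLU()

    def forward(self, x, cond):
        h = self.conv(x)                               # (B, out_ch, F, T)
        gamma = self.to_gamma(cond)[:, :, None, None]  # (B, out_ch, 1, 1)
        beta = self.to_beta(cond)[:, :, None, None]
        return self.act(gamma * h + beta)

# Toy usage: a spectrogram batch and a 4-way instrument selector.
block = FiLMConditionedBlock(in_ch=1, out_ch=16, cond_dim=4)
spec = torch.randn(2, 1, 512, 128)
cond = torch.eye(4)[[0, 2]]          # "instrument 0" and "instrument 2"
print(block(spec, cond).shape)       # torch.Size([2, 16, 512, 128])
```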
deep-embedded-music
Creation of an embedding space using unsupervised triplet loss and Tile2Vec that can be used for a variety of downstream tasks
·github.com·
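For reference, a minimal numpy version of the triplet loss mentioned above; the margin and the way positives and negatives are drawn are assumptions for illustration, not the repository's configuration.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Pull anchor/positive embeddings together, push anchor/negative apart.
    Inputs are (N, D) batches of embeddings."""
    d_pos = np.sum((anchor - positive) ** 2, axis=1)
    d_neg = np.sum((anchor - negative) ** 2, axis=1)
    return np.mean(np.maximum(d_pos - d_neg + margin, 0.0))

# In a Tile2Vec-style setup, the positive comes from a neighbouring audio tile
# of the same recording and the negative from a different recording.
rng = np.random.default_rng(0)
a, p, n = (rng.standard_normal((8, 128)) for _ in range(3))
print(triplet_loss(a, p, n))
```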
emusic_net
Neural network to classify some styles of Electronic music
·github.com·