# Concatenative synthesis

## 101
> **Concatenative synthesis** is a technique for synthesising sounds by concatenating short samples of recorded sound (called _units_).
> - https://en.wikipedia.org/wiki/Concatenative_synthesis

> synonyms for concatenation
> - [chain](https://www.thesaurus.com/browse/chain)
> - [continuity](https://www.thesaurus.com/browse/continuity)
> - [integration](https://www.thesaurus.com/browse/integration)
> - [interlocking](https://www.thesaurus.com/browse/interlocking)
> - [link](https://www.thesaurus.com/browse/link)
> - [nexus](https://www.thesaurus.com/browse/nexus)
> - [series](https://www.thesaurus.com/browse/series)
> - [succession](https://www.thesaurus.com/browse/succession)
> - [uniting](https://www.thesaurus.com/browse/uniting)

## Adjacent
- [[incubation.audio.sample.managment]]
- [[tools.tts]]
- [[ai.audio]]
- [[ai.music]]
- [[concepts.Database Art]]

## Tools

### Promising
- https://colab.research.google.com/github/stevetjoa/musicinformationretrieval.com/blob/gh-pages/nmf_audio_mosaic.ipynb
- https://soundlab.cs.princeton.edu/research/mosievius/
- http://imtr.ircam.fr/imtr/Corpus_Based_Synthesis
  - Max/MSP
  - http://imtr.ircam.fr/imtr/Diemo_Schwarz
- https://github.com/benhackbarth/audioguide
  - OS X only, but promising

#### Audiostellar [8/10]
- https://audiostellar.xyz/
  - https://www.arj.no/tag/sox/
  - CataRT
  - Ableton Link incoming

#### timbreIDLib Pure Data
https://github.com/wbrent/timbreIDLib
> timbreIDLib is a library of audio analysis externals for Pure Data. The classification external [timbreID] accepts arbitrary lists of audio features and attempts to find the best match between an input feature and previously stored training instances. The library can be used for a variety of real-time and non-real-time applications, including sound classification, sound searching, sound visualization, automatic segmenting, ordering of sounds by timbre, key and tempo estimation, and concatenative synthesis.

- https://forum.pdpatchrepo.info/topic/11876/scrambled-hackz-how-did-he-do-it
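The matching step timbreID describes above is the core loop of corpus-based concatenative synthesis in general: describe every corpus unit with a feature vector, describe the target the same way, then pick the nearest corpus unit for each target frame and join the picks. A minimal NumPy sketch of that idea (not timbreID's actual API; the unit length, the two crude descriptors, and the helper names are illustrative choices):

```python
# Minimal sketch of feature-matched concatenative synthesis.
# Assumptions: `target` and `corpus` are mono float arrays at the same sample
# rate; descriptors and unit length are illustrative, not any tool's API.
import numpy as np

def frame(signal, unit_len):
    """Cut a signal into non-overlapping units of unit_len samples."""
    n_units = len(signal) // unit_len
    return signal[:n_units * unit_len].reshape(n_units, unit_len)

def describe(units):
    """One crude descriptor per unit: RMS energy and spectral centroid."""
    mags = np.abs(np.fft.rfft(units * np.hanning(units.shape[1]), axis=1))
    freqs = np.fft.rfftfreq(units.shape[1])
    centroid = (mags * freqs).sum(axis=1) / (mags.sum(axis=1) + 1e-12)
    rms = np.sqrt((units ** 2).mean(axis=1))
    return np.column_stack([rms, centroid])

def mosaic(target, corpus, unit_len=2048):
    """For every target unit, pick the corpus unit with the nearest descriptor,
    then concatenate the picks in target order."""
    t_units, c_units = frame(target, unit_len), frame(corpus, unit_len)
    t_feat, c_feat = describe(t_units), describe(c_units)
    scale = c_feat.std(axis=0) + 1e-12            # normalise feature ranges
    dists = np.linalg.norm((t_feat[:, None] - c_feat[None]) / scale, axis=2)
    picks = c_units[dists.argmin(axis=1)].astype(float)
    fade = np.linspace(0.0, 1.0, 64)
    picks[:, :64] *= fade                          # fade unit edges to soften seams
    picks[:, -64:] *= fade[::-1]
    return picks.reshape(-1)
```

Fed two decoded mono files (loaded with e.g. soundfile or librosa), it returns a crude mosaic: the corpus's grains arranged to follow the target's loudness and brightness.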

#### FluCoMa
https://www.flucoma.org/
> The Fluid Corpus Manipulation project (FluCoMa) instigates new musical ways of exploiting ever-growing banks of sound and gestures within the digital composition process, by bringing breakthroughs of signal decomposition DSP and machine learning to the toolset of techno-fluent computer composers, creative coders and digital artists.

- Available for Pure Data, Max, and SuperCollider
- https://learn.flucoma.org/explore/roma/
- https://learn.flucoma.org/learn/2d-corpus-explorer/
- https://www.youtube.com/watch?v=2YxONrfA6po

#### samplebrain [4/10]
https://gitlab.com/then-try-this/samplebrain

### Unsorted
- https://en.wikipedia.org/wiki/Festival_Speech_Synthesis_System
- https://en.wikipedia.org/wiki/ESpeak
  - https://www.isi.edu/~carte/e-speech/synth/index.html

- https://en.wikipedia.org/wiki/Sinsy
  - > **Sinsy** (**Sin**ging Voice **Sy**nthesis System) (しぃんしぃ) is an online [Hidden Markov model](https://en.wikipedia.org/wiki/Hidden_Markov_model "Hidden Markov model") (HMM)-based singing voice synthesis system by the [Nagoya Institute of Technology](https://en.wikipedia.org/wiki/Nagoya_Institute_of_Technology "Nagoya Institute of Technology") that was created under the [Modified BSD license](https://en.wikipedia.org/wiki/BSD_licenses "BSD licenses").
  - http://www.sinsy.jp/

- https://www.audiolabs-erlangen.de/resources/MIR/2015-ISMIR-LetItBee (NMF-based audio mosaicing; see the sketch at the bottom of this page)
  - http://labrosa.ee.columbia.edu/hamr_ismir2014/proceedings/doku.php?id=audio_mosaicing

- https://www.danieleghisi.com/phd/PHDThesis_20180118.pdf
- https://musicinformationretrieval.com/
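
### NMF mosaicing sketch
The nmf_audio_mosaic notebook and the Let It Bee page linked above both build a mosaic by fixing the NMF templates to the corpus's spectrogram frames and learning only the activations, so the target spectrogram ends up approximated as a mixture of corpus frames. A rough NumPy sketch of just that core update (plain KL multiplicative updates; the published Let It Bee method adds continuity and repetition constraints that are skipped here). `V_target` and `W_corpus` are assumed to be precomputed non-negative magnitude spectrograms, bins × frames:

```python
# Fixed-template NMF for audio mosaicing: learn activations H so that
# W_corpus @ H approximates V_target, i.e. rebuild the target out of corpus frames.
# Assumptions: V_target and W_corpus are non-negative magnitude spectrograms
# (bins x frames) computed elsewhere; resynthesis back to audio is out of scope.
import numpy as np

def mosaic_activations(V_target, W_corpus, n_iter=100, eps=1e-12):
    """Standard KL multiplicative updates with the templates W held fixed."""
    rng = np.random.default_rng(0)
    H = rng.random((W_corpus.shape[1], V_target.shape[1]))
    col_sums = W_corpus.sum(axis=0)[:, None] + eps   # denominator of the update
    for _ in range(n_iter):
        approx = W_corpus @ H + eps
        H *= (W_corpus.T @ (V_target / approx)) / col_sums
    return H

# V_mosaic = W_corpus @ mosaic_activations(V_target, W_corpus)
# then invert V_mosaic to audio, e.g. with the corpus frames' phases or Griffin-Lim.
```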