Mirror of https://git.sdbs.cz/sdbs/digital-garden-anabasis.git (synced 2025-01-22 19:15:46 +01:00)

Automatic update, changed: concepts.Database Art.md, incubation.audio.synthesis.concatenative.md
parent 1e7b067476
commit d4e74582bb

2 changed files with 31 additions and 20 deletions
concepts.Database Art.md
@@ -12,6 +12,7 @@ the world appears to us as an endless and unstructured collection of images, tex
 --------------------
 - [[people.DzigaVertov]]
+- [[people.LevManovich]]
 - [[concepts.Algorithmic Editing]]
 - [[areas.filetag]]
 - [[concepts.archives.art]]

incubation.audio.synthesis.concatenative.md
@@ -7,28 +7,12 @@
 ## Adjacent
 - [[incubation.audio.sample.managment]]
 - [[tools.tts]]
+- [[ai.audio]]
+- [[ai.music]]
+- [[concepts.Database Art]]
 ## Tools
-### Unsorted
-- https://en.wikipedia.org/wiki/Festival_Speech_Synthesis_System
-- https://en.wikipedia.org/wiki/ESpeak
-- https://www.isi.edu/~carte/e-speech/synth/index.html
-- https://en.wikipedia.org/wiki/Sinsy
-- > **Sinsy** (**Sin**ging Voice **Sy**nthesis System) (しぃんしぃ) is an online [Hidden Markov model](https://en.wikipedia.org/wiki/Hidden_Markov_model "Hidden Markov model") (HMM)-based singing voice synthesis system by the [Nagoya Institute of Technology](https://en.wikipedia.org/wiki/Nagoya_Institute_of_Technology "Nagoya Institute of Technology") that was created under the [Modified BSD license](https://en.wikipedia.org/wiki/BSD_licenses "BSD licenses").
-- http://www.sinsy.jp/
-- https://www.audiolabs-erlangen.de/resources/MIR/2015-ISMIR-LetItBee
-- http://labrosa.ee.columbia.edu/hamr_ismir2014/proceedings/doku.php?id=audio_mosaicing
-- https://www.danieleghisi.com/phd/PHDThesis_20180118.pdf
-- https://musicinformationretrieval.com/
 ### Promising
 - https://colab.research.google.com/github/stevetjoa/musicinformationretrieval.com/blob/gh-pages/nmf_audio_mosaic.ipynb

@@ -51,4 +35,30 @@
 https://github.com/wbrent/timbreIDLib
 >timbreIDLib is a library of audio analysis externals for Pure Data. The classification external [timbreID] accepts arbitrary lists of audio features and attempts to find the best match between an input feature and previously stored training instances. The library can be used for a variety of real-time and non-real-time applications, including sound classification, sound searching, sound visualization, automatic segmenting, ordering of sounds by timbre, key and tempo estimation, and concatenative synthesis.
 - https://forum.pdpatchrepo.info/topic/11876/scrambled-hackz-how-did-he-do-it
+
+#### flucoma
+https://www.flucoma.org/
+>The Fluid Corpus Manipulation project (FluCoMa) instigates new musical ways of exploiting ever-growing banks of sound and gestures within the digital composition process, by bringing breakthroughs of signal decomposition DSP and machine learning to the toolset of techno-fluent computer composers, creative coders and digital artists.
+- https://learn.flucoma.org/explore/roma/
+- https://learn.flucoma.org/learn/2d-corpus-explorer/
+
+### Unsorted
+- https://en.wikipedia.org/wiki/Festival_Speech_Synthesis_System
+- https://en.wikipedia.org/wiki/ESpeak
+- https://www.isi.edu/~carte/e-speech/synth/index.html
+- https://en.wikipedia.org/wiki/Sinsy
+- > **Sinsy** (**Sin**ging Voice **Sy**nthesis System) (しぃんしぃ) is an online [Hidden Markov model](https://en.wikipedia.org/wiki/Hidden_Markov_model "Hidden Markov model") (HMM)-based singing voice synthesis system by the [Nagoya Institute of Technology](https://en.wikipedia.org/wiki/Nagoya_Institute_of_Technology "Nagoya Institute of Technology") that was created under the [Modified BSD license](https://en.wikipedia.org/wiki/BSD_licenses "BSD licenses").
+- http://www.sinsy.jp/
+- https://www.audiolabs-erlangen.de/resources/MIR/2015-ISMIR-LetItBee
+- http://labrosa.ee.columbia.edu/hamr_ismir2014/proceedings/doku.php?id=audio_mosaicing
+- https://www.danieleghisi.com/phd/PHDThesis_20180118.pdf
+- https://musicinformationretrieval.com/
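The timbreID description quoted in the hunk above amounts to nearest-neighbour matching of audio-feature vectors against a stored training set, which is also the core of its concatenative-synthesis use. A minimal Python sketch of that idea (librosa + scikit-learn rather than the Pure Data external; file names, the fixed grain size, and MFCC-only features are placeholder assumptions, not timbreID's actual feature set):

```python
# Sketch of the timbreID idea: store a feature vector per corpus grain,
# then answer "which stored grain sounds most like this input grain?"
import numpy as np
import librosa
import soundfile as sf
from sklearn.neighbors import NearestNeighbors

SR = 44100
GRAIN = 4096  # samples per grain (placeholder; timbreID segments more flexibly)

def grain_features(y, sr):
    """One MFCC-mean feature vector per fixed-size grain."""
    feats, grains = [], []
    for start in range(0, len(y) - GRAIN, GRAIN):
        g = y[start:start + GRAIN]
        feats.append(librosa.feature.mfcc(y=g, sr=sr, n_mfcc=13).mean(axis=1))
        grains.append(g)
    return np.array(feats), grains

# "Training": analyse a corpus file and index its grains by timbre.
corpus, _ = librosa.load("corpus.wav", sr=SR, mono=True)   # placeholder path
corpus_feats, corpus_grains = grain_features(corpus, SR)
index = NearestNeighbors(n_neighbors=1).fit(corpus_feats)

# Concatenative resynthesis: for each grain of a target, pick the
# closest-sounding corpus grain and concatenate the matches.
target, _ = librosa.load("target.wav", sr=SR, mono=True)   # placeholder path
target_feats, _ = grain_features(target, SR)
_, nearest = index.kneighbors(target_feats)
mosaic = np.concatenate([corpus_grains[i] for i in nearest[:, 0]])
sf.write("mosaic.wav", mosaic, SR)
```

A real patch would add windowing/crossfades between grains and richer features (spectral centroid, flux, bark bands), but the lookup structure is the same.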
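FluCoMa's 2D corpus explorer (the learn.flucoma.org link added above) describes every slice of a corpus with audio features and projects those descriptors to two dimensions so that similar-sounding slices land near each other. A rough Python analogue of that pipeline, not FluCoMa's implementation (which runs its own slicers, descriptors and UMAP inside Max/SuperCollider/Pd); the file name and parameters are placeholders:

```python
# Rough analogue of a FluCoMa-style 2D corpus explorer:
# slice -> describe each slice -> reduce descriptors to 2D -> plot/browse.
import numpy as np
import librosa
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

y, sr = librosa.load("corpus.wav", sr=None, mono=True)  # placeholder path

# 1. Slice: onset-based segmentation (FluCoMa offers several slicers).
onsets = librosa.onset.onset_detect(y=y, sr=sr, units="samples", backtrack=True)
bounds = np.concatenate([onsets, [len(y)]])
slices = [y[a:b] for a, b in zip(bounds[:-1], bounds[1:]) if b - a > 1024]

# 2. Describe: one MFCC-mean vector per slice.
desc = np.array([
    librosa.feature.mfcc(y=s, sr=sr, n_mfcc=13).mean(axis=1) for s in slices
])

# 3. Reduce to 2D (PCA here; the FluCoMa tutorial uses UMAP).
xy = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(desc))

# 4. Browse: each point is one slice; a real explorer plays it on hover/click.
plt.scatter(xy[:, 0], xy[:, 1], s=10)
plt.title("corpus slices by timbre (2D projection)")
plt.show()
```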
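The nmf_audio_mosaic notebook under "Promising" and the "Let It Bee" page both point at NMF-flavoured mosaicing: the corpus spectrogram supplies a fixed dictionary of templates W, and only the activations H are learned so that W·H approximates the target spectrogram. A compressed sketch of the basic fixed-dictionary variant (plain multiplicative updates for the Euclidean objective; the paper's continuity constraints on H are omitted, and file names are placeholders):

```python
# NMF audio mosaicing in miniature: W is fixed to corpus spectrogram frames,
# H is learned so that W @ H approximates the target's magnitude spectrogram,
# then W @ H is resynthesised with the target's phase.
import numpy as np
import librosa
import soundfile as sf

sr = 22050
corpus, _ = librosa.load("corpus.wav", sr=sr)   # placeholder path
target, _ = librosa.load("target.wav", sr=sr)   # placeholder path

W = np.abs(librosa.stft(corpus))          # fixed templates: corpus frames
V = np.abs(librosa.stft(target))          # target magnitude to approximate
H = np.random.rand(W.shape[1], V.shape[1])

# Multiplicative updates for H only (W stays fixed).
for _ in range(100):
    H *= (W.T @ V) / (W.T @ W @ H + 1e-9)

# Resynthesis: mosaic magnitude with the target's phase, inverse STFT.
phase = np.angle(librosa.stft(target))
mosaic = librosa.istft((W @ H) * np.exp(1j * phase))
sf.write("mosaic_nmf.wav", mosaic, sr)
```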