Thank you for visiting. I am Jongmoon Ryu, a graduate student in the Speech and Audio Processing Lab (SAPL) at the EECS of the Gwangju Institute of Science and Technology (GIST). I am currently studying deep learning-based text-to-speech (TTS) synthesis and have a keen interest in furthering my expertise in this area. If you would like to know more about me, please refer to my CV, Blog #1, and Blog #2.
- GIST
- Gwangju, South Korea (UTC +09:00)
- Blog: https://killerwhale0917.tistory.com/
Pinned Repositories
- TransformerTTS: Unofficial PyTorch implementation of Transformer-TTS, a Transformer-based neural speech synthesis model.
- Spectrogram-VQ: Unofficial implementation of Spectrogram VQ from the DCTTS paper; vector quantization of mel-spectrograms for a discrete speech representation (Jupyter Notebook). A minimal sketch of the frame-level quantization idea follows this list.
- fine-grained-emotional-control-of-tts: Unofficial implementation of "Fine-grained Emotional Control of TTS" (ICASSP 2023), which combines a rank-based intensity model with FastSpeech2 to synthesize speech with controllable emotion intensity. A conditioning sketch also follows this list.
- boostcampaitech3/final-project-level3-recsys-07: [RecSys] NAVER Boostcamp AI Tech (3rd cohort) final project: a fashion item recommendation service based on the clothes a user already owns.
- boostcampaitech3/level2-dkt-level2-recsys-07: [RecSys] NAVER Boostcamp AI Tech (3rd cohort) Deep Knowledge Tracing (DKT) competition, 2nd place.
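To give a rough idea of what the Spectrogram-VQ project refers to, below is a minimal PyTorch sketch of vector-quantizing mel-spectrogram frames. This is not the repository's code: the FrameVQ module name, the codebook size, and the VQ-VAE-style codebook/commitment losses are illustrative assumptions; only the general technique (map each frame to its nearest codebook vector and pass gradients through with a straight-through estimator) matches the description above.

```python
# Minimal sketch (assumptions: module name, codebook size, VQ-VAE-style losses).
import torch
import torch.nn as nn
import torch.nn.functional as F

class FrameVQ(nn.Module):
    """Quantize each mel-spectrogram frame to its nearest codebook vector."""

    def __init__(self, num_codes: int = 512, n_mels: int = 80, beta: float = 0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, n_mels)
        self.codebook.weight.data.uniform_(-1.0 / num_codes, 1.0 / num_codes)
        self.beta = beta  # weight of the commitment term

    def forward(self, mel):                       # mel: (batch, frames, n_mels)
        flat = mel.reshape(-1, mel.size(-1))      # (batch * frames, n_mels)
        # Squared L2 distance from every frame to every codebook entry.
        dist = (flat.pow(2).sum(1, keepdim=True)
                - 2 * flat @ self.codebook.weight.t()
                + self.codebook.weight.pow(2).sum(1))
        indices = dist.argmin(dim=1)              # one discrete code per frame
        quantized = self.codebook(indices).view_as(mel)
        # Codebook loss + commitment loss, as in VQ-VAE.
        vq_loss = (F.mse_loss(quantized, mel.detach())
                   + self.beta * F.mse_loss(mel, quantized.detach()))
        # Straight-through estimator: gradients flow back to the encoder.
        quantized = mel + (quantized - mel).detach()
        return quantized, indices.view(mel.shape[:-1]), vq_loss

# Example: 2 utterances, 100 frames, 80 mel bins.
if __name__ == "__main__":
    vq = FrameVQ()
    mel = torch.randn(2, 100, 80)
    quantized, codes, loss = vq(mel)
    print(quantized.shape, codes.shape, loss.item())
```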
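Likewise, the fine-grained-emotional-control-of-tts description boils down to conditioning a FastSpeech2-style acoustic model on an emotion label plus a scalar intensity produced by a rank-based model. The sketch below is a hypothetical illustration, not the paper's or the repository's implementation: EmotionIntensityConditioner, its embedding table, and the choice of adding an intensity-scaled emotion vector to the encoder output are assumptions made purely for the example.

```python
# Hypothetical conditioning module (names and design are assumptions, not the paper's).
import torch
import torch.nn as nn

class EmotionIntensityConditioner(nn.Module):
    """Add an intensity-scaled emotion embedding to encoder outputs."""

    def __init__(self, num_emotions: int = 5, hidden_dim: int = 256):
        super().__init__()
        self.emotion_table = nn.Embedding(num_emotions, hidden_dim)
        self.proj = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, encoder_out, emotion_id, intensity):
        # encoder_out: (batch, time, hidden_dim) phoneme-level encoder states
        # emotion_id:  (batch,) integer emotion labels
        # intensity:   (batch,) scalars in [0, 1], e.g. from a rank-based model
        emo = self.proj(self.emotion_table(emotion_id))   # (batch, hidden_dim)
        emo = emo * intensity.unsqueeze(-1)               # stronger emotion -> larger shift
        return encoder_out + emo.unsqueeze(1)             # broadcast over the time axis

# Example: condition a dummy encoder output on emotion 2 at intensity 0.7.
if __name__ == "__main__":
    cond = EmotionIntensityConditioner()
    enc = torch.randn(1, 50, 256)
    out = cond(enc, torch.tensor([2]), torch.tensor([0.7]))
    print(out.shape)
```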



