Show HN: Sonauto – a more controllable AI music creator
59 points by zaptrem | 29 comments on Hacker News.
Hey HN, my cofounder and I trained an AI music generation model, and after a month of testing we're launching 1.0 today. Ours is interesting because it's a latent diffusion model instead of a language model, which makes it more controllable: https://sonauto.ai/

Others do music generation by training a Vector Quantized Variational Autoencoder (VQ-VAE) like Descript Audio Codec ( https://ift.tt/9LM7dVP ) to turn music into tokens, then training an LLM on those tokens. Instead, we ripped the tokenization part off and replaced it with a normal variational autoencoder bottleneck (along with some other important changes to enable insane compression ratios). This gave us a nice, normally distributed latent space on which to train a diffusion transformer (like Sora); there's a rough sketch of this recipe at the end of the post. Our diffusion model is also particularly interesting because it is the first audio diffusion model to generate coherent lyrics!

We like diffusion models for music generation because they have some interesting properties that make controlling them easier (so you can make your own music instead of just taking what the machine gives you). For example, we have a rhythm control mode where you can upload your own percussion line or set a BPM. Very soon you'll also be able to generate proper variations of an uploaded or previously generated song (e.g., you could even sing into Voice Memos for a minute and upload that!).

@Musicians of HN, try uploading your songs and using Rhythm Control, and let us know what you think! Our goal is to enable more of you, not replace you. For example, we turned this drum line ( https://ift.tt/3sSdL4G ) into this full song ( https://ift.tt/PG3TlNg , skip to 1:05 if impatient) or this other song I like better ( https://ift.tt/r4H3zqt , though we accidentally compressed that one with AAC instead of Opus, which hurt the quality).

We also like diffusion models because while they're expensive to train, they're cheap to serve. We built our own efficient inference infrastructure instead of using one of those expensive inference-as-a-service startups that are all the rage. That's why we're making generations on our site free and unlimited for as long as possible.

We'd love to answer your questions. Let us know what you think of our first model! https://sonauto.ai/
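
For anyone curious what that pipeline looks like in code, below is a minimal, purely illustrative PyTorch sketch of the general latent-diffusion recipe described above: a continuous (non-quantized) VAE bottleneck producing latents, and a transformer denoiser trained on them. This is not Sonauto's actual code; the class names, layer sizes, and the simple interpolation noise schedule are assumptions made up for this example.

    # Illustrative sketch only, not Sonauto's code: continuous VAE latents + a transformer denoiser.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class AudioVAE(nn.Module):
        # Toy stand-in for the autoencoder: raw waveform -> continuous (non-quantized) latents.
        def __init__(self, latent_dim=64):
            super().__init__()
            self.encoder = nn.Conv1d(1, 2 * latent_dim, kernel_size=1024, stride=512)

        def encode(self, wav):                                  # wav: (batch, 1, samples)
            mean, logvar = self.encoder(wav).chunk(2, dim=1)    # each: (batch, latent_dim, frames)
            return mean + torch.randn_like(mean) * (0.5 * logvar).exp()  # reparameterization trick

    class LatentDenoiser(nn.Module):
        # Toy diffusion transformer: predicts the noise added to the latent sequence.
        def __init__(self, latent_dim=64, width=256):
            super().__init__()
            self.proj_in = nn.Linear(latent_dim, width)
            layer = nn.TransformerEncoderLayer(d_model=width, nhead=4, batch_first=True)
            self.blocks = nn.TransformerEncoder(layer, num_layers=2)
            self.proj_out = nn.Linear(width, latent_dim)

        def forward(self, noisy_latents, t):                    # noisy_latents: (batch, frames, latent_dim)
            h = self.proj_in(noisy_latents) + t[:, None, None]  # crude timestep conditioning
            return self.proj_out(self.blocks(h))

    vae, denoiser = AudioVAE(), LatentDenoiser()
    wav = torch.randn(2, 1, 512 * 64)                           # fake batch of mono audio
    with torch.no_grad():
        latents = vae.encode(wav).transpose(1, 2)               # (batch, frames, latent_dim)

    t = torch.rand(2)                                           # diffusion timesteps in [0, 1]
    noise = torch.randn_like(latents)
    noisy = (1 - t)[:, None, None] * latents + t[:, None, None] * noise  # simple interpolation schedule
    loss = F.mse_loss(denoiser(noisy, t), noise)                # train the denoiser to predict the noise

A real system would also condition the denoiser on lyrics, text prompts, and rhythm signals, and decode the denoised latents back to a waveform; the sketch only shows why a continuous, roughly Gaussian latent space lets you train a Sora-style diffusion transformer on it directly instead of an LLM on discrete tokens.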