Chirp is Google Cloud's 2B-parameter speech model built via self-supervised training on millions of hours of audio and 28 billion sentences of text spanning 100+ languages. Chirp delivers 98% speech ...
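Concretely, Chirp is reached through the Speech-to-Text v2 API by setting the model field to "chirp" in the recognition config. The sketch below is not taken from this project: the project ID, region, regional endpoint form, and the "_" ad-hoc recognizer name are placeholders standing in for whatever the real setup uses, and the access token is assumed to come from credentials obtained elsewhere.

```ts
// A minimal sketch (not this project's code) of a Speech-to-Text v2 request
// that selects Chirp. PROJECT_ID and REGION are placeholders; "_" asks the
// API to use an ad-hoc recognizer configured entirely by this request.
const PROJECT_ID = 'my-gcp-project';
const REGION = 'us-central1'; // Chirp is served from specific regions

export async function transcribeWithChirp(
  audio: Uint8Array,
  accessToken: string
): Promise<string> {
  const url =
    `https://${REGION}-speech.googleapis.com/v2/projects/${PROJECT_ID}` +
    `/locations/${REGION}/recognizers/_:recognize`;

  const res = await fetch(url, {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${accessToken}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      config: {
        model: 'chirp',            // pick the Chirp model
        languageCodes: ['en-US'],
        autoDecodingConfig: {},    // let the API detect the audio encoding
      },
      // Inline the audio as base64 (Node's Buffer; assumes a server runtime).
      content: Buffer.from(audio).toString('base64'),
    }),
  });

  if (!res.ok) {
    throw new Error(`Speech-to-Text error ${res.status}: ${await res.text()}`);
  }

  const data = await res.json();
  // Join the top alternative of each result into a single transcript.
  return (data.results ?? [])
    .map((r: any) => r.alternatives?.[0]?.transcript ?? '')
    .join(' ')
    .trim();
}
```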
What's different about this implementation? I tried to keep this project as similar as possible to the Chirp example, but there are a few key areas of difference:
- Using SvelteKit instead of Next.js ...
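Since SvelteKit replaces Next.js API routes with +server.ts endpoints, the Chirp call would live in one of those server files. Below is a minimal sketch of what such an endpoint might look like; the route path, the $lib/chirp helper (the function sketched above), and the use of google-auth-library for credentials are assumptions, not this project's actual code.

```ts
// Hypothetical src/routes/api/transcribe/+server.ts
import { json } from '@sveltejs/kit';
import type { RequestHandler } from './$types';
import { GoogleAuth } from 'google-auth-library';
// The helper sketched above, assumed to live in src/lib/chirp.ts.
import { transcribeWithChirp } from '$lib/chirp';

// +server.ts files only run on the server, so credentials never reach
// the browser bundle. Application Default Credentials are assumed here.
const auth = new GoogleAuth({
  scopes: ['https://www.googleapis.com/auth/cloud-platform'],
});

export const POST: RequestHandler = async ({ request }) => {
  // The client is assumed to POST raw audio bytes (e.g. a recorded blob).
  const audio = new Uint8Array(await request.arrayBuffer());

  const token = await auth.getAccessToken();
  if (!token) {
    return json({ error: 'Could not obtain an access token' }, { status: 500 });
  }

  const transcript = await transcribeWithChirp(audio, token);
  return json({ transcript });
};
```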