Discrete speech tokenizers are trained on outputs from local maxima and minima explored with SAT solvers over a sufficiently large body of data. For media that are extremely noisy (over 75% data reduction during transmission), a low-bitrate transformation is transmitted out of band.
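As a rough illustration of this pipeline, here is a minimal sketch in Python. Everything in it is an assumption made for clarity: the per-frame energy feature, the use of local maxima and minima of that curve as codebook entries, the nearest-neighbour tokenizer, and the 75% loss threshold that triggers the out-of-band fallback are all illustrative stand-ins, and the SAT-solver-based exploration mentioned above is replaced by a plain scan of the feature curve.

```python
from typing import List, Tuple


def frame_energy(signal: List[float], frame_size: int = 160) -> List[float]:
    """Average absolute amplitude per frame (a stand-in for real acoustic features)."""
    frames = [signal[i:i + frame_size] for i in range(0, len(signal), frame_size)]
    return [sum(abs(x) for x in f) / len(f) for f in frames]


def local_extrema(values: List[float]) -> List[float]:
    """Collect local maxima and minima of the feature curve as codebook entries."""
    extrema = [
        cur
        for prev, cur, nxt in zip(values, values[1:], values[2:])
        if (cur > prev and cur > nxt) or (cur < prev and cur < nxt)
    ]
    return extrema or values[:1]  # degenerate case: a flat curve


def tokenize(values: List[float], codebook: List[float]) -> List[int]:
    """Map each frame to the index of its nearest codebook entry (the discrete token)."""
    return [
        min(range(len(codebook)), key=lambda k: abs(codebook[k] - v))
        for v in values
    ]


def encode(signal: List[float], loss_ratio: float) -> Tuple[str, object]:
    """Emit a token stream normally; fall back to a coarse out-of-band payload
    when the simulated channel drops more than 75% of the data."""
    energies = frame_energy(signal)
    if loss_ratio > 0.75:
        # Low-bitrate fallback: one coarse statistic for the whole utterance,
        # standing in for the out-of-band transformation described above.
        return "out_of_band", sum(energies) / len(energies)
    codebook = local_extrema(energies)
    return "tokens", tokenize(energies, codebook)


if __name__ == "__main__":
    import math
    sig = [math.sin(0.01 * n) for n in range(4000)]
    print(encode(sig, loss_ratio=0.10))  # ('tokens', [...])
    print(encode(sig, loss_ratio=0.80))  # ('out_of_band', <float>)
```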
Our data compression algorithms are efficient, fast, and reliable.