Path: blob/master/notebooks/TensorFlowTTS_FastSpeech_with_TFLite.ipynb
Copyright 2020 The TensorFlow Authors. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
Authors: jaeyoo@, khanhlvg@, abattery@, thaink@ (Google Research) (refactored by sayakpaul (PyImageSearch))
Created: 2020-07-03 KST
Last updated: 2020-07-04 KST
Change logs
- 2020-12-22 IST: Notebook runs end-to-end on Colab.
- 2020-07-04 KST: Updated notebook to the latest repo; https://github.com/TensorSpeech/TensorflowTTS/pull/84 merged.
- 2020-07-03 KST: First implementation (outputs: fastspeech_quant.tflite). Supports variable-length input and output tensors; inference with TFLite works well.
Status: successfully converted (fastspeech_quant.tflite)
Disclaimer
This Colab does not focus on latency; it simply compresses the model with quantization (112 MB -> 28 MB).
The TFLite file does not include LJSpeechProcessor, so you need to run the text preprocessing yourself to produce the input vectors before inference.
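Since the exported TFLite model expects integer ID sequences rather than raw text, the preprocessing step can be sketched as follows. The symbol table below is a hypothetical stand-in for the one LJSpeechProcessor actually uses (via its text-to-sequence mapping); only the shape of the transformation is illustrative.

```python
# Minimal sketch of the text preprocessing the TFLite model expects.
# The real symbol table comes from TensorFlowTTS's LJSpeechProcessor;
# the mapping below is a hypothetical stand-in for illustration only.

SYMBOLS = ["pad"] + list("abcdefghijklmnopqrstuvwxyz !',.?")  # hypothetical
SYMBOL_TO_ID = {s: i for i, s in enumerate(SYMBOLS)}

def text_to_sequence(text):
    """Map normalized text to integer IDs, skipping unknown symbols."""
    return [SYMBOL_TO_ID[ch] for ch in text.lower() if ch in SYMBOL_TO_ID]

input_ids = text_to_sequence("Hello world.")
print(input_ids)
```

The resulting ID list (batched to shape [1, len(input_ids)]) is what gets fed to the TFLite interpreter's variable-length input tensor.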
Requirement: tf-nightly>=2.4.0-dev20200630
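The requirement above can be installed in the Colab runtime with pip, for example:

```shell
# Install the nightly TensorFlow build the notebook was tested against.
pip install "tf-nightly>=2.4.0-dev20200630"
```

A runtime restart is typically needed after installing a new TensorFlow build.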
Generate voice with FastSpeech
Another runtime restart is required.