TensorSpeech
GitHub Repository: TensorSpeech/TensorFlowTTS
Path: blob/master/notebooks/tacotron2_inference.ipynb
Kernel: Python 3
import yaml
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf

from tensorflow_tts.inference import AutoConfig
from tensorflow_tts.inference import TFAutoModel
from tensorflow_tts.inference import AutoProcessor

import IPython.display as ipd
processor = AutoProcessor.from_pretrained("tensorspeech/tts-tacotron2-ljspeech-en")
input_text = "i love you so much."
input_ids = processor.text_to_sequence(input_text)
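`text_to_sequence` converts the raw string into a list of symbol ids the encoder can embed. A minimal sketch of that kind of mapping, using a hypothetical symbol table (the real LJSpeech processor has its own symbol set and text cleaners):

```python
# Hypothetical symbol table; the real processor's table and cleaners differ.
symbols = ["pad"] + list("abcdefghijklmnopqrstuvwxyz .,!?'")
symbol_to_id = {s: i for i, s in enumerate(symbols)}

def text_to_sequence(text):
    """Map each known character to its id, skipping unknown characters."""
    return [symbol_to_id[ch] for ch in text.lower() if ch in symbol_to_id]

ids = text_to_sequence("i love you so much.")
print(ids)
```

The model only ever sees these integer ids, which is why the same trained checkpoint and processor must be paired at inference time.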
tacotron2 = TFAutoModel.from_pretrained("tensorspeech/tts-tacotron2-ljspeech-en")
tacotron2.setup_window(win_front=6, win_back=6)
tacotron2.setup_maximum_iterations(3000)
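`setup_window` constrains the location-sensitive attention so that each decoder step can only attend to encoder positions near the previous attention peak, which keeps the alignment monotonic; `setup_maximum_iterations` caps the decoder loop. A rough pure-Python sketch of the windowing idea (a hypothetical helper, not the library's internal implementation):

```python
def attention_window(prev_peak, num_encoder_steps, win_back=6, win_front=6):
    """Return the encoder indices the decoder may attend to,
    centred on the previous attention peak and clipped to valid steps."""
    lo = max(0, prev_peak - win_back)
    hi = min(num_encoder_steps, prev_peak + win_front + 1)
    return list(range(lo, hi))

# With the previous peak at encoder step 10 and 50 encoder steps,
# attention is restricted to steps 4..16.
print(attention_window(10, 50))
```

Inside the model the same restriction is applied as a mask on the attention energies rather than as an index list.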

Save to SavedModel (pb)

# Save the model to a SavedModel (pb) and do inference.
# Note that signatures should be a tf.function with input_signature.
tf.saved_model.save(tacotron2, "./test_saved", signatures=tacotron2.inference)
WARNING:tensorflow:Skipping full serialization of Keras layer <tensorflow_tts.models.tacotron2.TFTacotronLocationSensitiveAttention object at 0x7fcd904c7a10>, because it is not built.
WARNING:tensorflow:Model.state_updates and Layer.updates are deprecated and will be removed in a future version; updates are applied automatically in TensorFlow 2.0.
INFO:tensorflow:Assets written to: ./test_saved/assets

Load and Inference

tacotron2 = tf.saved_model.load("./test_saved")
WARNING:tensorflow:Importing a function (__inference_batch_norm_._2_layer_call_and_return_conditional_losses_10690) with ops with custom gradients. Will likely fail if a gradient is requested.
(The same warning repeats for each imported batch-norm function; it is harmless for inference, which never requests gradients.)
input_text = (
    "Unless you work on a ship, it's unlikely that you use the word boatswain in "
    "everyday conversation, so it's understandably a tricky one. The word - which "
    "refers to a petty officer in charge of hull maintenance - is not pronounced "
    "boats-wain. Rather, it's bo-sun, to reflect the salty pronunciation of sailors, "
    "as The Free Dictionary explains."
)
input_ids = processor.text_to_sequence(input_text)
decoder_output, mel_outputs, stop_token_prediction, alignment_history = tacotron2.inference(
    tf.expand_dims(tf.convert_to_tensor(input_ids, dtype=tf.int32), 0),
    tf.convert_to_tensor([len(input_ids)], tf.int32),
    tf.convert_to_tensor([0], dtype=tf.int32),
)
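The three arguments are the batched input ids, the true per-utterance lengths, and the speaker ids (LJSpeech is single-speaker, hence `[0]`). The lengths matter because a batch must be rectangular, so shorter sequences are padded and the model masks the padding. A pure-Python sketch of that batching step (a hypothetical helper, not the library's API):

```python
def pad_batch(sequences, pad_id=0):
    """Pad variable-length id sequences to a rectangular batch,
    keeping the true lengths so a model can mask the padding."""
    lengths = [len(seq) for seq in sequences]
    max_len = max(lengths)
    padded = [seq + [pad_id] * (max_len - len(seq)) for seq in sequences]
    return padded, lengths

batch, lengths = pad_batch([[5, 3, 9], [7, 1]])
print(batch)    # [[5, 3, 9], [7, 1, 0]]
print(lengths)  # [3, 2]
```

With a batch of one, as here, the "padded" batch is just the sequence with a leading batch axis added by `tf.expand_dims`.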
fig = plt.figure(figsize=(8, 6))
ax = fig.add_subplot(111)
ax.set_title('Alignment steps')
im = ax.imshow(
    alignment_history[0].numpy(),
    aspect='auto',
    origin='lower',
    interpolation='none')
fig.colorbar(im, ax=ax)
plt.xlabel('Decoder timestep')
plt.ylabel('Encoder timestep')
plt.tight_layout()
plt.show()
plt.close()
Image in a Jupyter notebook
mel_outputs = tf.reshape(mel_outputs, [-1, 80]).numpy()
fig = plt.figure(figsize=(10, 8))
ax1 = fig.add_subplot(311)
ax1.set_title('Predicted Mel-after-Spectrogram')
im = ax1.imshow(np.rot90(mel_outputs), aspect='auto', interpolation='none')
fig.colorbar(mappable=im, shrink=0.65, orientation='horizontal', ax=ax1)
plt.show()
plt.close()
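The decoder emits mel frames with a leading batch axis, so `tf.reshape(mel_outputs, [-1, 80])` simply collapses `[1, frames, 80]` into `[frames, 80]` before plotting. A tiny pure-Python stand-in for that reshape:

```python
# Mel output shaped [batch=1, frames=3, n_mels=2] (a tiny stand-in for [1, T, 80]).
mel = [[[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]]

# Reshaping to [-1, n_mels] concatenates the frames of every batch item:
flat = [frame for utterance in mel for frame in utterance]
print(flat)  # [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]
```

With batch size one this is lossless; for larger batches the frames of different utterances would be concatenated, so per-utterance plotting would need to slice by the known lengths first.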
Image in a Jupyter notebook

Run inference on another input to check dynamic shapes

input_text = (
    "The Commission further recommends that the Secret Service coordinate its "
    "planning as closely as possible with all of the Federal agencies from which "
    "it receives information."
)
input_ids = processor.text_to_sequence(input_text)
decoder_output, mel_outputs, stop_token_prediction, alignment_history = tacotron2.inference(
    tf.expand_dims(tf.convert_to_tensor(input_ids, dtype=tf.int32), 0),
    tf.convert_to_tensor([len(input_ids)], tf.int32),
    tf.convert_to_tensor([0], dtype=tf.int32),
)
fig = plt.figure(figsize=(8, 6))
ax = fig.add_subplot(111)
ax.set_title('Alignment steps')
im = ax.imshow(
    alignment_history[0].numpy(),
    aspect='auto',
    origin='lower',
    interpolation='none')
fig.colorbar(im, ax=ax)
plt.xlabel('Decoder timestep')
plt.ylabel('Encoder timestep')
plt.tight_layout()
plt.show()
plt.close()
Image in a Jupyter notebook
mel_outputs = tf.reshape(mel_outputs, [-1, 80]).numpy()
fig = plt.figure(figsize=(10, 8))
ax1 = fig.add_subplot(311)
ax1.set_title('Predicted Mel-after-Spectrogram')
im = ax1.imshow(np.rot90(mel_outputs), aspect='auto', interpolation='none')
fig.colorbar(mappable=im, shrink=0.65, orientation='horizontal', ax=ax1)
plt.show()
plt.close()
Image in a Jupyter notebook