GitHub Repository: tensorflow/docs-l10n
Path: blob/master/site/pt-br/hub/tutorials/tf2_image_retraining.ipynb
Kernel: Python 3

Licensed under the Apache License, Version 2.0 (the "License");

# Copyright 2021 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================

Retraining an image classifier

Introduction

Image classification models have millions of parameters, and training them from scratch requires a lot of labeled data and computing power. Transfer learning is a technique that shortcuts much of this by taking a piece of a model that has already been trained on a related task and reusing it in a new model.

This Colab demonstrates how to build a Keras model that classifies five species of flowers, using a pre-trained TF2 SavedModel from TensorFlow Hub for image feature extraction. That feature extractor was trained on ImageNet, a much larger and more general dataset. Optionally, the feature extractor can be trained ("fine-tuned") alongside the newly added classifier.

Looking for a tool instead?

This is a TensorFlow coding tutorial. If you want a tool that simply builds the TensorFlow or TF Lite model, take a look at the make_image_classifier command-line tool, which is installed by the PIP package tensorflow-hub[make_image_classifier], or at this TF Lite Colab.
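
If you just want to try that tool, here is a minimal sketch (not part of the original tutorial): install it and print its options from a notebook cell via shell escapes; the exact flags it accepts are listed by --help.

# Sketch: install the CLI tool named above and list its usage and flags.
!pip install "tensorflow-hub[make_image_classifier]"
!make_image_classifier --help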

Setup

import itertools
import os

import matplotlib.pylab as plt
import numpy as np

import tensorflow as tf
import tensorflow_hub as hub

print("TF version:", tf.__version__)
print("Hub version:", hub.__version__)
print("GPU is", "available" if tf.config.list_physical_devices('GPU') else "NOT AVAILABLE")

Select the TF2 SavedModel module to use

For starters, use https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/4. The same URL can be used in code to identify the SavedModel and in your browser to show its documentation. (Note that models in TF1 Hub format won't work here.)
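
As a quick illustration (a sketch added here, not part of the original notebook), the handle can be loaded directly with hub.KerasLayer to inspect the shape of the feature vectors it produces; the (1, 1280) shape in the comment is specific to this MobileNet V2 variant and will differ for other models.

import tensorflow as tf
import tensorflow_hub as hub

handle = "https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/4"
feature_extractor = hub.KerasLayer(handle, trainable=False)
dummy_images = tf.zeros([1, 224, 224, 3])   # one 224x224 RGB image with values in [0, 1]
features = feature_extractor(dummy_images)
print(features.shape)                       # (1, 1280) for this MobileNet V2 variant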

You can find more TF2 models that generate image feature vectors here.

There are multiple possible models to try. All you need to do is select a different one in the cell below and follow along with the notebook.

#@title

model_name = "efficientnetv2-xl-21k" # @param ['efficientnetv2-s', 'efficientnetv2-m', 'efficientnetv2-l', 'efficientnetv2-s-21k', 'efficientnetv2-m-21k', 'efficientnetv2-l-21k', 'efficientnetv2-xl-21k', 'efficientnetv2-b0-21k', 'efficientnetv2-b1-21k', 'efficientnetv2-b2-21k', 'efficientnetv2-b3-21k', 'efficientnetv2-s-21k-ft1k', 'efficientnetv2-m-21k-ft1k', 'efficientnetv2-l-21k-ft1k', 'efficientnetv2-xl-21k-ft1k', 'efficientnetv2-b0-21k-ft1k', 'efficientnetv2-b1-21k-ft1k', 'efficientnetv2-b2-21k-ft1k', 'efficientnetv2-b3-21k-ft1k', 'efficientnetv2-b0', 'efficientnetv2-b1', 'efficientnetv2-b2', 'efficientnetv2-b3', 'efficientnet_b0', 'efficientnet_b1', 'efficientnet_b2', 'efficientnet_b3', 'efficientnet_b4', 'efficientnet_b5', 'efficientnet_b6', 'efficientnet_b7', 'bit_s-r50x1', 'inception_v3', 'inception_resnet_v2', 'resnet_v1_50', 'resnet_v1_101', 'resnet_v1_152', 'resnet_v2_50', 'resnet_v2_101', 'resnet_v2_152', 'nasnet_large', 'nasnet_mobile', 'pnasnet_large', 'mobilenet_v2_100_224', 'mobilenet_v2_130_224', 'mobilenet_v2_140_224', 'mobilenet_v3_small_100_224', 'mobilenet_v3_small_075_224', 'mobilenet_v3_large_100_224', 'mobilenet_v3_large_075_224']

model_handle_map = {
  "efficientnetv2-s": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet1k_s/feature_vector/2",
  "efficientnetv2-m": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet1k_m/feature_vector/2",
  "efficientnetv2-l": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet1k_l/feature_vector/2",
  "efficientnetv2-s-21k": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_s/feature_vector/2",
  "efficientnetv2-m-21k": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_m/feature_vector/2",
  "efficientnetv2-l-21k": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_l/feature_vector/2",
  "efficientnetv2-xl-21k": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_xl/feature_vector/2",
  "efficientnetv2-b0-21k": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_b0/feature_vector/2",
  "efficientnetv2-b1-21k": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_b1/feature_vector/2",
  "efficientnetv2-b2-21k": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_b2/feature_vector/2",
  "efficientnetv2-b3-21k": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_b3/feature_vector/2",
  "efficientnetv2-s-21k-ft1k": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_ft1k_s/feature_vector/2",
  "efficientnetv2-m-21k-ft1k": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_ft1k_m/feature_vector/2",
  "efficientnetv2-l-21k-ft1k": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_ft1k_l/feature_vector/2",
  "efficientnetv2-xl-21k-ft1k": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_ft1k_xl/feature_vector/2",
  "efficientnetv2-b0-21k-ft1k": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_ft1k_b0/feature_vector/2",
  "efficientnetv2-b1-21k-ft1k": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_ft1k_b1/feature_vector/2",
  "efficientnetv2-b2-21k-ft1k": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_ft1k_b2/feature_vector/2",
  "efficientnetv2-b3-21k-ft1k": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_ft1k_b3/feature_vector/2",
  "efficientnetv2-b0": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet1k_b0/feature_vector/2",
  "efficientnetv2-b1": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet1k_b1/feature_vector/2",
  "efficientnetv2-b2": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet1k_b2/feature_vector/2",
  "efficientnetv2-b3": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet1k_b3/feature_vector/2",
  "efficientnet_b0": "https://tfhub.dev/tensorflow/efficientnet/b0/feature-vector/1",
  "efficientnet_b1": "https://tfhub.dev/tensorflow/efficientnet/b1/feature-vector/1",
  "efficientnet_b2": "https://tfhub.dev/tensorflow/efficientnet/b2/feature-vector/1",
  "efficientnet_b3": "https://tfhub.dev/tensorflow/efficientnet/b3/feature-vector/1",
  "efficientnet_b4": "https://tfhub.dev/tensorflow/efficientnet/b4/feature-vector/1",
  "efficientnet_b5": "https://tfhub.dev/tensorflow/efficientnet/b5/feature-vector/1",
  "efficientnet_b6": "https://tfhub.dev/tensorflow/efficientnet/b6/feature-vector/1",
  "efficientnet_b7": "https://tfhub.dev/tensorflow/efficientnet/b7/feature-vector/1",
  "bit_s-r50x1": "https://tfhub.dev/google/bit/s-r50x1/1",
  "inception_v3": "https://tfhub.dev/google/imagenet/inception_v3/feature-vector/4",
  "inception_resnet_v2": "https://tfhub.dev/google/imagenet/inception_resnet_v2/feature-vector/4",
  "resnet_v1_50": "https://tfhub.dev/google/imagenet/resnet_v1_50/feature-vector/4",
  "resnet_v1_101": "https://tfhub.dev/google/imagenet/resnet_v1_101/feature-vector/4",
  "resnet_v1_152": "https://tfhub.dev/google/imagenet/resnet_v1_152/feature-vector/4",
  "resnet_v2_50": "https://tfhub.dev/google/imagenet/resnet_v2_50/feature-vector/4",
  "resnet_v2_101": "https://tfhub.dev/google/imagenet/resnet_v2_101/feature-vector/4",
  "resnet_v2_152": "https://tfhub.dev/google/imagenet/resnet_v2_152/feature-vector/4",
  "nasnet_large": "https://tfhub.dev/google/imagenet/nasnet_large/feature_vector/4",
  "nasnet_mobile": "https://tfhub.dev/google/imagenet/nasnet_mobile/feature_vector/4",
  "pnasnet_large": "https://tfhub.dev/google/imagenet/pnasnet_large/feature_vector/4",
  "mobilenet_v2_100_224": "https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/4",
  "mobilenet_v2_130_224": "https://tfhub.dev/google/imagenet/mobilenet_v2_130_224/feature_vector/4",
  "mobilenet_v2_140_224": "https://tfhub.dev/google/imagenet/mobilenet_v2_140_224/feature_vector/4",
  "mobilenet_v3_small_100_224": "https://tfhub.dev/google/imagenet/mobilenet_v3_small_100_224/feature_vector/5",
  "mobilenet_v3_small_075_224": "https://tfhub.dev/google/imagenet/mobilenet_v3_small_075_224/feature_vector/5",
  "mobilenet_v3_large_100_224": "https://tfhub.dev/google/imagenet/mobilenet_v3_large_100_224/feature_vector/5",
  "mobilenet_v3_large_075_224": "https://tfhub.dev/google/imagenet/mobilenet_v3_large_075_224/feature_vector/5",
}

model_image_size_map = {
  "efficientnetv2-s": 384,
  "efficientnetv2-m": 480,
  "efficientnetv2-l": 480,
  "efficientnetv2-b0": 224,
  "efficientnetv2-b1": 240,
  "efficientnetv2-b2": 260,
  "efficientnetv2-b3": 300,
  "efficientnetv2-s-21k": 384,
  "efficientnetv2-m-21k": 480,
  "efficientnetv2-l-21k": 480,
  "efficientnetv2-xl-21k": 512,
  "efficientnetv2-b0-21k": 224,
  "efficientnetv2-b1-21k": 240,
  "efficientnetv2-b2-21k": 260,
  "efficientnetv2-b3-21k": 300,
  "efficientnetv2-s-21k-ft1k": 384,
  "efficientnetv2-m-21k-ft1k": 480,
  "efficientnetv2-l-21k-ft1k": 480,
  "efficientnetv2-xl-21k-ft1k": 512,
  "efficientnetv2-b0-21k-ft1k": 224,
  "efficientnetv2-b1-21k-ft1k": 240,
  "efficientnetv2-b2-21k-ft1k": 260,
  "efficientnetv2-b3-21k-ft1k": 300,
  "efficientnet_b0": 224,
  "efficientnet_b1": 240,
  "efficientnet_b2": 260,
  "efficientnet_b3": 300,
  "efficientnet_b4": 380,
  "efficientnet_b5": 456,
  "efficientnet_b6": 528,
  "efficientnet_b7": 600,
  "inception_v3": 299,
  "inception_resnet_v2": 299,
  "nasnet_large": 331,
  "pnasnet_large": 331,
}

model_handle = model_handle_map.get(model_name)
pixels = model_image_size_map.get(model_name, 224)

print(f"Selected model: {model_name} : {model_handle}")

IMAGE_SIZE = (pixels, pixels)
print(f"Input size {IMAGE_SIZE}")

BATCH_SIZE = 16 #@param {type:"integer"}

Set up the Flowers dataset

Inputs are resized appropriately for the selected module. Dataset augmentation (i.e., random distortions of an image each time it is read) improves training, especially when fine-tuning.

data_dir = tf.keras.utils.get_file(
    'flower_photos',
    'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
    untar=True)
def build_dataset(subset):
  return tf.keras.preprocessing.image_dataset_from_directory(
      data_dir,
      validation_split=.20,
      subset=subset,
      label_mode="categorical",
      # Seed needs to be provided when using validation_split and shuffle = True.
      # A fixed seed is used so that the validation set is stable across runs.
      seed=123,
      image_size=IMAGE_SIZE,
      batch_size=1)

train_ds = build_dataset("training")
class_names = tuple(train_ds.class_names)
train_size = train_ds.cardinality().numpy()
train_ds = train_ds.unbatch().batch(BATCH_SIZE)
train_ds = train_ds.repeat()

normalization_layer = tf.keras.layers.Rescaling(1. / 255)
preprocessing_model = tf.keras.Sequential([normalization_layer])
do_data_augmentation = False #@param {type:"boolean"}
if do_data_augmentation:
  preprocessing_model.add(
      tf.keras.layers.RandomRotation(40))
  preprocessing_model.add(
      tf.keras.layers.RandomTranslation(0, 0.2))
  preprocessing_model.add(
      tf.keras.layers.RandomTranslation(0.2, 0))
  # Like the old tf.keras.preprocessing.image.ImageDataGenerator(),
  # image sizes are fixed when reading, and then a random zoom is applied.
  # If all training inputs are larger than image_size, one could also use
  # RandomCrop with a batch size of 1 and rebatch later.
  preprocessing_model.add(
      tf.keras.layers.RandomZoom(0.2, 0.2))
  preprocessing_model.add(
      tf.keras.layers.RandomFlip(mode="horizontal"))
train_ds = train_ds.map(lambda images, labels:
                        (preprocessing_model(images), labels))

val_ds = build_dataset("validation")
valid_size = val_ds.cardinality().numpy()
val_ds = val_ds.unbatch().batch(BATCH_SIZE)
val_ds = val_ds.map(lambda images, labels:
                    (normalization_layer(images), labels))
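
As an optional sanity check (a sketch, not in the original notebook; it reuses train_ds and the imports from the cells above), you can pull one batch from the pipeline and confirm the shapes and value range before training:

# Sketch: inspect one training batch produced by the input pipeline.
images, labels = next(iter(train_ds))
print(images.shape, labels.shape)   # (BATCH_SIZE, pixels, pixels, 3) and (BATCH_SIZE, 5) for the flowers data
print(float(tf.reduce_min(images)), float(tf.reduce_max(images)))  # roughly within [0, 1] after Rescaling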

Defining the model

All it takes is to put a linear classifier on top of the feature_extractor_layer with the Hub module.

For speed, we start out with a non-trainable feature_extractor_layer, but you can also enable fine-tuning for greater accuracy.

do_fine_tuning = False #@param {type:"boolean"}
print("Building model with", model_handle) model = tf.keras.Sequential([ # Explicitly define the input shape so the model can be properly # loaded by the TFLiteConverter tf.keras.layers.InputLayer(input_shape=IMAGE_SIZE + (3,)), hub.KerasLayer(model_handle, trainable=do_fine_tuning), tf.keras.layers.Dropout(rate=0.2), tf.keras.layers.Dense(len(class_names), kernel_regularizer=tf.keras.regularizers.l2(0.0001)) ]) model.build((None,)+IMAGE_SIZE+(3,)) model.summary()

Training the model

model.compile(
  optimizer=tf.keras.optimizers.SGD(learning_rate=0.005, momentum=0.9),
  loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True, label_smoothing=0.1),
  metrics=['accuracy'])
steps_per_epoch = train_size // BATCH_SIZE
validation_steps = valid_size // BATCH_SIZE
hist = model.fit(
    train_ds,
    epochs=5, steps_per_epoch=steps_per_epoch,
    validation_data=val_ds,
    validation_steps=validation_steps).history
plt.figure()
plt.ylabel("Loss (training and validation)")
plt.xlabel("Training Steps")
plt.ylim([0,2])
plt.plot(hist["loss"])
plt.plot(hist["val_loss"])

plt.figure()
plt.ylabel("Accuracy (training and validation)")
plt.xlabel("Training Steps")
plt.ylim([0,1])
plt.plot(hist["accuracy"])
plt.plot(hist["val_accuracy"])

Try out the model on an image from the validation data:

x, y = next(iter(val_ds))
image = x[0, :, :, :]
true_index = np.argmax(y[0])
plt.imshow(image)
plt.axis('off')
plt.show()

# Expand the validation image to (1, 224, 224, 3) before predicting the label
prediction_scores = model.predict(np.expand_dims(image, axis=0))
predicted_index = np.argmax(prediction_scores)
print("True label: " + class_names[true_index])
print("Predicted label: " + class_names[predicted_index])

Finally, the trained model can be saved for deployment to TF Serving or TF Lite (on mobile devices) as follows:

saved_model_path = f"/tmp/saved_flowers_model_{model_name}" tf.saved_model.save(model, saved_model_path)

Optional: Deployment to TensorFlow Lite

TensorFlow Lite lets you deploy TensorFlow models to mobile and IoT devices. The code below shows how to convert the trained model to TF Lite and apply post-training tools from the TensorFlow Model Optimization Toolkit. Finally, it runs the converted model in the TF Lite Interpreter to examine the resulting quality.

  • Converting without optimization produces the same results as before (up to roundoff error).

  • Converting with optimization but without any data quantizes the model weights to 8 bits, but inference still uses floating-point computation for the neural network activations. This reduces the model size almost by a factor of 4 and improves CPU latency on mobile devices.

  • On top of that, the computation of the neural network activations can be quantized to 8-bit integers as well, if a small reference dataset is provided to calibrate the quantization range. On a mobile device, this accelerates inference further and makes it possible to run on accelerators such as the Edge TPU.

#@title Optimization settings
optimize_lite_model = False  #@param {type:"boolean"}
#@markdown Setting a value greater than zero enables quantization of neural network activations. A few dozen is already a useful amount.
num_calibration_examples = 60  #@param {type:"slider", min:0, max:1000, step:1}
representative_dataset = None
if optimize_lite_model and num_calibration_examples:
  # Use a bounded number of training examples without labels for calibration.
  # TFLiteConverter expects a list of input tensors, each with batch size 1.
  representative_dataset = lambda: itertools.islice(
      ([image[None, ...]] for batch, _ in train_ds for image in batch),
      num_calibration_examples)

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_path)
if optimize_lite_model:
  converter.optimizations = [tf.lite.Optimize.DEFAULT]
  if representative_dataset:  # This is optional, see above.
    converter.representative_dataset = representative_dataset
lite_model_content = converter.convert()

with open(f"/tmp/lite_flowers_model_{model_name}.tflite", "wb") as f:
  f.write(lite_model_content)
print("Wrote %sTFLite model of %d bytes." %
      ("optimized " if optimize_lite_model else "", len(lite_model_content)))
interpreter = tf.lite.Interpreter(model_content=lite_model_content)

# This little helper wraps the TFLite Interpreter as a numpy-to-numpy function.
def lite_model(images):
  interpreter.allocate_tensors()
  interpreter.set_tensor(interpreter.get_input_details()[0]['index'], images)
  interpreter.invoke()
  return interpreter.get_tensor(interpreter.get_output_details()[0]['index'])
#@markdown For rapid experimentation, start with a moderate number of examples.
num_eval_examples = 50  #@param {type:"slider", min:0, max:700}
eval_dataset = ((image, label)  # TFLite expects batch size 1.
                for batch in train_ds
                for (image, label) in zip(*batch))
count = 0
count_lite_tf_agree = 0
count_lite_correct = 0
for image, label in eval_dataset:
  probs_lite = lite_model(image[None, ...])[0]
  probs_tf = model(image[None, ...]).numpy()[0]
  y_lite = np.argmax(probs_lite)
  y_tf = np.argmax(probs_tf)
  y_true = np.argmax(label)
  count += 1
  if y_lite == y_tf: count_lite_tf_agree += 1
  if y_lite == y_true: count_lite_correct += 1
  if count >= num_eval_examples: break
print("TFLite model agrees with original model on %d of %d examples (%g%%)." %
      (count_lite_tf_agree, count, 100.0 * count_lite_tf_agree / count))
print("TFLite model is accurate on %d of %d examples (%g%%)." %
      (count_lite_correct, count, 100.0 * count_lite_correct / count))