
# The ModelValidator TFX Pipeline Component (Deprecated)

ModelValidator was used to check whether a model is good enough to be used in production. We still think this validation is useful, but since the model Evaluator already computes all of the metrics you would want to validate against, we decided to merge the two so that you don't have to duplicate the computation.

Although ModelValidator is deprecated and we do not recommend using it, if you need to maintain an existing ModelValidator component, an example configuration is as follows:

```python
import tfx
import tensorflow_model_analysis as tfma
from tfx.components.model_validator.component import ModelValidator

...

model_validator = ModelValidator(
    examples=example_gen.outputs['output_data'],
    model=trainer.outputs['model'])
```
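Downstream, the ModelValidator's `blessing` output was typically what gated a Pusher, so that only validated models reached serving. A minimal sketch of that wiring, assuming the `trainer` from the elided pipeline code above and a hypothetical `serving_model_dir` path:

```python
from tfx.components import Pusher
from tfx.proto import pusher_pb2

# Push the trained model only if ModelValidator blessed it.
pusher = Pusher(
    model=trainer.outputs['model'],
    model_blessing=model_validator.outputs['blessing'],
    push_destination=pusher_pb2.PushDestination(
        filesystem=pusher_pb2.PushDestination.Filesystem(
            base_directory=serving_model_dir)))  # hypothetical destination
```

After migrating, the only change to this wiring is the source of the blessing: `model_blessing=model_analyzer.outputs['blessing']`, where `model_analyzer` is the Evaluator shown below.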

For users who want to migrate this configuration to the Evaluator, the equivalent Evaluator configuration looks like this:

```python
from tfx import components
import tensorflow_model_analysis as tfma

...

eval_config = tfma.EvalConfig(
    model_specs=[
        # This assumes a serving model with signature 'serving_default'. If
        # using estimator based EvalSavedModel, add signature_name: 'eval' and
        # remove the label_key.
        tfma.ModelSpec(label_key='<label_key>')
    ],
    metrics_specs=[
        tfma.MetricsSpec(
            # The metrics added here are in addition to those saved with the
            # model (assuming either a keras model or EvalSavedModel is used).
            # Any metrics added into the saved model (for example using
            # model.compile(..., metrics=[...]), etc) will be computed
            # automatically.
            metrics=[
                tfma.MetricConfig(class_name='ExampleCount'),
                tfma.MetricConfig(
                    class_name='BinaryAccuracy',
                    threshold=tfma.MetricThreshold(
                        value_threshold=tfma.GenericValueThreshold(
                            lower_bound={'value': 0.5}),
                        change_threshold=tfma.GenericChangeThreshold(
                            direction=tfma.MetricDirection.HIGHER_IS_BETTER,
                            absolute={'value': -1e-10})))
            ]
        )
    ],
    slicing_specs=[
        # An empty slice spec means the overall slice, i.e. the whole dataset.
        tfma.SlicingSpec(),
        # Data can be sliced along a feature column. In this case, data is
        # sliced along feature column trip_start_hour.
        tfma.SlicingSpec(feature_keys=['trip_start_hour'])
    ])

model_resolver = Resolver(
    strategy_class=latest_blessed_model_resolver.LatestBlessedModelResolver,
    model=Channel(type=Model),
    model_blessing=Channel(type=ModelBlessing)
).with_id('latest_blessed_model_resolver')

model_analyzer = components.Evaluator(
    examples=examples_gen.outputs['examples'],
    model=trainer.outputs['model'],
    baseline_model=model_resolver.outputs['model'],
    # Change threshold will be ignored if there is no baseline (first run).
    eval_config=eval_config)
```
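Once the Evaluator has run, you can verify whether the configured thresholds passed by loading its output with TFMA. A minimal sketch, assuming a hypothetical `evaluation_uri` that points at the resolved URI of the Evaluator's `evaluation` output artifact after a pipeline run:

```python
import tensorflow_model_analysis as tfma

# `evaluation_uri` is a hypothetical path to the directory backing
# model_analyzer.outputs['evaluation'] once the pipeline has run.
validation_result = tfma.load_validation_result(evaluation_uri)
print('Model blessed:', validation_result.validation_ok)
```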