GitHub Repository: amanchadha/coursera-natural-language-processing-specialization
Path: blob/master/4 - Natural Language Processing with Attention Models/Week 1/output_dir/train/events.out.tfevents.1600864780.521a100fa402
[Binary TFEvents payload omitted. Recoverable records: the file-version header `brain.Event:2`, a scalar summary tagged `metrics/CrossEntropyLoss`, and a `gin_config` text summary whose contents are reproduced below.]

#### Parameters for Adam:

    Adam.b1 = 0.9
    Adam.b2 = 0.999
    Adam.clip_grad_norm = None
    Adam.eps = 1e-05
    Adam.weight_decay_rate = 1e-05
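
The effect of these settings can be illustrated with a minimal pure-Python sketch of one Adam update step. The run itself uses Trax's Adam on the JAX backend; this is an illustrative re-derivation, and the learning rate `lr` is a hypothetical stand-in, since the config delegates it to the `warmup_and_rsqrt_decay` schedule further down.

```python
def adam_step(param, grad, m, v, step,
              lr=1e-3, b1=0.9, b2=0.999, eps=1e-5,
              weight_decay_rate=1e-5):
    """One Adam update on a scalar parameter; returns (param, m, v).

    b1, b2, eps, and weight_decay_rate match the gin config above;
    lr is a hypothetical stand-in for the scheduled learning rate.
    """
    m = b1 * m + (1.0 - b1) * grad            # first-moment EMA of the gradient
    v = b2 * v + (1.0 - b2) * grad * grad     # second-moment EMA
    m_hat = m / (1.0 - b1 ** step)            # bias correction for zero init
    v_hat = v / (1.0 - b2 ** step)
    update = lr * m_hat / (v_hat ** 0.5 + eps)
    # Multiplicative weight decay applied alongside the Adam update.
    param = (1.0 - weight_decay_rate) * param - update
    return param, m, v

# One step from zero-initialized moment estimates:
p, m, v = adam_step(param=1.0, grad=0.5, m=0.0, v=0.0, step=1)
```

With `clip_grad_norm = None`, no gradient clipping precedes this update.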
    
#### Parameters for AddLossWeights:

    # None.
    
#### Parameters for backend:

    backend.name = 'jax'
    
#### Parameters for BucketByLength:

    BucketByLength.length_axis = 0
    BucketByLength.strict_pad_on_len = False
    
#### Parameters for FilterByLength:

    FilterByLength.length_axis = 0
    
#### Parameters for LogSoftmax:

    LogSoftmax.axis = -1
    
#### Parameters for random_spans_helper:

    # None.
    
#### Parameters for SentencePieceVocabulary:

    # None.
    
#### Parameters for data.TFDS:

    # None.
    
#### Parameters for tf_inputs.TFDS:

    # None.
    
#### Parameters for data.Tokenize:

    # None.
    
#### Parameters for tf_inputs.Tokenize:

    tf_inputs.Tokenize.keys = None
    tf_inputs.Tokenize.n_reserved_ids = 0
    tf_inputs.Tokenize.vocab_type = 'subword'
    
#### Parameters for Vocabulary:

    # None.
    
#### Parameters for warmup_and_rsqrt_decay:

    # None.
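
The gin file pins no parameters for `warmup_and_rsqrt_decay`, but the shape of this schedule is standard: a linear warmup to a peak value, then decay proportional to the inverse square root of the step. A hedged sketch, with hypothetical `n_warmup_steps` and `max_value` defaults (Trax's exact off-by-one and peak conventions may differ):

```python
import math

def warmup_and_rsqrt_decay(step, n_warmup_steps=1000, max_value=1e-3):
    """Learning rate at `step`: linear warmup, then 1/sqrt(step) decay.

    Parameter values are hypothetical; the event file's gin config
    pins none for this schedule.
    """
    step = max(step, 1)                       # avoid division by zero
    warmup = step / n_warmup_steps            # linear ramp toward 1.0
    decay = math.sqrt(n_warmup_steps / step)  # rsqrt decay after the peak
    return max_value * min(warmup, decay)
```

The two branches meet at `step == n_warmup_steps`, where the rate peaks at `max_value`; the `training/learning_rate` scalars logged later in this file trace exactly such a curve.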

[Remaining binary TFEvents payload omitted. Recoverable scalar tags, each recorded at successive training steps: metrics/CrossEntropyLoss, training/learning_rate, training/steps per second, training/gradients_l2, training/loss, training/weights_l2.]