Path: blob/master/translate_cache/neox/utils/trainer.zh.json
{1"<h3>Get trainable parameters</h3>\n<ul><li><span translate=no>_^_0_^_</span> is the model to train </li>\n<p><em>Returns</em> a list of parameters for training</p></ul>\n": "<h3>\u83b7\u53d6\u53ef\u8bad\u7ec3\u7684\u53c2\u6570</h3>\n<ul><li><span translate=no>_^_0_^_</span>\u662f\u8981\u8bad\u7ec3\u7684\u6a21\u578b</li>\n<p><em>\u8fd4\u56de</em>\u8bad\u7ec3\u7684\u53c2\u6570\u5217\u8868</p></ul>\n",2"<p> </p>\n": "<p></p>\n",3"<p>Backward pass </p>\n": "<p>\u5411\u540e\u4f20\u7403</p>\n",4"<p>Calculate accuracy </p>\n": "<p>\u8ba1\u7b97\u7cbe\u5ea6</p>\n",5"<p>Calculate loss </p>\n": "<p>\u8ba1\u7b97\u635f\u5931</p>\n",6"<p>Filter parameters that require gradients </p>\n": "<p>\u8fc7\u6ee4\u9700\u8981\u6e10\u53d8\u7684\u53c2\u6570</p>\n",7"<p>Forward pass </p>\n": "<p>\u5411\u524d\u4f20\u7403</p>\n",8"<p>Get all parameters </p>\n": "<p>\u83b7\u53d6\u6240\u6709\u53c2\u6570</p>\n",9"<p>Get predictions </p>\n": "<p>\u83b7\u53d6\u9884\u6d4b</p>\n",10"<p>Iterate through the batches </p>\n": "<p>\u904d\u5386\u6279\u6b21</p>\n",11"<p>Move targets to the same device as output </p>\n": "<p>\u5c06\u76ee\u6807\u79fb\u52a8\u5230\u4e0e\u8f93\u51fa\u76f8\u540c\u7684\u8bbe\u5907\u4e0a</p>\n",12"<p>Optimize </p>\n": "<p>\u4f18\u5316</p>\n",13"<p>Set gradients to zero </p>\n": "<p>\u5c06\u6e10\u53d8\u8bbe\u7f6e\u4e3a\u96f6</p>\n",14"<p>Set model for train </p>\n": "<p>\u8bbe\u7f6e\u706b\u8f66\u6a21\u578b</p>\n",15"<p>tracker.add({'loss.scaled': loss}) </p>\n": "<p>tracker.add ({'loss.scaled': loss})</p>\n",16"<ul><li><span translate=no>_^_0_^_</span> train/valid </li>\n<li><span translate=no>_^_1_^_</span> is the sample </li>\n<p><em>Returns</em> the loss, output and the target</p></ul>\n": "<ul><li><span translate=no>_^_0_^_</span>\u8bad\u7ec3/\u6709\u6548</li>\n<li><span translate=no>_^_1_^_</span>\u662f\u6837\u672c</li>\n<p><em>\u8fd4\u56de</em>\u635f\u5931\u3001\u8f93\u51fa\u548c\u76ee\u6807</p></ul>\n",17"trainer.py": "trainer.py"18}1920