GitHub Repository: labmlai/annotated_deep_learning_paper_implementations
Path: blob/master/translate_cache/optimizers/adam_fp16.zh.json
{
"<h1>Adam Optimizer for Half Precision Training</h1>\n": "<h1>\u534a\u7cbe\u5ea6\u8bad\u7ec3\u7684 Adam Optimizer</h1>\n",
"<h2>Adam Optimizer for Half Precision Training</h2>\n<p>We extend <a href=\"adam.html\">Adam Optimizer</a> but use FP32 to store gradients and moments.</p>\n": "<h2>\u534a\u7cbe\u5ea6\u8bad\u7ec3\u7684 Adam Optimizer</h2>\n<p>\u6211\u4eec\u6269\u5c55\u4e86 <a href=\"adam.html\">Adam Optimizer</a>\uff0c\u4f46\u4f7f\u7528 FP32 \u6765\u5b58\u50a8\u6e10\u53d8\u548c\u65f6\u523b\u3002</p>\n",
"<h2>Gradient Scaler with half precision gradients</h2>\n<p>We extend PyTorch gradient scaler to use FP32 gradients.</p>\n": "<h2>\u5177\u6709\u534a\u7cbe\u5ea6\u6e10\u53d8\u7684\u6e10\u53d8\u7f29\u653e\u5668</h2>\n<p>\u6211\u4eec\u5c06 PyTorch \u68af\u5ea6\u7f29\u653e\u5668\u6269\u5c55\u4e3a\u4f7f\u7528 FP32 \u6e10\u53d8\u3002</p>\n",
"<h3>Initialize a parameter state</h3>\n<ul><li><span translate=no>_^_0_^_</span> is the optimizer state of the parameter (tensor) </li>\n<li><span translate=no>_^_1_^_</span> stores optimizer attributes of the parameter group </li>\n<li><span translate=no>_^_2_^_</span> is the parameter tensor <span translate=no>_^_3_^_</span></li></ul>\n<p>All the state tensors use FP32.</p>\n": "<h3>\u521d\u59cb\u5316\u53c2\u6570\u72b6\u6001</h3>\n<ul><li><span translate=no>_^_0_^_</span>\u662f\u53c2\u6570\uff08\u5f20\u91cf\uff09\u7684\u4f18\u5316\u5668\u72b6\u6001</li>\n<li><span translate=no>_^_1_^_</span>\u5b58\u50a8\u53c2\u6570\u7ec4\u7684\u4f18\u5316\u7a0b\u5e8f\u5c5e\u6027</li>\n<li><span translate=no>_^_2_^_</span>\u662f\u53c2\u6570\u5f20\u91cf<span translate=no>_^_3_^_</span></li></ul>\n<p>\u6240\u6709\u72b6\u6001\u5f20\u91cf\u90fd\u4f7f\u7528 FP32\u3002</p>\n",
"<h3>Take an update step for a given parameter tensor</h3>\n<ul><li><span translate=no>_^_0_^_</span> is the optimizer state of the parameter (tensor) </li>\n<li><span translate=no>_^_1_^_</span> stores optimizer attributes of the parameter group </li>\n<li><span translate=no>_^_2_^_</span> is the current gradient tensor <span translate=no>_^_3_^_</span> for the parameter <span translate=no>_^_4_^_</span> </li>\n<li><span translate=no>_^_5_^_</span> is the parameter tensor <span translate=no>_^_6_^_</span></li></ul>\n": "<h3>\u5bf9\u7ed9\u5b9a\u53c2\u6570\u5f20\u91cf\u6267\u884c\u66f4\u65b0\u6b65\u9aa4</h3>\n<ul><li><span translate=no>_^_0_^_</span>\u662f\u53c2\u6570\uff08\u5f20\u91cf\uff09\u7684\u4f18\u5316\u5668\u72b6\u6001</li>\n<li><span translate=no>_^_1_^_</span>\u5b58\u50a8\u53c2\u6570\u7ec4\u7684\u4f18\u5316\u7a0b\u5e8f\u5c5e\u6027</li>\n<li><span translate=no>_^_2_^_</span>\u662f\u53c2\u6570\u7684\u5f53\u524d\u68af<span translate=no>_^_3_^_</span>\u5ea6\u5f20\u91cf<span translate=no>_^_4_^_</span></li>\n<li><span translate=no>_^_5_^_</span>\u662f\u53c2\u6570\u5f20\u91cf<span translate=no>_^_6_^_</span></li></ul>\n",
"<p> </p>\n": "<p></p>\n",
"<p>Calculate weight decay </p>\n": "<p>\u8ba1\u7b97\u4f53\u91cd\u8870\u51cf</p>\n",
"<p>Call the <a href=\"adam.html\">Adam Optimizer</a> initializer </p>\n": "<p>\u8c03\u7528 <a href=\"adam.html\">Adam \u4f18\u5316\u5668</a>\u521d\u59cb\u5316\u5668</p>\n",
"<p>Exponential moving average of gradients, <span translate=no>_^_0_^_</span> </p>\n": "<p>\u68af\u5ea6\u7684\u6307\u6570\u79fb\u52a8\u5e73\u5747\u7ebf\uff0c<span translate=no>_^_0_^_</span></p>\n",
"<p>Exponential moving average of squared gradient values, <span translate=no>_^_0_^_</span> </p>\n": "<p>\u68af\u5ea6\u5e73\u65b9\u503c\u7684\u6307\u6570\u79fb\u52a8\u5e73\u5747\u7ebf\uff0c<span translate=no>_^_0_^_</span></p>\n",
"<p>Get <span translate=no>_^_0_^_</span> and <span translate=no>_^_1_^_</span> </p>\n": "<p>\u83b7\u53d6<span translate=no>_^_0_^_</span>\u548c<span translate=no>_^_1_^_</span></p>\n",
"<p>Get the FP32 gradients if available </p>\n": "<p>\u83b7\u53d6 FP32 \u6e10\u53d8\uff08\u5982\u679c\u6709\uff09</p>\n",
"<p>Get the FP32 parameters </p>\n": "<p>\u83b7\u53d6 FP32 \u53c2\u6570</p>\n",
"<p>If we are using the <span translate=no>_^_0_^_</span> optimizer set <span translate=no>_^_1_^_</span> to the FP32 gradients </p>\n": "<p>\u5982\u679c\u6211\u4eec\u4f7f\u7528\u8bbe\u7f6e\u4e3a<span translate=no>_^_1_^_</span> FP32 \u6e10\u53d8\u7684<span translate=no>_^_0_^_</span>\u4f18\u5316\u5668</p>\n",
"<p>Increment <span translate=no>_^_0_^_</span> the number of optimizer steps </p>\n": "<p><span translate=no>_^_0_^_</span>\u589e\u52a0\u4f18\u5316\u5668\u6b65\u6570</p>\n",
"<p>Loop through parameters </p>\n": "<p>\u5faa\u73af\u6d4f\u89c8\u53c2\u6570</p>\n",
"<p>Maintain a FP32 copy of the parameters </p>\n": "<p>\u7ef4\u62a4\u53c2\u6570\u7684 FP32 \u526f\u672c</p>\n",
"<p>Not implemented for sparse tensors </p>\n": "<p>\u672a\u9488\u5bf9\u7a00\u758f\u5f20\u91cf\u5b9e\u73b0</p>\n",
"<p>Otherwise, convert the gradients to FP32 </p>\n": "<p>\u5426\u5219\uff0c\u5c06\u6e10\u53d8\u8f6c\u6362\u4e3a FP32</p>\n",
"<p>Otherwise, do not convert the gradients to FP32 </p>\n": "<p>\u5426\u5219\uff0c\u4e0d\u8981\u5c06\u6e10\u53d8\u8f6c\u6362\u4e3a FP32</p>\n",
"<p>Parameter to store 32 bit gradients. This get populated by the <span translate=no>_^_0_^_</span> defined below. </p>\n": "<p>\u7528\u4e8e\u5b58\u50a8 32 \u4f4d\u6e10\u53d8\u7684\u53c2\u6570\u3002\u8fd9\u7531\u4e0b\u9762<span translate=no>_^_0_^_</span>\u5b9a\u4e49\u7684\u586b\u5145\u3002</p>\n",
"<p>Perform <em>Adam</em> update </p>\n": "<p>\u6267\u884c <em>Adam</em> \u66f4\u65b0</p>\n",
"<p>Set the parameters </p>\n": "<p>\u8bbe\u7f6e\u53c2\u6570</p>\n",
"<p>Skip non-trainable parameters </p>\n": "<p>\u8df3\u8fc7\u4e0d\u53ef\u8bad\u7ec3\u7684\u53c2\u6570</p>\n",
"<p>This is the number of optimizer steps taken on the parameter, <span translate=no>_^_0_^_</span> </p>\n": "<p>\u8fd9\u662f\u4f18\u5316\u5668\u5bf9\u53c2\u6570\u91c7\u53d6\u7684\u6b65\u9aa4\u6570\uff0c<span translate=no>_^_0_^_</span></p>\n",
"<p>Unscale all the gradients </p>\n": "<p>\u53d6\u6d88\u7f29\u653e\u6240\u6709\u6e10\u53d8</p>\n",
"A simple PyTorch implementation/tutorial of Adam optimizer": "Adam \u4f18\u5316\u5668\u7684\u4e00\u4e2a\u7b80\u5355\u7684 PyTorch \u5b9e\u73b0/\u6559\u7a0b",
"Adam Optimizer for Half Precision Training": "\u534a\u7cbe\u5ea6\u8bad\u7ec3\u7684 Adam Optimizer"
}
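The strings above document labml's AdamFP16 optimizer and its gradient scaler; the cache file itself contains no code. As a rough sketch of the pattern those comments describe (FP16 parameters updated through FP32 master copies, FP32 moments, and FP32 gradients), the standalone PyTorch snippet below may help. The class SimpleAdamFP16 and every name in it are illustrative assumptions, not the repository's actual implementation.

import torch


class SimpleAdamFP16:
    """Adam-style update that keeps FP32 master weights, moments and
    gradients for FP16 parameters (illustrative only, not labml's class)."""

    def __init__(self, params, lr=1e-3, betas=(0.9, 0.999), eps=1e-8):
        self.params = list(params)
        self.lr, self.betas, self.eps = lr, betas, eps
        # FP32 master copy of each FP16 parameter
        self.master = [p.detach().to(torch.float32).clone() for p in self.params]
        # FP32 first and second moment estimates
        self.exp_avg = [torch.zeros_like(m) for m in self.master]
        self.exp_avg_sq = [torch.zeros_like(m) for m in self.master]
        self.steps = 0

    @torch.no_grad()
    def step(self):
        self.steps += 1
        beta1, beta2 = self.betas
        for p, m, v, master in zip(self.params, self.exp_avg,
                                   self.exp_avg_sq, self.master):
            if p.grad is None:
                continue
            grad = p.grad.to(torch.float32)                      # work in FP32
            m.mul_(beta1).add_(grad, alpha=1 - beta1)            # first moment
            v.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)  # second moment
            step_size = self.lr / (1 - beta1 ** self.steps)
            denom = (v / (1 - beta2 ** self.steps)).sqrt().add_(self.eps)
            master.addcdiv_(m, denom, value=-step_size)          # update FP32 master
            p.copy_(master.to(p.dtype))                          # write back as FP16


# Minimal usage: an FP16 parameter trained with FP32 optimizer state
w = torch.nn.Parameter(torch.randn(4, 4).half())
opt = SimpleAdamFP16([w])
loss = (w.float() ** 2).sum()
loss.backward()
opt.step()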