GitHub Repository: labmlai/annotated_deep_learning_paper_implementations
Path: blob/master/translate_cache/experiments/mnist.zh.json
{
"<h1>MNIST Experiment</h1>\n": "<h1>MNIST \u5b9e\u9a8c</h1>\n",
"<h3>Default optimizer configurations</h3>\n": "<h3>\u9ed8\u8ba4\u4f18\u5316\u5668\u914d\u7f6e</h3>\n",
"<h3>Initialization</h3>\n": "<h3>\u521d\u59cb\u5316</h3>\n",
"<h3>Training or validation step</h3>\n": "<h3>\u8bad\u7ec3\u6216\u9a8c\u8bc1\u6b65\u9aa4</h3>\n",
"<p> <a id=\"MNISTConfigs\"></a></p>\n<h2>Trainer configurations</h2>\n": "<p><a id=\"MNISTConfigs\"></a></p>\n<h2>\u8bad\u7ec3\u5668\u914d\u7f6e</h2>\n",
"<p>Accuracy function </p>\n": "<p>\u7cbe\u5ea6\u51fd\u6570</p>\n",
"<p>Add a hook to log module outputs </p>\n": "<p>\u6dfb\u52a0\u94a9\u5b50\u4ee5\u8bb0\u5f55\u6a21\u5757\u8f93\u51fa</p>\n",
"<p>Add accuracy as a state module. The name is probably confusing, since it&#x27;s meant to store states between training and validation for RNNs. This will keep the accuracy metric stats separate for training and validation. </p>\n": "<p>\u5c06\u7cbe\u5ea6\u6dfb\u52a0\u4e3a\u72b6\u6001\u6a21\u5757\u3002\u8fd9\u4e2a\u540d\u5b57\u53ef\u80fd\u4ee4\u4eba\u56f0\u60d1\uff0c\u56e0\u4e3a\u5b83\u65e8\u5728\u5b58\u50a8 RNN \u7684\u8bad\u7ec3\u548c\u9a8c\u8bc1\u4e4b\u95f4\u7684\u72b6\u6001\u3002\u8fd9\u5c06\u4f7f\u8bad\u7ec3\u548c\u9a8c\u8bc1\u7684\u7cbe\u5ea6\u6307\u6807\u7edf\u8ba1\u6570\u636e\u4fdd\u6301\u5206\u5f00\u3002</p>\n",
"<p>Calculate and log accuracy </p>\n": "<p>\u8ba1\u7b97\u548c\u8bb0\u5f55\u7cbe\u5ea6</p>\n",
"<p>Calculate and log loss </p>\n": "<p>\u8ba1\u7b97\u5e76\u8bb0\u5f55\u635f\u5931</p>\n",
"<p>Calculate gradients </p>\n": "<p>\u8ba1\u7b97\u68af\u5ea6</p>\n",
"<p>Classification model </p>\n": "<p>\u5206\u7c7b\u6a21\u578b</p>\n",
"<p>Clear the gradients </p>\n": "<p>\u6e05\u9664\u68af\u5ea6</p>\n",
"<p>Get model outputs. </p>\n": "<p>\u83b7\u53d6\u6a21\u578b\u8f93\u51fa\u3002</p>\n",
"<p>Log the model parameters and gradients on last batch of every epoch </p>\n": "<p>\u8bb0\u5f55\u6bcf\u4e2a\u7eaa\u5143\u6700\u540e\u4e00\u6279\u7684\u6a21\u578b\u53c2\u6570\u548c\u68af\u5ea6</p>\n",
"<p>Loss function </p>\n": "<p>\u635f\u5931\u51fd\u6570</p>\n",
"<p>Move data to the device </p>\n": "<p>\u5c06\u6570\u636e\u79fb\u52a8\u5230\u8bbe\u5907</p>\n",
"<p>Number of epochs to train for </p>\n": "<p>\u8981\u8bad\u7ec3\u7684\u7eaa\u5143\u6570</p>\n",
"<p>Number of times to switch between training and validation within an epoch </p>\n": "<p>\u4e00\u4e2a\u7eaa\u5143\u5185\u5728\u8bad\u7ec3\u548c\u9a8c\u8bc1\u4e4b\u95f4\u5207\u6362\u7684\u6b21\u6570</p>\n",
"<p>Optimizer </p>\n": "<p>\u4f18\u5316\u5668</p>\n",
"<p>Save the tracked metrics </p>\n": "<p>\u4fdd\u5b58\u8ddf\u8e2a\u7684\u6307\u6807</p>\n",
"<p>Set tracker configurations </p>\n": "<p>\u8bbe\u7f6e\u8ddf\u8e2a\u5668\u914d\u7f6e</p>\n",
"<p>Take optimizer step </p>\n": "<p>\u91c7\u53d6\u4f18\u5316\u5668\u6b65\u9aa4</p>\n",
"<p>Train the model </p>\n": "<p>\u8bad\u7ec3\u6a21\u578b</p>\n",
"<p>Training device </p>\n": "<p>\u8bad\u7ec3\u8bbe\u5907</p>\n",
"<p>Training/Evaluation mode </p>\n": "<p>\u8bad\u7ec3/\u8bc4\u4f30\u6a21\u5f0f</p>\n",
"<p>Update global step (number of samples processed) when in training mode </p>\n": "<p>\u5728\u8bad\u7ec3\u6a21\u5f0f\u4e0b\u66f4\u65b0\u5168\u5c40\u6b65\u6570\uff08\u5904\u7406\u7684\u6837\u672c\u6570\uff09</p>\n",
"<p>Whether to capture model outputs </p>\n": "<p>\u662f\u5426\u6355\u83b7\u6a21\u578b\u8f93\u51fa</p>\n",
"MNIST Experiment": "MNIST \u5b9e\u9a8c",
"This is a reusable trainer for MNIST dataset": "\u8fd9\u662f MNIST \u6570\u636e\u96c6\u7684\u53ef\u91cd\u590d\u4f7f\u7528\u7684\u8bad\u7ec3\u5668"
}
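
Note: the cache maps the English HTML of each documentation fragment to its cached Chinese translation, so a renderer can look translations up by the exact original string. The following is a minimal, hypothetical usage sketch, not part of the labml tooling; it assumes the file sits at the repository-relative path shown above.

import json

# Hypothetical usage: load the cache and look up a fragment by its exact
# English HTML; fall back to the English text if no translation is cached.
with open("translate_cache/experiments/mnist.zh.json", encoding="utf-8") as f:
    cache = json.load(f)

english_fragment = "<p>Loss function </p>\n"
print(cache.get(english_fragment, english_fragment))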