GitHub Repository: labmlai/annotated_deep_learning_paper_implementations
Path: blob/master/translate_cache/optimizers/mnist_experiment.ja.json
{
    "<h1>MNIST example to test the optimizers</h1>\n": "<h1>オプティマイザをテストするための MNIST の例</h1>\n",
    "<h2>Configurable Experiment Definition</h2>\n": "<h2>設定可能な実験定義</h2>\n",
    "<h2>The model</h2>\n": "<h2>モデル</h2>\n",
    "<p> Create a configurable optimizer. We can change the optimizer type and hyper-parameters using configurations.</p>\n": "<p>設定可能なオプティマイザーを作成します。構成を使用してオプティマイザのタイプとハイパーパラメータを変更できます</p>。\n",
    "<p>Add global step if we are in training mode </p>\n": "<p>トレーニングモードの場合はグローバルステップを追加</p>\n",
    "<p>Calculate the accuracy </p>\n": "<p>精度の計算</p>\n",
    "<p>Calculate the gradients </p>\n": "<p>勾配の計算</p>\n",
    "<p>Calculate the loss </p>\n": "<p>損失の計算</p>\n",
    "<p>Clear the gradients </p>\n": "<p>グラデーションをクリア</p>\n",
    "<p>Get the batch </p>\n": "<p>バッチを入手</p>\n",
    "<p>Log the loss </p>\n": "<p>損失を記録する</p>\n",
    "<p>Log the parameter and gradient L2 norms once per epoch </p>\n": "<p>パラメーターと勾配の L2 ノルムをエポックごとに 1 回記録します。</p>\n",
    "<p>Optimize if we are in training mode </p>\n": "<p>トレーニングモードの場合は最適化</p>\n",
    "<p>Run the model and specify whether to log the activations </p>\n": "<p>モデルを実行し、アクティベーションをログに記録するかどうかを指定します</p>\n",
    "<p>Save logs </p>\n": "<p>ログを保存</p>\n",
    "<p>Specify the optimizer </p>\n": "<p>オプティマイザーを指定</p>\n",
    "<p>Take optimizer step </p>\n": "<p>最適化の一歩を踏み出す</p>\n",
    "MNIST example to test the optimizers": "オプティマイザをテストするための MNIST の例",
    "This is a simple MNIST example with a CNN model to test the optimizers.": "これは、CNN モデルを使用してオプティマイザーをテストする簡単な MNIST の例です。"
}
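
The keys above are the section captions of the optimizers MNIST experiment in this repository: a small CNN trained on MNIST with an optimizer that is chosen and configured through the experiment's configuration system. For context, the sketch below is a minimal plain-PyTorch illustration of the training step those captions describe; it is an assumption, not the repository's actual labml-based implementation, and the names make_optimizer, optimizer_name, and lr are hypothetical.

# Minimal sketch of the training step the captions annotate (illustrative only,
# not the repository's labml-based code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    """A small CNN for 28x28 MNIST images."""
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 20, kernel_size=5)
        self.conv2 = nn.Conv2d(20, 50, kernel_size=5)
        self.fc1 = nn.Linear(50 * 4 * 4, 500)
        self.fc2 = nn.Linear(500, 10)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        x = x.view(x.size(0), -1)
        return self.fc2(F.relu(self.fc1(x)))

def make_optimizer(model, optimizer_name="Adam", lr=1e-3):
    # "Specify the optimizer": the configurable optimizer amounts to picking the
    # optimizer class and hyper-parameters from configuration values
    # (optimizer_name and lr are hypothetical configuration keys).
    classes = {"Adam": torch.optim.Adam, "SGD": torch.optim.SGD}
    return classes[optimizer_name](model.parameters(), lr=lr)

def step(model, optimizer, batch, is_train=True):
    # Get the batch
    data, target = batch
    # Run the model
    output = model(data)
    # Calculate the loss
    loss = F.cross_entropy(output, target)
    # Calculate the accuracy
    accuracy = (output.argmax(dim=-1) == target).float().mean()
    if is_train:
        # Optimize if we are in training mode:
        # clear the gradients, calculate the gradients, take an optimizer step
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    # Loss and accuracy would be logged here (the repository also logs the
    # parameter and gradient L2 norms once per epoch)
    return loss.item(), accuracy.item()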