Path: blob/master/translate_cache/activations/fta/experiment.ja.json
{1"<h1><a href=\"index.html\">Fuzzy Tiling Activation</a> Experiment</h1>\n<p><a href=\"https://colab.research.google.com/github/labmlai/annotated_deep_learning_paper_implementations/blob/master/labml_nn/activations/fta/experiment.ipynb\"><span translate=no>_^_0_^_</span></a></p>\n<p>Here we train a transformer that uses <a href=\"index.html\">Fuzzy Tiling Activation</a> in the <a href=\"../../transformers/feed_forward.html\">Feed-Forward Network</a>. We use it for a language model and train it on Tiny Shakespeare dataset for demonstration.</p>\n<p>However, this is probably not the ideal task for FTA, and we believe FTA is more suitable for modeling data with continuous variables.</p>\n": "<h1><a href=\"index.html\">\u30d5\u30a1\u30b8\u30fc\u30bf\u30a4\u30ea\u30f3\u30b0\u30a2\u30af\u30c6\u30a3\u30d9\u30fc\u30b7\u30e7\u30f3\u5b9f\u9a13</a></h1>\n<p><a href=\"https://colab.research.google.com/github/labmlai/annotated_deep_learning_paper_implementations/blob/master/labml_nn/activations/fta/experiment.ipynb\"><span translate=no>_^_0_^_</span></a></p>\n<p><a href=\"../../transformers/feed_forward.html\">\u3053\u3053\u3067\u306f\u3001<a href=\"index.html\">\u30d5\u30a3\u30fc\u30c9\u30d5\u30a9\u30ef\u30fc\u30c9\u30cd\u30c3\u30c8\u30ef\u30fc\u30af\u3067\u30d5\u30a1\u30b8\u30fc\u30bf\u30a4\u30ea\u30f3\u30b0\u30a2\u30af\u30c6\u30a3\u30d9\u30fc\u30b7\u30e7\u30f3\u3092\u4f7f\u7528\u3059\u308b\u30c8\u30e9\u30f3\u30b9\u30d5\u30a9\u30fc\u30de\u30fc\u3092\u30c8\u30ec\u30fc\u30cb\u30f3\u30b0\u3057\u307e\u3059</a>\u3002</a>\u3053\u308c\u3092\u8a00\u8a9e\u30e2\u30c7\u30eb\u3068\u3057\u3066\u4f7f\u7528\u3057\u3001Tiny Shakespeare\u30c7\u30fc\u30bf\u30bb\u30c3\u30c8\u3067\u30c8\u30ec\u30fc\u30cb\u30f3\u30b0\u3057\u3066\u30c7\u30e2\u30f3\u30b9\u30c8\u30ec\u30fc\u30b7\u30e7\u30f3\u3092\u884c\u3044\u307e\u3059</p>\u3002\n<p>\u305f\u3060\u3057\u3001\u3053\u308c\u306f\u304a\u305d\u3089\u304fFTA\u306b\u3068\u3063\u3066\u7406\u60f3\u7684\u306a\u30bf\u30b9\u30af\u3067\u306f\u306a\u304f\u3001\u9023\u7d9a\u5909\u6570\u3092\u542b\u3080\u30c7\u30fc\u30bf\u306e\u30e2\u30c7\u30eb\u5316\u306b\u306fFTA\u306e\u65b9\u304c\u9069\u3057\u3066\u3044\u308b\u3068\u8003\u3048\u3066\u3044\u307e\u3059\u3002</p>\n",2"<h2>Auto-Regressive model</h2>\n<p>This is an autoregressive transformer model that uses Feed-Forward Networks with (Fuzzy Tiling Activations)(index.html).</p>\n": "<h2>\u81ea\u5df1\u56de\u5e30\u30e2\u30c7\u30eb</h2>\n<p>\u3053\u308c\u306f\u3001\u30d5\u30a3\u30fc\u30c9\u30d5\u30a9\u30ef\u30fc\u30c9\u30cd\u30c3\u30c8\u30ef\u30fc\u30af\u3068 (\u30d5\u30a1\u30b8\u30fc\u30bf\u30a4\u30ea\u30f3\u30b0\u30a2\u30af\u30c6\u30a3\u30d9\u30fc\u30b7\u30e7\u30f3) (index.html) \u3092\u4f7f\u7528\u3059\u308b\u81ea\u5df1\u56de\u5e30\u30c8\u30e9\u30f3\u30b9\u30d5\u30a9\u30fc\u30de\u30fc\u30e2\u30c7\u30eb\u3067\u3059\u3002</p>\n",3"<h2>Configurations</h2>\n<p>This inherits from <a href=\"../../experiments/nlp_autoregression.html#NLPAutoRegressionConfigs\"><span translate=no>_^_0_^_</span></a></p>\n": "<h2>\u30b3\u30f3\u30d5\u30a3\u30ae\u30e5\u30ec\u30fc\u30b7\u30e7\u30f3</h2>\n<p>\u3053\u308c\u306f\u4ee5\u4e0b\u304b\u3089\u7d99\u627f\u3055\u308c\u307e\u3059 <a href=\"../../experiments/nlp_autoregression.html#NLPAutoRegressionConfigs\"><span translate=no>_^_0_^_</span></a></p>\n",4"<h2>FFN module with <a href=\"index.html\">FTA</a> activation</h2>\n": "<h2><a href=\"index.html\">FTA</a> \u30a2\u30af\u30c6\u30a3\u30d9\u30fc\u30b7\u30e7\u30f3\u6a5f\u80fd\u4ed8\u304d FFN \u30e2\u30b8\u30e5\u30fc\u30eb</h2>\n",5"<h4>Create and run the 
experiment</h4>\n": "<h4>\u5b9f\u9a13\u3092\u4f5c\u6210\u3057\u3066\u5b9f\u884c\u3059\u308b</h4>\n",6"<h4>Initialize the model</h4>\n": "<h4>\u30e2\u30c7\u30eb\u3092\u521d\u671f\u5316</h4>\n",7"<p> </p>\n": "<p></p>\n",8"<p><span translate=no>_^_0_^_</span> </p>\n": "<p><span translate=no>_^_0_^_</span></p>\n",9"<p><span translate=no>_^_0_^_</span> and <span translate=no>_^_1_^_</span> for DeepNorm </p>\n": "<p><span translate=no>_^_0_^_</span><span translate=no>_^_1_^_</span>\u305d\u3057\u3066\u30c7\u30a3\u30fc\u30d7\u30ce\u30fc\u30e0\u7528</p>\n",10"<p>Activation function <span translate=no>_^_0_^_</span> </p>\n": "<p>\u30a2\u30af\u30c6\u30a3\u30d9\u30fc\u30b7\u30e7\u30f3\u6a5f\u80fd <span translate=no>_^_0_^_</span></p>\n",11"<p>Adam optimizer with no warmup </p>\n": "<p>\u30a6\u30a9\u30fc\u30e0\u30a2\u30c3\u30d7\u306a\u3057\u306e Adam \u30aa\u30d7\u30c6\u30a3\u30de\u30a4\u30b6\u30fc</p>\n",12"<p>Apply dropout </p>\n": "<p>\u30c9\u30ed\u30c3\u30d7\u30a2\u30a6\u30c8\u3092\u9069\u7528</p>\n",13"<p>Batch size <span translate=no>_^_0_^_</span> </p>\n": "<p>\u30d0\u30c3\u30c1\u30b5\u30a4\u30ba <span translate=no>_^_0_^_</span></p>\n",14"<p>Create FTA activation module </p>\n": "<p>FTA \u30a2\u30af\u30c6\u30a3\u30d9\u30fc\u30b7\u30e7\u30f3\u30e2\u30b8\u30e5\u30fc\u30eb\u3092\u4f5c\u6210</p>\n",15"<p>Create auto-regressive mask </p>\n": "<p>\u81ea\u52d5\u56de\u5e30\u30de\u30b9\u30af\u306e\u4f5c\u6210</p>\n",16"<p>Create configs </p>\n": "<p>\u30b3\u30f3\u30d5\u30a3\u30b0\u306e\u4f5c\u6210</p>\n",17"<p>Create experiment </p>\n": "<p>\u5b9f\u9a13\u3092\u4f5c\u6210</p>\n",18"<p>Create the transformer. We re-use <a href=\"../../transformers/models.html#TransformerLayer\"><span translate=no>_^_0_^_</span></a> and <a href=\"../../transformers/mha.html\"><span translate=no>_^_1_^_</span></a> implementations. 
</p>\n": "<p>\u30c8\u30e9\u30f3\u30b9\u30d5\u30a9\u30fc\u30de\u30fc\u3092\u4f5c\u6210\u3057\u307e\u3059\u3002<a href=\"../../transformers/models.html#TransformerLayer\"><span translate=no>_^_0_^_</span><a href=\"../../transformers/mha.html\"><span translate=no>_^_1_^_</span></a></a>\u518d\u5229\u7528\u3057\u3066\u5b9f\u88c5\u3057\u307e\u3059</p>\u3002\n",19"<p>Embedding size </p>\n": "<p>\u57cb\u3081\u8fbc\u307f\u30b5\u30a4\u30ba</p>\n",20"<p>FTA </p>\n": "<p>\u81ea\u7531\u8cbf\u6613\u5354\u5b9a</p>\n",21"<p>Feed forward layer size </p>\n": "<p>\u30d5\u30a3\u30fc\u30c9\u30d5\u30a9\u30ef\u30fc\u30c9\u30ec\u30a4\u30e4\u30fc\u30b5\u30a4\u30ba</p>\n",22"<p>Get logits </p>\n": "<p>\u30ed\u30b8\u30c3\u30c8\u3092\u53d6\u5f97</p>\n",23"<p>Get the token embeddings </p>\n": "<p>\u30c8\u30fc\u30af\u30f3\u306e\u57cb\u3081\u8fbc\u307f\u3092\u5165\u624b</p>\n",24"<p>Hidden layer dropout </p>\n": "<p>\u96a0\u3057\u30ec\u30a4\u30e4\u30fc\u306e\u30c9\u30ed\u30c3\u30d7\u30a2\u30a6\u30c8</p>\n",25"<p>Layer one parameterized by weight <span translate=no>_^_0_^_</span> and bias <span translate=no>_^_1_^_</span> </p>\n": "<p>\u91cd\u307f\u3068\u30d0\u30a4\u30a2\u30b9\u3067\u30d1\u30e9\u30e1\u30fc\u30bf\u5316\u3055\u308c\u305f\u30ec\u30a4\u30e4\u30fc 1 <span translate=no>_^_0_^_</span> <span translate=no>_^_1_^_</span></p>\n",26"<p>Layer two parameterized by weight <span translate=no>_^_0_^_</span> and bias <span translate=no>_^_1_^_</span> </p>\n": "<p>\u91cd\u307f\u3068\u30d0\u30a4\u30a2\u30b9\u3067\u30d1\u30e9\u30e1\u30fc\u30bf\u5316\u3055\u308c\u305f\u30ec\u30a4\u30e4\u30fc 2 <span translate=no>_^_0_^_</span> <span translate=no>_^_1_^_</span></p>\n",27"<p>Model </p>\n": "<p>\u30e2\u30c7\u30eb</p>\n",28"<p>Move to the device </p>\n": "<p>\u30c7\u30d0\u30a4\u30b9\u306b\u79fb\u52d5</p>\n",29"<p>Number of heads in the attention </p>\n": "<p>\u6ce8\u76ee\u3055\u308c\u3066\u3044\u308b\u30d8\u30c3\u30c9\u306e\u6570</p>\n",30"<p>Number of layers </p>\n": "<p>\u30ec\u30a4\u30e4\u30fc\u6570</p>\n",31"<p>Override configurations </p>\n": "<p>\u30aa\u30fc\u30d0\u30fc\u30e9\u30a4\u30c9\u8a2d\u5b9a</p>\n",32"<p>Prompt separator is blank </p>\n": "<p>\u30d7\u30ed\u30f3\u30d7\u30c8\u30bb\u30d1\u30ec\u30fc\u30bf\u304c\u7a7a\u767d</p>\n",33"<p>Readout layer </p>\n": "<p>\u8aad\u307f\u51fa\u3057\u5c64</p>\n",34"<p>Return results </p>\n": "<p>\u7d50\u679c\u3092\u8fd4\u3059</p>\n",35"<p>Run training </p>\n": "<p>\u30c8\u30ec\u30fc\u30cb\u30f3\u30b0\u3092\u5b9f\u884c</p>\n",36"<p>Set model(s) for saving and loading </p>\n": "<p>\u4fdd\u5b58\u304a\u3088\u3073\u8aad\u307f\u8fbc\u307f\u7528\u306e\u30e2\u30c7\u30eb\u3092\u8a2d\u5b9a\u3057\u307e\u3059</p>\n",37"<p>Size of each attention head </p>\n": "<p>\u5404\u30a2\u30c6\u30f3\u30b7\u30e7\u30f3\u30d8\u30c3\u30c9\u306e\u30b5\u30a4\u30ba</p>\n",38"<p>Start the experiment </p>\n": "<p>\u5b9f\u9a13\u3092\u59cb\u3081\u308b</p>\n",39"<p>Starting prompt for sampling </p>\n": "<p>\u30b5\u30f3\u30d7\u30ea\u30f3\u30b0\u306e\u958b\u59cb\u30d7\u30ed\u30f3\u30d7\u30c8</p>\n",40"<p>Subsequent mask, will mask out tokens from seeing future tokens </p>\n": "<p>\u6b21\u306b\u30de\u30b9\u30af\u3059\u308b\u3068\u3001\u30c8\u30fc\u30af\u30f3\u304c\u30de\u30b9\u30af\u3055\u308c\u3001\u5c06\u6765\u306e\u30c8\u30fc\u30af\u30f3\u304c\u898b\u3048\u306a\u304f\u306a\u308a\u307e\u3059</p>\n",41"<p>Switch between training and validation for <span translate=no>_^_0_^_</span> times per epoch </p>\n": 
"<p>\u30a8\u30dd\u30c3\u30af\u3054\u3068\u306b\u30c8\u30ec\u30fc\u30cb\u30f3\u30b0\u3068\u691c\u8a3c\u3092\u5207\u308a\u66ff\u3048\u308b <span translate=no>_^_0_^_</span></p>\n",42"<p>The mask will be initialized on the first call </p>\n": "<p>\u30de\u30b9\u30af\u306f\u6700\u521d\u306e\u547c\u3073\u51fa\u3057\u3067\u521d\u671f\u5316\u3055\u308c\u307e\u3059</p>\n",43"<p>Token embedding layer </p>\n": "<p>\u30c8\u30fc\u30af\u30f3\u57cb\u3081\u8fbc\u307f\u30ec\u30a4\u30e4\u30fc</p>\n",44"<p>Train for 32 epochs </p>\n": "<p>32 \u30a8\u30dd\u30c3\u30af\u306e\u30c8\u30ec\u30fc\u30cb\u30f3\u30b0</p>\n",45"<p>Transformer encoder </p>\n": "<p>\u30c8\u30e9\u30f3\u30b9\u30a8\u30f3\u30b3\u30fc\u30c0\u30fc</p>\n",46"<p>Transformer with <span translate=no>_^_0_^_</span> layers </p>\n": "<p><span translate=no>_^_0_^_</span>\u5c64\u4ed8\u304d\u5909\u5727\u5668</p>\n",47"<p>Use Tiny Shakespeare dataset </p>\n": "<p>\u30bf\u30a4\u30cb\u30fc\u30fb\u30b7\u30a7\u30a4\u30af\u30b9\u30d4\u30a2\u30fb\u30c7\u30fc\u30bf\u30bb\u30c3\u30c8\u3092\u4f7f\u3046</p>\n",48"<p>Use a context size of <span translate=no>_^_0_^_</span> </p>\n": "<p>\u30b3\u30f3\u30c6\u30ad\u30b9\u30c8\u30b5\u30a4\u30ba\u3092\u6b21\u306e\u5024\u306b\u3057\u3066\u304f\u3060\u3055\u3044 <span translate=no>_^_0_^_</span></p>\n",49"<p>Use character level tokenizer </p>\n": "<p>\u30ad\u30e3\u30e9\u30af\u30bf\u30fc\u30ec\u30d9\u30eb\u306e\u30c8\u30fc\u30af\u30ca\u30a4\u30b6\u30fc\u3092\u4f7f\u3046</p>\n",50"<ul><li><span translate=no>_^_0_^_</span> are the input tokens of shape <span translate=no>_^_1_^_</span></li></ul>\n": "<ul><li><span translate=no>_^_0_^_</span>\u5f62\u72b6\u306e\u5165\u529b\u30c8\u30fc\u30af\u30f3\u3067\u3059 <span translate=no>_^_1_^_</span></li></ul>\n",51"<ul><li><span translate=no>_^_0_^_</span> is the number of tokens in the vocabulary </li>\n<li><span translate=no>_^_1_^_</span> is the embedding size </li>\n<li><span translate=no>_^_2_^_</span> is the number of transformer layers </li>\n<li><span translate=no>_^_3_^_</span> is the layer. 
We use <span translate=no>_^_4_^_</span> copies of this for the transformer.</li></ul>\n": "<ul><li><span translate=no>_^_0_^_</span>\u30dc\u30ad\u30e3\u30d6\u30e9\u30ea\u5185\u306e\u30c8\u30fc\u30af\u30f3\u306e\u6570\u3067\u3059</li>\n<li><span translate=no>_^_1_^_</span>\u306f\u57cb\u3081\u8fbc\u307f\u30b5\u30a4\u30ba</li>\n<li><span translate=no>_^_2_^_</span>\u5909\u5727\u5668\u5c64\u306e\u6570\u3067\u3059</li>\n<li><span translate=no>_^_3_^_</span>\u30ec\u30a4\u30e4\u30fc\u3067\u3059\u3002<span translate=no>_^_4_^_</span>\u5909\u5727\u5668\u306b\u306f\u3053\u308c\u306e\u30b3\u30d4\u30fc\u3092\u4f7f\u3044\u307e\u3059</li></ul>\u3002\n",52"<ul><li><span translate=no>_^_0_^_</span> is the number of features in a token embedding </li>\n<li><span translate=no>_^_1_^_</span> is the number of features in the hidden layer of the FFN </li>\n<li><span translate=no>_^_2_^_</span> is FTA activation module </li>\n<li><span translate=no>_^_3_^_</span> is dropout probability for the hidden layer</li></ul>\n": "<ul><li><span translate=no>_^_0_^_</span>\u30c8\u30fc\u30af\u30f3\u57cb\u3081\u8fbc\u307f\u306b\u542b\u307e\u308c\u308b\u6a5f\u80fd\u306e\u6570</li>\n<li><span translate=no>_^_1_^_</span>\u306f FFN \u306e\u96a0\u308c\u30ec\u30a4\u30e4\u30fc\u306b\u3042\u308b\u30d5\u30a3\u30fc\u30c1\u30e3\u306e\u6570\u3067\u3059</li>\n<li><span translate=no>_^_2_^_</span>FTA \u30a2\u30af\u30c6\u30a3\u30d9\u30fc\u30b7\u30e7\u30f3\u30e2\u30b8\u30e5\u30fc\u30eb\u3067\u3059\u304b</li>\n<li><span translate=no>_^_3_^_</span>\u306f\u96a0\u308c\u5c64\u306e\u30c9\u30ed\u30c3\u30d7\u30a2\u30a6\u30c8\u78ba\u7387\u3067\u3059</li></ul>\n",53"Fuzzy Tiling Activation Experiment": "\u30d5\u30a1\u30b8\u30fc\u30bf\u30a4\u30ea\u30f3\u30b0\u30a2\u30af\u30c6\u30a3\u30d9\u30fc\u30b7\u30e7\u30f3\u5b9f\u9a13",54"Training a transformer with FTA in FFN on Tiny Shakespeare.": "\u30bf\u30a4\u30cb\u30fc\u30fb\u30b7\u30a7\u30a4\u30af\u30b9\u30d4\u30a2\u306eFFN\u3067\u30c8\u30e9\u30f3\u30b9\u30d5\u30a9\u30fc\u30de\u30fc\u3092FTA\u3067\u30c8\u30ec\u30fc\u30cb\u30f3\u30b0\u4e2d\u3002"55}5657