Path: blob/master/translate_cache/neox/samples/generate.ja.json
{
    "<h1>Generate Text with GPT-NeoX</h1>\n<p>This shows how to generate text from GPT-NeoX with a single GPU.</p>\n<p>This needs a GPU with more than 45GB memory.</p>\n": "<h1>GPT-NeoX\u3067\u30c6\u30ad\u30b9\u30c8\u3092\u751f\u6210</h1>\n<p>\u3053\u308c\u306f\u3001\u5358\u4e00\u306eGPU\u3067GPT-NeoX\u304b\u3089\u30c6\u30ad\u30b9\u30c8\u3092\u751f\u6210\u3059\u308b\u65b9\u6cd5\u3092\u793a\u3057\u3066\u3044\u307e\u3059\u3002</p>\n<p>\u3053\u308c\u306b\u306f\u300145 GB \u4ee5\u4e0a\u306e\u30e1\u30e2\u30ea\u3092\u642d\u8f09\u3057\u305f GPU \u304c\u5fc5\u8981\u3067\u3059\u3002</p>\n",
    "<h2>Generate text</h2>\n": "<h2>\u30c6\u30ad\u30b9\u30c8\u3092\u751f\u6210</h2>\n",
    "<h3>Predict the next token</h3>\n<ul><li><span translate=no>_^_0_^_</span> is the model </li>\n<li><span translate=no>_^_1_^_</span> are the input token ids </li>\n<li><span translate=no>_^_2_^_</span> is the device of the model</li></ul>\n": "<h3>\u6b21\u306e\u30c8\u30fc\u30af\u30f3\u3092\u4e88\u6e2c</h3>\n<ul><li><span translate=no>_^_0_^_</span>\u306f\u30e2\u30c7\u30eb\u3067\u3059</li>\n<li><span translate=no>_^_1_^_</span>\u306f\u5165\u529b\u30c8\u30fc\u30af\u30f3 ID \u3067\u3059</li>\n<li><span translate=no>_^_2_^_</span>\u306f\u30e2\u30c7\u30eb\u306e\u30c7\u30d0\u30a4\u30b9\u3067\u3059</li></ul>\n",
    "<p> </p>\n": "<p></p>\n",
    "<p>Append the predicted token </p>\n": "<p>\u4e88\u6e2c\u30c8\u30fc\u30af\u30f3\u3092\u8ffd\u52a0</p>\n",
    "<p>Device </p>\n": "<p>\u30c7\u30d0\u30a4\u30b9</p>\n",
    "<p>Eval model </p>\n": "<p>\u30e2\u30c7\u30eb\u3092\u8a55\u4fa1\u30e2\u30fc\u30c9\u306b\u8a2d\u5b9a</p>\n",
    "<p>Get next token. Note that we only feed the last token to the model because we cache the key/value pairs of previous tokens. </p>\n": "<p>\u6b21\u306e\u30c8\u30fc\u30af\u30f3\u3092\u53d6\u5f97\u3057\u307e\u3059\u3002\u4ee5\u524d\u306e\u30c8\u30fc\u30af\u30f3\u306e\u30ad\u30fc\u3068\u5024\u306e\u30da\u30a2\u3092\u30ad\u30e3\u30c3\u30b7\u30e5\u3059\u308b\u306e\u3067\u3001\u6700\u5f8c\u306e\u30c8\u30fc\u30af\u30f3\u306e\u307f\u3092\u30e2\u30c7\u30eb\u306b\u30d5\u30a3\u30fc\u30c9\u3059\u308b\u3053\u3068\u306b\u6ce8\u610f\u3057\u3066\u304f\u3060\u3055\u3044\u3002</p>\n",
    "<p>Get the tokens </p>\n": "<p>\u30c8\u30fc\u30af\u30f3\u3092\u53d6\u5f97</p>\n",
    "<p>Get token ids </p>\n": "<p>\u30c8\u30fc\u30af\u30f3 ID \u3092\u53d6\u5f97</p>\n",
    "<p>Imports </p>\n": "<p>\u30a4\u30f3\u30dd\u30fc\u30c8</p>\n",
    "<p>List of layers to load. This is used for testing. You can assign a subset of layers like <span translate=no>_^_0_^_</span> so that it only loads the first to transformer layers. </p>\n": "<p>\u30ed\u30fc\u30c9\u3059\u308b\u30ec\u30a4\u30e4\u30fc\u306e\u30ea\u30b9\u30c8\u3002\u3053\u308c\u306f\u30c6\u30b9\u30c8\u306b\u4f7f\u7528\u3055\u308c\u307e\u3059\u3002<span translate=no>_^_0_^_</span>\u306e\u3088\u3046\u306b\u30ec\u30a4\u30e4\u30fc\u306e\u30b5\u30d6\u30bb\u30c3\u30c8\u3092\u5272\u308a\u5f53\u3066\u3066\u3001\u6700\u521d\u306e\u30c8\u30e9\u30f3\u30b9\u30d5\u30a9\u30fc\u30de\u30fc\u30ec\u30a4\u30e4\u30fc\u306e\u307f\u3092\u30ed\u30fc\u30c9\u3059\u308b\u3053\u3068\u304c\u3067\u304d\u307e\u3059\u3002</p>\n",
    "<p>Load layers </p>\n": "<p>\u30ec\u30a4\u30e4\u30fc\u3092\u30ed\u30fc\u30c9</p>\n",
    "<p>Predict 100 tokens </p>\n": "<p>\u30c8\u30fc\u30af\u30f3\u3092100\u500b\u4e88\u6e2c\u3059\u308b</p>\n",
    "<p>Print </p>\n": "<p>\u30d7\u30ea\u30f3\u30c8</p>\n",
    "<p>Prompt to complete </p>\n": "<p>\u88dc\u5b8c\u5bfe\u8c61\u306e\u30d7\u30ed\u30f3\u30d7\u30c8</p>\n",
    "<p>Return predicted token </p>\n": "<p>\u4e88\u6e2c\u30c8\u30fc\u30af\u30f3\u3092\u8fd4\u3059</p>\n",
    "<p>Run the model </p>\n": "<p>\u30e2\u30c7\u30eb\u3092\u5b9f\u884c</p>\n",
    "<p>Set the state to use cached activations </p>\n": "<p>\u30ad\u30e3\u30c3\u30b7\u30e5\u3055\u308c\u305f\u30a2\u30af\u30c6\u30a3\u30d9\u30fc\u30b7\u30e7\u30f3\u3092\u4f7f\u7528\u3059\u308b\u3088\u3046\u306b\u72b6\u614b\u3092\u8a2d\u5b9a\u3057\u307e\u3059</p>\n",
    "<p>Setup <a href=\"../utils/cache.html\">cache</a> to cache intermediate key/value pairs for faster generation </p>\n": "<p>\u751f\u6210\u3092\u9ad8\u901f\u5316\u3059\u308b\u305f\u3081\u306b\u3001\u4e2d\u9593\u30ad\u30fc\u3068\u5024\u306e\u30da\u30a2\u3092\u30ad\u30e3\u30c3\u30b7\u30e5\u3059\u308b<a href=\"../utils/cache.html\">\u30ad\u30e3\u30c3\u30b7\u30e5</a>\u3092\u8a2d\u5b9a</p>\n",
    "Generate Text with GPT-NeoX": "GPT-NeoX\u3067\u30c6\u30ad\u30b9\u30c8\u3092\u751f\u6210"
}