GitHub Repository: tensorflow/docs-l10n
Path: blob/master/site/ko/r1/tutorials/eager/custom_training.ipynb
Kernel: Python 3
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

์‚ฌ์šฉ์ž ์ •์˜ ํ•™์Šต: ๊ธฐ์ดˆ

Note: This document was translated by the TensorFlow community. Community translations are best-effort, so there is no guarantee that they exactly match the official English documentation or reflect its latest content. If you would like to improve this translation, please send a pull request to the tensorflow/docs GitHub repository. To volunteer to write or review documentation translations, please email [email protected].

์ด์ „ ํŠœํ† ๋ฆฌ์–ผ์—์„œ๋Š” ๋จธ์‹ ๋Ÿฌ๋‹์„ ์œ„ํ•œ ๊ธฐ๋ณธ ๊ตฌ์„ฑ ์š”์†Œ์ธ ์ž๋™ ๋ฏธ๋ถ„(automatic differentiation)์„ ์œ„ํ•œ ํ…์„œํ”Œ๋กœ API๋ฅผ ์•Œ์•„๋ณด์•˜์Šต๋‹ˆ๋‹ค. ์ด๋ฒˆ ํŠœํ† ๋ฆฌ์–ผ์—์„œ๋Š” ์ด์ „ ํŠœํ† ๋ฆฌ์–ผ์—์„œ ์†Œ๊ฐœ๋˜์—ˆ๋˜ ํ…์„œํ”Œ๋กœ์˜ ๊ธฐ๋ณธ ์š”์†Œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๊ฐ„๋‹จํ•œ ๋จธ์‹ ๋Ÿฌ๋‹์„ ์ˆ˜ํ–‰ํ•ด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค.

ํ…์„œํ”Œ๋กœ๋Š” ๋ฐ˜๋ณต๋˜๋Š” ์ฝ”๋“œ๋ฅผ ์ค„์ด๊ธฐ ์œ„ํ•ด ์œ ์šฉํ•œ ์ถ”์ƒํ™”๋ฅผ ์ œ๊ณตํ•˜๋Š” ๊ณ ์ˆ˜์ค€ ์‹ ๊ฒฝ๋ง(neural network) API์ธ tf.keras๋ฅผ ํฌํ•จํ•˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ์‹ ๊ฒฝ๋ง์„ ๋‹ค๋ฃฐ ๋•Œ ์ด๋Ÿฌํ•œ ๊ณ ์ˆ˜์ค€์˜ API์„ ๊ฐ•ํ•˜๊ฒŒ ์ถ”์ฒœํ•ฉ๋‹ˆ๋‹ค. ์ด๋ฒˆ ์งง์€ ํŠœํ† ๋ฆฌ์–ผ์—์„œ๋Š” ํƒ„ํƒ„ํ•œ ๊ธฐ์ดˆ๋ฅผ ๊ธฐ๋ฅด๊ธฐ ์œ„ํ•ด ๊ธฐ๋ณธ์ ์ธ ์š”์†Œ๋งŒ์œผ๋กœ ์‹ ๊ฒฝ๋ง ํ›ˆ๋ จ์‹œ์ผœ ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค.

์„ค์ •

import tensorflow.compat.v1 as tf

๋ณ€์ˆ˜

ํ…์„œํ”Œ๋กœ์˜ ํ…์„œ(Tensor)๋Š” ์ƒํƒœ๊ฐ€ ์—†๊ณ , ๋ณ€๊ฒฝ์ด ๋ถˆ๊ฐ€๋Šฅํ•œ(immutable stateless) ๊ฐ์ฒด์ž…๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ๋จธ์‹ ๋Ÿฌ๋‹ ๋ชจ๋ธ์€ ์ƒํƒœ๊ฐ€ ๋ณ€๊ฒฝ๋ (stateful) ํ•„์š”๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, ๋ชจ๋ธ ํ•™์Šต์—์„œ ์˜ˆ์ธก์„ ๊ณ„์‚ฐํ•˜๊ธฐ ์œ„ํ•œ ๋™์ผํ•œ ์ฝ”๋“œ๋Š” ์‹œ๊ฐ„์ด ์ง€๋‚จ์— ๋”ฐ๋ผ ๋‹ค๋ฅด๊ฒŒ(ํฌ๋งํ•˜๊ฑด๋Œ€ ๋” ๋‚ฎ์€ ์†์‹ค๋กœ ๊ฐ€๋Š” ๋ฐฉํ–ฅ์œผ๋กœ)๋™์ž‘ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด ์—ฐ์‚ฐ ๊ณผ์ •์„ ํ†ตํ•ด ๋ณ€ํ™”๋˜์–ด์•ผ ํ•˜๋Š” ์ƒํƒœ๋ฅผ ํ‘œํ˜„ํ•˜๊ธฐ ์œ„ํ•ด ๋ช…๋ นํ˜• ํ”„๋กœ๊ทธ๋ž˜๋ฐ ์–ธ์–ด์ธ ํŒŒ์ด์ฌ์„ ์‚ฌ์šฉ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.

# ํŒŒ์ด์ฌ ๊ตฌ๋ฌธ ์‚ฌ์šฉ x = tf.zeros([10, 10]) x += 2 # ์ด๊ฒƒ์€ x = x + 2์™€ ๊ฐ™์œผ๋ฉฐ, x์˜ ์ดˆ๊ธฐ๊ฐ’์„ ๋ณ€๊ฒฝํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. print(x)

ํ…์„œํ”Œ๋กœ๋Š” ์ƒํƒœ๋ฅผ ๋ณ€๊ฒฝํ•  ์ˆ˜ ์žˆ๋Š” ์—ฐ์‚ฐ์ž๊ฐ€ ๋‚ด์žฅ๋˜์–ด ์žˆ์œผ๋ฉฐ, ์ด๋Ÿฌํ•œ ์—ฐ์‚ฐ์ž๋Š” ์ƒํƒœ๋ฅผ ํ‘œํ˜„ํ•˜๊ธฐ ์œ„ํ•œ ์ €์ˆ˜์ค€ ํŒŒ์ด์ฌ ํ‘œํ˜„๋ณด๋‹ค ์‚ฌ์šฉํ•˜๊ธฐ๊ฐ€ ๋” ์ข‹์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, ๋ชจ๋ธ์—์„œ ๊ฐ€์ค‘์น˜๋ฅผ ๋‚˜ํƒ€๋‚ด๊ธฐ ์œ„ํ•ด์„œ ํ…์„œํ”Œ๋กœ ๋ณ€์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ด ํŽธํ•˜๊ณ  ํšจ์œจ์ ์ž…๋‹ˆ๋‹ค.

ํ…์„œํ”Œ๋กœ ๋ณ€์ˆ˜๋Š” ๊ฐ’์„ ์ €์žฅํ•˜๋Š” ๊ฐ์ฒด๋กœ ํ…์„œํ”Œ๋กœ ์—ฐ์‚ฐ์— ์‚ฌ์šฉ๋  ๋•Œ ์ €์žฅ๋œ ์ด ๊ฐ’์„ ์ฝ์–ด์˜ฌ ๊ฒƒ์ž…๋‹ˆ๋‹ค. tf.assign_sub, tf.scatter_update ๋“ฑ์€ ํ…์„œํ”Œ๋กœ ๋ณ€์ˆ˜์— ์ €์žฅ๋˜์žˆ๋Š” ๊ฐ’์„ ์กฐ์ž‘ํ•˜๋Š” ์—ฐ์‚ฐ์ž์ž…๋‹ˆ๋‹ค.

v = tf.Variable(1.0)
assert v.numpy() == 1.0

# Re-assign the value.
v.assign(3.0)
assert v.numpy() == 3.0

# Use `v` in a TensorFlow operation like tf.square() and reassign the result.
v.assign(tf.square(v))
assert v.numpy() == 9.0
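For instance, tf.assign_sub mentioned above subtracts a value from the stored state in place. A minimal sketch, assuming eager execution as in the rest of this notebook (the variable w below is illustrative and not part of the original notebook):

# Illustrative only: in-place updates on a variable.
w = tf.Variable([1.0, 2.0, 3.0])
w.assign_add([1.0, 1.0, 1.0])  # w is now [2.0, 3.0, 4.0]
w.assign_sub([0.5, 0.5, 0.5])  # w is now [1.5, 2.5, 3.5]
print(w.numpy())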

๋ณ€์ˆ˜๋ฅผ ์‚ฌ์šฉํ•œ ์—ฐ์‚ฐ์€ ๊ทธ๋ž˜๋””์–ธํŠธ๊ฐ€ ๊ณ„์‚ฐ๋  ๋•Œ ์ž๋™์ ์œผ๋กœ ์ถ”์ ๋ฉ๋‹ˆ๋‹ค. ์ž„๋ฒ ๋”ฉ(embedding)์„ ๋‚˜ํƒ€๋‚ด๋Š” ๋ณ€์ˆ˜์˜ ๊ฒฝ์šฐ ๊ธฐ๋ณธ์ ์œผ๋กœ ํฌ์†Œ ํ…์„œ(sparse tensor)๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์—…๋ฐ์ดํŠธ๋ฉ๋‹ˆ๋‹ค. ์ด๋Š” ์—ฐ์‚ฐ๊ณผ ๋ฉ”๋ชจ๋ฆฌ์— ๋”์šฑ ํšจ์œจ์ ์ž…๋‹ˆ๋‹ค.

๋˜ํ•œ ๋ณ€์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์€ ์ฝ”๋“œ๋ฅผ ์ฝ๋Š” ๋…์ž์—๊ฒŒ ์ƒํƒœ๊ฐ€ ๋ณ€๊ฒฝ๋  ์ˆ˜ ์žˆ๋‹ค๋Š” ๊ฒƒ์„ ์•Œ๋ ค์ฃผ๋Š” ์†์‰ฌ์šด ๋ฐฉ๋ฒ•์ž…๋‹ˆ๋‹ค.

์˜ˆ: ์„ ํ˜• ๋ชจ๋ธ ํ›ˆ๋ จ

์ง€๊ธˆ๊นŒ์ง€ ๋ช‡ ๊ฐ€์ง€ ๊ฐœ๋…์„ ์„ค๋ช…ํ–ˆ์Šต๋‹ˆ๋‹ค. ๊ฐ„๋‹จํ•œ ๋ชจ๋ธ์„ ๊ตฌ์ถ•ํ•˜๊ณ  ํ•™์Šต์‹œํ‚ค๊ธฐ ์œ„ํ•ด ---Tensor, GradientTape, Variable --- ๋“ฑ์„ ์‚ฌ์šฉํ•˜์˜€๊ณ , ์ด๋Š” ์ผ๋ฐ˜์ ์œผ๋กœ ๋‹ค์Œ์˜ ๊ณผ์ •์„ ํฌํ•จํ•ฉ๋‹ˆ๋‹ค.

  1. ๋ชจ๋ธ ์ •์˜

  2. ์†์‹ค ํ•จ์ˆ˜ ์ •์˜

  3. ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ ๊ฐ€์ ธ์˜ค๊ธฐ

  4. ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ์—์„œ ์‹คํ–‰, ๋ฐ์ดํ„ฐ์— ์ตœ์ ํ™”ํ•˜๊ธฐ ์œ„ํ•ด "์˜ตํ‹ฐ๋งˆ์ด์ €(optimizer)"๋ฅผ ์‚ฌ์šฉํ•œ ๋ณ€์ˆ˜ ์กฐ์ •

์ด๋ฒˆ ํŠœํ† ๋ฆฌ์–ผ์—์„œ๋Š” ์„ ํ˜• ๋ชจ๋ธ์˜ ๊ฐ„๋‹จํ•œ ์˜ˆ์ œ๋ฅผ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. f(x) = x * W + b, ๋ชจ๋ธ์€ W ์™€ b ๋‘ ๋ณ€์ˆ˜๋ฅผ ๊ฐ€์ง€๊ณ  ์žˆ๋Š” ์„ ํ˜•๋ชจ๋ธ์ด๋ฉฐ, ์ž˜ ํ•™์Šต๋œ ๋ชจ๋ธ์ด W = 3.0 and b = 2.0์˜ ๊ฐ’์„ ๊ฐ–๋„๋ก ํ•ฉ์„ฑ ๋ฐ์ดํ„ฐ๋ฅผ ๋งŒ๋“ค๊ฒ ์Šต๋‹ˆ๋‹ค.

๋ชจ๋ธ ์ •์˜

๋ณ€์ˆ˜์™€ ์—ฐ์‚ฐ์„ ์บก์Аํ™”ํ•˜๊ธฐ ์œ„ํ•œ ๊ฐ„๋‹จํ•œ ํด๋ž˜์Šค๋ฅผ ์ •์˜ํ•ด๋ด…์‹œ๋‹ค.

class Model(object):
    def __init__(self):
        # Initialize the variables to (5.0, 0.0).
        # In practice, these should be initialized to random values.
        self.W = tf.Variable(5.0)
        self.b = tf.Variable(0.0)

    def __call__(self, x):
        return self.W * x + self.b

model = Model()

assert model(3.0).numpy() == 15.0

์†์‹ค ํ•จ์ˆ˜ ์ •์˜

์†์‹ค ํ•จ์ˆ˜๋Š” ์ฃผ์–ด์ง„ ์ž…๋ ฅ์— ๋Œ€ํ•œ ๋ชจ๋ธ์˜ ์ถœ๋ ฅ์ด ์›ํ•˜๋Š” ์ถœ๋ ฅ๊ณผ ์–ผ๋งˆ๋‚˜ ์ž˜ ์ผ์น˜ํ•˜๋Š”์ง€๋ฅผ ์ธก์ •ํ•ฉ๋‹ˆ๋‹ค. ํ‰๊ท  ์ œ๊ณฑ ์˜ค์ฐจ(mean square error)๋ฅผ ์ ์šฉํ•œ ์†์‹ค ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค.

def loss(predicted_y, desired_y):
    return tf.reduce_mean(tf.square(predicted_y - desired_y))
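As a quick sanity check (not part of the original notebook): for predictions [1.0, 3.0] against targets [0.0, 0.0], the squared errors are 1.0 and 9.0, so the mean squared error is 5.0.

# Illustrative only: a small check of the loss function defined above.
assert loss(tf.constant([1.0, 3.0]), tf.constant([0.0, 0.0])).numpy() == 5.0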

ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ ๊ฐ€์ ธ์˜ค๊ธฐ

์•ฝ๊ฐ„์˜ ์žก์Œ๊ณผ ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ๋ฅผ ํ•ฉ์นฉ๋‹ˆ๋‹ค.

TRUE_W = 3.0
TRUE_b = 2.0
NUM_EXAMPLES = 1000

inputs  = tf.random_normal(shape=[NUM_EXAMPLES])
noise   = tf.random_normal(shape=[NUM_EXAMPLES])
outputs = inputs * TRUE_W + TRUE_b + noise

๋ชจ๋ธ์„ ํ›ˆ๋ จ์‹œํ‚ค๊ธฐ ์ „์—, ๋ชจ๋ธ์˜ ํ˜„์žฌ ์ƒํƒœ๋ฅผ ์‹œ๊ฐํ™”ํ•ฉ์‹œ๋‹ค. ๋ชจ๋ธ์˜ ์˜ˆ์ธก์„ ๋นจ๊ฐ„์ƒ‰์œผ๋กœ, ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ๋ฅผ ํŒŒ๋ž€์ƒ‰์œผ๋กœ ๊ตฌ์„ฑํ•ฉ๋‹ˆ๋‹ค.

import matplotlib.pyplot as plt

plt.scatter(inputs, outputs, c='b')
plt.scatter(inputs, model(inputs), c='r')
plt.show()

print('Current loss: ')
print(loss(model(inputs), outputs).numpy())

ํ›ˆ๋ จ ๋ฃจํ”„ ์ •์˜

์ด์ œ ๋„คํŠธ์›Œํฌ์™€ ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ๊ฐ€ ์ค€๋น„๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ๋ชจ๋ธ์˜ ๋ณ€์ˆ˜(W ์™€ b)๋ฅผ ์—…๋ฐ์ดํŠธํ•˜๊ธฐ ์œ„ํ•ด ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ›ˆ๋ จ์‹œ์ผœ ๋ณด์ฃ . ๊ทธ๋ฆฌ๊ณ  ๊ฒฝ์‚ฌ ํ•˜๊ฐ•๋ฒ•(gradient descent)์„ ์‚ฌ์šฉํ•˜์—ฌ ์†์‹ค์„ ๊ฐ์†Œ์‹œํ‚ต๋‹ˆ๋‹ค. ๊ฒฝ์‚ฌ ํ•˜๊ฐ•๋ฒ•์—๋Š” ์—ฌ๋Ÿฌ๊ฐ€์ง€ ๋ฐฉ๋ฒ•์ด ์žˆ์œผ๋ฉฐ, tf.train.Optimizer ์— ๊ตฌํ˜„๋˜์–ด์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๊ตฌํ˜„์„ ์‚ฌ์šฉํ•˜๋Š”๊ฒƒ์„ ๊ฐ•๋ ฅํžˆ ์ถ”์ฒœ๋“œ๋ฆฝ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์ด๋ฒˆ ํŠœํ† ๋ฆฌ์–ผ์—์„œ๋Š” ๊ธฐ๋ณธ์ ์ธ ๋ฐฉ๋ฒ•์„ ์‚ฌ์šฉํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค.

def train(model, inputs, outputs, learning_rate):
    with tf.GradientTape() as t:
        current_loss = loss(model(inputs), outputs)
    dW, db = t.gradient(current_loss, [model.W, model.b])
    model.W.assign_sub(learning_rate * dW)
    model.b.assign_sub(learning_rate * db)
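For comparison, here is a minimal sketch (not part of the original notebook, and assuming eager execution as above) of the same step written with one of the built-in optimizers mentioned earlier; tf.train.GradientDescentOptimizer applies the same W -= learning_rate * dW update via apply_gradients.

# Illustrative only: the same update step delegated to a built-in optimizer.
def train_with_optimizer(model, inputs, outputs, learning_rate):
    optimizer = tf.train.GradientDescentOptimizer(learning_rate)
    with tf.GradientTape() as t:
        current_loss = loss(model(inputs), outputs)
    grads = t.gradient(current_loss, [model.W, model.b])
    optimizer.apply_gradients(zip(grads, [model.W, model.b]))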

๋งˆ์ง€๋ง‰์œผ๋กœ, ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ๋ฅผ ๋ฐ˜๋ณต์ ์œผ๋กœ ์‹คํ–‰ํ•˜๊ณ , W ์™€ b์˜ ๋ณ€ํ™” ๊ณผ์ •์„ ํ™•์ธํ•ฉ๋‹ˆ๋‹ค.

model = Model()

# Collect the history of W and b values to plot later.
Ws, bs = [], []
epochs = range(10)
for epoch in epochs:
    Ws.append(model.W.numpy())
    bs.append(model.b.numpy())
    current_loss = loss(model(inputs), outputs)

    train(model, inputs, outputs, learning_rate=0.1)
    print('Epoch %2d: W=%1.2f b=%1.2f, loss=%2.5f' %
          (epoch, Ws[-1], bs[-1], current_loss))

# Plot the collected values.
plt.plot(epochs, Ws, 'r',
         epochs, bs, 'b')
plt.plot([TRUE_W] * len(epochs), 'r--',
         [TRUE_b] * len(epochs), 'b--')
plt.legend(['W', 'b', 'true W', 'true b'])
plt.show()

๋‹ค์Œ ๋‹จ๊ณ„

์ด๋ฒˆ ํŠœํ† ๋ฆฌ์–ผ์—์„œ๋Š” ๋ณ€์ˆ˜๋ฅผ ๋‹ค๋ฃจ์—ˆ์œผ๋ฉฐ, ์ง€๊ธˆ๊นŒ์ง€ ๋…ผ์˜๋œ ํ…์„œํ”Œ๋กœ์˜ ๊ธฐ๋ณธ ์š”์†Œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๊ฐ„๋‹จํ•œ ์„ ํ˜• ๋ชจ๋ธ์„ ๊ตฌ์ถ•ํ•˜๊ณ  ํ›ˆ๋ จ์‹œ์ผฐ์Šต๋‹ˆ๋‹ค.

์ด๋ก ์ ์œผ๋กœ, ํ…์„œํ”Œ๋กœ๋ฅผ ๋จธ์‹ ๋Ÿฌ๋‹ ์—ฐ๊ตฌ์— ์‚ฌ์šฉํ•˜๊ธฐ ์œ„ํ•ด ์•Œ์•„์•ผ ํ•  ๊ฒƒ์ด ๋งค์šฐ ๋งŽ์Šต๋‹ˆ๋‹ค. ์‹ค์ œ๋กœ ์‹ ๊ฒฝ๋ง์— ์žˆ์–ด tf.keras์™€ ๊ฐ™์€ ๊ณ ์ˆ˜์ค€ API๋Š” ๊ณ ์ˆ˜์ค€ ๊ตฌ์„ฑ ์š”์†Œ("์ธต"์œผ๋กœ ๋ถˆ๋ฆฌ๋Š”)๋ฅผ ์ œ๊ณตํ•˜๊ณ , ์ €์žฅ ๋ฐ ๋ณต์›์„ ์œ„ํ•œ ์œ ํ‹ธ๋ฆฌํ‹ฐ, ์†์‹ค ํ•จ์ˆ˜ ๋ชจ์Œ, ์ตœ์ ํ™” ์ „๋žต ๋ชจ์Œ ๋“ฑ์„ ์ œ๊ณตํ•˜๊ธฐ ๋•Œ๋ฌธ์— ๋”์šฑ ํŽธ๋ฆฌํ•ฉ๋‹ˆ๋‹ค.