GitHub Repository: DataScienceUWL/DS775
Path: blob/main/Lessons/Lesson 11 - Decision Analysis/Lesson_11.ipynb
Kernel: Python 3 (system-wide)
# Execute First
# for playing videos, customize height and width if desired
# for best results keep 16:9 ratio
def play_video(vid_name, w=640, h=360):
    from IPython.display import display, IFrame
    from IPython.core.display import HTML
    vid_path = "https://media.uwex.edu/content/ds/ds775_r19/" + vid_name + "/index.html"
    print(vid_path)
    hlink = '<a href="' + vid_path + '" target="_blank">Open video in new tab</a>'
    display(IFrame(vid_path, width=w, height=h))
    display(HTML(hlink))

Lesson 11: Decision Analysis

Decision analysis is a framework for rational decision making when the outcomes are uncertain. We'll introduce all of the ideas in the context of a prototype example explained below. This is the same example used in the textbook, but we'll add some different perspectives. Read through the example to get an idea of the problem before we start discussing it.

We present a series of short videos below. Because there are many videos, we have put the Self Assessment Questions and Solutions in two separate notebooks in the same folder as this lesson notebook, and we refer to the self assessment questions by number here. We suggest you work through this notebook and the Self Assessment Questions in parallel, pausing after each video to attempt the corresponding question. The Silver Decisions JSON files for Goferbroke and for the example in the self assessments are in the Silver Decisions Files subdirectory.

Prototype Problem - GOFERBROKE

The GOFERBROKE COMPANY owns a tract of land that may contain oil. A consulting geologist has reported to management that she believes there is one chance in four of oil. Because of this prospect, another oil company has offered to purchase the land for $90,000. However, Goferbroke is considering holding the land in order to drill for oil itself. The cost of drilling is $100,000. If oil is found, the resulting expected revenue will be $800,000, so the company’s expected profit (after deducting the cost of drilling) will be $700,000. A loss of $100,000 (the drilling cost) will be incurred if the land is dry (no oil).

With or without experimentation?

Often some testing is done to reduce the level of uncertainty about the outcome. For example, a new product might be introduced in a test market to gain insight into whether it is likely to be successful. The testing is referred to as experimentation and we will talk about decisions with experimentation and without experimentation.

Decision Analysis without Experimentation

Before we try to choose an optimal decision it helps to summarize the problem information in a payoff table or a decision tree to prepare for analysis. We show how to do this in the video below.

# execute for video play_video("ds775_lesson11_prototype-problem-goferbroke")
<IPython.lib.display.IFrame at 0x7f0ced8709a0>
display(IFrame("https://media.uwex.edu/content/ds/ds775_r19/ds775_lesson11_prototype-problem-goferbroke/index.html",width=640,height=360))
<IPython.lib.display.IFrame at 0x7f0ced871b70>
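For reference, here is a minimal sketch of the Goferbroke payoff table as a small Python dictionary, with payoffs in thousands of dollars taken from the problem statement above. The same little table is repeated in the later code sketches so that each cell can run on its own.

# Goferbroke payoff table (payoffs in thousands of dollars, from the problem statement)
payoffs = {
    "Drill for oil": {"Oil": 700, "Dry": -100},
    "Sell the land": {"Oil": 90, "Dry": 90},
}
prior = {"Oil": 0.25, "Dry": 0.75}  # prior probabilities of the states of nature

for alternative, row in payoffs.items():
    print(f"{alternative}: {row}")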

Try Self Assessment Question 1

Decision Strategies

In this section we present four different strategies for decision making in this context.

Pessimistic Decision Strategy (Maximin)

At each chance node assume that the worst case outcome occurs and then choose the alternative that optimizes among these worst case outcomes. Think "best of the worst."

For maximizing a payoff this is called a maximin strategy: at each chance node take the outcome with the smallest payoff, then choose the alternative that maximizes these minimum payoffs.

# execute for video play_video("ds775_lesson11_pessimistic-strategy")
<IPython.lib.display.IFrame at 0x7f0ced872fb0>
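Here is a minimal sketch of the maximin computation on the payoff table from earlier (payoffs in thousands of dollars); the table is repeated so the cell runs on its own.

# Maximin (pessimistic): worst payoff for each alternative, then the best of those worst cases
payoffs = {
    "Drill for oil": {"Oil": 700, "Dry": -100},
    "Sell the land": {"Oil": 90, "Dry": 90},
}
worst_case = {alt: min(row.values()) for alt, row in payoffs.items()}
print(worst_case)                                              # {'Drill for oil': -100, 'Sell the land': 90}
print("Maximin choice:", max(worst_case, key=worst_case.get))  # Sell the land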

Try Self Assessment Question 2

Optimistic Decision Strategy (Maximax)

This one isn't discussed in the textbook.

At each chance node assume that the best case outcome occurs and then choose the alternative that optimizes among these best case outcomes. Think "best of the best."

For maximizing a payoff this is called a maximax strategy: at each chance node take the outcome with the largest payoff, then choose the alternative that maximizes these maximum payoffs.

# execute for video play_video("ds775_lesson11_optimistic-strategy")
<IPython.lib.display.IFrame at 0x7f0ced8720b0>
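Here is the corresponding minimal sketch for the maximax computation (payoffs in thousands of dollars).

# Maximax (optimistic): best payoff for each alternative, then the best of those best cases
payoffs = {
    "Drill for oil": {"Oil": 700, "Dry": -100},
    "Sell the land": {"Oil": 90, "Dry": 90},
}
best_case = {alt: max(row.values()) for alt, row in payoffs.items()}
print(best_case)                                             # {'Drill for oil': 700, 'Sell the land': 90}
print("Maximax choice:", max(best_case, key=best_case.get))  # Drill for oil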

Try Self Assessment Question 3

Maximum Likelihood Strategy

At each chance node assume the most probable outcome occurs then choose the alternative that optimizes among these most probable outcomes. Think "best of the most likely."

# execute for video play_video("ds775_lesson11_maximum-likelihood")
<IPython.lib.display.IFrame at 0x7f0ced8718a0>
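Here is a minimal sketch of the maximum likelihood strategy for the same table: identify the most probable state of nature, then pick the best alternative for that state.

# Maximum likelihood: focus on the most probable state of nature, then take the best payoff there
payoffs = {
    "Drill for oil": {"Oil": 700, "Dry": -100},
    "Sell the land": {"Oil": 90, "Dry": 90},
}
prior = {"Oil": 0.25, "Dry": 0.75}
most_likely_state = max(prior, key=prior.get)                         # 'Dry'
choice = max(payoffs, key=lambda alt: payoffs[alt][most_likely_state])
print("Most likely state:", most_likely_state)
print("Maximum likelihood choice:", choice)                           # Sell the land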

Try Self Assessment Question 4

Bayes' Decision Rule

At each chance node compute the expected (average) payoff then choose the alternative that optimizes among these expected payoffs. Think "best of the averages."

If you're unfamiliar with the idea of the expected value of a discrete random variable, watch the first video below; otherwise skip it and watch the second one, which shows the strategy applied to the Goferbroke problem. Bayes' decision rule is the most commonly used strategy, and we'll focus on it for the remainder of this unit.

# execute for video play_video("ds775_lesson11_random-variable")
<IPython.lib.display.IFrame at 0x7f0ced871ed0>
# execute for video play_video("ds775_lesson11_bayes-decision-rule")
<IPython.lib.display.IFrame at 0x7f0ced870a90>
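For comparison with the video, here is a minimal sketch of Bayes' decision rule applied to the same table: compute the expected payoff of each alternative under the prior probabilities and take the largest.

# Bayes' decision rule: expected payoff of each alternative under the prior probabilities
payoffs = {
    "Drill for oil": {"Oil": 700, "Dry": -100},
    "Sell the land": {"Oil": 90, "Dry": 90},
}
prior = {"Oil": 0.25, "Dry": 0.75}
expected = {alt: sum(prior[s] * row[s] for s in prior) for alt, row in payoffs.items()}
print(expected)                                          # {'Drill for oil': 100.0, 'Sell the land': 90.0}
print("Bayes' choice:", max(expected, key=expected.get)) # Drill for oil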

Try Self Assessment Question 5

What if you're minimizing instead?

Suppose you're trying to minimize cost instead of maximizing payoff. You've got two possible options:

  • Option 1: Negate all of the costs and maximize the negative cost. This will yield the decision strategy that minimizes cost.

  • Option 2: Adjust the decision strategy to minimize.

For example, if you want to use the Pessimistic Decision Strategy to minimize cost, then at each chance node choose the outcome with the largest cost and then minimize these maximum costs. This would be a minimax strategy.

Minimization Example

A nuclear power company is deciding whether to build a nuclear plant at site A or site B. The cost of building the plant at site A is $10 million but there is a 20% chance of an earthquake at site A. If an earthquake occurs, then construction will be halted and the plant will be built (for additional cost) at site B. The cost of building the plant at site B is $20 million.

# execute for video play_video("ds775_lesson11_minimize-cost-example")
<IPython.lib.display.IFrame at 0x7f0ced8712a0>
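As a quick check on the arithmetic, here is a minimal sketch of the expected-cost comparison (costs in millions of dollars). It assumes that when the earthquake halts construction at site A, the full cost of building at site B is paid on top of the $10 million already spent; if the video treats the additional cost differently, follow the video.

# Expected cost of each alternative (costs in millions of dollars)
# Assumption: an earthquake at site A means paying the full site B cost in addition to site A's cost.
p_quake = 0.20
expected_cost_A = 10 + p_quake * 20    # 10 + 0.2 * 20 = 14.0
cost_B = 20
print("Start at site A, expected cost:", expected_cost_A)
print("Build at site B, cost:", cost_B)
# Minimizing expected cost (Bayes' rule applied to costs) favors starting at site A.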

Sensitivity Analysis

How does the optimal strategy change if a payoff amount changes or if the prior probabilities of the states of nature change? Is the optimal strategy robust or is it sensitive to a small change in one of the parameters?

In the Goferbroke problem, if the probability of oil is larger we should be more inclined to drill, and if the probability of oil is smaller we should be more inclined to sell the land. But at what point should we change strategies?

In the example shown in the next video we'll vary the prior probability of oil, $p$, to see how the optimal strategy depends on this parameter. The Desmos graph we use in the video can be found here: Desmos Goferbroke Sensitivity.

# execute for video play_video("ds775_lesson11_sensitivity-analysis")
<IPython.lib.display.IFrame at 0x7f0ced871420>
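As a rough check on the break-even point explored with the Desmos graph, here is a minimal sketch. With payoffs in thousands of dollars, drilling has expected payoff $700p - 100(1-p) = 800p - 100$ while selling always pays 90, so the two strategies tie where $800p - 100 = 90$.

# Expected payoff of drilling vs. selling as a function of the prior probability of oil, p
import numpy as np

p = np.linspace(0, 1, 101)
ev_drill = 700 * p - 100 * (1 - p)     # = 800*p - 100
ev_sell = np.full_like(p, 90.0)
# plotting ev_drill and ev_sell against p would reproduce a graph like the Desmos one

p_break_even = 190 / 800               # solve 800*p - 100 = 90
print("Drilling has the larger expected payoff whenever p >", p_break_even)   # 0.2375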

Try Self Assessment Question 6

Decision Analysis with Experimentation

If possible, we'd like to do some testing to reduce the level of uncertainty about the outcomes. Conducting a test is called experimentation in this context (this language comes from statistics). For example, we might pay a geological consultant to collect additional information and predict whether or not there is oil. Ultimately we want to use the updated probabilities to analyze a larger decision tree like the one below:

# execute for video play_video("ds775_lesson11_decision-analysis-w-experiment")
<IPython.lib.display.IFrame at 0x7f0ced871420>

Try Self Assessment Question 7

To get an idea of whether the testing/experimentation is worth doing, we can compute the Expected Value of Perfect Information.

Expected Value of Perfect Information (EVPI)

EVPI answers the question: "How much larger is the expected (average) payoff if we have a perfect diagnostic test?"

EVPI = expected payoff with perfect information - expected payoff without experimentation

The video below shows how to find EVPI for the Goferbroke problem:

# execute for video play_video("ds775_lesson11_expected-value-perfect-info")
<IPython.lib.display.IFrame at 0x7f0ced871ff0>
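For comparison with the video, here is a minimal sketch of the EVPI calculation (payoffs in thousands of dollars, prior probability of oil 0.25 from the problem statement).

# EVPI = expected payoff with perfect information - expected payoff without experimentation
payoffs = {
    "Drill for oil": {"Oil": 700, "Dry": -100},
    "Sell the land": {"Oil": 90, "Dry": 90},
}
prior = {"Oil": 0.25, "Dry": 0.75}

# With perfect information we learn the state of nature first, then pick the best alternative for it.
ev_perfect_info = sum(prior[s] * max(payoffs[alt][s] for alt in payoffs) for s in prior)

# Without experimentation, Bayes' decision rule gives the best expected payoff.
ev_no_experiment = max(sum(prior[s] * row[s] for s in prior) for row in payoffs.values())

print("EVPI =", ev_perfect_info - ev_no_experiment)   # 242.5 - 100 = 142.5, i.e. $142,500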

Given the results of the experiment we can adjust the probabilities of the outcomes to reflect the experimental findings. For example, if the geological consultant predicts that there is oil at the current site then we may adjust the probability of finding oil upward. These post-experiment probabilities are called posterior probabilities. The original probabilities we have from before the experiment are called prior probabilities. Bayes' Theorem can be used to go between prior and posterior probabilities, but before we introduce that let's review a bit about conditional probability first.

Try Self Assessment Question 8

Conditional Probability

Conditional probability is a measure of the probability of an event occurring, given that another event has already occurred. We give a simple example using dice below, but if you want additional background we suggest you study Chapter 3 in the free textbook OpenIntro Statistics.

Dice Example

In the next video we'll compute the conditional probabilities intuitively and with formulas.

# execute for video play_video("ds775_lesson11_probability-basics")
<IPython.lib.display.IFrame at 0x7f0ced870250>

Our review consists of working through just a couple of examples.
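As a concrete illustration (a hypothetical dice question, not necessarily the one worked in the video), here is a brute-force sketch of a conditional probability: the probability that the sum of two fair dice is at least 9, given that the first die shows a 5.

# P(sum >= 9 | first die is 5) by counting equally likely outcomes of two fair dice
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))          # 36 equally likely (die1, die2) pairs
given = [o for o in outcomes if o[0] == 5]               # first die shows 5: 6 outcomes
both = [o for o in given if sum(o) >= 9]                 # and sum >= 9: (5,4), (5,5), (5,6)

print(len(both) / len(given))                            # 3/6 = 0.5, counting within the condition
print((len(both) / 36) / (len(given) / 36))              # same answer via P(A and B) / P(B)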

Try Self Assessment Question 9

Understanding Posterior Probabilities

In the context of decision analysis, the conditional probabilities of interest are called posterior probabilities. We discuss these a bit in the next video.

# execute for video play_video("ds775_lesson11_posterior-probabilities")
<IPython.lib.display.IFrame at 0x7f0ced871720>

Goferbroke Example with Contingency Table

Contingency tables, also known as classification matrices, are a great way to work with and understand conditional probabilities and Bayes' Theorem. In the video below we derive the posterior probabilities for the Goferbroke problem using a contingency table.

# execute for video play_video("ds775_lesson11_posterior-probabilities-from-table")
<IPython.lib.display.IFrame at 0x7f0ced8717e0>

Try Self Assessment Question 10

The contingency table approach implicitly uses Bayes' Theorem to compute the posterior probabilities. The posterior probabilities are the conditional probabilities of the states of nature given the experimental findings. The formula version of Bayes' Theorem shows how to compute the posterior probability of the state of nature $S_j$ given the experimental finding $F_i$. The quantities on the right side of the formula are the prior probabilities $P(S_j)$ and the conditional probabilities of the findings given the states of nature, $P(F_i | S_j)$.

$$P(S_j \mid F_i) = \frac{P(S_j \cap F_i)}{P(F_i)} = \frac{P(S_j \cap F_i)}{P(F_i \cap S_1) + P(F_i \cap S_2) + \cdots} = \frac{P(F_i \mid S_j)\, P(S_j)}{P(F_i \mid S_1)\, P(S_1) + P(F_i \mid S_2)\, P(S_2) + \cdots}$$
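Here is a minimal sketch of this formula in Python for the Goferbroke setting. The prior probabilities come from the problem statement; the likelihoods $P(F_i | S_j)$ below are illustrative placeholders for the seismic survey reliabilities used in the videos, so substitute the lesson's values if they differ.

# Bayes' Theorem: posterior P(state | finding) from the priors and P(finding | state)
prior = {"Oil": 0.25, "Dry": 0.75}           # prior probabilities of the states of nature
likelihood = {                               # P(finding | state) -- illustrative placeholder values
    "Favorable seismic":   {"Oil": 0.6, "Dry": 0.2},
    "Unfavorable seismic": {"Oil": 0.4, "Dry": 0.8},
}

def posterior(finding):
    # denominator: total probability of the finding across all states of nature
    p_finding = sum(likelihood[finding][s] * prior[s] for s in prior)
    return {s: likelihood[finding][s] * prior[s] / p_finding for s in prior}

print(posterior("Favorable seismic"))        # {'Oil': 0.5, 'Dry': 0.5} with these placeholder numbers
print(posterior("Unfavorable seismic"))      # roughly {'Oil': 0.14, 'Dry': 0.86}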

In the next video we'll use the contingency table approach to motivate Bayes' Theorem.

# execute for video play_video("ds775_lesson11_from-contingency-table-to-bayes")
<IPython.lib.display.IFrame at 0x7f0ced871b10>

Try Self Assessment Question 11

Goferbroke Example with Probability Trees

An alternative way to understand and derive posterior probabilities is to exploit the natural tree structure of conditional probabilities:

# execute for video play_video("ds775_lesson11_posterior-probabilities-flipping-trees")
<IPython.lib.display.IFrame at 0x7f0ced870ac0>

Flipping a probability tree in Silver Decisions

Silver Decisions can also be used to compute the posterior probabilities by flipping a probability tree. This is shown in the video below:

# execute for video play_video("ds775_lesson11_flipping-probability-trees-silver")
<IPython.lib.display.IFrame at 0x7f0ced8720b0>

Try Self Assessment Question 12

Another Example for Practice

The video below walks through a posterior probability example that comes from diagnostic testing.

# execute for video play_video("ds775_lesson11_posterior-probabilities-final-example")
<IPython.lib.display.IFrame at 0x7f0ced8731c0>
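If you want to experiment after watching, here is a minimal sketch of the same kind of calculation with made-up numbers (these are not the numbers from the video): a condition with 1% prevalence and a test with 95% sensitivity and 90% specificity.

# Posterior probability of the condition given a positive test (made-up illustrative numbers)
prevalence = 0.01      # prior P(condition)
sensitivity = 0.95     # P(positive | condition)
specificity = 0.90     # P(negative | no condition)

p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
p_condition_given_positive = sensitivity * prevalence / p_positive
print(round(p_condition_given_positive, 3))   # about 0.088 -- small, despite the accurate-looking test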

Putting it together in Silver Decisions

Now we are going to switch gears and analyze both decisions together. Should the Goferbroke company hire the consultant and should they drill for oil? In the video below we'll show how to enter and analyze a complete decision tree in Silver Decisions for the Goferbroke problem. We'll also demonstrate how to find the Expected Value of the Experiment (EVE):

EVE answers the question: "How much larger is the expected (average) payoff if we gather information by performing an experiment?"

EVE = expected payoff with experimentation - expected payoff without experimentation

# execute for video play_video("ds775_lesson11_full-goforbroke-in-silver-decisions")
<IPython.lib.display.IFrame at 0x7f0ced871f30>

Sensitivity Analysis Including Experimentation

Thus far we haven't encountered any computations that aren't simple enough to do by hand. However, it's often important to understand how the optimal strategy changes as the numbers change, for example if one of the payoffs increases or if the probabilities of the states of nature change. To provide an example, we'll analyze how the optimal strategy depends on the prior probability of finding oil in the Goferbroke problem. This kind of sensitivity analysis is complex enough that it is worthwhile to use decision analysis software such as Silver Decisions.

Before we demonstrate Silver Decisions we'll show you how to derive posterior probabilities when you have an unspecified parameter such as the prior probability of finding oil, $p$.

# execute for video play_video("ds775_lesson11_posterior-probabilities-sensitivity-analysis")
<IPython.lib.display.IFrame at 0x7f0ced871270>

In the next video we show how to complete a sensitivity analysis using Silver Decisions.

# execute for video play_video("ds775_lesson11_silver-decisions-sensitivity-analysis")
<IPython.lib.display.IFrame at 0x7f0ced870a90>