Decision Trees
Objectives
After completing this lab you will be able to:
Develop a classification model using the Decision Tree algorithm
In this lab exercise, you will learn a popular machine learning algorithm, Decision Trees. You will use this classification algorithm to build a model from the historical data of patients, and their response to different medications. Then you will use the trained decision tree to predict the class of an unknown patient, or to find a proper drug for a new patient.
Import the Following Libraries:
- numpy (as np)
- pandas
- DecisionTreeClassifier from sklearn.tree
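As a quick sketch, the imports listed above can be written as:

```python
# Libraries used throughout this lab
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
```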
About the dataset
Imagine that you are a medical researcher compiling data for a study. You have collected data about a set of patients, all of whom suffered from the same illness. During their course of treatment, each patient responded to one of five medications: Drug A, Drug B, Drug C, Drug X, or Drug Y. Part of your job is to build a model to find out which drug might be appropriate for a future patient with the same illness. The features of this dataset are the Age, Sex, Blood Pressure, and Cholesterol of the patients, and the target is the drug that each patient responded to.
This is an example of a multiclass classification problem: you can use the training part of the dataset to build a decision tree, and then use it to predict the class of an unknown patient or to prescribe a drug to a new patient.
Now, read the data into a pandas DataFrame:
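A minimal sketch of the loading step. Since the actual Drug.csv file is not bundled here, a tiny inline sample with the columns the lab describes stands in for it:

```python
import io

import pandas as pd

# Hypothetical inline sample standing in for the Drug.csv file;
# with the real file you would call pd.read_csv("Drug.csv") instead.
csv_text = """Age,Sex,BP,Cholesterol,Drug
23,F,HIGH,HIGH,drugY
47,M,LOW,HIGH,drugC
47,M,LOW,HIGH,drugC
28,F,NORMAL,HIGH,drugX
"""
my_data = pd.read_csv(io.StringIO(csv_text))
print(my_data.head())
print(my_data.shape)  # (rows, columns) — the size of the data
```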
Practice
What is the size of the data?
Pre-processing
Using my_data as the Drug.csv data read by pandas, declare the following variables:
- X as the Feature Matrix (data of my_data)
- y as the response vector (target)
Remove the column containing the target name since it doesn't contain numeric values.
As you may have noticed, some features in this dataset are categorical, such as Sex or BP. Unfortunately, scikit-learn's decision trees do not handle categorical variables directly. We can still convert these features to numerical values using pandas.get_dummies(), which converts a categorical variable into dummy/indicator variables.
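The two pre-processing steps above — dropping the non-numeric target column and one-hot encoding the categorical features — could be sketched as follows, with a small hypothetical frame standing in for my_data:

```python
import pandas as pd

# Hypothetical stand-in for my_data (same columns the lab describes)
my_data = pd.DataFrame({
    "Age": [23, 47, 28],
    "Sex": ["F", "M", "F"],
    "BP": ["HIGH", "LOW", "NORMAL"],
    "Cholesterol": ["HIGH", "HIGH", "HIGH"],
    "Drug": ["drugY", "drugC", "drugX"],
})

# Drop the target column, then one-hot encode the categorical features
X = pd.get_dummies(my_data.drop(columns=["Drug"]))
print(X.columns.tolist())
```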
Now we can fill the target variable.
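Filling the target vector is then a single column selection; again, my_data here is a hypothetical stand-in for the loaded frame:

```python
import pandas as pd

# Hypothetical stand-in for the Drug.csv frame loaded earlier
my_data = pd.DataFrame({
    "Age": [23, 47, 28],
    "Drug": ["drugY", "drugC", "drugX"],
})

# The target (response) vector is the Drug column
y = my_data["Drug"]
print(y.values)
```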
Setting up the Decision Tree
We will be using a train/test split on our decision tree. Let's import train_test_split from sklearn.model_selection (older versions of scikit-learn kept it in sklearn.cross_validation). train_test_split will return four different parameters. We will name them:
X_trainset, X_testset, y_trainset, y_testset
The train_test_split will need the parameters:
X, y, test_size=0.3, and random_state=3.
The X and y are the arrays required before the split, the test_size represents the ratio of the testing dataset, and the random_state ensures that we obtain the same splits.
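With those parameters, a sketch of the split; a small hypothetical X and y stand in for the real arrays:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Hypothetical feature matrix (10 samples, 2 features) and target vector
X = np.arange(20).reshape(10, 2)
y = np.array(["drugA", "drugB"] * 5)

X_trainset, X_testset, y_trainset, y_testset = train_test_split(
    X, y, test_size=0.3, random_state=3)

print(X_trainset.shape, y_trainset.shape)  # training dimensions match
print(X_testset.shape, y_testset.shape)    # testing dimensions match
```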
Practice
Print the shape of X_trainset and y_trainset, and ensure that the dimensions match. Then print the shape of X_testset and y_testset, and ensure that those dimensions match as well.
Modeling
We will first create an instance of the DecisionTreeClassifier called drugTree. Inside the classifier, specify criterion="entropy" so we can see the information gain of each node.
Next, we will fit the data with the training feature matrix X_trainset and the training response vector y_trainset.
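A sketch of these two modeling steps, using small hypothetical arrays in place of X_trainset and y_trainset:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical training data standing in for X_trainset / y_trainset
X_trainset = np.array([[23, 1], [47, 0], [28, 1], [61, 0]])
y_trainset = np.array(["drugY", "drugC", "drugX", "drugY"])

# criterion="entropy" makes the splits use information gain
drugTree = DecisionTreeClassifier(criterion="entropy")
drugTree.fit(X_trainset, y_trainset)
print(drugTree.classes_)
```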
Prediction
Let's make some predictions on the testing dataset and store them in a variable called predTree. You can print out predTree and y_testset if you want to visually compare the predictions to the actual values.
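The prediction step could look like this; the fitted tree and the test matrix are hypothetical stand-ins for the ones built earlier:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical stand-ins for the train/test split from earlier
X_trainset = np.array([[23, 25.4], [47, 13.1], [28, 7.8], [61, 18.0]])
y_trainset = np.array(["drugY", "drugC", "drugX", "drugY"])
X_testset = np.array([[50, 20.0], [30, 8.0]])

drugTree = DecisionTreeClassifier(criterion="entropy", random_state=3)
drugTree.fit(X_trainset, y_trainset)

# Predict a drug class for each row of the test set
predTree = drugTree.predict(X_testset)
print(predTree)
```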
Evaluation
Next, let's import metrics from sklearn and check the accuracy of our model. The accuracy classification score computes subset accuracy: the set of labels predicted for a sample must exactly match the corresponding set of labels in y_true.
In multilabel classification, the function returns the subset accuracy. If the entire set of predicted labels for a sample strictly matches with the true set of labels, then the subset accuracy is 1.0; otherwise it is 0.0.
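A sketch of the accuracy check, with hypothetical label arrays standing in for y_testset and predTree (three of the four labels agree, so the score is 0.75):

```python
import numpy as np
from sklearn import metrics

# Hypothetical true and predicted labels
y_testset = np.array(["drugY", "drugX", "drugC", "drugY"])
predTree = np.array(["drugY", "drugX", "drugA", "drugY"])

# Fraction of samples whose predicted label matches the true label
print("DecisionTree's Accuracy:", metrics.accuracy_score(y_testset, predTree))
```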
Visualization
Let's visualize the tree
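One way to render the tree, sketched here with a small hypothetical fitted drugTree: sklearn.tree.export_text gives a plain-text view of the splits, while sklearn.tree.plot_tree would produce a matplotlib figure instead.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical fitted tree standing in for drugTree
X_trainset = np.array([[23, 25.4], [47, 13.1], [28, 7.8]])
y_trainset = np.array(["drugY", "drugC", "drugX"])
drugTree = DecisionTreeClassifier(criterion="entropy").fit(X_trainset, y_trainset)

# Text rendering of the decision rules (feature names are placeholders)
tree_rules = export_text(drugTree, feature_names=["f0", "f1"])
print(tree_rules)
```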