
Machine Learning

Machine learning is a field of study that gives computers the ability to automatically learn and improve from experience without being explicitly programmed.

ML Approach


Supervised, Unsupervised, and Reinforcement Learning


Supervised Machine Learning

Supervised machine learning is a set of algorithms that train on labelled historical data and then predict outputs for new data using what was learned from the training dataset. Because of its accuracy and low time complexity, it is one of the most common types of machine learning.


Applications

  • Spam filtering

  • Facial recognition

  • Disease identification

  • Fraud detection

Some common ML Algorithms

  • Linear Regression, Logistic Regression, KNN, Decision Tree

Steps

1 - Import data
2 - Data statistics
3 - Data preparation
4 - Features (X) and target (y)
5 - Train/test split
6 - Evaluate (compare y_pred with y_test)
7 - Errors
8 - Model optimization (cross-validation or hyperparameter tuning)

  1. Test data should always be a held-out subset of the actual data, never data the model was trained on

Model Correction

  1. K-fold cross-validation

Let's start machine learning with the simple Iris dataset

Scikit-Learn

  • A machine learning library for the Python language

  • Contains tools for machine learning algorithms and statistical modelling

Installation

  • conda install scikit-learn

KNN Introduction

  • K nearest neighbors is a simple algorithm that stores all available cases and classifies new cases based on a similarity measure (e.g., distance functions).


Let's understand how KNN works


Important features

K-Nearest Neighbours (KNN) is one of the simplest machine learning algorithms, based on the supervised learning technique.

  • The K-NN algorithm assumes similarity between the new case/data and the available cases and puts the new case into the category most similar to the available categories.

  • The K-NN algorithm stores all the available data and classifies a new data point based on similarity. This means that when new data appears, it can easily be classified into a well-suited category using the K-NN algorithm.

  • The K-NN algorithm can be used for regression as well as classification, but it is mostly used for classification problems.

  • K-NN is a non-parametric algorithm, which means it does not make any assumption about the underlying data.

  • It is also called a lazy learner algorithm because it does not learn from the training set immediately; instead it stores the dataset and, at the time of classification, performs an action on the dataset.

  • At the training phase, the KNN algorithm just stores the dataset; when it gets new data, it classifies that data into the category most similar to the new data.

KNN Algorithm

The working of K-NN can be explained with the following steps:

  • Step-1: Select the number K of neighbors.

  • Step-2: Calculate the Euclidean distance between the new data point and the training points.

  • Step-3: Take the K nearest neighbors according to the calculated Euclidean distance.


  • Step-4: Among these K neighbors, count the number of data points in each category.

  • Step-5: Assign the new data point to the category for which the number of neighbors is maximum.

  • Step-6: Our model is ready.

  • A case is classified by a majority vote of its neighbors, being assigned to the class most common amongst its K nearest neighbors as measured by a distance function. If K = 1, the case is simply assigned to the class of its nearest neighbor. A minimal from-scratch sketch of this procedure appears below.

  • These distance measures are only valid for continuous variables; for categorical variables the Hamming distance must be used. This also raises the issue of standardizing the numerical variables between 0 and 1 when the dataset contains a mixture of numerical and categorical variables.


For example, with a categorical Color variable, comparing a sample whose colour is B against points coloured R, B and G gives Hamming distances of 1, 0 and 1 respectively.
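
To make the distance computation and voting procedure concrete, here is a minimal from-scratch sketch in NumPy. The function knn_predict and the toy data are illustrative assumptions, not part of scikit-learn.

import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_new, k=3):
    # Step 2-3: Euclidean distance from the new point to every training point
    distances = np.sqrt(((X_train - x_new) ** 2).sum(axis=1))
    nearest = np.argsort(distances)[:k]          # indices of the k closest points
    # Step 4-5: majority vote among the k nearest labels
    votes = Counter(y_train[nearest])
    return votes.most_common(1)[0][0]

X_train = np.array([[1.0, 1.0], [1.2, 0.8], [5.0, 5.0], [5.2, 4.8]])
y_train = np.array([0, 0, 1, 1])
print(knn_predict(X_train, y_train, np.array([1.1, 0.9]), k=3))  # -> 0

For categorical features, the Euclidean-distance line would be replaced by a Hamming distance (a count of mismatching attribute values).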

How to select the value of K in the K-NN Algorithm?

  • A small value of K means that noise has a higher influence on the result, i.e. the probability of overfitting is very high. A large value of K makes the algorithm computationally expensive and defeats the basic idea behind KNN (that points that are near are likely to have similar classes). A simple approach is to select k = sqrt(n), where n is the number of training samples (see the short sketch after this list).

  • There is no particular way to determine the best value of K, so we need to try several values and pick the best. The most commonly preferred value is K = 5.

  • A very low value of K, such as K = 1 or K = 2, can be noisy and expose the model to the effects of outliers.

  • Larger values of K smooth out noise, but if K is too large the model can miss local patterns and becomes computationally heavier.
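
As a small illustration of the k = sqrt(n) rule of thumb mentioned above (a heuristic, not a scikit-learn default), one might pick k like this:

import math

n_samples = 112                   # e.g. the size of the training set used later
k = int(math.sqrt(n_samples))     # ~10
if k % 2 == 0:
    k += 1                        # prefer an odd k to reduce voting ties
print("Suggested k:", k)          # -> 11

Choosing an odd k avoids ties when there are only two classes.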

Importing Required Modules

  • In the Iris data, the three species are the targets (y); sepal length/width and petal length/width are the features (X).

  • The training split is used to fit the model; the test split is used to measure accuracy.

  • For example, with 1000 samples and a 20% test size, 200 samples (covering all classes) go to the test set and 800 to the training set.

Example

Methods

  • fit(X, y) : Fit the k-nearest neighbors classifier from the training dataset.

  • get_params([deep]): Get parameters for this estimator.

  • kneighbors([X, n_neighbors, return_distance]): Find the K-neighbors of a point.

  • kneighbors_graph([X, n_neighbors, mode]): Compute the (weighted) graph of k-Neighbors for points in X.

  • predict(X): Predict the class labels for the provided data.

  • predict_proba(X): Return probability estimates for the test data X.

  • score(X, y[, sample_weight]): Return the mean accuracy on the given test data and labels.

  • set_params(**params): Set the parameters of this estimator

Working Of KNN Algorithm

X = [[0], [1], [2], [3], [4], [5], [6], [7], [8], [9]]
y = [0, 0, 0, 0, 1, 0, 1, 1, 1, 1]
from sklearn.neighbors import KNeighborsClassifier
neigh = KNeighborsClassifier(n_neighbors=3)
neigh.fit(X, y)
print("Prediction =", neigh.predict([[1.1], [8.9]]))
print("Prediction Probability =", neigh.predict_proba([[4.2], [8.9]]))
print("Closest Neighbours:", neigh.kneighbors([[4.2]]))
X = [[0], [1], [2], [3], [4], [5]]
y = [0, 0, 1, 1, 0, 0]
from sklearn.neighbors import KNeighborsClassifier
neigh = KNeighborsClassifier(n_neighbors=2)
neigh.fit(X, y)
print("Prediction =", neigh.predict([[1.2], [8.9], [3.4]]))
print("Prediction Probability =", neigh.predict_proba([[1.2], [8.9], [3.4]]))
## Here we fetch the closest neighbours to the input 1.2 and their distances from it
print("Closest Neighbours:", neigh.kneighbors([[1.2]]))

We construct a NearestNeighbors instance from an array representing our dataset and ask which point is closest to [1, 1, 1]:

samples = [[0., 0., 0.], [0., .5, 0.], [1., 1., .5]]
from sklearn.neighbors import NearestNeighbors
neigh = NearestNeighbors(n_neighbors=1)
neigh.fit(samples)
print(neigh.kneighbors([[1., 1., 1.]]))
  • It returns [[0.5]] and [[2]], which means that the closest element is at distance 0.5 and is the third element of samples (indexes start at 0).

Use Case


from sklearn.datasets import load_iris
import pandas as pd
import matplotlib
iris_dataset = load_iris()
iris_dataset.keys()
dict_keys(['data', 'target', 'frame', 'target_names', 'DESCR', 'feature_names', 'filename', 'data_module'])
print(iris_dataset['DESCR'])
.. _iris_dataset:

Iris plants dataset
--------------------

**Data Set Characteristics:**

    :Number of Instances: 150 (50 in each of three classes)
    :Number of Attributes: 4 numeric, predictive attributes and the class
    :Attribute Information:
        - sepal length in cm
        - sepal width in cm
        - petal length in cm
        - petal width in cm
        - class:
                - Iris-Setosa
                - Iris-Versicolour
                - Iris-Virginica

    :Summary Statistics:

    ============== ==== ==== ======= ===== ====================
                    Min  Max   Mean    SD   Class Correlation
    ============== ==== ==== ======= ===== ====================
    sepal length:   4.3  7.9   5.84   0.83    0.7826
    sepal width:    2.0  4.4   3.05   0.43   -0.4194
    petal length:   1.0  6.9   3.76   1.76    0.9490  (high!)
    petal width:    0.1  2.5   1.20   0.76    0.9565  (high!)
    ============== ==== ==== ======= ===== ====================

    :Missing Attribute Values: None
    :Class Distribution: 33.3% for each of 3 classes.
    :Creator: R.A. Fisher
    :Donor: Michael Marshall (MARSHALL%[email protected])
    :Date: July, 1988

The famous Iris database, first used by Sir R.A. Fisher. The dataset is taken
from Fisher's paper. Note that it's the same as in R, but not as in the UCI
Machine Learning Repository, which has two wrong data points.

This is perhaps the best known database to be found in the pattern recognition
literature. Fisher's paper is a classic in the field and is referenced
frequently to this day. (See Duda & Hart, for example.) The data set contains
3 classes of 50 instances each, where each class refers to a type of iris
plant. One class is linearly separable from the other 2; the latter are NOT
linearly separable from each other.

.. topic:: References

   - Fisher, R.A. "The use of multiple measurements in taxonomic problems"
     Annual Eugenics, 7, Part II, 179-188 (1936); also in "Contributions to
     Mathematical Statistics" (John Wiley, NY, 1950).
   - Duda, R.O., & Hart, P.E. (1973) Pattern Classification and Scene Analysis.
     (Q327.D83) John Wiley & Sons. ISBN 0-471-22361-1. See page 218.
   - Dasarathy, B.V. (1980) "Nosing Around the Neighborhood: A New System
     Structure and Classification Rule for Recognition in Partially Exposed
     Environments". IEEE Transactions on Pattern Analysis and Machine
     Intelligence, Vol. PAMI-2, No. 1, 67-71.
   - Gates, G.W. (1972) "The Reduced Nearest Neighbor Rule". IEEE Transactions
     on Information Theory, May 1972, 431-433.
   - See also: 1988 MLC Proceedings, 54-64. Cheeseman et al"s AUTOCLASS II
     conceptual clustering system finds 3 classes in the data.
   - Many, many more ...
print("Target names: {}".format(iris_dataset['target_names'])) # Class will be target names
Target names: ['setosa' 'versicolor' 'virginica']
print("Feature names: {}".format(iris_dataset['feature_names'])) # attributes will be features
Feature names: ['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)']

Data Normalization

  • Normalization refers to rescaling real-valued numeric attributes into the range 0 to 1.

  • It is useful to scale the input attributes for a model that relies on the magnitude of values, such as distance measures used in k-nearest neighbors and in the preparation of coefficients in regression.

For example, the values 1000, 10, 20, 1 can be rescaled to the 0-1 range by dividing by the maximum (1000), giving 1, 0.01, 0.02, 0.001 (see the MinMaxScaler sketch below).

# Normalize the data attributes
from sklearn import preprocessing
normalized_X = preprocessing.normalize(iris_dataset.data)
# Note: normalize() rescales each sample (row) to unit norm, not each feature to 0-1
#normalized_X
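
To rescale each feature into the 0-1 range described above, MinMaxScaler is the usual scikit-learn tool. A minimal sketch, reusing the iris_dataset loaded earlier:

from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler()                            # maps each feature to the [0, 1] range
scaled_X = scaler.fit_transform(iris_dataset.data)
print(scaled_X.min(axis=0), scaled_X.max(axis=0))  # each feature now spans 0 to 1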

X_train and y_train are used to fit the model; X_test is used to generate y_pred, which is compared against the actual values in y_test. The gap between the actual and predicted values measures the error.

from sklearn.model_selection import train_test_split
#a=random.seed()
## We can mention the test size while splitting our data
X_train, X_test, y_train, y_test = train_test_split(iris_dataset['data'], iris_dataset['target'],
                                                    random_state=1, test_size=0.25)


random_state initialises the internal random number generator, which decides how the data is split into train and test indices. It can be any value, but usually we take it as 0 or 1.
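
As a quick check (not in the original notebook) that a fixed random_state makes the split reproducible, two splits with random_state=1 should yield identical test sets:

import numpy as np
from sklearn.model_selection import train_test_split

split_a = train_test_split(iris_dataset['data'], iris_dataset['target'],
                           random_state=1, test_size=0.25)
split_b = train_test_split(iris_dataset['data'], iris_dataset['target'],
                           random_state=1, test_size=0.25)
print(np.array_equal(split_a[1], split_b[1]))  # same X_test both times -> True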

print("X_train shape: {}".format(X_train.shape)) print("y_train shape: {}".format(y_train.shape))
X_train shape: (112, 4)
y_train shape: (112,)
iris = pd.DataFrame(X_train, columns=iris_dataset.feature_names)
iris.head(2)
Species = pd.DataFrame(iris_dataset.target_names)  #,index=[1,2,3] ,columns=["ID","Species"])
Species
iris.isnull().sum()
sepal length (cm)    0
sepal width (cm)     0
petal length (cm)    0
petal width (cm)     0
dtype: int64

Training Data

from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)
  • n_neighbors: the number of neighbors the algorithm uses. The default value is 5.

  • metric='minkowski': the default distance metric; it determines how the distance between points is computed.

  • p=2: with the Minkowski metric, p=2 is equivalent to the standard Euclidean distance.

Prediction

X_test.shape
X_train.shape
(112, 4)
y_pred = knn.predict(X_test)
print("Test set predictions:\n{}".format(y_pred))
Test set predictions:
[0 1 1 0 2 1 2 0 0 2 1 0 2 1 1 0 1 1 0 0 1 1 1 0 2 1 0 0 1 2 1 2 1 2 2 0 1 0]
print("Test set score: {:.2f}".format(knn.score(X_test, y_test)))
Test set score: 1.00

Accuracy and the Confusion Matrix

For a single class, the confusion matrix counts true positives (TP), false positives (FP), false negatives (FN) and true negatives (TN). In scikit-learn's multilabel_confusion_matrix, each per-class 2x2 matrix is laid out as

[[TN, FP],
 [FN, TP]]

With 3 classes there are three such one-vs-rest confusion matrices. TP: predicted as the class and actually the class; FP: predicted as the class but actually another class.

Accuracy = (TP + TN) / (TP + FP + TN + FN)

Predict it for New Values

import numpy as np
X_new = np.array([[1.4, 6.4, 4.3, 3.4]])
prediction = knn.predict(X_new)
print("Prediction: {}".format(prediction))
print("Predicted target name: {}".format(iris_dataset['target_names'][prediction]))
Prediction: [1]
Predicted target name: ['versicolor']
For classes A, B and C, each one-vs-rest matrix gives its own TP, FP, FN and TN. Precision = TP / (TP + FP): of the cases predicted as positive, the fraction that are truly positive. Recall = TP / (TP + FN): of the truly positive cases, the fraction that are found.


from sklearn.metrics import multilabel_confusion_matrix
multilabel_confusion_matrix(y_test, y_pred, labels=[0, 1, 2])
array([[[25,  0],
        [ 0, 13]],

       [[22,  0],
        [ 0, 16]],

       [[29,  0],
        [ 0,  9]]], dtype=int64)
## Model parameters study:
from sklearn import metrics
count_misclassified = (y_test != y_pred).sum()
print('Misclassified samples: {}'.format(count_misclassified))
print("Accuracy:", metrics.accuracy_score(y_test, y_pred))
Misclassified samples: 0
Accuracy: 1.0
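
To connect the confusion-matrix counts to per-class precision and recall, scikit-learn's classification_report can be printed for the same y_test and y_pred computed above; a brief sketch:

from sklearn.metrics import classification_report

# Per-class precision, recall, f1-score and support for the three iris species
print(classification_report(y_test, y_pred,
                            target_names=iris_dataset['target_names']))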

Overfit and Underfitting

Overfitting: e.g. training accuracy 98% but test accuracy 67%. The model performs well during training but poorly during testing.

Underfitting

Underfitting: the model performs poorly on both the training and the test data (e.g. a training accuracy of only 67%), because it is too simple to capture the underlying pattern.

Insights

When you train and test the model on the same data, the resulting score is known as the training accuracy. Here we instead evaluate on a held-out test set, and the score above shows that 100% of our test-set predictions are correct.
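
To compare training accuracy with the held-out test accuracy for the KNN model fitted above (a large gap would indicate overfitting), a short check:

train_acc = knn.score(X_train, y_train)   # accuracy on the data the model was fitted on
test_acc = knn.score(X_test, y_test)      # accuracy on held-out data
print("Train accuracy: {:.2f}, Test accuracy: {:.2f}".format(train_acc, test_acc))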

Methods to Boost the Accuracy of a Model

  • Add more data. Having more data is always a good idea

  • Treat missing and Outlier values

  • Feature Engineering

  • Feature Selection

  • Multiple algorithms

  • PCA(Dimension Reduction)

  • Algorithm Tuning

  • Ensemble methods

    - Bagging (Bootstrap Aggregating)
    - Boosting
  • Cross Validation

Check Accuracy with different values of K

from sklearn.metrics import accuracy_score
for K in range(12):
    K_value = K + 1
    neigh = KNeighborsClassifier(n_neighbors=K_value)
    neigh.fit(X_train, y_train)
    y_pred = neigh.predict(X_test)
    print("Accuracy is ", accuracy_score(y_test, y_pred)*100, "% for K-Value:", K_value)
Accuracy is 100.0 % for K-Value: 1
Accuracy is 100.0 % for K-Value: 2
Accuracy is 100.0 % for K-Value: 3
Accuracy is 100.0 % for K-Value: 4
Accuracy is 100.0 % for K-Value: 5
Accuracy is 100.0 % for K-Value: 6
Accuracy is 97.36842105263158 % for K-Value: 7
Accuracy is 100.0 % for K-Value: 8
Accuracy is 97.36842105263158 % for K-Value: 9
Accuracy is 97.36842105263158 % for K-Value: 10
Accuracy is 97.36842105263158 % for K-Value: 11
Accuracy is 97.36842105263158 % for K-Value: 12

KNeighborsClassifier can compute the nearest neighbors internally, but precomputing them can have several benefits, such as finer parameter control, caching for multiple use, or custom implementations.

  • We can use the caching property of pipelines to cache the nearest neighbors graph between multiple fits of KNeighborsClassifier (a sketch follows below).

  • The first call is slow since it computes the neighbors graph, while subsequent calls are faster as they do not need to recompute the graph.

  • Here the durations are small since the dataset is small, but the gain can be more substantial when the dataset grows larger or when the grid of parameters to search is large.
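
A sketch of this caching pattern, modelled on scikit-learn's KNeighborsTransformer combined with a Pipeline that has a memory cache. The cache directory and the n_neighbors values here are illustrative assumptions:

from tempfile import TemporaryDirectory
from sklearn.neighbors import KNeighborsTransformer, KNeighborsClassifier
from sklearn.pipeline import Pipeline

with TemporaryDirectory(prefix="knn_graph_cache_") as tmpdir:
    # Precompute a sparse distance graph once; the classifier consumes it as 'precomputed'
    graph_model = KNeighborsTransformer(n_neighbors=10, mode='distance')
    classifier = KNeighborsClassifier(n_neighbors=5, metric='precomputed')
    pipeline = Pipeline([('graph', graph_model), ('clf', classifier)],
                        memory=tmpdir)  # caches the fitted transformer's output
    pipeline.fit(X_train, y_train)
    print("Pipeline test accuracy: {:.2f}".format(pipeline.score(X_test, y_test)))

Refitting the pipeline with different classifier settings should reuse the cached graph instead of recomputing it, which is where the speed-up comes from.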

K-fold cross-validation is a technique used in machine learning for model evaluation. It helps in assessing the performance of a model across different subsets of the dataset. Here's how it works:


from sklearn.model_selection import KFold
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# Define the number of splits (k)
k = 5
kf = KFold(n_splits=k, shuffle=True, random_state=1)

# Initialize KNN classifier
knn = KNeighborsClassifier(n_neighbors=5)  # You can set your desired hyperparameters here

# Initialize a list to store the accuracy scores
accuracy_scores = []

# Iterate through the cross-validation splits
for train_index, test_index in kf.split(X_train):
    X_train_fold, X_val_fold = X_train[train_index], X_train[test_index]
    y_train_fold, y_val_fold = y_train[train_index], y_train[test_index]

    # Fit the model on the training data
    knn.fit(X_train_fold, y_train_fold)

    # Predict on the validation set
    y_pred = knn.predict(X_val_fold)

    # Calculate accuracy and store it
    accuracy = accuracy_score(y_val_fold, y_pred)
    print("Accuracy scores for each fold:", accuracy)
    accuracy_scores.append(accuracy)

# Calculate and print the average accuracy
average_accuracy = sum(accuracy_scores) / len(accuracy_scores)
print(f"Average accuracy over {k}-fold cross-validation: {average_accuracy}")
Accuracy scores for each fold: 0.9565217391304348
Accuracy scores for each fold: 1.0
Accuracy scores for each fold: 0.9545454545454546
Accuracy scores for each fold: 0.9545454545454546
Accuracy scores for each fold: 0.8636363636363636
Average accuracy over 5-fold cross-validation: 0.9458498023715416
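
The same k-fold evaluation can be written more compactly with cross_val_score, which performs the splitting, fitting and scoring loop internally; a brief equivalent of the loop above:

from sklearn.model_selection import cross_val_score

scores = cross_val_score(KNeighborsClassifier(n_neighbors=5), X_train, y_train,
                         cv=KFold(n_splits=5, shuffle=True, random_state=1),
                         scoring='accuracy')
print("Fold accuracies:", scores)
print("Average accuracy:", scores.mean())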

Model Optimization

Hyperparameter tuning, also known as hyperparameter optimization, is a crucial step in the machine learning model development process. It involves finding the best set of hyperparameters for a given model and dataset.

Grid search is a tuning technique that attempts to compute the optimum values of the hyperparameters. It is an exhaustive search performed over the specified parameter values of a model. The model is also known as an estimator.

from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

# Define the hyperparameters and their ranges
param_grid = {
    'n_neighbors': [3, 5, 7, 9],          # List of possible k values
    'weights': ['uniform', 'distance'],    # Weighting options
    'metric': ['euclidean', 'manhattan']   # Distance metrics
}

# Initialize the KNN classifier
knn = KNeighborsClassifier()

# Initialize GridSearchCV
grid_search = GridSearchCV(knn, param_grid, cv=4, scoring='accuracy')

# Fit the model
grid_search.fit(X_train, y_train)

# Get the best hyperparameters
best_params = grid_search.best_params_
print(f"The best hyperparameters are: {best_params}")

# Get the best model
best_knn = grid_search.best_estimator_

# Evaluate the model on the test set
accuracy = best_knn.score(X_test, y_test)
print(f"Accuracy on test set: {accuracy}")
The best hyperparameters are: {'metric': 'euclidean', 'n_neighbors': 7, 'weights': 'uniform'}
Accuracy on test set: 0.9736842105263158