Machine Learning
Machine learning is an application of artificial intelligence that gives computers the ability to automatically learn and improve from experience without being explicitly programmed.
ML Approach
Supervised, Unsupervised, and Reinforcement Learning
Supervised Machine Learning
Supervised machine learning is a set of algorithms that train on historical, labelled data and then predict outputs for new data. Because of its accuracy and low time complexity, it is one of the most common types of machine learning.
Applications
Spam filtering
Facial recognition
Disease identification
Fraud detection
Some common ML Algorithms
Linear Regression, Logistic Regression, KNN, Decision Tree
Steps
1- Import data
2- Data statistics
3- Data preparation
4- Features and target
5- Train/test split
6- Evaluate the model (y_pred vs. y_test)
7- Errors
8- Model optimization (cross-validation or hyperparameter tuning)
Test data should always be a subset of the actual data.
Model Correction
K-fold cross-validation
Let's start machine learning with the simple Iris dataset
Scikit-learn
A machine learning library for the Python language
Contains tools for machine learning algorithms and statistical modelling
Installation
conda install scikit-learn
KNN Introduction
K nearest neighbors is a simple algorithm that stores all available cases and classifies new cases based on a similarity measure (e.g., distance functions).
Let's understand how KNN works
Important features
K-Nearest Neighbors is one of the simplest machine learning algorithms, based on the supervised learning technique.
The K-NN algorithm assumes similarity between the new case/data and the available cases and puts the new case into the category that is most similar to the available categories.
The K-NN algorithm stores all the available data and classifies a new data point based on similarity. This means that when new data appears, it can easily be classified into a well-suited category using the K-NN algorithm.
The K-NN algorithm can be used for regression as well as classification, but it is mostly used for classification problems.
K-NN is a non-parametric algorithm, which means it does not make any assumptions about the underlying data.
It is also called a lazy learner algorithm because it does not learn from the training set immediately; instead, it stores the dataset and performs the computation at the time of classification.
At the training phase the KNN algorithm just stores the dataset, and when it gets new data it classifies that data into the category most similar to it.
KNN Algo
The K-NN working can be explained on the basis of the below algorithm:
Step-1: Select the number K of neighbors.
Step-2: Calculate the Euclidean distance from the new point to every point in the training data.
Step-3: Take the K nearest neighbors according to the calculated Euclidean distances.
Step-4: Among these K neighbors, count the number of data points in each category.
Step-5: Assign the new data point to the category for which the number of neighbors is maximum.
Step-6: Our model is ready.
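To make these steps concrete, here is a minimal from-scratch sketch of the majority-vote rule (purely illustrative; the function name and the use of NumPy are assumptions, and the rest of the notebook uses scikit-learn instead):

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_new, k=5):
    """Classify x_new by a majority vote among its k nearest training points.
    X_train and y_train are assumed to be NumPy arrays."""
    # Step 2: Euclidean distance from x_new to every training point
    distances = np.linalg.norm(X_train - x_new, axis=1)
    # Step 3: indices of the k nearest neighbors
    nearest = np.argsort(distances)[:k]
    # Steps 4-5: count the labels of those neighbors and return the most common one
    votes = Counter(y_train[nearest])
    return votes.most_common(1)[0][0]
```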
A new case is classified by a majority vote of its neighbors: it is assigned to the class most common among its K nearest neighbors, as measured by a distance function. If K = 1, the case is simply assigned to the class of its single nearest neighbor.
These distance measures are only valid for continuous variables; for categorical variables the Hamming distance must be used. This also raises the issue of standardizing the numerical variables to the range 0–1 when the dataset contains a mixture of numerical and categorical variables.
How to select the value of K in the K-NN Algorithm?
A small value of K means that noise has a higher influence on the result, i.e., the probability of overfitting is very high. A large value of K makes the algorithm computationally expensive and defeats the basic idea behind KNN (that nearby points are likely to share a class). A simple rule of thumb is k = sqrt(n), where n is the number of training samples.
There is no single way to determine the best value of K, so we need to try several values and pick the best one. The most commonly preferred value is 5.
A very low value such as K = 1 or K = 2 can be noisy and makes the model sensitive to outliers.
Larger values of K are more robust to noise, but if K is too large the decision boundary becomes overly smooth and the computation more expensive.
Importing Required Modules
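The original import cell is not shown in this export; a minimal set of imports consistent with the rest of the notebook (library choices assumed from the scikit-learn functions used below) might look like:

```python
import numpy as np

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score, confusion_matrix
```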
Targets (y): the three classes A, B, C. Features (X): sepal length and width, petal length and width.
The training set is used to fit the model, and the test set is used to measure accuracy.
For example, with 1000 samples and all three classes (A, B, C) represented, a typical split is 80% for training and 20% for testing.
Example
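A sketch of loading the Iris data and splitting it (the built-in scikit-learn loader and the 80/20 split are assumptions based on the notes above):

```python
# Load the Iris data: X holds the four measurements, y the species label (0, 1, 2)
iris = load_iris()
X, y = iris.data, iris.target

# 80% train / 20% test; random_state fixes the shuffle so results are reproducible
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=1
)
```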
Methods
fit(X, y) : Fit the k-nearest neighbors classifier from the training dataset.
get_params([deep]): Get parameters for this estimator.
kneighbors([X, n_neighbors, return_distance]): Find the K-neighbors of a point.
kneighbors_graph([X, n_neighbors, mode]): Compute the (weighted) graph of k-Neighbors for points in X.
predict(X): Predict the class labels for the provided data.
predict_proba(X): Return probability estimates for the test data X.
score(X, y[, sample_weight]): Return the mean accuracy on the given test data and labels.
set_params(**params): Set the parameters of this estimator
Working Of KNN Algorithm
We construct a NearestNeighbors instance from an array representing our data set and ask which point is closest to [1, 1, 1].
It returns [[0.5]] and [[2]], which means that the closest element is at distance 0.5 and is the third element of samples (indexes start at 0).
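The code cell itself is not reproduced in this export; a sketch that matches the result described (following the standard scikit-learn NearestNeighbors example, with the sample array assumed) is:

```python
from sklearn.neighbors import NearestNeighbors

# Three sample points in 3-D space, chosen so the distances match the text above
samples = [[0., 0., 0.], [0., 0.5, 0.], [1., 1., 0.5]]

neigh = NearestNeighbors(n_neighbors=1)
neigh.fit(samples)

# Ask for the single nearest neighbor of [1, 1, 1]
print(neigh.kneighbors([[1., 1., 1.]]))
# (array([[0.5]]), array([[2]])) -> distance 0.5, sample index 2 (the third element)
```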
Use Case
KNeighborsClassifier can compute the nearest neighbors internally, but precomputing them can have several benefits, such as finer parameter control, caching for multiple use, or custom implementations.
Data Normalization
Normalization refers to rescaling real-valued numeric attributes into the range 0 to 1.
It is useful to scale the input attributes for a model that relies on the magnitude of values, such as distance measures used in k-nearest neighbors and in the preparation of coefficients in regression.
Example: the values 1000, 10, 20, 1 can be rescaled to the (0–1) range by dividing by the maximum: 1000/1000 = 1, 10/1000 = 0.01, 20/1000 = 0.02, 1/1000 = 0.001.
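As a sketch, the same kind of rescaling can be done with scikit-learn's MinMaxScaler (whether the original notebook scaled the Iris features is not shown, so treat this as optional):

```python
from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler()                          # rescales each feature to the [0, 1] range
X_train_scaled = scaler.fit_transform(X_train)   # learn min/max from the training data only
X_test_scaled = scaler.transform(X_test)         # apply the same scaling to the test data
```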
X_train and y_train are used to train the model; X_test is fed to the trained model to produce predictions (y_pred), and y_test holds the actual values. The gap between the actual (A) and predicted (P) values is the error.
random_state initializes the internal random number generator, which decides how the data are split into train and test indices. It can be any value, but usually 0 or 1 is used.
Training Data
n_neighbors: the number of neighbors the algorithm uses; the usual (default) value is 5.
metric='minkowski': the default distance metric, which decides how the distance between points is computed.
p=2: with the Minkowski metric, this is equivalent to the standard Euclidean distance.
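Putting these parameters together, a sketch of the training step (using the variable names from the split above):

```python
# Build the classifier with the parameters described above and fit it on the training data
knn = KNeighborsClassifier(n_neighbors=5, metric='minkowski', p=2)
knn.fit(X_train, y_train)
```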
Prediction
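A sketch of the prediction step, assuming the fitted knn model from above:

```python
# Predict the class labels for the held-out test set
y_pred = knn.predict(X_test)
print(y_pred)
```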
Accuracy:
Confusion matrix:
                  Predicted positive   Predicted negative
Actual positive          TP                   FN
Actual negative          FP                   TN
Class 3
With three classes (as in Iris), TP/FP/TN/FN can be counted per class, giving three one-vs-rest confusion matrices (or, equivalently, a single 3x3 matrix).
TP: predicted as the class and actually of that class. FP: predicted as the class but actually of a different class.
Accuracy = (TP + TN) / (TP + FP + TN + FN)
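A sketch of computing both metrics with scikit-learn, using the y_pred produced above:

```python
# Overall accuracy: fraction of test samples predicted correctly
print(accuracy_score(y_test, y_pred))

# 3x3 confusion matrix for the three Iris classes (rows: actual, columns: predicted)
print(confusion_matrix(y_test, y_pred))
```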
For example: (26 + 18) / (27 + 18) ≈ 0.98
Predict it for New Values
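A sketch of predicting the class of a brand-new measurement (the feature values here are made-up examples, not taken from the original notebook):

```python
# Four measurements for a new flower: sepal length, sepal width, petal length, petal width (cm)
new_sample = [[5.1, 3.5, 1.4, 0.2]]
print(knn.predict(new_sample))                       # predicted class index
print(iris.target_names[knn.predict(new_sample)])    # human-readable species name
```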
Overfitting and Underfitting
Overfitting: e.g., training accuracy 98% but test accuracy only 67%. The model does well during training but performs poorly during testing.
Underfitting
Underfitting: the training accuracy is itself low (e.g., only 67%), meaning the model has not captured the underlying pattern even in the data it was trained on.
Insights
When you train and test the model on the same data, the resulting score is known as the training accuracy; here 97% of our predictions are correct.
Methods to Boost the Accuracy of a Model
Add more data. Having more data is always a good idea
Treat missing and Outlier values
Feature Engineering
Feature Selection
Multiple algorithms
PCA(Dimension Reduction)
Algorithm Tuning
Ensemble methods
Cross Validation
Check Accuracy with Different Values of K
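A sketch of this check, reusing the split from above (the range of K values tried is an assumption):

```python
# Try several values of K and compare test accuracy
for k in range(1, 16):
    model = KNeighborsClassifier(n_neighbors=k)
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"K={k:2d}  accuracy={acc:.3f}")
```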
Precomputing the nearest neighbors, as noted earlier, can be combined with caching: here we use the caching property of pipelines to cache the nearest neighbors graph between multiple fits of KNeighborsClassifier.
The first fit is slow since it computes the neighbors graph, while subsequent fits are faster as they do not need to recompute it.
Here the durations are small since the dataset is small, but the gain can be more substantial when the dataset grows larger, or when the grid of parameters to search is large.
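A sketch along the lines of scikit-learn's caching-nearest-neighbors example, adapted here to the Iris data (the parameter grid and the temporary cache directory are assumptions):

```python
from tempfile import TemporaryDirectory
from sklearn.neighbors import KNeighborsTransformer, KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV

n_neighbors_list = [1, 2, 3, 4, 5, 6, 7, 8, 9]

# Precompute a distance graph once, with enough neighbors for the largest K to be searched
graph_model = KNeighborsTransformer(n_neighbors=max(n_neighbors_list), mode="distance")
classifier_model = KNeighborsClassifier(metric="precomputed")

with TemporaryDirectory(prefix="sklearn_graph_cache_") as tmpdir:
    # memory=tmpdir caches the transformer's output, so the graph is computed only once
    full_model = Pipeline(
        steps=[("graph", graph_model), ("classifier", classifier_model)],
        memory=tmpdir,
    )
    grid_model = GridSearchCV(full_model, {"classifier__n_neighbors": n_neighbors_list})
    grid_model.fit(X, y)
    print(grid_model.best_params_)
```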