
K Nearest Neighbors - Classification



After reading this entire blog you will be able to learn this Machine Learning algorithm in very easy steps. Anybody with no prior knowledge of ML can follow this algorithm easily.

What is K Nearest Neighbors - Classification?

K-Nearest Neighbour is one of the simplest Machine Learning algorithms, based on the Supervised Learning technique. The K-NN algorithm assumes similarity between the new case/data and the available cases and puts the new case into the category that is most similar to the available categories. The K-NN algorithm stores all the available data and classifies a new data point based on that similarity. This means that when new data appears, it can be easily classified into a well-suited category by using the K-NN algorithm.

The K-NN algorithm can be used for Regression as well as for Classification, but it is mostly used for classification problems.

K-NN is a non-parametric algorithm, which means it does not make any assumptions about the underlying data.

It is also called a lazy learner algorithm because it does not learn from the training set immediately; instead it stores the dataset and, at the time of classification, performs an action on it. At the training phase the KNN algorithm just stores the dataset, and when it gets new data it classifies that data into the category that is most similar to it.
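As a rough illustration of this idea, here is a minimal sketch of K-NN written from scratch on made-up toy points (not the dataset used later in this blog): the whole algorithm amounts to storing the data, measuring distances, and taking a majority vote.

In [ ]:
# minimal sketch of the K-NN idea: store the data, then at prediction time
# measure the distance from the query to every stored point and take a vote
import math
from collections import Counter

# toy training data (made-up values for illustration only)
points = [(1.0, 1.2), (1.5, 0.9), (5.0, 5.2), (5.5, 4.8), (5.1, 5.5)]
labels = ['A', 'A', 'B', 'B', 'B']

def knn_predict(query, k=3):
    # Euclidean distance from the query to every stored point
    # (lazy learning: nothing is trained beforehand, all the work happens here)
    distances = [math.dist(query, p) for p in points]
    # indices of the k nearest stored points
    nearest = sorted(range(len(points)), key=lambda i: distances[i])[:k]
    # majority vote among their labels
    return Counter(labels[i] for i in nearest).most_common(1)[0][0]

print(knn_predict((5.2, 5.0)))   # -> 'B', the category most similar to the query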

Working of K Nearest Neighbors - Classification


A step-wise representation of how K Nearest Neighbors - Classification works is given below in very easy language.

1. Importing Different Libraries

In [2]:
import pandas as pd
from sklearn.neighbors import KNeighborsClassifier
from sklearn import preprocessing
from sklearn.model_selection import train_test_split

2. Read data from csv file

A CSV (Comma Separated Values) file is a type of plain text file that uses a specific structure to arrange tabular data. Here is the link to download the csv file ( ).

In [4]:
df = pd.read_csv('heart_stalog.csv')

There are string values in this csv file, as shown below after applying the head() method.

In [10]:
print(df.head())
    age  sex  chest  resting_blood_pressure  serum_cholestoral  \
0  70.0  1.0    4.0                   130.0              322.0   
1  67.0  0.0    3.0                   115.0              564.0   
2  57.0  1.0    2.0                   124.0              261.0   
3  64.0  1.0    4.0                   128.0              263.0   
4  74.0  0.0    2.0                   120.0              269.0   

   fasting_blood_sugar  resting_electrocardiographic_results  \
0                  0.0                                   2.0   
1                  0.0                                   2.0   
2                  0.0                                   0.0   
3                  0.0                                   0.0   
4                  0.0                                   2.0   

   maximum_heart_rate_achieved  exercise_induced_angina  oldpeak  slope  \
0                        109.0                      0.0      2.4    2.0   
1                        160.0                      0.0      1.6    2.0   
2                        141.0                      0.0      0.3    1.0   
3                        105.0                      1.0      0.2    2.0   
4                        121.0                      1.0      0.2    1.0   

   number_of_major_vessels  thal       class  
0                      3.0   3.0  b'present'  
1                      0.0   7.0   b'absent'  
2                      0.0   7.0  b'present'  
3                      1.0   7.0   b'absent'  
4                      1.0   3.0   b'absent'  

As we can observe, the class column of this file contains string values.

3. First, convert it into binary values with the help of a label encoder, as given below:

In [11]:
label_encoder = preprocessing.LabelEncoder()
df['class'] = label_encoder.fit_transform(df['class'])
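To verify what the encoder did, you can optionally inspect its classes_ attribute. Assuming the class column contains only the two values seen above, b'absent' is expected to map to 0 and b'present' to 1 (alphabetical order).

In [ ]:
# optional check: which original class value maps to which integer code
print(label_encoder.classes_)    # original class values, in the order of their codes
print(df['class'].unique())      # the encoded values now stored in the dataframe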

4. Save the encoded binary data to a new csv file

In [12]:
df.to_csv("new.csv", index=None)


5. Partition the data into input features and the target

In [13]:
x = df.iloc[:, 4:13]  # input features (columns 4 to 12)
y = df.iloc[:, -1]    # target feature (the encoded class column)

6. Split the data into a training set (x_train, y_train) and a testing set (x_test, y_test), using a test size of 0.2

In [14]:
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2)
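Note that train_test_split shuffles the rows randomly, so the exact predictions and accuracy shown below may differ from run to run. If you want reproducible results, you can pass a fixed random_state (42 here is just an arbitrary choice).

In [ ]:
# optional: fix the random seed so the same train/test split is produced on every run
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=42)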

7. Fit the training data to the K Neighbors classifier (used here with its default hyperparameters) and predict the output.

In [15]:
# implementing KNeighborsClassifier for prediction (0 = absent, 1 = present)
model = KNeighborsClassifier()
model.fit(x_train, y_train)
prc = model.predict(x_test)
print(prc)
[1 1 1 1 1 0 0 1 1 0 1 1 0 0 1 0 1 0 0 0 1 0 0 0 0 0 1 0 0 1 1 0 1 0 0 1 0
 1 1 1 0 0 0 1 1 1 0 0 0 0 0 1 1 1]

8. Finding the accuracy of the model

In [16]:
from sklearn import metrics
print("accuracy:", metrics.accuracy_score(y_test, prc))
accuracy: 0.5740740740740741
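Besides plain accuracy, the metrics module can give a more detailed picture of the same predictions, such as the confusion matrix and per-class precision and recall:

In [ ]:
# a more detailed look at the same predictions
print(metrics.confusion_matrix(y_test, prc))
print(metrics.classification_report(y_test, prc))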

Conclusion

KNN works by finding the distances between a query and all the examples in the data, selecting the specified number of examples (K) closest to the query, and then voting for the most frequent label (in the case of classification) or averaging the labels (in the case of regression).

KNN-search also powers other applications, such as recommender systems. In both classification and regression, choosing the right K for our data is done by trying several values of K and picking the one that works best, as sketched below.
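A simple way to do this in practice is a small loop over candidate values of K, scoring each one on the held-out test data. This is only a minimal sketch; for a more reliable choice you would use cross-validation (for example scikit-learn's GridSearchCV).

In [ ]:
# try several values of K and keep the one with the best accuracy on the test data
best_k, best_acc = None, 0.0
for k in range(1, 16):
    clf = KNeighborsClassifier(n_neighbors=k)
    clf.fit(x_train, y_train)
    acc = metrics.accuracy_score(y_test, clf.predict(x_test))
    if acc > best_acc:
        best_k, best_acc = k, acc
print("best K:", best_k, "accuracy:", best_acc)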
