
Neural Network

Deep learning is a subfield of machine learning based on a set of algorithms inspired by the structure and function of the brain. These algorithms are usually called Artificial Neural Networks (ANNs). Deep learning is one of the hottest fields in data science, with many case studies showing astonishing results in robotics, image recognition and Artificial Intelligence (AI).

  • Neural networks are a set of algorithms, modeled loosely after the human brain, that are designed to recognize patterns. They interpret sensory data through a kind of machine perception, labeling or clustering raw input.

  • The patterns they recognize are numerical, contained in vectors, into which all real-world data, be it images, sound, text or time series, must be translated.

  • In the process of learning, a neural network finds the right function, or the correct manner of transforming x into y.

  • We consider three layers in a neural network: INPUT, HIDDEN and OUTPUT.

Features

  • Cluster and classify.

  • They group unlabeled data according to similarities among the example inputs, and classify data when they have a labeled dataset to train on.

  • Feature Extraction

  • Develop algorithms for reinforcement learning, classification and regression (deep neural networks)

General way to solve problems with Neural Networks

A neural network is a special type of machine learning (ML) algorithm. So, like every ML algorithm, it follows the usual ML workflow of data preprocessing, model building and model evaluation. For the sake of conciseness, here is a to-do list of how to approach a neural network problem.

  • Check whether it is a problem where a neural network gives you an uplift over traditional algorithms

  • Do a survey of which Neural Network architecture is most suitable for the required problem

  • Define Neural Network architecture through whichever language / library you choose.

  • Convert the data to the right format and divide it into batches

  • Pre-process the data according to your needs

  • Augment the data to increase its size and produce better-trained models

  • Feed batches to Neural Network

  • Train and monitor changes in training and validation data sets

  • Test your model, and save it for future use

About Libraries

  • Keras is a powerful easy-to-use Python library for developing and evaluating deep learning models. It wraps the efficient numerical computation libraries Theano and TensorFlow and allows you to define and train neural network models in a few short lines of code.

Implementation

Let's start with a simple example, first fixing a random seed and loading the data.

Problem

  • We have data that describes patient medical records for Pima Indians and whether they had an onset of diabetes within five years.

  • As such, it is a binary classification problem (onset of diabetes as 1 or not as 0). All of the input variables that describe each patient are numerical. This makes it easy to use directly with neural networks that expect numerical input and output values, and ideal for our first neural network in Keras.

Why a Random Seed?

  • Neural networks make use of randomness, such as initializing weights to random values, and in turn the same network trained on the same data can produce different results.

  • The random initialization allows the network to learn a good approximation for the function being learned.

  • Randomness is used because a neural network performs better with it than without.

from keras.models import Sequential
from keras.layers import Dense
import numpy as np
import pandas as pd

# fix random seed for reproducibility
np.random.seed(7)

# the CSV file has no header row, so tell pandas not to treat the first row as one
d = pd.read_csv("https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv", header=None)
d.head()

Define Model

Models in Keras are defined as a sequence of layers.

  • We create a Sequential model and add layers one at a time until we are happy with our network topology.

  • The first thing to get right is to ensure the input layer has the right number of inputs. This can be specified when creating the first layer with the input_dim argument and setting it to 8 for the 8 input variables.

  • We can piece it all together by adding each layer. The first layer has 12 neurons and expects 8 input variables. The second hidden layer has 8 neurons and finally, the output layer has 1 neuron to predict the class (onset of diabetes or not).
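Putting those layers together, a minimal sketch of this topology in Keras looks like the following (written with the Keras 2 argument names; the full script below uses the older init argument):

# first hidden layer: 12 neurons, expecting 8 input variables
model = Sequential()
model.add(Dense(12, input_dim=8, activation='relu'))
# second hidden layer: 8 neurons
model.add(Dense(8, activation='relu'))
# output layer: 1 neuron predicting the class (onset of diabetes or not)
model.add(Dense(1, activation='sigmoid'))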

Compile Model

Now that the model is defined, we can compile it.

Compiling the model uses the efficient numerical libraries under the covers (the so-called backend) such as Theano or TensorFlow. The backend automatically chooses the best way to represent the network for training and making predictions to run on your hardware, such as CPU or GPU or even distributed.

When compiling, we must specify some additional properties required when training the network. Remember training a network means finding the best set of weights to make predictions for this problem.

We must specify the loss function to use to evaluate a set of weights, the optimizer used to search through different weights for the network and any optional metrics we would like to collect and report during training.

In this case, we will use logarithmic loss, which for a binary classification problem is defined in Keras as “binary_crossentropy“. We will also use the efficient gradient descent algorithm “adam”, for no other reason than that it is an efficient default.
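In code, the compile step for these choices is a single line (the same line appears in the full script below):

# binary cross-entropy loss, Adam optimizer, and report classification accuracy during training
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])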

Fit Model

We have defined our model and compiled it ready for efficient computation. Now it is time to execute the model on some data. We can train or fit our model on our loaded data by calling the fit() function on the model.

  • The training process will run for a fixed number of iterations through the dataset, called epochs, which we must specify using the epochs argument.

  • We can also set the number of instances that are evaluated before a weight update in the network is performed, called the batch size and set using the batch_size argument.

  • For this problem, we will run for a small number of iterations (150) and use a relatively small batch size of 10. Again, these can be chosen experimentally by trial and error.
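In code, the fit call for these choices is simply (as in the full script below):

# 150 passes over the whole dataset, updating the weights after every 10 samples
model.fit(X, Y, epochs=150, batch_size=10)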

Evaluate Model

We have trained our neural network on the entire dataset and we can evaluate the performance of the network on the same dataset.

  • This will only give us an idea of how well we have modeled the dataset (e.g. train accuracy), but no idea of how well the algorithm might perform on new data. We have done this for simplicity, but ideally, you could separate your data into train and test datasets for training and evaluation of your model.

  • Evaluate your model on your training dataset using the evaluate() function on your model and pass it the same input and output used to train the model.
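For example (the same lines appear in the full script below):

# evaluate on the same data used for training, so this is train accuracy only
scores = model.evaluate(X, Y)
print("%s: %.2f%%" % (model.metrics_names[1], scores[1] * 100))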

Finally Make Predictions

  • epochs: the number of times the model is trained over the entire dataset.
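Predictions come from predict(); with a sigmoid output they are probabilities, which we round to 0/1 class labels, as in the full script below:

# probabilities from the sigmoid output, rounded to 0/1 class labels
predictions = model.predict(X)
rounded = [round(p[0]) for p in predictions]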

# fix random seed for reproducibility
seed = 7
np.random.seed(seed)

# load pima indians dataset
dataset = np.loadtxt("https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv", delimiter=",")

# split into input (X) and output (Y) variables
X = dataset[:,0:8]
Y = dataset[:,8]

# create model
model = Sequential()
model.add(Dense(12, input_dim=8, init='uniform', activation='relu'))
model.add(Dense(8, init='uniform', activation='relu'))
model.add(Dense(1, init='uniform', activation='sigmoid'))

# Compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

# Fit the model
model.fit(X, Y, epochs=150, batch_size=10, verbose=2)

# evaluate the model
scores = model.evaluate(X, Y)
print("\n%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))

# calculate predictions
predictions = model.predict(X)

# round predictions
rounded = [round(x[0]) for x in predictions]
print(rounded)
C:\Users\HP\Anaconda3\lib\site-packages\ipykernel_launcher.py:12: UserWarning: Update your `Dense` call to the Keras 2 API: `Dense(12, input_dim=8, activation="relu", kernel_initializer="uniform")`
C:\Users\HP\Anaconda3\lib\site-packages\ipykernel_launcher.py:13: UserWarning: Update your `Dense` call to the Keras 2 API: `Dense(8, activation="relu", kernel_initializer="uniform")`
C:\Users\HP\Anaconda3\lib\site-packages\ipykernel_launcher.py:14: UserWarning: Update your `Dense` call to the Keras 2 API: `Dense(1, activation="sigmoid", kernel_initializer="uniform")`
Epoch 1/150 - 0s - loss: 0.6773 - acc: 0.6510 Epoch 2/150 - 0s - loss: 0.6584 - acc: 0.6510 Epoch 3/150 - 0s - loss: 0.6467 - acc: 0.6510 Epoch 4/150 - 0s - loss: 0.6389 - acc: 0.6510 Epoch 5/150 - 0s - loss: 0.6325 - acc: 0.6510 Epoch 6/150 - 0s - loss: 0.6188 - acc: 0.6510 Epoch 7/150 - 0s - loss: 0.6186 - acc: 0.6510 Epoch 8/150 - 0s - loss: 0.6134 - acc: 0.6510 Epoch 9/150 - 0s - loss: 0.6083 - acc: 0.6510 Epoch 10/150 - 0s - loss: 0.6149 - acc: 0.6510 Epoch 11/150 - 0s - loss: 0.6051 - acc: 0.6510 Epoch 12/150 - 0s - loss: 0.6038 - acc: 0.6510 Epoch 13/150 - 0s - loss: 0.6004 - acc: 0.6510 Epoch 14/150 - 0s - loss: 0.6033 - acc: 0.6510 Epoch 15/150 - 0s - loss: 0.5992 - acc: 0.6510 Epoch 16/150 - 0s - loss: 0.5988 - acc: 0.6510 Epoch 17/150 - 0s - loss: 0.5979 - acc: 0.6510 Epoch 18/150 - 0s - loss: 0.6031 - acc: 0.6510 Epoch 19/150 - 0s - loss: 0.5972 - acc: 0.6510 Epoch 20/150 - 0s - loss: 0.5979 - acc: 0.6510 Epoch 21/150 - 0s - loss: 0.5948 - acc: 0.6510 Epoch 22/150 - 0s - loss: 0.5938 - acc: 0.6510 Epoch 23/150 - 0s - loss: 0.5936 - acc: 0.6510 Epoch 24/150 - 0s - loss: 0.5988 - acc: 0.6510 Epoch 25/150 - 0s - loss: 0.5919 - acc: 0.6510 Epoch 26/150 - 0s - loss: 0.5975 - acc: 0.6510 Epoch 27/150 - 0s - loss: 0.5946 - acc: 0.6510 Epoch 28/150 - 0s - loss: 0.5891 - acc: 0.6510 Epoch 29/150 - 0s - loss: 0.5918 - acc: 0.6510 Epoch 30/150 - 0s - loss: 0.5905 - acc: 0.6510 Epoch 31/150 - 0s - loss: 0.5901 - acc: 0.6510 Epoch 32/150 - 0s - loss: 0.5898 - acc: 0.6510 Epoch 33/150 - 0s - loss: 0.5841 - acc: 0.6510 Epoch 34/150 - 0s - loss: 0.5890 - acc: 0.6510 Epoch 35/150 - 0s - loss: 0.5896 - acc: 0.6510 Epoch 36/150 - 0s - loss: 0.5849 - acc: 0.6510 Epoch 37/150 - 0s - loss: 0.5809 - acc: 0.6510 Epoch 38/150 - 0s - loss: 0.5922 - acc: 0.6510 Epoch 39/150 - 0s - loss: 0.5833 - acc: 0.6940 Epoch 40/150 - 0s - loss: 0.5864 - acc: 0.7044 Epoch 41/150 - 0s - loss: 0.5815 - acc: 0.6940 Epoch 42/150 - 0s - loss: 0.5806 - acc: 0.7057 Epoch 43/150 - 0s - loss: 0.5772 - acc: 0.7109 Epoch 44/150 - 0s - loss: 0.5861 - acc: 0.7031 Epoch 45/150 - 0s - loss: 0.5778 - acc: 0.7122 Epoch 46/150 - 0s - loss: 0.5757 - acc: 0.7057 Epoch 47/150 - 0s - loss: 0.5782 - acc: 0.7188 Epoch 48/150 - 0s - loss: 0.5732 - acc: 0.7005 Epoch 49/150 - 0s - loss: 0.5733 - acc: 0.7122 Epoch 50/150 - 0s - loss: 0.5744 - acc: 0.7161 Epoch 51/150 - 0s - loss: 0.5729 - acc: 0.7135 Epoch 52/150 - 0s - loss: 0.5713 - acc: 0.7148 Epoch 53/150 - 0s - loss: 0.5728 - acc: 0.7148 Epoch 54/150 - 0s - loss: 0.5715 - acc: 0.7083 Epoch 55/150 - 0s - loss: 0.5714 - acc: 0.7096 Epoch 56/150 - 0s - loss: 0.5725 - acc: 0.7161 Epoch 57/150 - 0s - loss: 0.5694 - acc: 0.7109 Epoch 58/150 - 0s - loss: 0.5722 - acc: 0.7057 Epoch 59/150 - 0s - loss: 0.5661 - acc: 0.7161 Epoch 60/150 - 0s - loss: 0.5665 - acc: 0.7018 Epoch 61/150 - 0s - loss: 0.5604 - acc: 0.7044 Epoch 62/150 - 0s - loss: 0.5623 - acc: 0.7044 Epoch 63/150 - 0s - loss: 0.5630 - acc: 0.7096 Epoch 64/150 - 0s - loss: 0.5607 - acc: 0.7122 Epoch 65/150 - 0s - loss: 0.5583 - acc: 0.7057 Epoch 66/150 - 0s - loss: 0.5525 - acc: 0.7122 Epoch 67/150 - 0s - loss: 0.5508 - acc: 0.7122 Epoch 68/150 - 0s - loss: 0.5532 - acc: 0.7057 Epoch 69/150 - 0s - loss: 0.5469 - acc: 0.7240 Epoch 70/150 - 0s - loss: 0.5566 - acc: 0.7005 Epoch 71/150 - 0s - loss: 0.5452 - acc: 0.7201 Epoch 72/150 - 0s - loss: 0.5458 - acc: 0.7174 Epoch 73/150 - 0s - loss: 0.5411 - acc: 0.7187 Epoch 74/150 - 0s - loss: 0.5425 - acc: 0.7044 Epoch 75/150 - 0s - loss: 0.5397 - acc: 0.7227 Epoch 76/150 - 0s - loss: 0.5351 - 
acc: 0.7214 Epoch 77/150 - 0s - loss: 0.5356 - acc: 0.7214 Epoch 78/150 - 0s - loss: 0.5311 - acc: 0.7279 Epoch 79/150 - 0s - loss: 0.5377 - acc: 0.7188 Epoch 80/150 - 0s - loss: 0.5310 - acc: 0.7279 Epoch 81/150 - 0s - loss: 0.5275 - acc: 0.7253 Epoch 82/150 - 0s - loss: 0.5290 - acc: 0.7214 Epoch 83/150 - 0s - loss: 0.5223 - acc: 0.7344 Epoch 84/150 - 0s - loss: 0.5203 - acc: 0.7370 Epoch 85/150 - 0s - loss: 0.5265 - acc: 0.7253 Epoch 86/150 - 0s - loss: 0.5296 - acc: 0.7331 Epoch 87/150 - 0s - loss: 0.5233 - acc: 0.7370 Epoch 88/150 - 0s - loss: 0.5173 - acc: 0.7370 Epoch 89/150 - 0s - loss: 0.5332 - acc: 0.7409 Epoch 90/150 - 0s - loss: 0.5130 - acc: 0.7357 Epoch 91/150 - 0s - loss: 0.5141 - acc: 0.7461 Epoch 92/150 - 0s - loss: 0.5130 - acc: 0.7578 Epoch 93/150 - 0s - loss: 0.5113 - acc: 0.7448 Epoch 94/150 - 0s - loss: 0.5123 - acc: 0.7604 Epoch 95/150 - 0s - loss: 0.5066 - acc: 0.7578 Epoch 96/150 - 0s - loss: 0.5102 - acc: 0.7565 Epoch 97/150 - 0s - loss: 0.5041 - acc: 0.7630 Epoch 98/150 - 0s - loss: 0.5019 - acc: 0.7565 Epoch 99/150 - 0s - loss: 0.4987 - acc: 0.7617 Epoch 100/150 - 0s - loss: 0.4988 - acc: 0.7513 Epoch 101/150 - 0s - loss: 0.4975 - acc: 0.7643 Epoch 102/150 - 0s - loss: 0.4976 - acc: 0.7578 Epoch 103/150 - 0s - loss: 0.5075 - acc: 0.7526 Epoch 104/150 - 0s - loss: 0.4962 - acc: 0.7760 Epoch 105/150 - 0s - loss: 0.5173 - acc: 0.7396 Epoch 106/150 - 0s - loss: 0.5007 - acc: 0.7578 Epoch 107/150 - 0s - loss: 0.4945 - acc: 0.7656 Epoch 108/150 - 0s - loss: 0.4951 - acc: 0.7617 Epoch 109/150 - 0s - loss: 0.4931 - acc: 0.7630 Epoch 110/150 - 0s - loss: 0.4897 - acc: 0.7591 Epoch 111/150 - 0s - loss: 0.4972 - acc: 0.7578 Epoch 112/150 - 0s - loss: 0.4926 - acc: 0.7526 Epoch 113/150 - 0s - loss: 0.4908 - acc: 0.7643 Epoch 114/150 - 0s - loss: 0.4929 - acc: 0.7708 Epoch 115/150 - 0s - loss: 0.4868 - acc: 0.7617 Epoch 116/150 - 0s - loss: 0.4902 - acc: 0.7734 Epoch 117/150 - 0s - loss: 0.4865 - acc: 0.7786 Epoch 118/150 - 0s - loss: 0.4879 - acc: 0.7656 Epoch 119/150 - 0s - loss: 0.4815 - acc: 0.7721 Epoch 120/150 - 0s - loss: 0.4866 - acc: 0.7565 Epoch 121/150 - 0s - loss: 0.4923 - acc: 0.7721 Epoch 122/150 - 0s - loss: 0.4845 - acc: 0.7773 Epoch 123/150 - 0s - loss: 0.4799 - acc: 0.7799 Epoch 124/150 - 0s - loss: 0.4727 - acc: 0.7904 Epoch 125/150 - 0s - loss: 0.4780 - acc: 0.7682 Epoch 126/150 - 0s - loss: 0.4766 - acc: 0.7643 Epoch 127/150 - 0s - loss: 0.4772 - acc: 0.7747 Epoch 128/150 - 0s - loss: 0.4710 - acc: 0.7812 Epoch 129/150 - 0s - loss: 0.4832 - acc: 0.7799 Epoch 130/150 - 0s - loss: 0.4744 - acc: 0.7799 Epoch 131/150 - 0s - loss: 0.4776 - acc: 0.7773 Epoch 132/150 - 0s - loss: 0.4718 - acc: 0.7799 Epoch 133/150 - 0s - loss: 0.4817 - acc: 0.7747 Epoch 134/150 - 0s - loss: 0.4771 - acc: 0.7786 Epoch 135/150 - 0s - loss: 0.4717 - acc: 0.7721 Epoch 136/150 - 0s - loss: 0.4686 - acc: 0.7695 Epoch 137/150 - 0s - loss: 0.4673 - acc: 0.7747 Epoch 138/150 - 0s - loss: 0.4720 - acc: 0.7839 Epoch 139/150 - 0s - loss: 0.4592 - acc: 0.7943 Epoch 140/150 - 0s - loss: 0.4705 - acc: 0.7708 Epoch 141/150 - 0s - loss: 0.4660 - acc: 0.7839 Epoch 142/150 - 0s - loss: 0.4747 - acc: 0.7669 Epoch 143/150 - 0s - loss: 0.4682 - acc: 0.7760 Epoch 144/150 - 0s - loss: 0.4656 - acc: 0.7734 Epoch 145/150 - 0s - loss: 0.4705 - acc: 0.7799 Epoch 146/150 - 0s - loss: 0.4671 - acc: 0.7904 Epoch 147/150 - 0s - loss: 0.4689 - acc: 0.7708 Epoch 148/150 - 0s - loss: 0.4667 - acc: 0.7747 Epoch 149/150 - 0s - loss: 0.4633 - acc: 0.7786 Epoch 150/150 - 0s - loss: 0.4595 - acc: 0.7826 768/768 
[==============================] - 0s 116us/step acc: 79.04% [1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 1.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 
0.0, 1.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0]

EXAMPLE 2

  • We are using a network with 1 input, 10 neurons in the hidden layer, and 1 output. The network will use a mean squared error loss function and will be trained using the efficient ADAM algorithm.

  • The network needs about 1,000 epochs to solve this problem effectively, but we will only train it for 100 epochs. This is to ensure we get a model that makes errors when making predictions.

  • After the network is trained, we will make predictions on the dataset and print the mean squared error.

import pandas as pd
from keras.models import Sequential
from keras.layers import Dense
from sklearn.metrics import mean_squared_error
# fix random seed for reproducibility
#from numpy.random import seed
#seed(1)
Using TensorFlow backend.
# create sequence
length = 10
sequence = [i/float(length) for i in range(length)]

# create X/y pairs
df = pd.DataFrame(sequence)
df = pd.concat([df.shift(1), df], axis=1)
df.dropna(inplace=True)

# convert to MLP-friendly format
values = df.values
X, y = values[:,0], values[:,1]
seed = 7
np.random.seed(seed)

model = Sequential()
model.add(Dense(10, input_dim=1))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')

# fit network
model.fit(X, y, epochs=100, batch_size=len(X), verbose=0)

# forecast
yhat = model.predict(X, verbose=0)
print(mean_squared_error(y, yhat[:,0]))

# calculate predictions
predictions = model.predict(X)

# round predictions
rounded = [round(x[0]) for x in predictions]
print(rounded)
# fit MLP to dataset and print error
def fit_model(X, y):
    # design network
    model = Sequential()
    model.add(Dense(10, input_dim=1))
    model.add(Dense(1))
    model.compile(loss='mean_squared_error', optimizer='adam')
    # fit network
    model.fit(X, y, epochs=100, batch_size=len(X), verbose=0)
    # forecast
    yhat = model.predict(X, verbose=0)
    print(mean_squared_error(y, yhat[:,0]))

# repeat experiment
repeats = 10
for _ in range(repeats):
    fit_model(X, y)
0.1515518448436962
0.0037723240081637905
0.0055053199995830394
0.04998553417015786
0.2219504934124317
0.02248514390364262
0.11758562182893417
0.031132008190624956
0.1904165192533181
0.016771260792335478

Seed the Random Number Generator

  • Use a fixed seed for the random number generator.

  • Random numbers are generated using a pseudo-random number generator. A random number generator is a mathematical function that will generate a long sequence of numbers that are random enough for general purpose use, such as in machine learning algorithms.

  • The specific way to set the random number generator differs depending on the backend.

Seed Random Numbers with the Theano Backend

  • Generally, Keras gets its source of randomness from the NumPy random number generator.

  • For the most part, so does the Theano backend.

  • We can seed the NumPy random number generator by calling the seed() function from NumPy's random module.

from numpy.random import seed
seed(1)

Note: The importing and calling of the seed function is best done at the top of your code file.

This is a best practice because it is possible that some randomness is used when various Keras or Theano (or other) libraries are imported as part of their initialization, even before they are directly used.
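With the TensorFlow backend, the NumPy seed alone may not cover all sources of randomness. A minimal sketch, assuming the TensorFlow 1.x API available when this notebook was written (newer TensorFlow versions use tf.random.set_seed instead), would additionally seed TensorFlow's own generator at the top of the file:

from numpy.random import seed
seed(1)
import tensorflow as tf
# seed TensorFlow's graph-level random number generator (TF 1.x API)
tf.set_random_seed(2)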