Path: blob/master/Improving Deep Neural Networks Hyperparameter tuning, Regularization and Optimization/week7/__pycache__/improv_utils.cpython-36.pyc
# improv_utils.py
# Source reconstructed from the compiled bytecode dump above
# (improv_utils.cpython-36.pyc, Python 3.6). Constants that are not
# legible in the dump -- e.g. the 12288-unit placeholder size
# (64*64*3 flattened SIGNS images) -- follow the standard course
# version of this helper file and are flagged below.

import math

import h5py
import numpy as np
import tensorflow as tf


def load_dataset():
    train_dataset = h5py.File('datasets/train_signs.h5', "r")
    train_set_x_orig = np.array(train_dataset["train_set_x"][:])  # train features
    train_set_y_orig = np.array(train_dataset["train_set_y"][:])  # train labels

    test_dataset = h5py.File('datasets/test_signs.h5', "r")
    test_set_x_orig = np.array(test_dataset["test_set_x"][:])  # test features
    test_set_y_orig = np.array(test_dataset["test_set_y"][:])  # test labels

    classes = np.array(test_dataset["list_classes"][:])  # list of class names

    # Reshape labels to row vectors of shape (1, number of examples).
    train_set_y_orig = train_set_y_orig.reshape((1, train_set_y_orig.shape[0]))
    test_set_y_orig = test_set_y_orig.reshape((1, test_set_y_orig.shape[0]))

    return train_set_x_orig, train_set_y_orig, test_set_x_orig, test_set_y_orig, classes


def random_mini_batches(X, Y, mini_batch_size=64, seed=0):
    """
    Creates a list of random minibatches from (X, Y)

    Arguments:
    X -- input data, of shape (input size, number of examples)
    Y -- true "label" vector (containing 0 if cat, 1 if non-cat), of shape (1, number of examples)
    mini_batch_size -- size of the mini-batches, integer
    seed -- this is only for the purpose of grading, so that your "random" minibatches are the same as ours.

    Returns:
    mini_batches -- list of synchronous (mini_batch_X, mini_batch_Y)
    """
    m = X.shape[1]  # number of examples
    mini_batches = []
    np.random.seed(seed)

    # Step 1: Shuffle (X, Y) with a common column permutation.
    permutation = list(np.random.permutation(m))
    shuffled_X = X[:, permutation]
    shuffled_Y = Y[:, permutation].reshape((Y.shape[0], m))

    # Step 2: Partition (shuffled_X, shuffled_Y), minus the end case.
    num_complete_minibatches = math.floor(m / mini_batch_size)
    for k in range(0, num_complete_minibatches):
        mini_batch_X = shuffled_X[:, k * mini_batch_size:(k + 1) * mini_batch_size]
        mini_batch_Y = shuffled_Y[:, k * mini_batch_size:(k + 1) * mini_batch_size]
        mini_batch = (mini_batch_X, mini_batch_Y)
        mini_batches.append(mini_batch)

    # Handle the end case (a last mini-batch smaller than mini_batch_size).
    if m % mini_batch_size != 0:
        mini_batch_X = shuffled_X[:, num_complete_minibatches * mini_batch_size:m]
        mini_batch_Y = shuffled_Y[:, num_complete_minibatches * mini_batch_size:m]
        mini_batch = (mini_batch_X, mini_batch_Y)
        mini_batches.append(mini_batch)

    return mini_batches


def convert_to_one_hot(Y, C):
    # Row i of np.eye(C) is the one-hot vector for class i; transposing
    # yields one column per example.
    Y = np.eye(C)[Y.reshape(-1)].T
    return Y


def predict(X, parameters):
    W1 = tf.convert_to_tensor(parameters["W1"])
    b1 = tf.convert_to_tensor(parameters["b1"])
    W2 = tf.convert_to_tensor(parameters["W2"])
    b2 = tf.convert_to_tensor(parameters["b2"])
    W3 = tf.convert_to_tensor(parameters["W3"])
    b3 = tf.convert_to_tensor(parameters["b3"])

    params = {"W1": W1, "b1": b1,
              "W2": W2, "b2": b2,
              "W3": W3, "b3": b3}

    # One flattened input image; 12288 = 64*64*3 (assumed, per the
    # standard SIGNS version of this file).
    x = tf.placeholder("float", [12288, 1])

    z3 = forward_propagation(x, params)
    p = tf.argmax(z3)

    with tf.Session() as sess:
        prediction = sess.run(p, feed_dict={x: X})

    return prediction

# The bytecode dump is truncated at this point; forward_propagation
# (referenced by predict) and any remaining helpers are not recoverable
# from it.
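As a sanity check, the two NumPy-only helpers in this module can be exercised standalone. The sketch below redefines them (same logic as above, docstring omitted) so it runs without the h5py/TensorFlow dependencies; the array shapes and label values are illustrative, not from the course dataset.

```python
import math
import numpy as np

def convert_to_one_hot(Y, C):
    # Same body as convert_to_one_hot in improv_utils.py.
    return np.eye(C)[Y.reshape(-1)].T

def random_mini_batches(X, Y, mini_batch_size=64, seed=0):
    # Shuffle-and-partition logic from improv_utils.py.
    m = X.shape[1]
    mini_batches = []
    np.random.seed(seed)
    permutation = list(np.random.permutation(m))
    shuffled_X = X[:, permutation]
    shuffled_Y = Y[:, permutation].reshape((Y.shape[0], m))
    num_complete = math.floor(m / mini_batch_size)
    for k in range(num_complete):
        mini_batches.append((shuffled_X[:, k * mini_batch_size:(k + 1) * mini_batch_size],
                             shuffled_Y[:, k * mini_batch_size:(k + 1) * mini_batch_size]))
    if m % mini_batch_size != 0:
        mini_batches.append((shuffled_X[:, num_complete * mini_batch_size:],
                             shuffled_Y[:, num_complete * mini_batch_size:]))
    return mini_batches

# 10 examples with 3 features each, labels in {0, 1, 2}.
X = np.random.randn(3, 10)
Y = np.array([[0, 1, 2, 0, 1, 2, 0, 1, 2, 0]])

batches = random_mini_batches(X, Y, mini_batch_size=4, seed=0)
print(len(batches))          # 3 batches: sizes 4, 4, and a short final 2
print(batches[-1][0].shape)  # (3, 2) -- the leftover batch

one_hot = convert_to_one_hot(Y, 3)
print(one_hot.shape)         # (3, 10): one one-hot column per example
```

Note that the last mini-batch is deliberately allowed to be smaller than `mini_batch_size`, so no examples are dropped when `m` is not a multiple of the batch size.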