Reference for Andrew Ng's Deep Learning Course 1, Week 4, Programming Assignment 2

2023-10-30

Deep Neural Network for Image Classification: Application
When you finish this, you will have finished the last programming assignment of Week 4, and also the last programming assignment of this course!

You will use the functions you implemented in the previous assignment to build a deep network, and apply it to cat vs. non-cat classification. Hopefully, you will see an improvement in accuracy relative to your previous logistic regression implementation.

After this assignment you will be able to:
- Build and apply a deep neural network to supervised learning.

Let’s get started!

1 - Packages

Let’s first import all the packages that you will need during this assignment.
- numpy is the fundamental package for scientific computing with Python.
- matplotlib is a library to plot graphs in Python.
- h5py is a common package to interact with a dataset that is stored on an H5 file.
- PIL and scipy are used here to test your model with your own picture at the end.
- dnn_app_utils provides the functions implemented in the “Building your Deep Neural Network: Step by Step” assignment to this notebook.
- np.random.seed(1) is used to keep all the random function calls consistent. It will help us grade your work.
Import the packages and modules used by the program:

import time
import numpy as np
import h5py
import matplotlib.pyplot as plt
import scipy
from PIL import Image
from scipy import ndimage
from dnn_app_utils_v2 import *

%matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'

%load_ext autoreload
%autoreload 2

np.random.seed(1)

2 - Dataset

You will use the same “Cat vs non-Cat” dataset as in “Logistic Regression as a Neural Network” (Assignment 2). The model you built then had 70% test accuracy on classifying cat vs non-cat images. Hopefully, your new model will perform better!

Problem Statement: You are given a dataset (“data.h5”) containing:
- a training set of m_train images labelled as cat (1) or non-cat (0)
- a test set of m_test images labelled as cat or non-cat
- each image is of shape (num_px, num_px, 3) where 3 is for the 3 channels (RGB).

Let’s get more familiar with the dataset. Load the data by running the cell below.

train_x_orig, train_y, test_x_orig, test_y, classes = load_data()
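For reference, here is a minimal sketch of what load_data in dnn_app_utils_v2 roughly does, assuming the standard train_catvnoncat.h5 and test_catvnoncat.h5 files in a datasets/ folder (the file names and paths are assumptions, not confirmed by this notebook; numpy and h5py are already imported above):

def load_data():
    # Training set: image data and 0/1 labels (assumed file layout)
    train_dataset = h5py.File('datasets/train_catvnoncat.h5', "r")
    train_set_x_orig = np.array(train_dataset["train_set_x"][:])
    train_set_y_orig = np.array(train_dataset["train_set_y"][:])

    # Test set
    test_dataset = h5py.File('datasets/test_catvnoncat.h5', "r")
    test_set_x_orig = np.array(test_dataset["test_set_x"][:])
    test_set_y_orig = np.array(test_dataset["test_set_y"][:])

    # Class names stored as bytes, e.g. b'cat' / b'non-cat'
    classes = np.array(test_dataset["list_classes"][:])

    # Reshape the label vectors to row vectors of shape (1, m)
    train_set_y_orig = train_set_y_orig.reshape((1, train_set_y_orig.shape[0]))
    test_set_y_orig = test_set_y_orig.reshape((1, test_set_y_orig.shape[0]))

    return train_set_x_orig, train_set_y_orig, test_set_x_orig, test_set_y_orig, classes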

The following code will show you an image in the dataset. Feel free to change the index and re-run the cell multiple times to see other images.
Display one image:

index = 7
plt.imshow(train_x_orig[index])  # expects an array of shape (64, 64, 3)
print ("y = " + str(train_y[0,index]) + ". It's a " + classes[train_y[0,index]].decode("utf-8") +  " picture.")

Output:

y = 1. It's a cat picture.

[Image: training example at index 7, a cat]

Get the number of training examples, the image resolution, and the number of test examples:

m_train = train_x_orig.shape[0]
num_px = train_x_orig.shape[1]
m_test = test_x_orig.shape[0]

print ("Number of training examples: " + str(m_train))
print ("Number of testing examples: " + str(m_test))
print ("Each image is of size: (" + str(num_px) + ", " + str(num_px) + ", 3)")
print ("train_x_orig shape: " + str(train_x_orig.shape))
print ("train_y shape: " + str(train_y.shape))
print ("test_x_orig shape: " + str(test_x_orig.shape))
print ("test_y shape: " + str(test_y.shape))

Output: 209 training examples and 50 test examples.

Number of training examples: 209
Number of testing examples: 50
Each image is of size: (64, 64, 3)
train_x_orig shape: (209, 64, 64, 3)
train_y shape: (1, 209)
test_x_orig shape: (50, 64, 64, 3)
test_y shape: (1, 50)

As usual, you reshape and standardize the images before feeding them to the network. The code is given in the cell below. Each image array is flattened into a single column vector.

[Figure: each (64, 64, 3) image is flattened into a (12288, 1) column vector]

train_x_flatten = train_x_orig.reshape(train_x_orig.shape[0], -1).T  # (209, 12288) after reshape, (12288, 209) after transpose
test_x_flatten = test_x_orig.reshape(test_x_orig.shape[0], -1).T
train_x = train_x_flatten / 255.  # standardize pixel values to the range 0-1
test_x = test_x_flatten / 255.
print("train_x's shape : ", train_x.shape)
print("test_x's shape : ", test_x.shape)

Output:

train_x's shape :  (12288, 209)
test_x's shape :  (12288, 50)

12,288 equals 64 × 64 × 3, which is the size of one reshaped image vector.
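The reshape-then-transpose trick is worth seeing on a tiny example: reshape(m, -1) flattens each image to one row, and .T turns those rows into columns, so each column of the result is one example. A quick self-contained check:

# Four tiny "images" of shape (2, 2, 3) -> flattened matrix of shape (12, 4)
batch = np.arange(4 * 2 * 2 * 3).reshape(4, 2, 2, 3)
flat = batch.reshape(batch.shape[0], -1).T
print(flat.shape)                                  # (12, 4): one column per image
print(np.allclose(flat[:, 0], batch[0].ravel()))   # True: column 0 is image 0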

3 - Architecture of your model

Build a 2-layer neural network and an L-layer neural network.
Now that you are familiar with the dataset, it is time to build a deep neural network to distinguish cat images from non-cat images.

You will build two different models:
- A 2-layer neural network
- An L-layer deep neural network

You will then compare the performance of these models, and also try out different values for L.

Let’s look at the two architectures.

3.1 - 2-layer neural network

[Figure 2: 2-layer neural network. INPUT -> LINEAR -> RELU -> LINEAR -> SIGMOID -> OUTPUT]

Detailed Architecture of figure 2:
- The input is a (64, 64, 3) image which is flattened to a vector of size (12288, 1).
- The corresponding vector $[x_0, x_1, \ldots, x_{12287}]^T$ is then multiplied by the weight matrix $W^{[1]}$ of size $(n^{[1]}, 12288)$.
- You then add a bias term and take its relu to get the following vector: $[a_0^{[1]}, a_1^{[1]}, \ldots, a_{n^{[1]}-1}^{[1]}]^T$.
- You then repeat the same process.
- You multiply the resulting vector by $W^{[2]}$ and add your intercept (bias).
- Finally, you take the sigmoid of the result. If it is greater than 0.5, you classify it as a cat.
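In equations, the forward pass just described is (a standard formulation; $\sigma$ denotes the sigmoid):

$$Z^{[1]} = W^{[1]} x + b^{[1]}, \qquad A^{[1]} = \mathrm{ReLU}(Z^{[1]})$$
$$Z^{[2]} = W^{[2]} A^{[1]} + b^{[2]}, \qquad \hat{y} = \sigma(Z^{[2]}), \qquad \text{predict cat if } \hat{y} > 0.5$$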

3.2 - L-layer deep neural network

It is hard to represent an L-layer deep neural network with the above representation. However, here is a simplified network representation:

[Figure 3: L-layer neural network. [LINEAR -> RELU] × (L-1) -> LINEAR -> SIGMOID]

Detailed Architecture of figure 3:
- The input is a (64, 64, 3) image which is flattened to a vector of size (12288, 1).
- The corresponding vector $[x_0, x_1, \ldots, x_{12287}]^T$ is then multiplied by the weight matrix $W^{[1]}$, and then you add the intercept $b^{[1]}$. The result is called the linear unit.
- Next, you take the relu of the linear unit. This process could be repeated several times for each $(W^{[l]}, b^{[l]})$, depending on the model architecture.
- Finally, you take the sigmoid of the final linear unit. If it is greater than 0.5, you classify it as a cat.
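The same structure written as a recurrence, with $A^{[0]} = x$:

$$Z^{[l]} = W^{[l]} A^{[l-1]} + b^{[l]}, \qquad A^{[l]} = \mathrm{ReLU}(Z^{[l]}) \quad \text{for } l = 1, \ldots, L-1$$
$$\hat{y} = A^{[L]} = \sigma(Z^{[L]})$$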

3.3 - General methodology

As usual you will follow the Deep Learning methodology to build the model:
1. Initialize parameters / Define hyperparameters
2. Loop for num_iterations:
   a. Forward propagation
   b. Compute cost function
   c. Backward propagation
   d. Update parameters (using parameters, and grads from backprop)
3. Use trained parameters to predict labels

Let’s now implement those two models!

4 - Two-layer neural network

Question: Use the helper functions you have implemented in the previous assignment to build a 2-layer neural network with the following structure: LINEAR -> RELU -> LINEAR -> SIGMOID. The helpers you will need are initialize_parameters(n_x, n_h, n_y), linear_activation_forward(A_prev, W, b, activation), compute_cost(A2, Y), linear_activation_backward(dA, cache, activation), and update_parameters(parameters, grads, learning_rate).

### CONSTANTS DEFINING THE MODEL ####
n_x = 12288     # num_px * num_px * 3
n_h = 7
n_y = 1
layers_dims = (n_x, n_h, n_y)

The two-layer neural network model function:

def two_layer_model(X, Y, layers_dims, learning_rate=0.0075, num_iterations=3000, print_cost=True):

    np.random.seed(1)
    grads = {}
    costs = []                      # to keep track of the cost
    m = X.shape[1]                  # number of examples
    (n_x, n_h, n_y) = layers_dims

    # Initialize parameters and unpack them
    parameters = initialize_parameters(n_x, n_h, n_y)
    W1 = parameters["W1"]
    b1 = parameters["b1"]
    W2 = parameters["W2"]
    b2 = parameters["b2"]

    # Gradient descent loop
    for i in range(0, num_iterations):
        # Forward propagation: LINEAR -> RELU -> LINEAR -> SIGMOID
        A1, cache1 = linear_activation_forward(X, W1, b1, 'relu')
        A2, cache2 = linear_activation_forward(A1, W2, b2, 'sigmoid')

        # Compute the cross-entropy cost
        cost = compute_cost(A2, Y)

        # Backward propagation, starting from the derivative of the cost w.r.t. A2
        dA2 = -(np.divide(Y, A2) - np.divide(1 - Y, 1 - A2))
        dA1, dW2, db2 = linear_activation_backward(dA2, cache2, 'sigmoid')
        dA0, dW1, db1 = linear_activation_backward(dA1, cache1, 'relu')

        grads['dW1'] = dW1
        grads['db1'] = db1
        grads['dW2'] = dW2
        grads['db2'] = db2

        # Update parameters and re-unpack them for the next iteration
        parameters = update_parameters(parameters, grads, learning_rate)
        W1 = parameters["W1"]
        b1 = parameters["b1"]
        W2 = parameters["W2"]
        b2 = parameters["b2"]

        # Print and record the cost every 100 iterations
        if print_cost and i % 100 == 0:
            print("Cost after iteration {}: {}".format(i, np.squeeze(cost)))
            costs.append(cost)

    # Plot the cost curve
    plt.plot(np.squeeze(costs))
    plt.ylabel('cost')
    plt.xlabel('iterations (per hundreds)')
    plt.title("Learning rate =" + str(learning_rate))
    plt.show()

    return parameters
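The dA2 line is the derivative of the cross-entropy cost computed by compute_cost with respect to the final activation, which is why it is the starting point of backpropagation:

$$J = -\frac{1}{m} \sum_{i=1}^{m} \left[ y^{(i)} \log a^{[2](i)} + (1 - y^{(i)}) \log\left(1 - a^{[2](i)}\right) \right]$$
$$\frac{\partial J}{\partial A^{[2]}} = -\left( \frac{Y}{A^{[2]}} - \frac{1 - Y}{1 - A^{[2]}} \right) \quad \text{(elementwise; the } 1/m \text{ factor is applied inside the linear backward step)}$$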

Train the two-layer model:

parameters = two_layer_model(train_x, train_y, layers_dims = (n_x, n_h, n_y), num_iterations = 2500, print_cost=True)

Output:

Cost after iteration 0: 0.693049735659989
Cost after iteration 100: 0.6464320953428849
Cost after iteration 200: 0.6325140647912677
Cost after iteration 300: 0.6015024920354665
Cost after iteration 400: 0.5601966311605747
Cost after iteration 500: 0.5158304772764729
Cost after iteration 600: 0.47549013139433266
Cost after iteration 700: 0.4339163151225749
Cost after iteration 800: 0.40079775362038844
Cost after iteration 900: 0.3580705011323798
Cost after iteration 1000: 0.3394281538366413
Cost after iteration 1100: 0.30527536361962654
Cost after iteration 1200: 0.2749137728213016
Cost after iteration 1300: 0.2468176821061486
Cost after iteration 1400: 0.19850735037466108
Cost after iteration 1500: 0.17448318112556668
Cost after iteration 1600: 0.17080762978097128
Cost after iteration 1700: 0.11306524562164709
Cost after iteration 1800: 0.09629426845937152
Cost after iteration 1900: 0.0834261795972687
Cost after iteration 2000: 0.07439078704319084
Cost after iteration 2100: 0.06630748132267933
Cost after iteration 2200: 0.059193295010381744
Cost after iteration 2300: 0.053361403485605606
Cost after iteration 2400: 0.04855478562877022

The cost decreases as the number of iterations increases; the model is converging.

[Figure: cost vs. iterations for the 2-layer model]

predictions_train = predict(train_x, train_y, parameters)
Accuracy: 1.0
predictions_test = predict(test_x, test_y, parameters)
Accuracy: 0.72
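predict is provided by dnn_app_utils_v2; a plausible sketch of it (a forward pass followed by thresholding at 0.5 — the exact implementation may differ):

def predict(X, y, parameters):
    # Forward-propagate, threshold the probabilities at 0.5, report accuracy
    m = X.shape[1]
    probas, caches = L_model_forward(X, parameters)
    p = (probas > 0.5).astype(float)
    print("Accuracy: " + str(np.sum(p == y) / float(m)))
    return p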

There is an overfitting problem: 100% training accuracy versus 72% test accuracy.
Note: You may notice that running the model on fewer iterations (say 1500) gives better accuracy on the test set. This is called “early stopping” and we will talk about it in the next course. Early stopping is a way to prevent overfitting.
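As a purely hypothetical illustration of that idea (the helper set in this assignment does not include it), one could wrap the same training steps and halt when a held-out cost stops improving; the patience value below is an arbitrary assumption:

def two_layer_model_early_stop(X, Y, X_val, Y_val, layers_dims,
                               learning_rate=0.0075, max_iterations=3000, patience=3):
    # Hypothetical sketch: stop when the validation cost fails to improve
    # for `patience` consecutive checks (one check every 100 iterations).
    n_x, n_h, n_y = layers_dims
    parameters = initialize_parameters(n_x, n_h, n_y)
    best_cost, strikes = float('inf'), 0
    for i in range(max_iterations):
        A1, cache1 = linear_activation_forward(X, parameters["W1"], parameters["b1"], 'relu')
        A2, cache2 = linear_activation_forward(A1, parameters["W2"], parameters["b2"], 'sigmoid')
        dA2 = -(np.divide(Y, A2) - np.divide(1 - Y, 1 - A2))
        dA1, dW2, db2 = linear_activation_backward(dA2, cache2, 'sigmoid')
        _, dW1, db1 = linear_activation_backward(dA1, cache1, 'relu')
        grads = {'dW1': dW1, 'db1': db1, 'dW2': dW2, 'db2': db2}
        parameters = update_parameters(parameters, grads, learning_rate)
        if i % 100 == 0:
            # Cost on held-out data drives the stopping decision
            V1, _ = linear_activation_forward(X_val, parameters["W1"], parameters["b1"], 'relu')
            V2, _ = linear_activation_forward(V1, parameters["W2"], parameters["b2"], 'sigmoid')
            val_cost = compute_cost(V2, Y_val)
            if val_cost < best_cost:
                best_cost, strikes = val_cost, 0
            else:
                strikes += 1
                if strikes >= patience:
                    print("Early stop at iteration %i (val cost %f)" % (i, val_cost))
                    break
    return parameters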
The accuracy on the test set is 72%.
Congratulations! It seems that your 2-layer neural network has better performance (72%) than the logistic regression implementation (70%, assignment week 2). Let’s see if you can do even better with an L-layer model.

5 - L-layer Neural Network

The L-layer neural network model:
Question: Use the helper functions you have implemented previously to build an L-layer neural network with the following structure: [LINEAR -> RELU]×(L-1) -> LINEAR -> SIGMOID. The helpers you will need are initialize_parameters_deep(layers_dims), L_model_forward(X, parameters), compute_cost(AL, Y), L_model_backward(AL, Y, caches), and update_parameters(parameters, grads, learning_rate).

### CONSTANTS ###
layers_dims = [12288, 20, 7, 5, 1] #  5-layer model
def L_layer_model(X, Y, layers_dims, learning_rate = 0.0075, num_iterations = 3000, print_cost=False):#lr was 0.009
    """
    Implements a L-layer neural network: [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID.

    Arguments:
    X -- data, numpy array of shape (num_px * num_px * 3, number of examples)
    Y -- true "label" vector (containing 1 if cat, 0 if non-cat), of shape (1, number of examples)
    layers_dims -- list containing the input size and each layer size, of length (number of layers + 1).
    learning_rate -- learning rate of the gradient descent update rule
    num_iterations -- number of iterations of the optimization loop
    print_cost -- if True, it prints the cost every 100 steps

    Returns:
    parameters -- parameters learnt by the model. They can then be used to predict.
    """

    np.random.seed(1)
    costs = []                         # keep track of cost

    # Parameters initialization.
    ### START CODE HERE ###
    parameters = initialize_parameters_deep(layers_dims)
    ### END CODE HERE ###

    # Loop (gradient descent)
    for i in range(0, num_iterations):

        # Forward propagation: [LINEAR -> RELU]*(L-1) -> LINEAR -> SIGMOID.
        ### START CODE HERE ### (≈ 1 line of code)
        AL, caches = L_model_forward(X, parameters)
        ### END CODE HERE ###

        # Compute cost.
        ### START CODE HERE ### (≈ 1 line of code)
        cost = compute_cost(AL, Y)
        ### END CODE HERE ###

        # Backward propagation.
        ### START CODE HERE ### (≈ 1 line of code)
        grads = L_model_backward(AL, Y, caches)
        ### END CODE HERE ###

        # Update parameters.
        ### START CODE HERE ### (≈ 1 line of code)
        parameters = update_parameters(parameters, grads, learning_rate)
        ### END CODE HERE ###

        # Print and record the cost every 100 iterations
        if print_cost and i % 100 == 0:
            print ("Cost after iteration %i: %f" %(i, cost))
            costs.append(cost)

    # plot the cost
    plt.plot(np.squeeze(costs))
    plt.ylabel('cost')
    plt.xlabel('iterations (per hundreds)')
    plt.title("Learning rate =" + str(learning_rate))
    plt.show()

    return parameters
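One detail hidden inside initialize_parameters_deep matters a lot here: in dnn_app_utils_v2 the random weights are scaled by 1/√(size of the previous layer) rather than by 0.01. A sketch of that initialization (the scaling is the key point; details may differ from the actual helper):

def initialize_parameters_deep(layer_dims):
    # Weights scaled by 1/sqrt(previous layer size), biases zero.
    # With a plain 0.01 scaling, a network this deep trains very slowly here.
    np.random.seed(1)
    parameters = {}
    L = len(layer_dims)
    for l in range(1, L):
        parameters['W' + str(l)] = (np.random.randn(layer_dims[l], layer_dims[l - 1])
                                    / np.sqrt(layer_dims[l - 1]))
        parameters['b' + str(l)] = np.zeros((layer_dims[l], 1))
    return parameters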

You will now train the model as a 5-layer neural network.

Run the cell below to train your model. The cost should decrease on every iteration. It may take up to 5 minutes to run 2500 iterations. Check if the “Cost after iteration 0” matches the expected output below, if not click on the square (⬛) on the upper bar of the notebook to stop the cell and try to find your error.
Train the 5-layer model:

parameters = L_layer_model(train_x, train_y, layers_dims, num_iterations = 2500, print_cost = True)

Output:

Cost after iteration 0: 0.771749
Cost after iteration 100: 0.672053
Cost after iteration 200: 0.648263
Cost after iteration 300: 0.611507
Cost after iteration 400: 0.567047
Cost after iteration 500: 0.540138
Cost after iteration 600: 0.527930
Cost after iteration 700: 0.465477
Cost after iteration 800: 0.369126
Cost after iteration 900: 0.391747
Cost after iteration 1000: 0.315187
Cost after iteration 1100: 0.272700
Cost after iteration 1200: 0.237419
Cost after iteration 1300: 0.199601
Cost after iteration 1400: 0.189263
Cost after iteration 1500: 0.161189
Cost after iteration 1600: 0.148214
Cost after iteration 1700: 0.137775
Cost after iteration 1800: 0.129740
Cost after iteration 1900: 0.121225
Cost after iteration 2000: 0.113821
Cost after iteration 2100: 0.107839
Cost after iteration 2200: 0.102855
Cost after iteration 2300: 0.100897
Cost after iteration 2400: 0.092878

[Figure: cost vs. iterations for the 5-layer model]
Evaluate on the training and test sets:

pred_train = predict(train_x, train_y, parameters)
pred_test = predict(test_x, test_y, parameters)

Output (training accuracy, then test accuracy):

Accuracy: 0.985645933014
Accuracy: 0.8

The test accuracy improves to 80%.
Congrats! It seems that your 5-layer neural network has better performance (80%) than your 2-layer neural network (72%) on the same test set.

This is good performance for this task. Nice job!

Though in the next course on “Improving deep neural networks” you will learn how to obtain even higher accuracy by systematically searching for better hyperparameters (learning_rate, layers_dims, num_iterations, and others you’ll also learn in the next course).
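As a small, purely illustrative taste of that search, one could sweep a few learning rates with the model above (the values are arbitrary assumptions):

# Hypothetical sketch: compare a few learning rates on the same architecture
for lr in [0.01, 0.0075, 0.005]:
    print("learning_rate =", lr)
    params = L_layer_model(train_x, train_y, layers_dims,
                           learning_rate=lr, num_iterations=1500, print_cost=False)
    predict(test_x, test_y, params)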

6 - Results Analysis

First, let’s take a look at some images the L-layer model labeled incorrectly. This will show a few mislabeled images.
Let's summarize the kinds of images the model fails to recognize:

print_mislabeled_images(classes, test_x, test_y, pred_test)

[Figure: mislabeled test images with predicted and true classes]
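print_mislabeled_images also comes from dnn_app_utils_v2; roughly, it finds the indices where the prediction and the label disagree and plots those images (a sketch; the actual helper may differ):

def print_mislabeled_images(classes, X, y, p):
    # For 0/1 labels, p + y == 1 exactly where prediction and label disagree
    mislabeled = np.asarray(np.where(p + y == 1))
    num_images = len(mislabeled[1])
    plt.rcParams['figure.figsize'] = (40.0, 40.0)
    for i in range(num_images):
        index = mislabeled[1][i]
        plt.subplot(2, num_images, i + 1)
        plt.imshow(X[:, index].reshape(64, 64, 3), interpolation='nearest')
        plt.axis('off')
        plt.title("Prediction: " + classes[int(p[0, index])].decode("utf-8")
                  + " \n Class: " + classes[int(y[0, index])].decode("utf-8"))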

A few types of images the model tends to do poorly on include:
- Cat body in an unusual position
- Cat appears against a background of a similar color
- Unusual cat color and species
- Camera Angle
- Brightness of the picture
- Scale variation (cat is very large or small in image)

7 - Test with your own image (optional/ungraded exercise)

Test your own picture:
Congratulations on finishing this assignment. You can use your own image and see the output of your model. To do that:
1. Click on “File” in the upper bar of this notebook, then click “Open” to go on your Coursera Hub.
2. Add your image to this Jupyter Notebook’s directory, in the “images” folder
3. Change your image’s name in the following code
4. Run the code and check if the algorithm is right (1 = cat, 0 = non-cat)!

## START CODE HERE ##
my_image = "my_image2.jpg" # change this to the name of your image file 
my_label_y = [1] # the true class of your image (1 -> cat, 0 -> non-cat)
## END CODE HERE ##

fname = "images/" + my_image
image = np.array(ndimage.imread(fname, flatten=False))
my_image = scipy.misc.imresize(image, size=(num_px,num_px)).reshape((num_px*num_px*3,1))
my_predicted_image = predict(my_image, my_label_y, parameters)

plt.imshow(image)
print ("y = " + str(np.squeeze(my_predicted_image)) + ", your L-layer model predicts a \"" + classes[int(np.squeeze(my_predicted_image)),].decode("utf-8") +  "\" picture.")
Accuracy: 1.0
y = 1.0, your L-layer model predicts a "cat" picture.

[Image: your test image with the model's prediction]
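Note that scipy.ndimage.imread and scipy.misc.imresize were removed from SciPy (1.2 and later), so the cell above only runs on old installs. A roughly equivalent version using PIL (already imported above) would be the following; it also scales pixels to 0-1 to match the training preprocessing, which the original cell omits:

# Equivalent preprocessing with PIL on modern SciPy installs
image = np.array(Image.open(fname).convert("RGB").resize((num_px, num_px)))
my_image = image.reshape((num_px * num_px * 3, 1)) / 255.
my_predicted_image = predict(my_image, my_label_y, parameters)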

References:

for auto-reloading external module: http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
