01. Neural Networks and Deep Learning - Week 2: Neural Network Basics (Programming Assignment)

2023-11-15

Part 1: Python Basics with Numpy (optional)

1. Building basic functions with numpy

1.1 Sigmoid function, np.exp()

Exercise: Build a function that returns the sigmoid of a real number x. Recall that sigmoid(x) = 1 / (1 + e^(-x)). Use math.exp(x) for the exponential function.

import math

# x is a real number
def basic_sigmoid(x):
    # sigmoid(x) = 1 / (1 + e^(-x))
    return 1 / (1 + math.exp(-x))
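
A quick sanity check of the scalar version (this demo is not part of the original assignment):

print(basic_sigmoid(3))   # 0.9525741268224334
# basic_sigmoid([1, 2, 3]) would raise a TypeError: math.exp expects a real number,
# which is why the numpy version below is preferred in deep learning code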

Exercise: Implement the sigmoid function using numpy.

import numpy as np

# x can be a real number, a vector, or a matrix
def sigmoid(x):
    # np.exp applies elementwise, so the same formula works on arrays
    return 1 / (1 + np.exp(-x))
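
Because np.exp is applied elementwise, the same function now handles whole arrays; a minimal check (this demo is not from the original post):

x = np.array([1, 2, 3])
print(sigmoid(x))   # [0.73105858 0.88079708 0.95257413]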

1.2 Sigmoid gradient

Exercise: Implement the function sigmoid_derivative() to compute the gradient of the sigmoid function with respect to its input x. The formula is: σ'(x) = σ(x)(1 - σ(x)), where σ is the sigmoid function.

def sigmoid_derivative(x):
    # reuse sigmoid() from above: s = σ(x)
    s = sigmoid(x)
    # derivative of sigmoid: σ(x) * (1 - σ(x))
    return s * (1 - s)
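
A small usage check (this demo is not from the original post):

x = np.array([1, 2, 3])
print(sigmoid_derivative(x))   # [0.19661193 0.10499359 0.04517666]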

1.3 Reshaping arrays

Exercise: Implement image2vector(), which takes an input of shape (length, height, 3) and returns a vector of shape (length*height*3, 1).

def image2vector(image):
    # flatten a (length, height, 3) array into a (length*height*3, 1) column vector
    v = image.reshape((image.shape[0] * image.shape[1] * image.shape[2], 1))
    return v
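
For instance, with a small synthetic stand-in for a real image (shape and contents are illustrative):

image = np.arange(12).reshape((2, 2, 3))   # a tiny 2 x 2 x 3 "image"
print(image2vector(image).shape)           # (12, 1)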

1.4 Normalizing rows

Exercise: Implement normalizeRows() to normalize the rows of a matrix. After applying this function to an input matrix x, each row of x should be a vector of unit length (i.e., its L2 norm is 1).

def normalizeRows(x):
    # L2 norm of each row; keepdims=True keeps the shape (n, 1) for broadcasting
    x_norm = np.linalg.norm(x, axis=1, keepdims=True)
    # broadcasting divides each row by its own norm
    x = x / x_norm
    return x
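
A quick demo (the input matrix is illustrative; the first row has norm 5, the second has norm sqrt(56)):

x = np.array([[0., 3., 4.],
              [2., 6., 4.]])
print(normalizeRows(x))
# [[0.         0.6        0.8       ]
#  [0.26726124 0.80178373 0.53452248]]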

1.5 Broadcasting and the softmax function

Exercise: Implement a softmax function using numpy. You can think of softmax as a normalizing function used when your algorithm needs to classify two or more classes. You will learn more about softmax in the second course of this specialization.

def softmax(x):
    # exponentiate every entry of x
    x_exp = np.exp(x)
    # sum over each row; keepdims=True gives shape (n, 1) for broadcasting
    x_sum = np.sum(x_exp, axis=1, keepdims=True)
    # each row of the result sums to 1
    s = x_exp / x_sum
    return s
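
A minimal check that each row of the output sums to 1 (the input values are illustrative):

x = np.array([[9, 2, 5, 0, 0],
              [7, 5, 0, 0, 0]])
print(softmax(x).sum(axis=1))   # [1. 1.]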

Reminder:

  • np.exp(x) works for any np.array x and applies the exponential function to every coordinate
  • the sigmoid function and its gradient
  • image2vector is commonly used in deep learning
  • np.reshape is widely used. In the future, you’ll see that keeping your matrix/vector dimensions straight will go toward eliminating a lot of bugs.
  • numpy has efficient built-in functions
  • broadcasting is extremely useful (see the short demo after this list)
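
As a quick illustration of that last bullet, here is a minimal broadcasting demo (the array values are illustrative, not from the assignment):

A = np.array([[1., 2.],
              [3., 4.]])        # shape (2, 2)
b = np.array([[10.], [100.]])   # shape (2, 1)
print(A / b)                    # each row of A is divided by the matching entry of b
# [[0.1  0.2 ]
#  [0.03 0.04]]

This is exactly the pattern normalizeRows() and softmax() rely on: an (n, 1) column broadcasts across every column of an (n, m) matrix.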

2. Vectorization

2.1 dot, outer, and elementwise products

In deep learning, you deal with very large datasets. Hence, a non-computationally-optimal function can become a huge bottleneck in your algorithm and can result in a model that takes ages to run. To make sure that your code is computationally efficient, you will use vectorization. For example, try to tell the difference between the following implementations of the dot/outer/elementwise product.

Loop version:

import time
import numpy as np   # needed for np.zeros in the loops below
x1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0]
x2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0]

# dot: for 1-D inputs, computes the inner product and returns a single number;
# for 2-D inputs, it follows matrix-multiplication rules.
tic = time.process_time()
dot = 0
for i in range(len(x1)):
    dot += x1[i]*x2[i]
toc = time.process_time()
print("dot = " + str(dot) + "\n----------------computation time = " +  str(1000 * (toc - tic)) + "ms")

# outer: inputs are flattened to 1-D; element i of the first vector scales the whole
# second vector, so the first argument sets the rows of the result and the second sets the columns.
tic = time.process_time()
outer = np.zeros((len(x1), len(x2)))
for i in range(len(x1)):
    for j in range(len(x2)):
        outer[i, j] = x1[i] * x2[j]
toc = time.process_time()
print("outer = " + str(outer) + "\n----------------computation time = " + str(1000 * (toc - tic)) + "ms")

# elementwise: multiply matching entries of x1 and x2
tic = time.process_time()
mul = np.zeros(len(x1))
for i in range(len(x1)):
    mul[i] = x1[i] * x2[i]
toc = time.process_time()
print("elementwise multiplication = " + str(mul) + "\n----------------computation time = " + str(1000 * (toc - tic)) + "ms")
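
For comparison, the vectorized counterparts below use numpy's built-ins (a sketch; the timing and printing pattern simply mirrors the loop versions above):

# vectorized dot product
tic = time.process_time()
dot = np.dot(x1, x2)
toc = time.process_time()
print("dot = " + str(dot) + "\n----------------computation time = " + str(1000 * (toc - tic)) + "ms")

# vectorized outer product
tic = time.process_time()
outer = np.outer(x1, x2)
toc = time.process_time()
print("outer = " + str(outer) + "\n----------------computation time = " + str(1000 * (toc - tic)) + "ms")

# vectorized elementwise multiplication
tic = time.process_time()
mul = np.multiply(x1, x2)
toc = time.process_time()
print("elementwise multiplication = " + str(mul) + "\n----------------computation time = " + str(1000 * (toc - tic)) + "ms")

On inputs this small the timing difference is negligible, but on large arrays the vectorized calls are dramatically faster because the loops run in optimized C code rather than in the Python interpreter.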