Part 1: Python Basics with Numpy (optional)
1. Building basic functions with numpy
1.1 Sigmoid function, np.exp()
exe. Build a function that returns the sigmoid of a real number x. Use math.exp(x) for the exponential function.
import math

# x is a real number
def basic_sigmoid(x):
    # sigmoid(x) = 1 / (1 + e^(-x))
    ans = 1 / (1 + math.exp(-x))
    return ans
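A quick sanity check (the expected value below follows from the formula, not from the original notes):

print(basic_sigmoid(3))  # ≈ 0.9525741268224334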
exe. Implement the sigmoid function using numpy.
# x can be a real number, a vector, or a matrix
import numpy as np

def sigmoid(x):
    # np.exp applies elementwise, so this also works on arrays
    ans = 1 / (1 + np.exp(-x))
    return ans
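For instance, applied to a small vector (the input values here are illustrative):

x = np.array([1, 2, 3])
print(sigmoid(x))  # each entry is mapped through 1 / (1 + e^(-x))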
1.2 Sigmoid gradient
exe. Implement the function sigmoid_grad() to compute the gradient of the sigmoid function with respect to its input x. The formula is: sigmoid'(x) = sigmoid(x) * (1 - sigmoid(x)).
def sigmoid_derivative(x):
    s = sigmoid(x)     # first compute the sigmoid itself
    ans = s * (1 - s)  # then apply the derivative formula s(1 - s)
    return ans
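One way to convince yourself the formula is right is a finite-difference check (not part of the original exercise, just a sketch):

eps = 1e-6
x = 0.5
numeric = (sigmoid(x + eps) - sigmoid(x - eps)) / (2 * eps)  # central difference
print(numeric, sigmoid_derivative(x))  # the two values should agree closely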
1.3 Reshaping arrays
exe. Implement image2vector() that takes an input of shape (length, height, 3) and returns a vector of shape (length*height*3, 1).
def image2vector(image):
    # flatten the (length, height, 3) image into a (length*height*3, 1) column vector
    v = image.reshape((image.shape[0] * image.shape[1] * image.shape[2], 1))
    return v
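A shape check on a toy image (the dimensions are chosen arbitrarily for illustration):

image = np.random.rand(3, 3, 3)   # a fake 3x3 RGB image
print(image2vector(image).shape)  # (27, 1)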
1.4 Normalizing rows
exe. Implement normalizeRows() to normalize the rows of a matrix. After applying this function to an input matrix x, each row of x should be a vector of unit length (meaning length 1).
def normalizeRows(x):
    # L2 norm of each row; keepdims=True keeps shape (n, 1) so broadcasting works
    x_norm = np.linalg.norm(x, axis=1, keepdims=True)
    x = x / x_norm  # divide each row by its own norm
    return x
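Checking that each row ends up with unit norm (the input values are illustrative):

x = np.array([[0.0, 3.0, 4.0],
              [1.0, 6.0, 4.0]])
print(np.linalg.norm(normalizeRows(x), axis=1))  # both norms ≈ 1.0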
1.5 Broadcasting and the softmax function
exe. Implement a softmax function using numpy. You can think of softmax as a normalizing function used when your algorithm needs to classify two or more classes. You will learn more about softmax in the second course of this specialization.
def softmax(x):
    # exponentiate every entry
    x_exp = np.exp(x)
    # sum along each row; keepdims=True so broadcasting divides each row by its own sum
    x_sum = np.sum(x_exp, axis=1, keepdims=True)
    s = x_exp / x_sum
    return s
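Each row of the output sums to 1, as this quick check shows (in practice one often subtracts the row maximum before exponentiating for numerical stability, though that is beyond this exercise):

x = np.array([[9, 2, 5, 0, 0],
              [7, 5, 0, 0, 0]])
print(np.sum(softmax(x), axis=1))  # [1. 1.]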
reminder:
- np.exp(x) works for any np.array x and applies the exponential function to every coordinate
- the sigmoid function and its gradient
- image2vector is commonly used in deep learning
- np.reshape is widely used. In the future, you’ll see that keeping your matrix/vector dimensions straight will go toward eliminating a lot of bugs.
- numpy has efficient built-in functions
- broadcasting is extremely useful (a short sketch follows this list)
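As a quick illustration of broadcasting (the arrays here are made up for the example):

a = np.array([[1.0, 2.0], [3.0, 4.0]])  # shape (2, 2)
b = np.array([10.0, 20.0])              # shape (2,)
print(a + b)  # b is broadcast across each row: [[11. 22.] [13. 24.]]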
2. Vectorization
2.1 dot, outer, elementwise
In deep learning, you deal with very large datasets. Hence, a non-computationally-optimal function can become a huge bottleneck in your algorithm and can result in a model that takes ages to run. To make sure that your code is computationally efficient, you will use vectorization. For example, try to tell the difference between the following implementations of the dot/outer/elementwise product.
Loop version:
import time
import numpy as np

x1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0]
x2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0]

# dot: for 1-D inputs this is the inner product, yielding a single number;
# for multi-dimensional inputs it follows the rules of matrix multiplication.
tic = time.process_time()
dot = 0
for i in range(len(x1)):
    dot += x1[i] * x2[i]
toc = time.process_time()
print("dot = " + str(dot) + "\n----------------computation time = " + str(1000 * (toc - tic)) + "ms")
# outer: multi-dimensional inputs are flattened to 1-D; each element of the
# first argument scales the whole second vector, so the first argument
# determines the rows of the result and the second determines the columns.
tic = time.process_time()
outer = np.zeros((len(x1), len(x2)))
for i in range(len(x1)):
    for j in range(len(x2)):
        outer[i, j] = x1[i] * x2[j]
toc = time.process_time()
print("outer = " + str(outer) + "\n----------------computation time = " + str(1000 * (toc - tic)) + "ms")
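For comparison, a sketch of the vectorized counterparts using numpy's built-ins (np.dot, np.outer, np.multiply), with the same timing scaffold as the loop version above:

tic = time.process_time()
dot = np.dot(x1, x2)  # inner product in a single call
toc = time.process_time()
print("dot = " + str(dot) + "\n----------------computation time = " + str(1000 * (toc - tic)) + "ms")

outer = np.outer(x1, x2)  # full outer product, shape (15, 15)
mul = np.multiply(x1, x2)  # elementwise product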