Emojify - v2 Reference Answers

2023-11-12

Emojify!

Welcome to the second assignment of Week 2. You are going to use word vector representations to build an Emojifier.

Have you ever wanted to make your text messages more expressive? Your emojifier app will help you do that. So rather than writing “Congratulations on the promotion! Let’s get coffee and talk. Love you!” the emojifier can automatically turn this into “Congratulations on the promotion! 👍 Let’s get coffee and talk. ☕️ Love you! ❤️”

You will implement a model which inputs a sentence (such as “Let’s go see the baseball game tonight!”) and finds the most appropriate emoji to be used with this sentence (⚾️). In many emoji interfaces, you need to remember that ❤️ is the “heart” symbol rather than the “love” symbol. But using word vectors, you’ll see that even if your training set explicitly relates only a few words to a particular emoji, your algorithm will be able to generalize and associate words in the test set to the same emoji even if those words don’t even appear in the training set. This allows you to build an accurate classifier mapping from sentences to emojis, even using a small training set.

In this exercise, you’ll start with a baseline model (Emojifier-V1) using word embeddings, then build a more sophisticated model (Emojifier-V2) that further incorporates an LSTM.

Let’s get started! Run the following cell to load the packages you are going to use.

import numpy as np
from emo_utils import *
import emoji
import matplotlib.pyplot as plt

%matplotlib inline

1 - Baseline model: Emojifier-V1

1.1 - Dataset EMOJISET

Let’s start by building a simple baseline classifier.

You have a tiny dataset (X, Y) where:
- X contains 127 sentences (strings)
- Y contains an integer label between 0 and 4 corresponding to an emoji for each sentence


Figure 1: EMOJISET - a classification problem with 5 classes. A few examples of sentences are given here.

Let’s load the dataset using the code below. We split the dataset between training (127 examples) and testing (56 examples).

X_train, Y_train = read_csv('data/train_emoji.csv')
X_test, Y_test = read_csv('data/tesss.csv')
maxLen = len(max(X_train, key=len).split())

Run the following cell to print sentences from X_train and corresponding labels from Y_train. Change index to see different examples. Because of the font the iPython notebook uses, the heart emoji may be colored black rather than red.

index = 1
print(X_train[index], label_to_emoji(Y_train[index]))
I am proud of your achievements 😄

1.2 - Overview of the Emojifier-V1

In this part, you are going to implement a baseline model called “Emojifier-v1”.



Figure 2: Baseline model (Emojifier-V1).

The input of the model is a string corresponding to a sentence (e.g. “I love you”). In the code, the output will be a probability vector of shape (1,5), which you then pass through an argmax layer to extract the index of the most likely emoji output.

To get our labels into a format suitable for training a softmax classifier, let’s convert $Y$ from its current shape $(m, 1)$ into a “one-hot representation” $(m, 5)$, where each row is a one-hot vector giving the label of one example. You can do so using this next code snippet. Here, Y_oh stands for “Y one-hot” in the variable names Y_oh_train and Y_oh_test:

Y_oh_train = convert_to_one_hot(Y_train, C = 5)
Y_oh_test = convert_to_one_hot(Y_test, C = 5)

Let’s see what convert_to_one_hot() did. Feel free to change index to print out different values.

index = 50
print(Y_train[index], "is converted into one hot", Y_oh_train[index])
print(X_train[index], label_to_emoji(Y_train[index]))
0 is converted into one hot [ 1.  0.  0.  0.  0.]
I missed you ❤️

All the data is now ready to be fed into the Emojify-V1 model. Let’s implement the model!

1.3 - Implementing Emojifier-V1

As shown in Figure (2), the first step is to convert an input sentence into its word vector representation; these vectors are then averaged together. Similar to the previous exercise, we will use pretrained 50-dimensional GloVe embeddings. Run the following cell to load word_to_vec_map, which contains all the vector representations.

word_to_index, index_to_word, word_to_vec_map = read_glove_vecs('data/glove.6B.50d.txt')

You’ve loaded:
- word_to_index: dictionary mapping from words to their indices in the vocabulary (400,001 words, with the valid indices ranging from 0 to 400,000)
- index_to_word: dictionary mapping from indices to their corresponding words in the vocabulary
- word_to_vec_map: dictionary mapping words to their GloVe vector representation.

Run the following cell to check if it works.

word = "cucumber"
index = 289846
print("the index of", word, "in the vocabulary is", word_to_index[word])
print("the", str(index) + "th word in the vocabulary is", index_to_word[index])
the index of cucumber in the vocabulary is 113317
the 289846th word in the vocabulary is potatos

Exercise: Implement sentence_to_avg(). You will need to carry out two steps:
1. Convert every sentence to lower-case, then split the sentence into a list of words. X.lower() and X.split() might be useful.
2. For each word in the sentence, access its GloVe representation. Then, average all these values.

# GRADED FUNCTION: sentence_to_avg

def sentence_to_avg(sentence, word_to_vec_map):
    """
    Converts a sentence (string) into a list of words (strings). Extracts the GloVe representation of each word
    and averages its value into a single vector encoding the meaning of the sentence.

    Arguments:
    sentence -- string, one training example from X
    word_to_vec_map -- dictionary mapping every word in a vocabulary into its 50-dimensional vector representation

    Returns:
    avg -- average vector encoding information about the sentence, numpy-array of shape (50,)
    """

    ### START CODE HERE ###
    # Step 1: Split sentence into list of lower case words (≈ 1 line)
    words = sentence.lower().split()

    # Initialize the average word vector, should have the same shape as your word vectors.
    avg = np.zeros((50,))

    # Step 2: average the word vectors. You can loop over the words in the list "words".
    for w in words:
        avg += word_to_vec_map[w]
    avg = avg / len(words)

    ### END CODE HERE ###

    return avg
avg = sentence_to_avg("Morrocan couscous is my favorite dish", word_to_vec_map)
print("avg = ", avg)
avg =  [-0.008005    0.56370833 -0.50427333  0.258865    0.55131103  0.03104983
 -0.21013718  0.16893933 -0.09590267  0.141784   -0.15708967  0.18525867
  0.6495785   0.38371117  0.21102167  0.11301667  0.02613967  0.26037767
  0.05820667 -0.01578167 -0.12078833 -0.02471267  0.4128455   0.5152061
  0.38756167 -0.898661   -0.535145    0.33501167  0.68806933 -0.2156265
  1.797155    0.10476933 -0.36775333  0.750785    0.10282583  0.348925
 -0.27262833  0.66768    -0.10706167 -0.283635    0.59580117  0.28747333
 -0.3366635   0.23393817  0.34349183  0.178405    0.1166155  -0.076433
  0.1445417   0.09808667]

Expected Output:

**avg= ** [-0.008005 0.56370833 -0.50427333 0.258865 0.55131103 0.03104983 -0.21013718 0.16893933 -0.09590267 0.141784 -0.15708967 0.18525867 0.6495785 0.38371117 0.21102167 0.11301667 0.02613967 0.26037767 0.05820667 -0.01578167 -0.12078833 -0.02471267 0.4128455 0.5152061 0.38756167 -0.898661 -0.535145 0.33501167 0.68806933 -0.2156265 1.797155 0.10476933 -0.36775333 0.750785 0.10282583 0.348925 -0.27262833 0.66768 -0.10706167 -0.283635 0.59580117 0.28747333 -0.3366635 0.23393817 0.34349183 0.178405 0.1166155 -0.076433 0.1445417 0.09808667]

Model

You now have all the pieces to finish implementing the model() function. After using sentence_to_avg() you need to pass the average through forward propagation, compute the cost, and then backpropagate to update the softmax’s parameters.

Exercise: Implement the model() function described in Figure (2). Assuming here that $Y_{oh}$ (“Y one hot”) is the one-hot encoding of the output labels, the equations you need to implement in the forward pass and to compute the cross-entropy cost are:

$$ z^{(i)} = W \cdot avg^{(i)} + b $$

$$ a^{(i)} = softmax(z^{(i)}) $$

$$ \mathcal{L}^{(i)} = - \sum_{k=0}^{n_y - 1} Y_{oh,k}^{(i)} \log(a_k^{(i)}) $$

It is possible to come up with a more efficient vectorized implementation. But since we are using a for-loop to convert the sentences one at a time into the $avg^{(i)}$ representation anyway, let’s not bother this time.
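For the curious, here is a minimal sketch of what such a vectorized forward pass could look like. It assumes you have already stacked the per-sentence averages into a hypothetical matrix avg_matrix of shape (m, 50), and that W, b and Y_oh are defined exactly as in model() below; it is only an illustration, not part of the graded function.

import numpy as np

def forward_all(avg_matrix, W, b, Y_oh):
    """Vectorized forward pass and average cross-entropy cost over all m examples at once."""
    Z = avg_matrix.dot(W.T) + b                            # shape (m, n_y)
    Z = Z - Z.max(axis=1, keepdims=True)                   # subtract row-wise max for numerical stability
    A = np.exp(Z) / np.exp(Z).sum(axis=1, keepdims=True)   # row-wise softmax, shape (m, n_y)
    cost = -np.mean(np.sum(Y_oh * np.log(A), axis=1))      # average cross-entropy over the batch
    return A, cost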

We provided you a function softmax().

# GRADED FUNCTION: model

def model(X, Y, word_to_vec_map, learning_rate = 0.01, num_iterations = 400):
    """
    Model to train word vector representations in numpy.

    Arguments:
    X -- input data, numpy array of sentences as strings, of shape (m, 1)
    Y -- labels, numpy array of integers between 0 and 4, numpy-array of shape (m, 1)
    word_to_vec_map -- dictionary mapping every word in a vocabulary into its 50-dimensional vector representation
    learning_rate -- learning_rate for the stochastic gradient descent algorithm
    num_iterations -- number of iterations

    Returns:
    pred -- vector of predictions, numpy-array of shape (m, 1)
    W -- weight matrix of the softmax layer, of shape (n_y, n_h)
    b -- bias of the softmax layer, of shape (n_y,)
    """

    np.random.seed(1)

    # Define number of training examples
    m = Y.shape[0]                          # number of training examples
    n_y = 5                                 # number of classes  
    n_h = 50                                # dimensions of the GloVe vectors 

    # Initialize parameters using Xavier initialization
    W = np.random.randn(n_y, n_h) / np.sqrt(n_h)
    b = np.zeros((n_y,))

    # Convert Y to Y_onehot with n_y classes
    Y_oh = convert_to_one_hot(Y, C = n_y) 

    # Optimization loop
    for t in range(num_iterations):                       # Loop over the number of iterations
        for i in range(m):                                # Loop over the training examples

            ### START CODE HERE ### (≈ 4 lines of code)
            # Average the word vectors of the words from the i'th training example
            avg = sentence_to_avg(X[i], word_to_vec_map)

            # Forward propagate the avg through the softmax layer
            z = np.dot(W, avg) + b
            a = softmax(z)

            # Compute cost using the i'th training label's one hot representation and "A" (the output of the softmax)
            cost = -np.sum(Y_oh[i] * np.log(a))
            ### END CODE HERE ###

            # Compute gradients 
            dz = a - Y_oh[i]
            dW = np.dot(dz.reshape(n_y,1), avg.reshape(1, n_h))
            db = dz

            # Update parameters with Stochastic Gradient Descent
            W = W - learning_rate * dW
            b = b - learning_rate * db

        if t % 100 == 0:
            print("Epoch: " + str(t) + " --- cost = " + str(cost))
            pred = predict(X, Y, W, b, word_to_vec_map)

    return pred, W, b
print(X_train.shape)
print(Y_train.shape)
print(np.eye(5)[Y_train.reshape(-1)].shape)
print(X_train[0])
print(type(X_train))
Y = np.asarray([5,0,0,5, 4, 4, 4, 6, 6, 4, 1, 1, 5, 6, 6, 3, 6, 3, 4, 4])
print(Y.shape)

X = np.asarray(['I am going to the bar tonight', 'I love you', 'miss you my dear',
 'Lets go party and drinks','Congrats on the new job','Congratulations',
 'I am so happy for you', 'Why are you feeling bad', 'What is wrong with you',
 'You totally deserve this prize', 'Let us go play football',
 'Are you down for football this afternoon', 'Work hard play harder',
 'It is suprising how people can be dumb sometimes',
 'I am very disappointed','It is the best day in my life',
 'I think I will end up alone','My life is so boring','Good job',
 'Great so awesome'])

print(X.shape)
print(np.eye(5)[Y_train.reshape(-1)].shape)
print(type(X_train))
(132,)
(132,)
(132, 5)
never talk to me again
<class 'numpy.ndarray'>
(20,)
(20,)
(132, 5)
<class 'numpy.ndarray'>

Run the next cell to train your model and learn the softmax parameters (W,b).

pred, W, b = model(X_train, Y_train, word_to_vec_map)
print(pred)
Epoch: 0 --- cost = 1.95204988128
Accuracy: 0.348484848485
Epoch: 100 --- cost = 0.0797181872601
Accuracy: 0.931818181818
Epoch: 200 --- cost = 0.0445636924368
Accuracy: 0.954545454545
Epoch: 300 --- cost = 0.0343226737879
Accuracy: 0.969696969697
[[ 3.]
 [ 2.]
 [ 3.]
 [ 0.]
 [ 4.]
 [ 0.]
 [ 3.]
 [ 2.]
 [ 3.]
 [ 1.]
 [ 3.]
 [ 3.]
 [ 1.]
 [ 3.]
 [ 2.]
 [ 3.]
 [ 2.]
 [ 3.]
 [ 1.]
 [ 2.]
 [ 3.]
 [ 0.]
 [ 2.]
 [ 2.]
 [ 2.]
 [ 1.]
 [ 4.]
 [ 3.]
 [ 3.]
 [ 4.]
 [ 0.]
 [ 3.]
 [ 4.]
 [ 2.]
 [ 0.]
 [ 3.]
 [ 2.]
 [ 2.]
 [ 3.]
 [ 4.]
 [ 2.]
 [ 2.]
 [ 0.]
 [ 2.]
 [ 3.]
 [ 0.]
 [ 3.]
 [ 2.]
 [ 4.]
 [ 3.]
 [ 0.]
 [ 3.]
 [ 3.]
 [ 3.]
 [ 4.]
 [ 2.]
 [ 1.]
 [ 1.]
 [ 1.]
 [ 2.]
 [ 3.]
 [ 1.]
 [ 0.]
 [ 0.]
 [ 0.]
 [ 3.]
 [ 4.]
 [ 4.]
 [ 2.]
 [ 2.]
 [ 1.]
 [ 2.]
 [ 0.]
 [ 3.]
 [ 2.]
 [ 2.]
 [ 0.]
 [ 3.]
 [ 3.]
 [ 1.]
 [ 2.]
 [ 1.]
 [ 2.]
 [ 2.]
 [ 4.]
 [ 3.]
 [ 3.]
 [ 2.]
 [ 4.]
 [ 0.]
 [ 0.]
 [ 3.]
 [ 3.]
 [ 3.]
 [ 3.]
 [ 2.]
 [ 0.]
 [ 1.]
 [ 2.]
 [ 3.]
 [ 0.]
 [ 2.]
 [ 2.]
 [ 2.]
 [ 3.]
 [ 2.]
 [ 2.]
 [ 2.]
 [ 4.]
 [ 1.]
 [ 1.]
 [ 3.]
 [ 3.]
 [ 4.]
 [ 1.]
 [ 2.]
 [ 1.]
 [ 1.]
 [ 3.]
 [ 1.]
 [ 0.]
 [ 4.]
 [ 0.]
 [ 3.]
 [ 3.]
 [ 4.]
 [ 4.]
 [ 1.]
 [ 4.]
 [ 3.]
 [ 0.]
 [ 2.]]

Expected Output (on a subset of iterations):

**Epoch: 0** cost = 1.95204988128 Accuracy: 0.348484848485
**Epoch: 100** cost = 0.0797181872601 Accuracy: 0.931818181818
**Epoch: 200** cost = 0.0445636924368 Accuracy: 0.954545454545
**Epoch: 300** cost = 0.0343226737879 Accuracy: 0.969696969697

Great! Your model has pretty high accuracy on the training set. Let’s now see how it does on the test set.

1.4 - Examining test set performance

print("Training set:")
pred_train = predict(X_train, Y_train, W, b, word_to_vec_map)
print('Test set:')
pred_test = predict(X_test, Y_test, W, b, word_to_vec_map)
Training set:
Accuracy: 0.977272727273
Test set:
Accuracy: 0.857142857143

Expected Output:

**Train set accuracy** 97.7
**Test set accuracy** 85.7

Random guessing would have had 20% accuracy given that there are 5 classes. This is pretty good performance after training on only 127 examples.

In the training set, the algorithm saw the sentence “I love you” with the label ❤️. You can check, however, that the word “adore” does not appear in the training set. Nonetheless, let’s see what happens if you write “I adore you.”

X_my_sentences = np.array(["i adore you", "i love you", "funny lol", "lets play with a ball", "food is ready", "not feeling happy"])
Y_my_labels = np.array([[0], [0], [2], [1], [4],[3]])

pred = predict(X_my_sentences, Y_my_labels , W, b, word_to_vec_map)
print_predictions(X_my_sentences, pred)
Accuracy: 0.833333333333

i adore you ❤️
i love you ❤️
funny lol 😄
lets play with a ball ⚾
food is ready 🍴
not feeling happy 😄

Amazing! Because adore has an embedding similar to love, the algorithm has generalized correctly even to a word it has never seen before. Words such as heart, dear, beloved or adore have embedding vectors similar to love, and so might work too. Feel free to modify the inputs above and try out a variety of input sentences. How well does it work?

Note though that it doesn’t get “not feeling happy” correct. This algorithm ignores word ordering, so it is not good at understanding phrases like “not happy.”
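One quick way to see this (an illustrative check, not part of the assignment, assuming sentence_to_avg and word_to_vec_map from earlier in the notebook are still in scope): averaging word vectors is permutation-invariant, so any reordering of the words gives exactly the same representation.

import numpy as np

# Both orderings map to the same averaged vector, so word order is lost entirely.
avg_a = sentence_to_avg("not feeling happy", word_to_vec_map)
avg_b = sentence_to_avg("happy not feeling", word_to_vec_map)
print(np.allclose(avg_a, avg_b))   # True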

Printing the confusion matrix can also help understand which classes are more difficult for your model. A confusion matrix shows how often an example whose label is one class (“actual” class) is mislabeled by the algorithm with a different class (“predicted” class).

print(Y_test.shape)
print('           '+ label_to_emoji(0)+ '    ' + label_to_emoji(1) + '    ' +  label_to_emoji(2)+ '    ' + label_to_emoji(3)+'   ' + label_to_emoji(4))
print(pd.crosstab(Y_test, pred_test.reshape(56,), rownames=['Actual'], colnames=['Predicted'], margins=True))
plot_confusion_matrix(Y_test, pred_test)
(56,)
           ❤️    ⚾    😄    😞   🍴
Predicted  0.0  1.0  2.0  3.0  4.0  All
Actual                                 
0            6    0    0    1    0    7
1            0    8    0    0    0    8
2            2    0   16    0    0   18
3            1    1    2   12    0   16
4            0    0    1    0    6    7
All          9    9   19   13    6   56



What you should remember from this part:
- Even with only 127 training examples, you can get a reasonably good model for emojifying. This is due to the generalization power that word vectors give you.
- Emojifier-V1 will perform poorly on sentences such as “This movie is not good and not enjoyable” because it doesn’t understand combinations of words: it just averages all the words’ embedding vectors together, without paying attention to the ordering of words. You will build a better algorithm in the next part.

2 - Emojifier-V2: Using LSTMs in Keras

Let’s build an LSTM model that takes as input word sequences. This model will be able to take word ordering into account. Emojifier-V2 will continue to use pre-trained word embeddings to represent words, but will feed them into an LSTM, whose job it is to predict the most appropriate emoji.

Run the following cell to load the Keras packages.

import numpy as np
np.random.seed(0)
from keras.models import Model
from keras.layers import Dense, Input, Dropout, LSTM, Activation
from keras.layers.embeddings import Embedding
from keras.preprocessing import sequence
from keras.initializers import glorot_uniform
np.random.seed(1)
Using TensorFlow backend.

2.1 - Overview of the model

Here is the Emojifier-v2 you will implement:


Figure 3: Emojifier-V2. A 2-layer LSTM sequence classifier.

2.2 - Keras and mini-batching

In this exercise, we want to train Keras using mini-batches. However, most deep learning frameworks require that all sequences in the same mini-batch have the same length. This is what allows vectorization to work: If you had a 3-word sentence and a 4-word sentence, then the computations needed for them are different (one takes 3 steps of an LSTM, one takes 4 steps) so it’s just not possible to do them both at the same time.

The common solution to this is to use padding. Specifically, set a maximum sequence length, and pad all sequences to the same length. For example, if the maximum sequence length is 20, we could pad every sentence with “0”s so that each input sentence is of length 20. Thus, a sentence “i love you” would be represented as $(e_{i}, e_{love}, e_{you}, \vec{0}, \vec{0}, \ldots, \vec{0})$. In this example, any sentence longer than 20 words would have to be truncated. One simple way to choose the maximum sequence length is to just pick the length of the longest sentence in the training set.
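As a small illustration of the idea (a sketch only, with made-up word indices; the sentences_to_indices() function you will implement below handles the padding for this assignment), Keras ships a pad_sequences helper that zero-pads lists of indices to a common length:

from keras.preprocessing.sequence import pad_sequences

# Three sentences already converted to (made-up) word indices, with different lengths.
sequences = [[11, 23, 5], [42, 7], [3, 8, 19, 4, 6]]

# Zero-pad (or truncate) every sequence to the same length of 5, padding at the end.
padded = pad_sequences(sequences, maxlen=5, padding='post', truncating='post')
print(padded)
# [[11 23  5  0  0]
#  [42  7  0  0  0]
#  [ 3  8 19  4  6]]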

2.3 - The Embedding layer

In Keras, the embedding matrix is represented as a “layer” that maps positive integers (indices corresponding to words) into dense vectors of fixed size (the embedding vectors). It can be trained or initialized with a pretrained embedding. In this part, you will learn how to create an Embedding() layer in Keras and initialize it with the GloVe 50-dimensional vectors loaded earlier in the notebook. Because our training set is quite small, we will not update the word embeddings but will instead leave their values fixed. But in the code below, we’ll show you how Keras allows you to either train this layer or leave it fixed.

The Embedding() layer takes an integer matrix of size (batch size, max input length) as input. This corresponds to sentences converted into lists of indices (integers), as shown in the figure below.


Figure 4: Embedding layer. This example shows the propagation of two examples through the embedding layer. Both have been zero-padded to a length of max_len=5. The final dimension of the representation is (2,max_len,50) because the word embeddings we are using are 50 dimensional.

The largest integer (i.e. word index) in the input should be no larger than the vocabulary size. The layer outputs an array of shape (batch size, max input length, dimension of word vectors).
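To make these shapes concrete, here is a minimal sketch using a tiny made-up vocabulary of 10 words (unrelated to GloVe), showing how an Embedding() layer turns a (batch size, max input length) matrix of indices into a (batch size, max input length, word vector dimension) tensor:

import numpy as np
from keras.models import Model
from keras.layers import Input, Embedding

# Toy embedding layer: vocabulary of 10 words, 50-dimensional vectors, max_len = 5.
indices_in = Input(shape=(5,), dtype='int32')
emb_out = Embedding(input_dim=10, output_dim=50)(indices_in)
toy_model = Model(inputs=indices_in, outputs=emb_out)

# A batch of 2 zero-padded sentences of word indices, shape (2, 5).
X_toy = np.array([[1, 2, 3, 0, 0],
                  [4, 5, 0, 0, 0]])
print(toy_model.predict(X_toy).shape)   # (2, 5, 50)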

The first step is to convert all your training sentences into lists of indices, and then zero-pad all these lists so that their length is the length of the longest sentence.

Exercise: Implement the function below to convert X (array of sentences as strings) into an array of indices corresponding to words in the sentences. The output shape should be such that it can be given to Embedding() (described in Figure 4).

# GRADED FUNCTION: sentences_to_indices

def sentences_to_indices(X, word_to_index, max_len):
    """
    Converts an array of sentences (strings) into an array of indices corresponding to words in the sentences.
    The output shape should be such that it can be given to `Embedding()` (described in Figure 4). 

    Arguments:
    X -- array of sentences (strings), of shape (m, 1)
    word_to_index -- a dictionary mapping each word to its index
    max_len -- maximum number of words in a sentence. You can assume every sentence in X is no longer than this. 

    Returns:
    X_indices -- array of indices corresponding to words in the sentences from X, of shape (m, max_len)
    """

    m = X.shape[0]                                   # number of training examples

    ### START CODE HERE ###
    # Initialize X_indices as a numpy matrix of zeros and the correct shape (≈ 1 line)
    X_indices = np.zeros((m, max_len))

    for i in range(m):                               # loop over training examples

        # Convert the ith training sentence to lower case and split it into words. You should get a list of words.
        sentence_words = X[i].lower().split()

        # Initialize j to 0
        j = 0

        # Loop over the words of sentence_words
        for w in sentence_words:
            # Set the (i,j)th entry of X_indices to the index of the correct word.
            X_indices[i, j] = word_to_index[w]
            # Increment j to j + 1
            j = j + 1

    ### END CODE HERE ###

    return X_indices

Run the following cell to check what sentences_to_indices() does, and check your results.

X1 = np.array(["funny lol", "lets play baseball", "food is ready for you"])
X1_indices = sentences_to_indices(X1,word_to_index, max_len = 5)
print("X1 =", X1)
print("X1_indices =", X1_indices)
X1 = ['funny lol' 'lets play baseball' 'food is ready for you']
X1_indices = [[ 155345.  225122.       0.       0.       0.]
 [ 220930.  286375.   69714.       0.       0.]
 [ 151204.  192973.  302254.  151349.  394475.]]

Expected Output:

**X1 =** [‘funny lol’ ‘lets play baseball’ ‘food is ready for you’]
**X1_indices =** [[ 155345. 225122. 0. 0. 0.]
[ 220930. 286375. 69714. 0. 0.]
[ 151204. 192973. 302254. 151349. 394475.]]

Let’s build the Embedding() layer in Keras, using pre-trained word vectors. After this layer is built, you will pass the output of sentences_to_indices() to it as an input, and the Embedding() layer will return the word embeddings for a sentence.

Exercise: Implement pretrained_embedding_layer(). You will need to carry out the following steps:
1. Initialize the embedding matrix as a numpy array of zeroes with the correct shape.
2. Fill in the embedding matrix with all the word embeddings extracted from word_to_vec_map.
3. Define the Keras embedding layer. Use Embedding(). Be sure to make this layer non-trainable, by setting trainable = False when calling Embedding(). If you were to set trainable = True, it would allow the optimization algorithm to modify the values of the word embeddings.
4. Set the embedding weights to be equal to the embedding matrix

# GRADED FUNCTION: pretrained_embedding_layer

def pretrained_embedding_layer(word_to_vec_map, word_to_index):
    """
    Creates a Keras Embedding() layer and loads in pre-trained GloVe 50-dimensional vectors.

    Arguments:
    word_to_vec_map -- dictionary mapping words to their GloVe vector representation.
    word_to_index -- dictionary mapping from words to their indices in the vocabulary (400,001 words)

    Returns:
    embedding_layer -- pretrained layer Keras instance
    """

    vocab_len = len(word_to_index) + 1                  # adding 1 to fit Keras embedding (requirement)
    emb_dim = word_to_vec_map["cucumber"].shape[0]      # define dimensionality of your GloVe word vectors (= 50)

    ### START CODE HERE ###
    # Initialize the embedding matrix as a numpy array of zeros of shape (vocab_len, dimensions of word vectors = emb_dim)
    emb_matrix = np.zeros((vocab_len, emb_dim))

    # Set each row "index" of the embedding matrix to be the word vector representation of the "index"th word of the vocabulary
    for word, index in word_to_index.items():
        emb_matrix[index, :] = word_to_vec_map[word]

    # Define the Keras embedding layer with the correct input/output sizes. Use Embedding(...) and make it non-trainable by setting trainable=False. 
    embedding_layer = Embedding(vocab_len, emb_dim, trainable=False)
    ### END CODE HERE ###

    # Build the embedding layer, it is required before setting the weights of the embedding layer. Do not modify the "None".
    embedding_layer.build((None,))

    # Set the weights of the embedding layer to the embedding matrix. Your layer is now pretrained.
    embedding_layer.set_weights([emb_matrix])

    return embedding_layer
embedding_layer = pretrained_embedding_layer(word_to_vec_map, word_to_index)
print("weights[0][1][3] =", embedding_layer.get_weights()[0][1][3])
weights[0][1][3] = -0.3403

Expected Output:

**weights[0][1][3] =** -0.3403

2.4 - Building the Emojifier-V2

Let’s now build the Emojifier-V2 model. You will do so using the embedding layer you have built, and feed its output to an LSTM network.


Figure 3: Emojifier-v2. A 2-layer LSTM sequence classifier.

Exercise: Implement Emojify_V2(), which builds a Keras graph of the architecture shown in Figure 3. The model takes as input an array of sentences of shape (m, max_len) defined by input_shape. It should output a softmax probability vector of shape (m, C = 5). You may need Input(shape = ..., dtype = '...'), LSTM(), Dropout(), Dense(), and Activation().

# GRADED FUNCTION: Emojify_V2

def Emojify_V2(input_shape, word_to_vec_map, word_to_index):
    """
    Function creating the Emojify-v2 model's graph.

    Arguments:
    input_shape -- shape of the input, usually (max_len,)
    word_to_vec_map -- dictionary mapping every word in a vocabulary into its 50-dimensional vector representation
    word_to_index -- dictionary mapping from words to their indices in the vocabulary (400,001 words)

    Returns:
    model -- a model instance in Keras
    """

    ### START CODE HERE ###
    # Define sentence_indices as the input of the graph, it should be of shape input_shape and dtype 'int32' (as it contains indices).
    sentence_indices = Input(input_shape, dtype='int32')

    # Create the embedding layer pretrained with GloVe Vectors (≈1 line)
    embedding_layer = pretrained_embedding_layer(word_to_vec_map, word_to_index)

    # Propagate sentence_indices through your embedding layer, you get back the embeddings
    embeddings = embedding_layer(sentence_indices)   

    # Propagate the embeddings through an LSTM layer with 128-dimensional hidden state
    # Be careful, the returned output should be a batch of sequences.
    X = LSTM(128, return_sequences=True)(embeddings)
    # Add dropout with a probability of 0.5
    X = Dropout(0.5)(X)
    # Propagate X through another LSTM layer with 128-dimensional hidden state
    # Be careful, the returned output should be a single hidden state, not a batch of sequences.
    X = LSTM(128, return_sequences=False)(X)
    # Add dropout with a probability of 0.5
    X = Dropout(0.5)(X)
    # Propagate X through a Dense layer with softmax activation to get back a batch of 5-dimensional vectors.
    X = Dense(5)(X)
    # Add a softmax activation
    X = Activation('softmax')(X)

    # Create Model instance which converts sentence_indices into X.
    model = Model(inputs=sentence_indices, outputs=X)

    ### END CODE HERE ###

    return model

Run the following cell to create your model and check its summary. Because all sentences in the dataset are less than 10 words, we chose max_len = 10. You should see your architecture: it uses 20,223,927 parameters, of which 20,000,050 (the word embeddings) are non-trainable and the remaining 223,877 are trainable. Because our vocabulary has 400,001 words (with valid indices from 0 to 400,000), there are 400,001 * 50 = 20,000,050 non-trainable parameters.

model = Emojify_V2((maxLen,), word_to_vec_map, word_to_index)
model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         (None, 10)                0         
_________________________________________________________________
embedding_2 (Embedding)      (None, 10, 50)            20000050  
_________________________________________________________________
lstm_1 (LSTM)                (None, 10, 128)           91648     
_________________________________________________________________
dropout_1 (Dropout)          (None, 10, 128)           0         
_________________________________________________________________
lstm_2 (LSTM)                (None, 128)               131584    
_________________________________________________________________
dropout_2 (Dropout)          (None, 128)               0         
_________________________________________________________________
dense_1 (Dense)              (None, 5)                 645       
_________________________________________________________________
activation_1 (Activation)    (None, 5)                 0         
=================================================================
Total params: 20,223,927
Trainable params: 223,877
Non-trainable params: 20,000,050
_________________________________________________________________
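If you want to double-check where these numbers come from, a quick back-of-the-envelope computation reproduces the counts in the summary (a sketch; the LSTM formula 4 * ((input_dim + units) * units + units) is the standard Keras LSTM parameter count):

# Reproduce the parameter counts reported by model.summary() above.
embedding_params = 400001 * 50                         # vocab_len x embedding dim, all non-trainable
lstm1_params = 4 * ((50 + 128) * 128 + 128)            # first LSTM: input dim 50, 128 units   -> 91,648
lstm2_params = 4 * ((128 + 128) * 128 + 128)           # second LSTM: input dim 128, 128 units -> 131,584
dense_params = 128 * 5 + 5                             # Dense(5) on a 128-dim vector          -> 645

print(embedding_params)                                               # 20000050 non-trainable
print(lstm1_params + lstm2_params + dense_params)                     # 223877 trainable
print(embedding_params + lstm1_params + lstm2_params + dense_params)  # 20223927 total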

As usual, after creating your model in Keras, you need to compile it and define what loss, optimizer and metrics you want to use. Compile your model using categorical_crossentropy loss, adam optimizer and ['accuracy'] metrics:

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

It’s time to train your model. Your Emojifier-V2 model takes as input an array of shape (m, max_len) and outputs probability vectors of shape (m, number of classes). We thus have to convert X_train (array of sentences as strings) to X_train_indices (array of sentences as list of word indices), and Y_train (labels as indices) to Y_train_oh (labels as one-hot vectors).

X_train_indices = sentences_to_indices(X_train, word_to_index, maxLen)
Y_train_oh = convert_to_one_hot(Y_train, C = 5)

Fit the Keras model on X_train_indices and Y_train_oh. We will use epochs = 50 and batch_size = 32.

model.fit(X_train_indices, Y_train_oh, epochs = 50, batch_size = 32, shuffle=True)
Epoch 1/50
132/132 [==============================] - 0s - loss: 1.6084 - acc: 0.1970     
Epoch 2/50
132/132 [==============================] - 0s - loss: 1.5322 - acc: 0.2955     
Epoch 3/50
132/132 [==============================] - 0s - loss: 1.5009 - acc: 0.3258     
Epoch 4/50
132/132 [==============================] - 0s - loss: 1.4384 - acc: 0.3561     
Epoch 5/50
132/132 [==============================] - 0s - loss: 1.3467 - acc: 0.4621     
Epoch 6/50
132/132 [==============================] - 0s - loss: 1.2325 - acc: 0.5076     
Epoch 7/50
132/132 [==============================] - 0s - loss: 1.1759 - acc: 0.4394     
Epoch 8/50
132/132 [==============================] - 0s - loss: 1.0546 - acc: 0.5682     
Epoch 9/50
132/132 [==============================] - 0s - loss: 0.8769 - acc: 0.7121     
Epoch 10/50
132/132 [==============================] - 0s - loss: 0.8225 - acc: 0.6970     
Epoch 11/50
132/132 [==============================] - 0s - loss: 0.7028 - acc: 0.7500     
Epoch 12/50
132/132 [==============================] - 0s - loss: 0.6003 - acc: 0.8030     
Epoch 13/50
132/132 [==============================] - 0s - loss: 0.4922 - acc: 0.8333     
Epoch 14/50
132/132 [==============================] - 0s - loss: 0.5099 - acc: 0.8333     
Epoch 15/50
132/132 [==============================] - 0s - loss: 0.4783 - acc: 0.8258     
Epoch 16/50
132/132 [==============================] - 0s - loss: 0.3545 - acc: 0.8636     
Epoch 17/50
132/132 [==============================] - 0s - loss: 0.3905 - acc: 0.8636     
Epoch 18/50
132/132 [==============================] - 0s - loss: 0.6448 - acc: 0.8106     
Epoch 19/50
132/132 [==============================] - 0s - loss: 0.5173 - acc: 0.8182     
Epoch 20/50
132/132 [==============================] - 0s - loss: 0.3928 - acc: 0.8409     
Epoch 21/50
132/132 [==============================] - 0s - loss: 0.4750 - acc: 0.8182     
Epoch 22/50
132/132 [==============================] - 0s - loss: 0.3932 - acc: 0.8636     
Epoch 23/50
132/132 [==============================] - 0s - loss: 0.3834 - acc: 0.8561     
Epoch 24/50
132/132 [==============================] - 0s - loss: 0.3077 - acc: 0.9091     
Epoch 25/50
132/132 [==============================] - 0s - loss: 0.3548 - acc: 0.8864     
Epoch 26/50
132/132 [==============================] - 0s - loss: 0.2406 - acc: 0.9394     
Epoch 27/50
132/132 [==============================] - 0s - loss: 0.3221 - acc: 0.8864     
Epoch 28/50
132/132 [==============================] - 0s - loss: 0.2399 - acc: 0.9318     
Epoch 29/50
132/132 [==============================] - 0s - loss: 0.3940 - acc: 0.8712     
Epoch 30/50
132/132 [==============================] - 0s - loss: 0.2709 - acc: 0.9091     
Epoch 31/50
132/132 [==============================] - 0s - loss: 0.2931 - acc: 0.8864     
Epoch 32/50
132/132 [==============================] - 0s - loss: 0.2094 - acc: 0.9242     
Epoch 33/50
132/132 [==============================] - 0s - loss: 0.2138 - acc: 0.9470     
Epoch 34/50
132/132 [==============================] - 0s - loss: 0.1528 - acc: 0.9545     
Epoch 35/50
132/132 [==============================] - 0s - loss: 0.1628 - acc: 0.9621     
Epoch 36/50
132/132 [==============================] - 0s - loss: 0.1816 - acc: 0.9394     
Epoch 37/50
132/132 [==============================] - 0s - loss: 0.1574 - acc: 0.9470     
Epoch 38/50
132/132 [==============================] - 0s - loss: 0.1950 - acc: 0.9470     
Epoch 39/50
132/132 [==============================] - 0s - loss: 0.1314 - acc: 0.9697     
Epoch 40/50
132/132 [==============================] - 0s - loss: 0.1362 - acc: 0.9621     
Epoch 41/50
132/132 [==============================] - 0s - loss: 0.0835 - acc: 0.9848     
Epoch 42/50
132/132 [==============================] - 0s - loss: 0.0727 - acc: 0.9924     
Epoch 43/50
132/132 [==============================] - 0s - loss: 0.0738 - acc: 0.9848     
Epoch 44/50
132/132 [==============================] - 0s - loss: 0.0456 - acc: 0.9924     
Epoch 45/50
132/132 [==============================] - 0s - loss: 0.0883 - acc: 0.9848     
Epoch 46/50
132/132 [==============================] - 0s - loss: 0.1439 - acc: 0.9697     
Epoch 47/50
132/132 [==============================] - 0s - loss: 0.1761 - acc: 0.9394     
Epoch 48/50
132/132 [==============================] - 0s - loss: 0.2153 - acc: 0.9242     
Epoch 49/50
132/132 [==============================] - 0s - loss: 0.1201 - acc: 0.9621     
Epoch 50/50
132/132 [==============================] - 0s - loss: 0.2094 - acc: 0.9394     





<keras.callbacks.History at 0x7f02df9e9940>

Your model should achieve accuracy close to 100% on the training set. The exact accuracy you get may be a little different. Run the following cell to evaluate your model on the test set.

X_test_indices = sentences_to_indices(X_test, word_to_index, max_len = maxLen)
Y_test_oh = convert_to_one_hot(Y_test, C = 5)
loss, acc = model.evaluate(X_test_indices, Y_test_oh)
print()
print("Test accuracy = ", acc)
32/56 [================>.............] - ETA: 0s
Test accuracy =  0.857142857143

You should get a test accuracy between 80% and 95%. Run the cell below to see the mislabelled examples.

# This code allows you to see the mislabelled examples
C = 5
y_test_oh = np.eye(C)[Y_test.reshape(-1)]
X_test_indices = sentences_to_indices(X_test, word_to_index, maxLen)
pred = model.predict(X_test_indices)
for i in range(len(X_test)):
    x = X_test_indices
    num = np.argmax(pred[i])
    if(num != Y_test[i]):
        print('Expected emoji:'+ label_to_emoji(Y_test[i]) + ' prediction: '+ X_test[i] + label_to_emoji(num).strip())
Expected emoji:😞 prediction: work is hard  😄
Expected emoji:😞 prediction: This girl is messing with me  ❤️
Expected emoji:🍴 prediction: any suggestions for dinner    😄
Expected emoji:❤️ prediction: I love taking breaks  😞
Expected emoji:😄 prediction: you brighten my day   ❤️
Expected emoji:😄 prediction: will you be my valentine  ❤️
Expected emoji:🍴 prediction: See you at the restaurant 😄
Expected emoji:😞 prediction: go away   ⚾

Now you can try it on your own example. Write your own sentence below.

# Change the sentence below to see your prediction. Make sure all the words are in the GloVe embeddings.  
x_test = np.array(['not feeling happy'])
X_test_indices = sentences_to_indices(x_test, word_to_index, maxLen)
print(x_test[0] +' '+  label_to_emoji(np.argmax(model.predict(X_test_indices))))
not feeling happy 😞

Previously, the Emojifier-V1 model did not correctly label “not feeling happy,” but our implementation of Emojifier-V2 got it right. (Keras’ outputs are slightly random each time, so you may not have obtained the same result.) The current model still isn’t very robust at understanding negation (like “not happy”) because the training set is small and doesn’t have many examples of negation. But if the training set were larger, the LSTM model would be much better than the Emojifier-V1 model at understanding such complex sentences.

Congratulations!

You have completed this notebook! ❤️❤️❤️


What you should remember:
- If you have an NLP task where the training set is small, using word embeddings can help your algorithm significantly. Word embeddings allow your model to work on words in the test set that may not even have appeared in your training set.
- Training sequence models in Keras (and in most other deep learning frameworks) requires a few important details:
- To use mini-batches, the sequences need to be padded so that all the examples in a mini-batch have the same length.
- An Embedding() layer can be initialized with pretrained values. These values can be either fixed or trained further on your dataset. If, however, your labeled dataset is small, it’s usually not worth trying to further train the pretrained embeddings.
- LSTM() has a flag called return_sequences to decide if you would like to return all hidden states or only the last one (see the short sketch after this list).
- You can use Dropout() right after LSTM() to regularize your network.
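As a quick illustration of the return_sequences flag (a standalone sketch with random inputs, not tied to the Emojifier data):

import numpy as np
from keras.models import Model
from keras.layers import Input, LSTM

# Toy input: sequences of length 10, each timestep a 50-dimensional vector.
seq_in = Input(shape=(10, 50))
all_states = LSTM(128, return_sequences=True)(seq_in)    # one hidden state per timestep
last_state = LSTM(128, return_sequences=False)(seq_in)   # only the final hidden state

m_all = Model(inputs=seq_in, outputs=all_states)
m_last = Model(inputs=seq_in, outputs=last_state)

x = np.random.randn(2, 10, 50).astype('float32')
print(m_all.predict(x).shape)    # (2, 10, 128)
print(m_last.predict(x).shape)   # (2, 128)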

Congratulations on finishing this assignment and building an Emojifier. We hope you’re happy with what you’ve accomplished in this notebook!


Acknowledgments

Thanks to Alison Darcy and the Woebot team for their advice on the creation of this assignment. Woebot is a chatbot friend that is ready to speak with you 24/7. As part of Woebot’s technology, it uses word embeddings to understand the emotions of what you say. You can play with it by going to http://woebot.io
