BERT-BiLSTM-CRF for Named Entity Recognition

2023-05-16

Introduction

This post applies a BERT + BiLSTM + CRF model to named entity recognition (NER). NER is the task of identifying entities with specific meaning in text, chiefly person names, place names, organization names, and other proper nouns.

  • BERT (Bidirectional Encoder Representations from Transformers) is the encoder of a bidirectional Transformer. Its main innovation is the pre-training scheme: Masked LM and Next Sentence Prediction, which capture word-level and sentence-level representations respectively.

  • BiLSTM (Bi-directional Long Short-Term Memory) combines a forward LSTM with a backward LSTM, so each token is encoded with both its left and right context.

  • A CRF (conditional random field) models the conditional probability distribution of a set of output random variables given a set of input random variables; for NER it scores entire tag sequences rather than individual tags (see the formula below).
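
For reference, the standard linear-chain CRF used in sequence labeling defines, for an input x and tag sequence y = (y_1, ..., y_n),

P(y \mid x) = \frac{1}{Z(x)} \exp\Big( \sum_{t=1}^{n} \big( s_t(y_t \mid x) + A_{y_{t-1},\, y_t} \big) \Big)

where s_t(y_t \mid x) is the per-token emission score (here produced by the BiLSTM and dense layers), A is a learned tag-transition matrix, and Z(x) normalizes over all possible tag sequences.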

Environment

We use the Python package Kashgari, which wraps both classical and state-of-the-art NLP models so they can be called and deployed quickly.

  • Python: 3.6
  • TensorFlow: 1.15
  • Kashgari: 1.x

Note that Kashgari 1.x must be used with TensorFlow 1.x; a quick version check is sketched below.
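
A minimal sanity check of the pairing (assuming both packages are installed):

import tensorflow as tf
import kashgari

print(tf.__version__)        # expect 1.15.x
print(kashgari.__version__)  # expect 1.x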

Pre-trained Chinese BERT model

Google provides pre-trained checkpoints; the Chinese model can be downloaded from https://storage.googleapis.com/bert_models/2018_11_03/chinese_L-12_H-768_A-12.zip (a download sketch follows below).

More pre-trained models are listed at https://github.com/ymcui/Chinese-BERT-wwm
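
A minimal download-and-unzip sketch in Python (equivalently, fetch the zip in a browser and extract it next to the training script):

import urllib.request
import zipfile

url = 'https://storage.googleapis.com/bert_models/2018_11_03/chinese_L-12_H-768_A-12.zip'
urllib.request.urlretrieve(url, 'chinese_L-12_H-768_A-12.zip')
with zipfile.ZipFile('chinese_L-12_H-768_A-12.zip') as zf:
    zf.extractall('.')  # creates the chinese_L-12_H-768_A-12/ checkpoint folder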

Training and evaluation with the built-in corpus

The data is the China Daily NER corpus, which the code downloads automatically.

Storage format of the training, test, and validation sets:

train_x: [[char_seq1], [char_seq2], [char_seq3], ...]
train_y: [[label_seq1], [label_seq2], [label_seq3], ...]
where char_seq1 = ["我", "爱", "中", "国"]
and the corresponding label_seq1 = ["O", "O", "B_LOC", "I_LOC"]

Code:

from kashgari.corpus import ChineseDailyNerCorpus
from kashgari.embeddings import BERTEmbedding
from kashgari.tasks.labeling import BiLSTM_CRF_Model
import kashgari

# The China Daily NER corpus is downloaded automatically on first use
train_x, train_y = ChineseDailyNerCorpus.load_data('train')
test_x, test_y = ChineseDailyNerCorpus.load_data('test')
valid_x, valid_y = ChineseDailyNerCorpus.load_data('valid')

# The first argument is the folder of the unzipped BERT checkpoint downloaded
# above (not the literal string "chinese"); sequence_length pads/truncates
# every sentence to a fixed length
embedding = BERTEmbedding('chinese_L-12_H-768_A-12',
                          sequence_length=10,
                          task=kashgari.LABELING)
model = BiLSTM_CRF_Model(embedding)
model.fit(train_x, train_y, x_validate=valid_x, y_validate=valid_y,
          epochs=1, batch_size=100)
model.evaluate(test_x, test_y)

# model.save('save')                               # save the trained model
# loaded_model = kashgari.utils.load_model('xxx')  # reload from a saved path

The commented lines at the end save the trained model and load it back from a saved path.
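
To get structured predictions like those shown in the Results section below, the trained model can be queried directly. A minimal sketch (the sample sentence is illustrative; Kashgari 1.x labeling models expose predict and predict_entities):

# Sentences are passed as lists of characters, matching the training format
sample = [['吴', '恩', '达', '在', '北', '京', '大', '学', '。']]
print(model.predict(sample))           # raw per-character tag sequences
print(model.predict_entities(sample))  # grouped entities with spans and values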

Example

This part performs NER on your own data; train_x and train_y use the same storage format as above.

Available tagging schemes

BIO scheme: (B-begin, I-inside, O-outside)

BIOES scheme: (B-begin, I-inside, O-outside, E-end, S-single)

  • B (Begin): the first character of an entity
  • I (Inside/Intermediate): a character inside an entity
  • E (End): the last character of an entity (BIOES only)
  • S (Single): a single-character entity (BIOES only)
  • O (Outside/Other): a character that belongs to no entity

Both schemes are illustrated on the same sentence below.
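
For example, tagging the sentence from the Results section under both schemes (label names follow this corpus's underscore style; illustrative only):

tokens = ['吴', '恩', '达', '在', '北', '京', '大', '学', '。']
bio    = ['B_PER', 'I_PER', 'I_PER', 'O', 'B_LOC', 'I_LOC', 'I_LOC', 'I_LOC', 'O']
bioes  = ['B_PER', 'I_PER', 'E_PER', 'O', 'B_LOC', 'I_LOC', 'I_LOC', 'E_LOC', 'O']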

Code

See https://www.omegaxyz.com/2020/05/18/bert-bilstm-crf/ for the full code.
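
As a rough sketch (not the author's exact script), training on custom data with Kashgari 1.x follows the same pattern as before once the corpus has been parsed into the list-of-lists format described earlier; the sentences and labels below are placeholders:

from kashgari.embeddings import BERTEmbedding
from kashgari.tasks.labeling import BiLSTM_CRF_Model
import kashgari

# Replace these placeholder lists with your own parsed corpus
train_x = [['吴', '恩', '达', '在', '北', '京', '大', '学', '。']]
train_y = [['B_PER', 'I_PER', 'I_PER', 'O', 'B_LOC', 'I_LOC', 'I_LOC', 'I_LOC', 'O']]

embedding = BERTEmbedding('chinese_L-12_H-768_A-12',
                          sequence_length=100,
                          task=kashgari.LABELING)
model = BiLSTM_CRF_Model(embedding)
model.fit(train_x, train_y, epochs=10, batch_size=64)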

Results

__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
Input-Token (InputLayer)        [(None, 20)]         0                                            
__________________________________________________________________________________________________
Input-Segment (InputLayer)      [(None, 20)]         0                                            
__________________________________________________________________________________________________
Embedding-Token (TokenEmbedding [(None, 20, 768), (2 16226304    Input-Token[0][0]                
__________________________________________________________________________________________________
Embedding-Segment (Embedding)   (None, 20, 768)      1536        Input-Segment[0][0]              
__________________________________________________________________________________________________
Embedding-Token-Segment (Add)   (None, 20, 768)      0           Embedding-Token[0][0]            
                                                                 Embedding-Segment[0][0]          
__________________________________________________________________________________________________
Embedding-Position (PositionEmb (None, 20, 768)      15360       Embedding-Token-Segment[0][0]    
__________________________________________________________________________________________________
Embedding-Dropout (Dropout)     (None, 20, 768)      0           Embedding-Position[0][0]         
__________________________________________________________________________________________________
Embedding-Norm (LayerNormalizat (None, 20, 768)      1536        Embedding-Dropout[0][0]          
__________________________________________________________________________________________________
Encoder-1-MultiHeadSelfAttentio (None, 20, 768)      2362368     Embedding-Norm[0][0]             
__________________________________________________________________________________________________
Encoder-1-MultiHeadSelfAttentio (None, 20, 768)      0           Encoder-1-MultiHeadSelfAttention[
__________________________________________________________________________________________________
Encoder-1-MultiHeadSelfAttentio (None, 20, 768)      0           Embedding-Norm[0][0]             
                                                                 Encoder-1-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-1-MultiHeadSelfAttentio (None, 20, 768)      1536        Encoder-1-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-1-FeedForward (FeedForw (None, 20, 768)      4722432     Encoder-1-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-1-FeedForward-Dropout ( (None, 20, 768)      0           Encoder-1-FeedForward[0][0]      
__________________________________________________________________________________________________
Encoder-1-FeedForward-Add (Add) (None, 20, 768)      0           Encoder-1-MultiHeadSelfAttention-
                                                                 Encoder-1-FeedForward-Dropout[0][
__________________________________________________________________________________________________
Encoder-1-FeedForward-Norm (Lay (None, 20, 768)      1536        Encoder-1-FeedForward-Add[0][0]  
__________________________________________________________________________________________________
Encoder-2-MultiHeadSelfAttentio (None, 20, 768)      2362368     Encoder-1-FeedForward-Norm[0][0] 
__________________________________________________________________________________________________
Encoder-2-MultiHeadSelfAttentio (None, 20, 768)      0           Encoder-2-MultiHeadSelfAttention[
__________________________________________________________________________________________________
Encoder-2-MultiHeadSelfAttentio (None, 20, 768)      0           Encoder-1-FeedForward-Norm[0][0] 
                                                                 Encoder-2-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-2-MultiHeadSelfAttentio (None, 20, 768)      1536        Encoder-2-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-2-FeedForward (FeedForw (None, 20, 768)      4722432     Encoder-2-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-2-FeedForward-Dropout ( (None, 20, 768)      0           Encoder-2-FeedForward[0][0]      
__________________________________________________________________________________________________
Encoder-2-FeedForward-Add (Add) (None, 20, 768)      0           Encoder-2-MultiHeadSelfAttention-
                                                                 Encoder-2-FeedForward-Dropout[0][
__________________________________________________________________________________________________
Encoder-2-FeedForward-Norm (Lay (None, 20, 768)      1536        Encoder-2-FeedForward-Add[0][0]  
__________________________________________________________________________________________________
Encoder-3-MultiHeadSelfAttentio (None, 20, 768)      2362368     Encoder-2-FeedForward-Norm[0][0] 
__________________________________________________________________________________________________
Encoder-3-MultiHeadSelfAttentio (None, 20, 768)      0           Encoder-3-MultiHeadSelfAttention[
__________________________________________________________________________________________________
Encoder-3-MultiHeadSelfAttentio (None, 20, 768)      0           Encoder-2-FeedForward-Norm[0][0] 
                                                                 Encoder-3-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-3-MultiHeadSelfAttentio (None, 20, 768)      1536        Encoder-3-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-3-FeedForward (FeedForw (None, 20, 768)      4722432     Encoder-3-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-3-FeedForward-Dropout ( (None, 20, 768)      0           Encoder-3-FeedForward[0][0]      
__________________________________________________________________________________________________
Encoder-3-FeedForward-Add (Add) (None, 20, 768)      0           Encoder-3-MultiHeadSelfAttention-
                                                                 Encoder-3-FeedForward-Dropout[0][
__________________________________________________________________________________________________
Encoder-3-FeedForward-Norm (Lay (None, 20, 768)      1536        Encoder-3-FeedForward-Add[0][0]  
__________________________________________________________________________________________________
Encoder-4-MultiHeadSelfAttentio (None, 20, 768)      2362368     Encoder-3-FeedForward-Norm[0][0] 
__________________________________________________________________________________________________
Encoder-4-MultiHeadSelfAttentio (None, 20, 768)      0           Encoder-4-MultiHeadSelfAttention[
__________________________________________________________________________________________________
Encoder-4-MultiHeadSelfAttentio (None, 20, 768)      0           Encoder-3-FeedForward-Norm[0][0] 
                                                                 Encoder-4-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-4-MultiHeadSelfAttentio (None, 20, 768)      1536        Encoder-4-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-4-FeedForward (FeedForw (None, 20, 768)      4722432     Encoder-4-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-4-FeedForward-Dropout ( (None, 20, 768)      0           Encoder-4-FeedForward[0][0]      
__________________________________________________________________________________________________
Encoder-4-FeedForward-Add (Add) (None, 20, 768)      0           Encoder-4-MultiHeadSelfAttention-
                                                                 Encoder-4-FeedForward-Dropout[0][
__________________________________________________________________________________________________
Encoder-4-FeedForward-Norm (Lay (None, 20, 768)      1536        Encoder-4-FeedForward-Add[0][0]  
__________________________________________________________________________________________________
Encoder-5-MultiHeadSelfAttentio (None, 20, 768)      2362368     Encoder-4-FeedForward-Norm[0][0] 
__________________________________________________________________________________________________
Encoder-5-MultiHeadSelfAttentio (None, 20, 768)      0           Encoder-5-MultiHeadSelfAttention[
__________________________________________________________________________________________________
Encoder-5-MultiHeadSelfAttentio (None, 20, 768)      0           Encoder-4-FeedForward-Norm[0][0] 
                                                                 Encoder-5-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-5-MultiHeadSelfAttentio (None, 20, 768)      1536        Encoder-5-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-5-FeedForward (FeedForw (None, 20, 768)      4722432     Encoder-5-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-5-FeedForward-Dropout ( (None, 20, 768)      0           Encoder-5-FeedForward[0][0]      
__________________________________________________________________________________________________
Encoder-5-FeedForward-Add (Add) (None, 20, 768)      0           Encoder-5-MultiHeadSelfAttention-
                                                                 Encoder-5-FeedForward-Dropout[0][
__________________________________________________________________________________________________
Encoder-5-FeedForward-Norm (Lay (None, 20, 768)      1536        Encoder-5-FeedForward-Add[0][0]  
__________________________________________________________________________________________________
Encoder-6-MultiHeadSelfAttentio (None, 20, 768)      2362368     Encoder-5-FeedForward-Norm[0][0] 
__________________________________________________________________________________________________
Encoder-6-MultiHeadSelfAttentio (None, 20, 768)      0           Encoder-6-MultiHeadSelfAttention[
__________________________________________________________________________________________________
Encoder-6-MultiHeadSelfAttentio (None, 20, 768)      0           Encoder-5-FeedForward-Norm[0][0] 
                                                                 Encoder-6-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-6-MultiHeadSelfAttentio (None, 20, 768)      1536        Encoder-6-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-6-FeedForward (FeedForw (None, 20, 768)      4722432     Encoder-6-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-6-FeedForward-Dropout ( (None, 20, 768)      0           Encoder-6-FeedForward[0][0]      
__________________________________________________________________________________________________
Encoder-6-FeedForward-Add (Add) (None, 20, 768)      0           Encoder-6-MultiHeadSelfAttention-
                                                                 Encoder-6-FeedForward-Dropout[0][
__________________________________________________________________________________________________
Encoder-6-FeedForward-Norm (Lay (None, 20, 768)      1536        Encoder-6-FeedForward-Add[0][0]  
__________________________________________________________________________________________________
Encoder-7-MultiHeadSelfAttentio (None, 20, 768)      2362368     Encoder-6-FeedForward-Norm[0][0] 
__________________________________________________________________________________________________
Encoder-7-MultiHeadSelfAttentio (None, 20, 768)      0           Encoder-7-MultiHeadSelfAttention[
__________________________________________________________________________________________________
Encoder-7-MultiHeadSelfAttentio (None, 20, 768)      0           Encoder-6-FeedForward-Norm[0][0] 
                                                                 Encoder-7-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-7-MultiHeadSelfAttentio (None, 20, 768)      1536        Encoder-7-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-7-FeedForward (FeedForw (None, 20, 768)      4722432     Encoder-7-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-7-FeedForward-Dropout ( (None, 20, 768)      0           Encoder-7-FeedForward[0][0]      
__________________________________________________________________________________________________
Encoder-7-FeedForward-Add (Add) (None, 20, 768)      0           Encoder-7-MultiHeadSelfAttention-
                                                                 Encoder-7-FeedForward-Dropout[0][
__________________________________________________________________________________________________
Encoder-7-FeedForward-Norm (Lay (None, 20, 768)      1536        Encoder-7-FeedForward-Add[0][0]  
__________________________________________________________________________________________________
Encoder-8-MultiHeadSelfAttentio (None, 20, 768)      2362368     Encoder-7-FeedForward-Norm[0][0] 
__________________________________________________________________________________________________
Encoder-8-MultiHeadSelfAttentio (None, 20, 768)      0           Encoder-8-MultiHeadSelfAttention[
__________________________________________________________________________________________________
Encoder-8-MultiHeadSelfAttentio (None, 20, 768)      0           Encoder-7-FeedForward-Norm[0][0] 
                                                                 Encoder-8-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-8-MultiHeadSelfAttentio (None, 20, 768)      1536        Encoder-8-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-8-FeedForward (FeedForw (None, 20, 768)      4722432     Encoder-8-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-8-FeedForward-Dropout ( (None, 20, 768)      0           Encoder-8-FeedForward[0][0]      
__________________________________________________________________________________________________
Encoder-8-FeedForward-Add (Add) (None, 20, 768)      0           Encoder-8-MultiHeadSelfAttention-
                                                                 Encoder-8-FeedForward-Dropout[0][
__________________________________________________________________________________________________
Encoder-8-FeedForward-Norm (Lay (None, 20, 768)      1536        Encoder-8-FeedForward-Add[0][0]  
__________________________________________________________________________________________________
Encoder-9-MultiHeadSelfAttentio (None, 20, 768)      2362368     Encoder-8-FeedForward-Norm[0][0] 
__________________________________________________________________________________________________
Encoder-9-MultiHeadSelfAttentio (None, 20, 768)      0           Encoder-9-MultiHeadSelfAttention[
__________________________________________________________________________________________________
Encoder-9-MultiHeadSelfAttentio (None, 20, 768)      0           Encoder-8-FeedForward-Norm[0][0] 
                                                                 Encoder-9-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-9-MultiHeadSelfAttentio (None, 20, 768)      1536        Encoder-9-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-9-FeedForward (FeedForw (None, 20, 768)      4722432     Encoder-9-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-9-FeedForward-Dropout ( (None, 20, 768)      0           Encoder-9-FeedForward[0][0]      
__________________________________________________________________________________________________
Encoder-9-FeedForward-Add (Add) (None, 20, 768)      0           Encoder-9-MultiHeadSelfAttention-
                                                                 Encoder-9-FeedForward-Dropout[0][
__________________________________________________________________________________________________
Encoder-9-FeedForward-Norm (Lay (None, 20, 768)      1536        Encoder-9-FeedForward-Add[0][0]  
__________________________________________________________________________________________________
Encoder-10-MultiHeadSelfAttenti (None, 20, 768)      2362368     Encoder-9-FeedForward-Norm[0][0] 
__________________________________________________________________________________________________
Encoder-10-MultiHeadSelfAttenti (None, 20, 768)      0           Encoder-10-MultiHeadSelfAttention
__________________________________________________________________________________________________
Encoder-10-MultiHeadSelfAttenti (None, 20, 768)      0           Encoder-9-FeedForward-Norm[0][0] 
                                                                 Encoder-10-MultiHeadSelfAttention
__________________________________________________________________________________________________
Encoder-10-MultiHeadSelfAttenti (None, 20, 768)      1536        Encoder-10-MultiHeadSelfAttention
__________________________________________________________________________________________________
Encoder-10-FeedForward (FeedFor (None, 20, 768)      4722432     Encoder-10-MultiHeadSelfAttention
__________________________________________________________________________________________________
Encoder-10-FeedForward-Dropout  (None, 20, 768)      0           Encoder-10-FeedForward[0][0]     
__________________________________________________________________________________________________
Encoder-10-FeedForward-Add (Add (None, 20, 768)      0           Encoder-10-MultiHeadSelfAttention
                                                                 Encoder-10-FeedForward-Dropout[0]
__________________________________________________________________________________________________
Encoder-10-FeedForward-Norm (La (None, 20, 768)      1536        Encoder-10-FeedForward-Add[0][0] 
__________________________________________________________________________________________________
Encoder-11-MultiHeadSelfAttenti (None, 20, 768)      2362368     Encoder-10-FeedForward-Norm[0][0]
__________________________________________________________________________________________________
Encoder-11-MultiHeadSelfAttenti (None, 20, 768)      0           Encoder-11-MultiHeadSelfAttention
__________________________________________________________________________________________________
Encoder-11-MultiHeadSelfAttenti (None, 20, 768)      0           Encoder-10-FeedForward-Norm[0][0]
                                                                 Encoder-11-MultiHeadSelfAttention
__________________________________________________________________________________________________
Encoder-11-MultiHeadSelfAttenti (None, 20, 768)      1536        Encoder-11-MultiHeadSelfAttention
__________________________________________________________________________________________________
Encoder-11-FeedForward (FeedFor (None, 20, 768)      4722432     Encoder-11-MultiHeadSelfAttention
__________________________________________________________________________________________________
Encoder-11-FeedForward-Dropout  (None, 20, 768)      0           Encoder-11-FeedForward[0][0]     
__________________________________________________________________________________________________
Encoder-11-FeedForward-Add (Add (None, 20, 768)      0           Encoder-11-MultiHeadSelfAttention
                                                                 Encoder-11-FeedForward-Dropout[0]
__________________________________________________________________________________________________
Encoder-11-FeedForward-Norm (La (None, 20, 768)      1536        Encoder-11-FeedForward-Add[0][0] 
__________________________________________________________________________________________________
Encoder-12-MultiHeadSelfAttenti (None, 20, 768)      2362368     Encoder-11-FeedForward-Norm[0][0]
__________________________________________________________________________________________________
Encoder-12-MultiHeadSelfAttenti (None, 20, 768)      0           Encoder-12-MultiHeadSelfAttention
__________________________________________________________________________________________________
Encoder-12-MultiHeadSelfAttenti (None, 20, 768)      0           Encoder-11-FeedForward-Norm[0][0]
                                                                 Encoder-12-MultiHeadSelfAttention
__________________________________________________________________________________________________
Encoder-12-MultiHeadSelfAttenti (None, 20, 768)      1536        Encoder-12-MultiHeadSelfAttention
__________________________________________________________________________________________________
Encoder-12-FeedForward (FeedFor (None, 20, 768)      4722432     Encoder-12-MultiHeadSelfAttention
__________________________________________________________________________________________________
Encoder-12-FeedForward-Dropout  (None, 20, 768)      0           Encoder-12-FeedForward[0][0]     
__________________________________________________________________________________________________
Encoder-12-FeedForward-Add (Add (None, 20, 768)      0           Encoder-12-MultiHeadSelfAttention
                                                                 Encoder-12-FeedForward-Dropout[0]
__________________________________________________________________________________________________
Encoder-12-FeedForward-Norm (La (None, 20, 768)      1536        Encoder-12-FeedForward-Add[0][0] 
__________________________________________________________________________________________________
Encoder-Output (Concatenate)    (None, 20, 3072)     0           Encoder-9-FeedForward-Norm[0][0] 
                                                                 Encoder-10-FeedForward-Norm[0][0]
                                                                 Encoder-11-FeedForward-Norm[0][0]
                                                                 Encoder-12-FeedForward-Norm[0][0]
__________________________________________________________________________________________________
non_masking_layer (NonMaskingLa (None, 20, 3072)     0           Encoder-Output[0][0]             
__________________________________________________________________________________________________
layer_blstm (Bidirectional)     (None, 20, 256)      3277824     non_masking_layer[0][0]          
__________________________________________________________________________________________________
layer_dense (Dense)             (None, 20, 64)       16448       layer_blstm[0][0]                
__________________________________________________________________________________________________
layer_crf_dense (Dense)         (None, 20, 10)       650         layer_dense[0][0]                
__________________________________________________________________________________________________
layer_crf (CRF)                 (None, 20, 10)       100         layer_crf_dense[0][0]            
==================================================================================================
Total params: 104,594,222
Trainable params: 3,295,022
Non-trainable params: 101,299,200
__________________________________________________________________________________________________
Epoch 1/10
1/1 [==============================] - 6s 6s/step - loss: 52.2269 - accuracy: 0.1250
Epoch 2/10
1/1 [==============================] - 1s 687ms/step - loss: 22.6029 - accuracy: 0.6750
Epoch 3/10
1/1 [==============================] - 1s 754ms/step - loss: 12.7078 - accuracy: 0.8500
Epoch 4/10
1/1 [==============================] - 1s 767ms/step - loss: 12.8406 - accuracy: 0.8250
Epoch 5/10
1/1 [==============================] - 1s 717ms/step - loss: 10.0257 - accuracy: 0.8750
Epoch 6/10
1/1 [==============================] - 1s 638ms/step - loss: 7.3283 - accuracy: 0.8750
Epoch 7/10
1/1 [==============================] - 1s 738ms/step - loss: 4.4533 - accuracy: 0.9500
Epoch 8/10
1/1 [==============================] - 1s 734ms/step - loss: 5.0040 - accuracy: 0.9750
Epoch 9/10
1/1 [==============================] - 1s 698ms/step - loss: 2.6457 - accuracy: 1.0000
Epoch 10/10
1/1 [==============================] - 1s 743ms/step - loss: 2.0256 - accuracy: 1.0000
[{'text': '吴 恩 达 在 北 京 大 学 。', 'text_raw': ['吴', '恩', '达', '在', '北', '京', '大', '学', '。'], 'labels': [{'entity': 'B_PER', 'start': 0, 'end': 0, 'value': '吴'}, {'entity': 'I_PER', 'start': 1, 'end': 2, 'value': '恩 达'}, {'entity': 'I_AFF', 'start': 4, 'end': 7, 'value': '北 京 大 学'}]}]

The output shows BERT's 12-layer Transformer structure and its parameter counts, followed by the NER prediction for a sample sentence.
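
The per-layer parameter counts in the summary can be checked by hand for BERT-base (hidden size 768, feed-forward size 3072):

# Verify the per-layer counts printed in the summary above
hidden, ffn = 768, 3072
attention = 4 * (hidden * hidden + hidden)  # Q, K, V and output projections (weights + biases)
feed_forward = (hidden * ffn + ffn) + (ffn * hidden + hidden)
print(attention)      # 2362368 -> Encoder-*-MultiHeadSelfAttention
print(feed_forward)   # 4722432 -> Encoder-*-FeedForward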

References

https://kashgari-zh.bmio.net/
https://www.jianshu.com/p/1d6689851622
https://blog.csdn.net/ctwy291314/article/details/102819221
