Actually, the SSN you have is tokenized by spaCy into 5 tokens:
print([token.text for token in nlp("690-96-4032")])
# => ['690', '-', '96', '-', '4032']
So, either use a custom tokenizer so that hyphen-separated values between digits are not split into separate tokens, or - simpler - create a pattern for 5 consecutive tokens:
patterns = [{"label": "SSN", "pattern": [
    {"TEXT": {"REGEX": r"^\d{3}$"}},
    {"TEXT": "-"},
    {"TEXT": {"REGEX": r"^\d{2}$"}},
    {"TEXT": "-"},
    {"TEXT": {"REGEX": r"^\d{4}$"}},
]}]
Full spaCy demo:
import spacy
from spacy.pipeline import EntityRuler

nlp = spacy.load("en_core_web_sm")
ruler = EntityRuler(nlp, overwrite_ents=True)  # overwrite_ents lets the SSN match win over overlapping NER spans
patterns = [{"label": "SSN", "pattern": [
    {"TEXT": {"REGEX": r"^\d{3}$"}},
    {"TEXT": "-"},
    {"TEXT": {"REGEX": r"^\d{2}$"}},
    {"TEXT": "-"},
    {"TEXT": {"REGEX": r"^\d{4}$"}},
]}]
ruler.add_patterns(patterns)
nlp.add_pipe(ruler)

text = "My name is yuyyvb and I leave on 605 W Clinton Street. My social security 690-96-4032"
doc = nlp(text)
print([(ent.text, ent.label_) for ent in doc.ents])
# => [('605', 'CARDINAL'), ('690-96-4032', 'SSN')]
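Note this demo uses the spaCy v2 EntityRuler API. If you are on spaCy v3, the equivalent would look roughly like this (a sketch: in v3, add_pipe takes the registered component name and returns the component):

import spacy

nlp = spacy.load("en_core_web_sm")
# v3 style: add the ruler by name, passing options via config
ruler = nlp.add_pipe("entity_ruler", config={"overwrite_ents": True})
ruler.add_patterns([{"label": "SSN", "pattern": [
    {"TEXT": {"REGEX": r"^\d{3}$"}},
    {"TEXT": "-"},
    {"TEXT": {"REGEX": r"^\d{2}$"}},
    {"TEXT": "-"},
    {"TEXT": {"REGEX": r"^\d{4}$"}},
]}])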
So, {"TEXT": {"REGEX": r"^\d{3}$"}}
匹配仅由三位数字组成的令牌,{"TEXT": "-"}
is a -
字符等
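Since these token-level regexes run against each token's text, you can sanity-check them with plain re (illustrative only):

import re
print(bool(re.search(r"^\d{3}$", "690")))   # => True: the whole token is three digits
print(bool(re.search(r"^\d{3}$", "6905")))  # => False: the anchors reject anything longer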
Overriding hyphenated-number tokenization in spaCy
In case you are interested in how this can be achieved by overriding the default tokenization, pay attention to the infixes (https://github.com/explosion/spaCy/blob/58533f01bf926546337ad2868abe7fc8f0a3b3ae/spacy/lang/punctuation.py#L36): the r"(?<=[0-9])[+\-\*^](?=[0-9-])" regex makes spaCy split hyphen-separated digits into separate tokens. To make substrings like 1-2-3 and 1-2 get tokenized as single tokens, you would want to remove the - from that regex. Well, you can't simply do that, it is much trickier: you need to replace it with 2 regexps, r"(?<=[0-9])[+*^](?=[0-9-])" and r"(?<=[0-9])-(?=-)", because the - is also checked for between a digit ((?<=[0-9])) and a hyphen (see the - in (?=[0-9-])).
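To see what that infix does, here is a rough illustration with plain re: splitting with a capturing group keeps the matched infix, which loosely mimics how spaCy turns infix matches into separate tokens:

import re
infix = r"(?<=[0-9])[+\-\*^](?=[0-9-])"
print(re.split(f"({infix})", "690-96-4032"))
# => ['690', '-', '96', '-', '4032']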
So, the whole thing will look like this:
import spacy
from spacy.tokenizer import Tokenizer
from spacy.pipeline import EntityRuler
from spacy.util import compile_infix_regex
def custom_tokenizer(nlp):
    # Take out the existing rule and replace it with a custom one:
    inf = list(nlp.Defaults.infixes)
    inf.remove(r"(?<=[0-9])[+\-\*^](?=[0-9-])")  # drop the infix that splits digit-hyphen-digit
    inf = tuple(inf)
    infixes = inf + tuple([r"(?<=[0-9])[+*^](?=[0-9-])", r"(?<=[0-9])-(?=-)"])
    infix_re = compile_infix_regex(infixes)
    return Tokenizer(nlp.vocab,
                     prefix_search=nlp.tokenizer.prefix_search,
                     suffix_search=nlp.tokenizer.suffix_search,
                     infix_finditer=infix_re.finditer,
                     token_match=nlp.tokenizer.token_match,
                     rules=nlp.Defaults.tokenizer_exceptions)
nlp = spacy.load("en_core_web_sm")
nlp.tokenizer = custom_tokenizer(nlp)
ruler = EntityRuler(nlp, overwrite_ents=True)
ruler.add_patterns([{"label": "SSN", "pattern": [{"TEXT": {"REGEX": r"^\d{3}\W\d{2}\W\d{4}$"}}]}])
nlp.add_pipe(ruler)
text = "My name is yuyyvb and I leave on 605 W Clinton Street. My social security 690-96-4032. Some 9---al"
doc = nlp(text)
print([t.text for t in doc])
# => ['My', 'name', 'is', 'yuyyvb', 'and', 'I', 'leave', 'on', '605', 'W', 'Clinton', 'Street', '.', 'My', 'social', 'security', '690-96-4032', '.', 'Some', '9', '-', '--al']
print([(ent.text, ent.label_) for ent in doc.ents])
# => [('605', 'CARDINAL'), ('690-96-4032', 'SSN'), ('9', 'CARDINAL')]
If you leave out r"(?<=[0-9])-(?=-)", the ['9', '-', '--al'] will turn into a single '9---al' token.
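Again with plain re for illustration: the extra r"(?<=[0-9])-(?=-)" infix only fires on a hyphen sitting between a digit and another hyphen, which is exactly what peels the 9 off:

import re
print(re.split(r"((?<=[0-9])-(?=-))", "9---al"))
# => ['9', '-', '--al']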
NOTE: you need to use the ^\d{3}\W\d{2}\W\d{4}$ regex here: ^ and $ match the start and end of the token (otherwise, partially matching tokens would also be identified as SSNs), and \W is equal to [^\w].
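A quick plain-re check of why the anchors matter:

import re
ssn = r"^\d{3}\W\d{2}\W\d{4}$"
print(bool(re.search(ssn, "690-96-4032")))    # => True
print(bool(re.search(ssn, "1690-96-4032")))   # => False: the token must match entirely
print(bool(re.search(r"\d{3}\W\d{2}\W\d{4}", "1690-96-4032")))
# => True: without the anchors, a partial match slips through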