Removing overly common words in R (those occurring in more than 80% of the documents)

2023-12-11

I'm using the "tm" package to build a corpus, and I've finished most of the preprocessing steps. The only thing left is to remove overly common words (terms that occur in more than 80% of the documents). Can anyone help me with this?

dsc <- Corpus(dd)
dsc <- tm_map(dsc, stripWhitespace)
dsc <- tm_map(dsc, removePunctuation)
dsc <- tm_map(dsc, removeNumbers)
dsc <- tm_map(dsc, removeWords, otherWords1)
dsc <- tm_map(dsc, removeWords, otherWords2)
dsc <- tm_map(dsc, removeWords, otherWords3)
dsc <- tm_map(dsc, removeWords, javaKeywords)
dsc <- tm_map(dsc, removeWords, stopwords("english"))
dsc <- tm_map(dsc, stemDocument)
dtm <- DocumentTermMatrix(dsc, control = list(weighting = weightTf, 
                          stopwords = FALSE))

dtm <- removeSparseTerms(dtm, 0.99) 
# ^- Removes overly rare words (terms present in at most 1% of the documents)
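
The `sparse` argument above maps to a document-frequency cutoff: `removeSparseTerms(dtm, sparse)` keeps only the terms that appear in more than `nDocs * (1 - sparse)` documents. A quick sketch with the `crude` corpus that ships with tm (the 0.8 threshold here is just for illustration):

```r
library(tm)

data("crude")                     # 20 Reuters documents bundled with tm
dtm <- DocumentTermMatrix(crude)

# With sparse = 0.8, only terms present in more than 20 * (1 - 0.8) = 4
# documents survive, so every remaining term occurs in at least 5 of them.
dense <- removeSparseTerms(dtm, 0.8)

# Smallest document frequency among the surviving terms: in the triplet
# representation of a DocumentTermMatrix, $j holds one term index per
# nonzero cell, so table(dense$j) is each term's document frequency.
min(table(dense$j))               # >= 5 by construction
```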

What if you made a `removeCommonTerms` function?

removeCommonTerms <- function (x, pct) 
{
    stopifnot(inherits(x, c("DocumentTermMatrix", "TermDocumentMatrix")), 
        is.numeric(pct), pct > 0, pct < 1)
    # Work in term-by-document orientation so that $i indexes terms
    m <- if (inherits(x, "DocumentTermMatrix")) 
        t(x)
    else x
    # table(m$i) counts, for each term, the documents it occurs in;
    # keep terms whose document frequency stays below pct of the documents
    t <- table(m$i) < m$ncol * (pct)
    termIndex <- as.numeric(names(t[t]))
    if (inherits(x, "DocumentTermMatrix")) 
        x[, termIndex]
    else x[termIndex, ]
}

Then, if you want to remove the terms that appear in >= 80% of the documents, you can do:

data("crude")
dtm <- DocumentTermMatrix(crude)
dtm
# <<DocumentTermMatrix (documents: 20, terms: 1266)>>
# Non-/sparse entries: 2255/23065
# Sparsity           : 91%
# Maximal term length: 17
# Weighting          : term frequency (tf)

removeCommonTerms(dtm, .8)
# <<DocumentTermMatrix (documents: 20, terms: 1259)>>
# Non-/sparse entries: 2129/23051
# Sparsity           : 92%
# Maximal term length: 17
# Weighting          : term frequency (tf)
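
To double-check which terms get dropped, you can compute document frequencies directly from the matrix's triplet representation, the same trick the function uses: in a `DocumentTermMatrix`, `$i` holds document indices and `$j` term indices, one pair per nonzero cell, so tabulating `$j` gives each term's document frequency.

```r
library(tm)

data("crude")
dtm <- DocumentTermMatrix(crude)

# Document frequency per term index: each (i, j) pair appears once per
# nonzero cell, so table(dtm$j) counts the documents containing each term.
docFreq <- table(dtm$j)

# Terms present in at least 80% of the 20 documents -- exactly the ones
# removeCommonTerms(dtm, .8) removes (1266 - 1259 = 7 terms above).
commonIdx <- as.integer(names(docFreq[docFreq >= 0.8 * nDocs(dtm)]))
Terms(dtm)[commonIdx]
```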