I am using UMAP (https://umap-learn.readthedocs.io/en/latest/) to reduce the dimensionality of my data. My dataset contains 4700 samples, each with 1.2 million features (which I would like to reduce). However, even with 32 CPUs and 120 GB of RAM, this takes quite a long time. In particular, constructing the embedding is slow, and the verbose output has not changed in the last 3.5 hours:
UMAP(dens_frac=0.0, dens_lambda=0.0, low_memory=False, n_neighbors=10,
verbose=True)
Construct fuzzy simplicial set
Mon Jul 5 09:43:28 2021 Finding Nearest Neighbors
Mon Jul 5 09:43:28 2021 Building RP forest with 59 trees
Mon Jul 5 10:06:10 2021 metric NN descent for 20 iterations
1 / 20
2 / 20
3 / 20
4 / 20
5 / 20
Stopping threshold met -- exiting after 5 iterations
Mon Jul 5 10:12:14 2021 Finished Nearest Neighbor Search
Mon Jul 5 10:12:25 2021 Construct embedding
Is there any way to make this process faster? I am already using a sparse matrix (scipy.sparse.lil_matrix) as described here: https://umap-learn.readthedocs.io/en/latest/sparse.html. I have also installed pynndescent (as mentioned here: https://github.com/lmcinnes/umap/issues/416). My code is as follows:
from scipy.sparse import lil_matrix
import numpy as np
import umap.umap_ as umap

# Load the saved term matrix (a dense ndarray of zeros and ones)
term_dok_matrix = np.load('term_dok_matrix.npy')

# Convert it to a sparse LIL matrix before passing it to UMAP
term_dok_mat_lil = lil_matrix(term_dok_matrix, dtype=np.float32)

test = umap.UMAP(a=None, angular_rp_forest=False, b=None,
                 force_approximation_algorithm=False, init='spectral',
                 learning_rate=1.0, local_connectivity=1.0, low_memory=False,
                 metric='euclidean', metric_kwds=None, n_neighbors=10,
                 min_dist=0.1, n_components=2, n_epochs=None,
                 negative_sample_rate=5, output_metric='euclidean',
                 output_metric_kwds=None, random_state=None,
                 repulsion_strength=1.0, set_op_mix_ratio=1.0, spread=1.0,
                 target_metric='categorical', target_metric_kwds=None,
                 target_n_neighbors=-1, target_weight=0.5,
                 transform_queue_size=4.0, unique=False,
                 verbose=True).fit_transform(term_dok_mat_lil)
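For completeness, this is the variant I have been considering but have not yet tested: storing the data as a CSR matrix instead of LIL before fitting (CSR is the format SciPy recommends for fast row access, but I do not know whether it actually matters for UMAP's neighbor search, so this is only a sketch):

from scipy.sparse import csr_matrix
import numpy as np
import umap.umap_ as umap

# Same data as above, but converted to CSR instead of LIL.
# (Untested assumption: the row-compressed layout might be friendlier
# for the nearest-neighbor search than LIL.)
term_dok_matrix = np.load('term_dok_matrix.npy')
term_mat_csr = csr_matrix(term_dok_matrix, dtype=np.float32)

embedding = umap.UMAP(n_neighbors=10, min_dist=0.1, n_components=2,
                      metric='euclidean', low_memory=False,
                      verbose=True).fit_transform(term_mat_csr)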
Are there any tricks or ideas to make the computation faster? Are there parameters I could change? Does it help that my matrix consists only of zeros and ones (i.e. every non-zero entry in my matrix is a one)?