You will have to use tf.meshgrid and tf.gather_nd to achieve what you want:
import tensorflow as tf

tensor = tf.random.uniform((60, 128, 30000)) # shape (60, 128, 30000)
argmax = tf.argmax(tensor, axis=2)
ij = tf.stack(tf.meshgrid(
tf.range(tensor.shape[0], dtype=tf.int64),
tf.range(tensor.shape[1], dtype=tf.int64),
indexing='ij'), axis=-1)
gather_indices = tf.concat([ij, tf.expand_dims(argmax, axis=-1)], axis=-1)
result = tf.gather_nd(tensor, gather_indices)
tf.print(result.shape)
TensorShape([60, 128])
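As a quick sanity check (my addition, not part of the original recipe): the values gathered this way must coincide with a plain tf.reduce_max over the last axis, since both pick the per-row maximum. A smaller tensor is used here to keep it cheap:

```python
import tensorflow as tf

tensor = tf.random.uniform((4, 6, 10))  # small stand-in for (60, 128, 30000)
argmax = tf.argmax(tensor, axis=2)
ij = tf.stack(tf.meshgrid(
    tf.range(tensor.shape[0], dtype=tf.int64),
    tf.range(tensor.shape[1], dtype=tf.int64),
    indexing='ij'), axis=-1)
gather_indices = tf.concat([ij, tf.expand_dims(argmax, axis=-1)], axis=-1)
result = tf.gather_nd(tensor, gather_indices)

# Both expressions select the maximum of each (i, j) row, so they match exactly.
assert bool(tf.reduce_all(result == tf.reduce_max(tensor, axis=2)))
```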
Why is tf.meshgrid necessary? Because argmax does contain your indices, but in the wrong shape. The function tf.gather_nd needs to know exactly where in the 3D tensor the values should be extracted from. The tf.meshgrid function creates a rectangular grid of two one-dimensional arrays, representing the tensor indices for the first and second dimensions.
import tensorflow as tf
tensor = tf.random.uniform((2, 5, 3))
argmax = tf.argmax(tensor, axis=2)
# result = tf.gather_nd(tensor, argmax) <-- Would not work because argmax has the shape TensorShape([2, 5]) but TensorShape([2, 5, 3]) is required
tf.print('Input tensor:\n', tensor, tensor.shape, '\nArgmax tensor:\n', argmax, argmax.shape)
i, j = tf.meshgrid(
tf.range(tensor.shape[0], dtype=tf.int64),
tf.range(tensor.shape[1], dtype=tf.int64),
indexing='ij')
# You need to create a mesh grid to correctly index your tensor.
ij = tf.stack([i, j], axis=-1)
tf.print('Meshgrid:\n', i, j, summarize=-1)
tf.print('Stacked:\n', ij, summarize=-1)
gather_indices = tf.concat([ij, tf.expand_dims(argmax, axis=-1)], axis=-1)
tf.print('Gathered indices:\n', gather_indices, gather_indices.shape, summarize=-1)
result = tf.gather_nd(tensor, gather_indices)
tf.print('\nFinal result:\n', result, result.shape)
Input tensor:
[[[0.889752269 0.243187189 0.601408958]
[0.891950965 0.776625633 0.146243811]
[0.136176467 0.743871331 0.762170076]
[0.424416184 0.150568008 0.464055896]
[0.308753 0.0792338848 0.383242]]
[[0.741660118 0.49783361 0.935318112]
[0.0616152287 0.0367363691 0.748341084]
[0.397849679 0.765681744 0.502376914]
[0.750188231 0.304993749 0.733741879]
[0.31267941 0.778184056 0.546301]]] TensorShape([2, 5, 3])
Argmax tensor:
[[0 0 2 2 2]
[2 2 1 0 1]] TensorShape([2, 5])
Meshgrid:
[[0 0 0 0 0]
[1 1 1 1 1]] [[0 1 2 3 4]
[0 1 2 3 4]]
Stacked:
[[[0 0]
[0 1]
[0 2]
[0 3]
[0 4]]
[[1 0]
[1 1]
[1 2]
[1 3]
[1 4]]]
Gathered indices:
[[[0 0 0]
[0 1 0]
[0 2 2]
[0 3 2]
[0 4 2]]
[[1 0 2]
[1 1 2]
[1 2 1]
[1 3 0]
[1 4 1]]] TensorShape([2, 5, 3])
Final result:
[[0.889752269 0.891950965 0.762170076 0.464055896 0.383242]
[0.935318112 0.748341084 0.765681744 0.750188231 0.778184056]] TensorShape([2, 5])
On a side note, you could also consider using tf.math.top_k, since you want the maximum values along the last dimension. This function returns both the indices and the values (which you want):
tensor = tf.random.uniform((60, 128, 30000)) # shape (60, 128, 30000)
values, indices = tf.math.top_k(tensor, k=1)
tf.print(tf.squeeze(values, axis=-1).shape)
TensorShape([60, 128])
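For completeness (a small check of my own, again with a smaller tensor): the indices returned by tf.math.top_k with k=1 agree with tf.argmax once the trailing k dimension is squeezed away:

```python
import tensorflow as tf

tensor = tf.random.uniform((2, 5, 3))
values, indices = tf.math.top_k(tensor, k=1)

# top_k returns int32 indices with a trailing k dimension; squeeze it away.
top1_indices = tf.squeeze(indices, axis=-1)
argmax = tf.argmax(tensor, axis=2, output_type=tf.int32)

# Same winners along the last axis, and the squeezed values are the maxima.
tf.debugging.assert_equal(top1_indices, argmax)
tf.debugging.assert_equal(tf.squeeze(values, axis=-1),
                          tf.reduce_max(tensor, axis=2))
```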