Could not satisfy explicit device specification '/device:GPU:0' because no matching devices are registered

2024-05-08

I want to use TensorFlow 0.12 with GPU support on my Ubuntu 14.04 machine.

However, when I assign a device to a node, I get the following error:

InvalidArgumentError (see above for traceback): Cannot assign a device to
node 'my_model/RNN/zeros': Could not satisfy explicit device specification
'/device:GPU:0' because no devices matching that specification are registered
in this process; available devices: /job:localhost/replica:0/task:0/cpu:0
    [[Node: my_model/RNN/zeros = Fill[T=DT_FLOAT, _device="/device:GPU:0"]
(my_model/RNN/pack, my_model/RNN/zeros/Const)]]

My TensorFlow installation seems to be set up correctly, because this simple program works (a quick device-listing check is also sketched after the log output below):

import tensorflow as tf
# Creates a graph.
with tf.device('/gpu:0'):
  a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
  b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
  c = tf.matmul(a, b)
# Creates a session with log_device_placement set to True.
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
# Runs the op.
print(sess.run(c))

which outputs:

    I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcublas.so locally
    I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcudnn.so locally
    I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcufft.so locally
    I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcuda.so.1 locally
    I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcurand.so locally
    I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 0 with properties:
    name: Tesla K40m
    major: 3 minor: 5 memoryClockRate (GHz) 0.745
    pciBusID 0000:08:00.0
    Total memory: 11.17GiB
    Free memory: 11.10GiB
    I tensorflow/core/common_runtime/gpu/gpu_device.cc:906] DMA: 0
    I tensorflow/core/common_runtime/gpu/gpu_device.cc:916] 0:   Y
    I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: Tesla K40m, pci bus id: 0000:08:00.0)
    Device mapping: /job:localhost/replica:0/task:0/gpu:0 -> device: 0, name: Tesla K40m, pci bus id: 0000:08:00.0
    I tensorflow/core/common_runtime/direct_session.cc:255] Device mapping: /job:localhost/replica:0/task:0/gpu:0 -> device: 0, name: Tesla K40m, pci bus id: 0000:08:00.0
    MatMul: (MatMul): /job:localhost/replica:0/task:0/gpu:0
    I tensorflow/core/common_runtime/simple_placer.cc:827] MatMul: (MatMul)/job:localhost/replica:0/task:0/gpu:0
    b: (Const): /job:localhost/replica:0/task:0/gpu:0
    I tensorflow/core/common_runtime/simple_placer.cc:827] b: (Const)/job:localhost/replica:0/task:0/gpu:0
    a: (Const): /job:localhost/replica:0/task:0/gpu:0
    I tensorflow/core/common_runtime/simple_placer.cc:827] a: (Const)/job:localhost/replica:0/task:0/gpu:0
    [[ 22.  28.]
     [ 49.  64.]]
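
To double-check which devices the TensorFlow runtime has actually registered in the current Python process, the device list can be printed directly. This is a small diagnostic sketch, not part of the original post; it uses the device_lib helper available in TensorFlow 0.12:

from tensorflow.python.client import device_lib

# Print every device registered in this process.
# If only /cpu:0 shows up here, ops pinned to '/device:GPU:0' will fail
# with the "no devices matching that specification" error above.
for device in device_lib.list_local_devices():
    print(device.name, device.device_type)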

How do I correctly assign a device to a node?


Try using sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True, log_device_placement=True)). This resolves the problem when an operation cannot run on the GPU, because some operations only have CPU implementations.

Using allow_soft_placement=True allows TensorFlow to fall back to the CPU when no GPU implementation is available, as in the sketch below.
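
A minimal sketch of the suggested fix, reusing a graph like the one from the question (the constants and shapes here are illustrative, not the asker's my_model graph):

import tensorflow as tf

# Pin the graph to GPU:0, as in the failing model.
with tf.device('/gpu:0'):
  a = tf.constant([1.0, 2.0, 3.0], shape=[1, 3], name='a')
  b = tf.constant([1.0, 2.0, 3.0], shape=[3, 1], name='b')
  c = tf.matmul(a, b)

# allow_soft_placement=True lets TensorFlow move ops to the CPU when the
# requested GPU placement cannot be satisfied, instead of raising
# InvalidArgumentError; log_device_placement=True prints where each op runs.
config = tf.ConfigProto(allow_soft_placement=True,
                        log_device_placement=True)
sess = tf.Session(config=config)
print(sess.run(c))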
