I tried the following, hoping to see a grayscale version of the source image:
from PIL import Image
import numpy as np
img = Image.open("img.png").convert('L')
arr = np.array(img.getdata())
field = np.resize(arr, (img.size[1], img.size[0]))
out = field
img = Image.fromarray(out, mode='L')
img.show()
But for some reason, the whole image comes out as scattered dots with black in between. Why does this happen?
When you create the NumPy array from the image data in your Pillow object, be advised that the default dtype of the array is int32. I'm assuming that your data is actually uint8, as most images seen in practice are. Therefore, you must explicitly ensure that the array has the same type as your image: make the array uint8 when you get the image data, which is the fourth line in your code1.
arr = np.array(img.getdata(), dtype=np.uint8) # Note the dtype input
1. Take note that I've added two lines at the beginning of your code to import the necessary packages for this code to work (albeit with an image stored locally).
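As a side note, the dtype bookkeeping can be avoided entirely: passing the Image object straight to `np.asarray` yields a 2-D uint8 array with the correct shape, so `getdata()` and `np.resize` are unnecessary. A minimal sketch, using a small in-memory image as a stand-in for `img.png`:

```python
from PIL import Image
import numpy as np

# Build a small grayscale test image in memory so the example is
# self-contained (a stand-in for the original "img.png").
img = Image.fromarray(np.arange(256, dtype=np.uint8).reshape(16, 16), mode='L')

# np.asarray on a Pillow Image returns a correctly shaped uint8 array
# directly; no getdata()/resize/dtype handling is needed.
arr = np.asarray(img)
print(arr.dtype)   # uint8
print(arr.shape)   # (16, 16)

# Round-trip back to a Pillow image works without any dotted artifacts.
out = Image.fromarray(arr, mode='L')
out.show()
```

Since `arr` is already uint8, `Image.fromarray` interprets each byte as one pixel, which is exactly what the original code was missing.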