Get The (x, y) Coordinate Values From An Image Array's RGB Value Using Numpy

I am new to Python, so I really need help with this one. I have an image that is greyscaled and thresholded so that the only colors present are black and white. I'm not sure how to go about extracting the (x, y) coordinates of the white pixels.

Solution 1:

Surely you must already have the image data in the form of a list of intensity values? If you're using PIL (Pillow), you can call getdata() on the Image object to obtain this intensity information. Some people advise using NumPy methods instead, which may improve performance; if you want to look into that, go for it. My answer applies either way.

If you already have a function to convert a greyscale image to B&W, then you should have the intensity information for that output image: a list of 0's and 1's, running from the top-left corner to the bottom-right. If you have that, you already have your location data; it just isn't in (x, y) form yet. To convert it, use something like this:

data = list(image.getdata())
width, height = image.size  # PIL exposes the dimensions via the size attribute

pixelList = []
for i in range(height):
    for j in range(width):
        # data is stored row by row, so this index walks the flat list in order
        stride = (width * i) + j
        pixelList.append((j, i, data[stride]))

Here data is a list of 0's and 1's (B&W). Don't just copy what I've written; understand what the loops are doing. The result is a list, pixelList, of tuples, each containing location and intensity information in the form (x, y, intensity). That may be a messy form for what you are doing, but that's the idea. It would be much cleaner and more accessible to pass the three values (x, y, intensity) to a Pixel object or something similar instead of building a list of tuples. Then you can get any of those values from anywhere. I would encourage you to do that, both for better organization and so you can write the code on your own.
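To illustrate the "Pixel object" idea, here is a minimal sketch using a namedtuple; the Pixel record and build_pixel_list helper are hypothetical names, not part of PIL, and the 2x2 image data is made up:

```python
from collections import namedtuple

# Hypothetical Pixel record, as suggested above; a namedtuple keeps the
# three values together while staying lightweight.
Pixel = namedtuple("Pixel", ["x", "y", "intensity"])

def build_pixel_list(data, width, height):
    """Convert a flat row-by-row intensity list into Pixel records."""
    pixels = []
    for i in range(height):
        for j in range(width):
            pixels.append(Pixel(x=j, y=i, intensity=data[width * i + j]))
    return pixels

# Made-up 2x2 image: top row white, bottom row black
pixels = build_pixel_list([1, 1, 0, 0], width=2, height=2)
print(pixels[0])  # Pixel(x=0, y=0, intensity=1)
```

Attribute access (pixel.x, pixel.intensity) then replaces the positional indexing used with plain tuples.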

In either case, having the intensity and location stored together makes sorting out the white pixels very easy. Here it is using the list of tuples:

whites = []
for pixel in pixelList:
    if pixel[2] == 1:
        whites.append(pixel[0:2])

Then you have a list of white pixel coordinates.
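The same filtering can also be written as a one-line list comprehension; the pixelList values below are made up for illustration:

```python
# Hypothetical 2x2 image data in (x, y, intensity) form
pixelList = [(0, 0, 1), (1, 0, 0), (0, 1, 1), (1, 1, 0)]

# Keep only the (x, y) part of tuples whose intensity is 1 (white)
whites = [pixel[:2] for pixel in pixelList if pixel[2] == 1]
print(whites)  # [(0, 0), (0, 1)]
```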

Solution 2:

You can use PIL and np.where to get the result efficiently and concisely:

from PIL import Image
import numpy as np

# convert('RGB') ensures getdata() yields (R, G, B) tuples, giving an (N, 3) matrix
img = Image.open('/your_pic.png').convert('RGB')
pixel_mat = np.array(img.getdata())
width = img.size[0]

# Flat indices of pixels where any channel is non-zero, i.e. not black
pixel_ind = np.where((pixel_mat[:, :3] > 0).any(axis=1))[0]
coordinate = np.concatenate(
    [
        (pixel_ind % width).reshape(-1, 1),   # x: column within the row
        (pixel_ind // width).reshape(-1, 1),  # y: row index
    ],
    axis=1,
)

Pick the required pixels, get their flat indices, and then compute the coordinates from those indices. Because it avoids Python-level loops, this approach may be faster.

PIL is only used to get the pixel matrix and the image width; you can replace it with any library you are familiar with.
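If you load the image into a 2-D array (e.g. np.array(img)) instead of the flat getdata() list, NumPy can produce the coordinate pairs directly and the modulo/division arithmetic disappears. A sketch of that alternative, using np.nonzero on a made-up thresholded array:

```python
import numpy as np

# Made-up 2x2 thresholded image: 255 marks white pixels
arr = np.array([[0, 255],
                [255, 0]], dtype=np.uint8)

# np.nonzero returns the row (y) and column (x) indices of non-zero entries
ys, xs = np.nonzero(arr)
coordinate = np.column_stack([xs, ys])  # each row is an (x, y) pair
print(coordinate)  # [[1 0]
                   #  [0 1]]
```

Note the order swap: NumPy indexes as (row, column), i.e. (y, x), so the columns are flipped to get (x, y).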
