Blur Detection Of Image Using OpenCV
Solution 1:
I may be late to answer this one, but here is one potential approach. The blur_detector library on PyPI can be used to identify which regions in an image are sharp versus blurry. Here is the paper on which the library is based: https://arxiv.org/pdf/1703.07478.pdf
The way this library operates is that it looks at every pixel in the image at multiple scales and performs the discrete cosine transform at each scale. These DCT coefficients are then filtered so that only the high-frequency coefficients are kept. The high-frequency DCT coefficients from all scales are then fused together and sorted to form the multiscale-fused, sorted high-frequency transform coefficients.
A subset of these sorted coefficients is selected. This is a tunable parameter, and the user can experiment with it based on the application. The selected DCT coefficients are then sent through max pooling to retain the maximum activation across scales. This makes the algorithm quite robust at detecting blurry areas in an image.
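As a rough illustration of the idea (not the library's actual implementation, which operates per pixel and also sorts the fused coefficients), here is a minimal block-based sketch: take the DCT of each block at a few scales, discard the low-frequency corner, and fuse the resulting energy maps with an element-wise maximum. The function name, block size, and the fraction of coefficients discarded are all illustrative assumptions.

import cv2
import numpy as np

def high_freq_map(gray, block=8, num_scales=3):
    # Per-block high-frequency DCT energy, fused over scales by max pooling.
    gray = np.float32(gray) / 255.0
    h, w = gray.shape
    fused = np.zeros((h, w), np.float32)
    for s in range(num_scales):
        b = block * (2 ** s)                     # bigger blocks = coarser scale
        energy = np.zeros((h, w), np.float32)
        for y in range(0, h - b + 1, b):
            for x in range(0, w - b + 1, b):
                d = cv2.dct(gray[y:y + b, x:x + b])
                d[:b // 4, :b // 4] = 0          # drop the low-frequency corner
                energy[y:y + b, x:x + b] = np.abs(d).mean()
        fused = np.maximum(fused, energy)        # max pooling across scales
    return fused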
Here are the results that I see on the images that you have provided in the question:
Note: I have used a face detector from the default cascade classifiers in OpenCV to select a region of interest. The output of these two approaches (spatial blur detection + face detection) can be used to get a sharpness map for the image.
Here we can see that in the sharp images, the intensity of the pixels in the eye region is very high, whereas for the blurry image it is low.
You can threshold this to identify which images are sharp and which are blurry.
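For example, one hedged way to turn the blur map into a sharp/blurry decision is to average the map inside the detected face rectangle and compare it against a threshold. The helper name face_is_sharp and the threshold value 0.5 are assumptions you would tune on your own images.

import numpy as np

def face_is_sharp(blur_map, face_rect, threshold=0.5):
    x, y, w, h = face_rect
    roi = blur_map[y:y + h, x:x + w]
    # Higher map values correspond to sharper regions, so a low mean
    # inside the face rectangle points to a blurry face.
    return float(np.mean(roi)) > threshold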
Here is the code snippet which generated the above results:
pip install blur_detector

import blur_detector
import cv2

if __name__ == '__main__':
    # Load OpenCV's bundled frontal-face Haar cascade
    face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

    # First image: compute the spatial blur map, then draw the detected face on it
    img = cv2.imread('1.png', 0)
    blur_map1 = blur_detector.detectBlur(img, downsampling_factor=1, num_scales=3, scale_start=1)
    faces = face_cascade.detectMultiScale(img, 1.1, 4)
    for (x, y, w, h) in faces:
        cv2.rectangle(blur_map1, (x, y), (x + w, y + h), (255, 0, 0), 2)

    # Second image
    img = cv2.imread('2.png', 0)
    blur_map2 = blur_detector.detectBlur(img, downsampling_factor=1, num_scales=3, scale_start=1)
    faces = face_cascade.detectMultiScale(img, 1.1, 4)
    for (x, y, w, h) in faces:
        cv2.rectangle(blur_map2, (x, y), (x + w, y + h), (255, 0, 0), 2)

    # Third image
    img = cv2.imread('3.png', 0)
    blur_map3 = blur_detector.detectBlur(img, downsampling_factor=1, num_scales=3, scale_start=1)
    faces = face_cascade.detectMultiScale(img, 1.1, 4)
    for (x, y, w, h) in faces:
        cv2.rectangle(blur_map3, (x, y), (x + w, y + h), (255, 0, 0), 2)

    # Display the three blur maps with the face rectangles overlaid
    cv2.imshow('a', blur_map1)
    cv2.imshow('b', blur_map2)
    cv2.imshow('c', blur_map3)
    cv2.waitKey(0)
To understand the details of the algorithm behind the blur detector, please take a look at this GitHub page: https://github.com/Utkarsh-Deshmukh/Spatially-Varying-Blur-Detection-python
Solution 2:
You can try to define a floating-point threshold, so that every result falling below the threshold is classified as blurry. But if the pixel intensities come out very high every time, even for images that are not blurry, you could check against a second, higher threshold. Another way might be to detect the focus of the picture.
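As a sketch of that last idea, a common focus measure is the variance of the Laplacian: low variance means few strong edges, which usually indicates a blurry or out-of-focus image. The threshold of 100 below is only an illustrative assumption and depends heavily on image content and resolution.

import cv2

def is_blurry(path, threshold=100.0):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # Variance of the Laplacian: low variance means few strong edges,
    # which usually indicates a blurry or out-of-focus image.
    focus_measure = cv2.Laplacian(gray, cv2.CV_64F).var()
    return focus_measure < threshold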