
Monday, July 3, 2017

Generate similarity score in percentage from SIFT using opencv


I have been trying to find a way to generate a similarity score (in %) after comparing two images with SIFT, using Python (2.7.x) and OpenCV (2.4.9). I have only been able to find examples that draw lines between matches. How do I proceed from here?

1 Answer

Answer 1

OpenCV has an equivalent of the vl_ubcmatch function you may know from MATLAB (VLFeat): the brute-force descriptor matcher.

Here is an excerpt from the OpenCV documentation.

    # create BFMatcher object
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    # Match descriptors.
    matches = bf.match(des1, des2)

matches = bf.match(des1, des2) matches the two sets of descriptors and returns a list of DMatch objects. Each DMatch object has four attributes: distance, trainIdx, queryIdx and imgIdx. These are the equivalent of vl_ubcmatch's return values.
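The question asks for a percentage, which the list of DMatch objects does not give directly. Below is a minimal sketch of one way to turn the matches into a score, assuming OpenCV 2.4.x where SIFT is available as cv2.SIFT(); the ratio test and the "good matches over keypoints" percentage are a heuristic of mine, not a standard metric, and the image paths are placeholders. Note that SIFT descriptors are float vectors, so NORM_L2 is used here instead of the NORM_HAMMING shown in the excerpt above (which targets binary descriptors).

    import cv2

    # Load both images in grayscale (paths are placeholders).
    img1 = cv2.imread('query.jpg', 0)
    img2 = cv2.imread('train.jpg', 0)

    # OpenCV 2.4.x: SIFT lives in the main module (requires the nonfree build).
    sift = cv2.SIFT()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # SIFT descriptors are float vectors, so use the L2 norm.
    bf = cv2.BFMatcher(cv2.NORM_L2)
    matches = bf.knnMatch(des1, des2, k=2)

    # Lowe's ratio test keeps only unambiguous matches.
    good = [m[0] for m in matches
            if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]

    # One possible percentage: good matches relative to the smaller
    # keypoint set. This is a heuristic, not a standard metric.
    score = 100.0 * len(good) / max(min(len(kp1), len(kp2)), 1)
    print("Similarity: %.1f%%" % score)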

I hope you will find it helpful.


Saturday, June 25, 2016

Proper approach to feature detection with opencv


My goal is to find known logos in static images and videos. I want to achieve that by using feature detection with KAZE or AKAZE and RANSAC.

I am aiming for a similar result to: https://www.youtube.com/watch?v=nzrqH...

While experimenting with the detection example from the docs (which is great, by the way), I ran into several issues:

  • Object resolution: Differences in size between the known object and the resolution of the scene in which the object should be located sometimes break the detection algorithm - the object is not recognized in low-resolution images even though the image quality is still perfectly fine to a human eye.
  • Color contrast with the background: It seems that the detection can easily be thrown off by different background contrasts (e.g. the reference object is a black logo on a white background, while the logo in the scene is white on a black background). How can I make the detection more robust against different illumination and background contrasts?
  • Preprocessing: Should any kind of preprocessing be applied to the object / scene, for example enlarging the scene to a specific size? Is there any guideline on how to approach feature detection in several steps to get the best results?

1 Answer

Answer 1

I think your problem is more complicated than the feature-descriptor-matching-homography pipeline. It is closer to pattern recognition or classification.

You can check this extended review paper on shape matching:

http://www.staff.science.uu.nl/~kreve101/asci/vir2001.pdf

Firstly, the resolution of the images is very important, because the matching operation usually performs a pixel-intensity cross-correlation between your sample image (the logo) and the image being processed, so you end up with the best cross-correlated area.

In the same way, the background colour intensity is very important, because background illumination can severely affect your final result.
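As a rough illustration of that point: one common preprocessing step, which is an assumption on my part rather than something this answer prescribes, is to work in grayscale and equalise local contrast before extracting features. The file name and CLAHE parameters below are placeholders.

    import cv2

    # Grayscale removes the colour dependency; CLAHE (adaptive histogram
    # equalisation) evens out local illumination before feature extraction.
    img = cv2.imread('scene.jpg', 0)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    img_eq = clahe.apply(img)

    # Contrast polarity (black-on-white vs. white-on-black) is not fixed by
    # equalisation alone; one simple trick is to also try the inverted image.
    img_inv = cv2.bitwise_not(img_eq)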

Feature-based methods are widely researched:

http://docs.opencv.org/2.4/modules/features2d/doc/feature_detection_and_description.html

http://docs.opencv.org/2.4/modules/features2d/doc/common_interfaces_of_descriptor_extractors.html

So for example, you can try alternative methods such as:

HOG descriptors (Histogram of Oriented Gradients): https://en.wikipedia.org/wiki/Histogram_of_oriented_gradients

Pattern matching or template matching: http://docs.opencv.org/2.4/doc/tutorials/imgproc/histograms/template_matching/template_matching.html

I think the last one (template matching) is the easiest way to check your algorithm.
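For reference, a minimal sketch along the lines of that template matching tutorial could look as follows (the file names are placeholders, and plain matchTemplate is neither scale- nor rotation-invariant):

    import cv2

    # Logo template and scene image (paths are placeholders).
    template = cv2.imread('logo.png', 0)
    scene = cv2.imread('scene.jpg', 0)

    # Normalised cross-correlation; the result is a map of match scores.
    result = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
    min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(result)

    # max_loc is the top-left corner of the best match, max_val its score.
    h, w = template.shape
    cv2.rectangle(scene, max_loc, (max_loc[0] + w, max_loc[1] + h), 255, 2)
    cv2.imwrite('match.png', scene)
    print("Best match score: %.3f" % max_val)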

I hope these references help.

Cheers.

Unai.


Tuesday, April 12, 2016

How to use opencv feature matching for detecting copy-move forgery


In my OpenCV project, I want to detect copy-move forgery in an image. I know how to use OpenCV's FLANN matcher for feature matching between 2 different images, but I am very confused about how to use FLANN to detect copy-move forgery within a single image.

P.S.1: I get the SIFT keypoints and descriptors of the image and am stuck at using the feature matching class.

P.S.2: The type of feature matching is not important to me.

Thanks in advance.

Update:

These pictures are an example of what I need:

Input Image

Result

And there is code that matches features of two images and does something like what I need, but on two images (not on a single one); the code, written against the Android native OpenCV API, is below:

    vector<KeyPoint> keypoints;
    Mat descriptors;

    // Create a SIFT keypoint detector.
    SiftFeatureDetector detector;
    detector.detect(image_gray, keypoints);
    LOGI("Detected %d Keypoints ...", (int) keypoints.size());

    // Compute feature description.
    detector.compute(image, keypoints, descriptors);
    LOGI("Compute Feature ...");

    FlannBasedMatcher matcher;
    std::vector< DMatch > matches;
    matcher.match( descriptors, descriptors, matches );

    double max_dist = 0; double min_dist = 100;

    //-- Quick calculation of max and min distances between keypoints
    for( int i = 0; i < descriptors.rows; i++ )
    {
        double dist = matches[i].distance;
        if( dist < min_dist ) min_dist = dist;
        if( dist > max_dist ) max_dist = dist;
    }

    printf("-- Max dist : %f \n", max_dist );
    printf("-- Min dist : %f \n", min_dist );

    //-- Draw only "good" matches (i.e. whose distance is less than 2*min_dist,
    //-- or a small arbitrary value ( 0.02 ) in the event that min_dist is very
    //-- small)
    //-- PS.- radiusMatch can also be used here.
    std::vector< DMatch > good_matches;

    for( int i = 0; i < descriptors.rows; i++ )
    {
        if( matches[i].distance <= max(2*min_dist, 0.02) )
        { good_matches.push_back( matches[i] ); }
    }

    //-- Draw only "good" matches
    Mat img_matches;
    drawMatches( image, keypoints, image, keypoints,
                 good_matches, img_matches, Scalar::all(-1), Scalar::all(-1),
                 vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );

    //-- Show detected matches
    // imshow( "Good Matches", img_matches );
    imwrite(imgOutFile, img_matches);

1 Answer

Answer 1

I don't know if it's a good idea to use keypoints for this problem. I'd rather try template matching (using a sliding window over your image as the patch). Compared to keypoints, this method has the disadvantage of being sensitive to rotation and scale.
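A rough Python sketch of that sliding-window idea (the rest of this answer uses C++): each block of the image is matched against the whole image, the trivial self-match is suppressed, and a strong second match far away from the block is flagged. The block size, stride and thresholds are illustrative guesses, not values from the answer.

    import cv2
    import numpy as np

    img = cv2.imread('input.jpg', 0)
    block, stride, score_thresh, min_shift = 32, 16, 0.98, 40

    h, w = img.shape
    suspects = []
    for y in range(0, h - block, stride):
        for x in range(0, w - block, stride):
            patch = img[y:y + block, x:x + block]
            res = cv2.matchTemplate(img, patch, cv2.TM_CCOEFF_NORMED)
            # Suppress the trivial match of the patch with itself.
            res[max(0, y - block):y + block, max(0, x - block):x + block] = 0
            _, max_val, _, max_loc = cv2.minMaxLoc(res)
            # A strong match far away from the original block suggests a copy.
            if max_val > score_thresh and np.hypot(max_loc[0] - x, max_loc[1] - y) > min_shift:
                suspects.append(((x, y), max_loc, max_val))

    print("Suspect block pairs: %d" % len(suspects))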

If you want to use keypoints, you can :

  • find a set of keypoints (SURF, SIFT, or whatever you want),
  • compute the matching score against every other keypoint, with the knnMatch function of the brute-force matcher (cv::BFMatcher),
  • keep matches between distinct points, i.e. points whose distance is greater than zero (or a threshold).

    int nknn = 10;        // max number of matches for each keypoint
    double minDist = 0.5; // distance threshold

    // Match each keypoint with every other keypoint
    cv::BFMatcher matcher(cv::NORM_L2, false);
    std::vector< std::vector< cv::DMatch > > matches;
    matcher.knnMatch(descriptors, descriptors, matches, nknn);

    // Compute distance and store distant matches
    std::vector< cv::DMatch > good_matches;
    for (int i = 0; i < matches.size(); i++)
    {
        for (int j = 0; j < matches[i].size(); j++)
        {
            double dist = matches[i][j].distance;
            if (dist > minDist)
                good_matches.push_back(matches[i][j]);
        }
    }

    Mat img_matches;
    drawMatches(image_gray, keypoints, image_gray, keypoints, good_matches, img_matches);