
Saturday, June 25, 2016

Proper approach to feature detection with OpenCV


My goal is to find known logos in static images and videos. I want to achieve that using feature detection with KAZE or AKAZE plus RANSAC.

I am aiming for a result similar to: https://www.youtube.com/watch?v=nzrqH...

While experimenting with the detection example from the docs (which is great, by the way), I ran into several issues:

  • Object resolution: A mismatch between the size of the known object and the resolution of the scene where the object should be located sometimes breaks the detection: the object is not recognized in low-resolution images, even though the image quality is still fine to the human eye.
  • Color contrast with the background: It seems that the detection can easily be thrown off by different background contrasts (e.g., the reference logo is black on a white background while the logo in the scene is white on a black background). How can I make the detection more robust against different illumination and background contrasts?
  • Preprocessing: Should any preprocessing be done on the object or the scene, for example enlarging the scene to a specific size? Is there any guideline on how to approach feature detection in several steps to get the best results?

1 Answer

Answer 1

I think your problem is more complicated than a feature-descriptor-matching-homography pipeline. It is closer to pattern recognition or classification.

You can check this extended review paper on shape matching:

http://www.staff.science.uu.nl/~kreve101/asci/vir2001.pdf

Firstly, the resolution of the images is very important, because matching usually performs a pixel-intensity cross-correlation between your sample image (the logo) and the image being processed, so you get back the best-correlated area.

Likewise, the background colour intensity is very important, because background illumination can severely affect your final result.
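
For the illumination and contrast issue in particular, a common first step is to work in grayscale, upscale small scenes, and equalize the histogram before running the detector. Here is a minimal, untested sketch using the Emgu CV (C#) bindings (the same bindings as the post further down this page); the helper name and the minWidth parameter are placeholders of mine:

using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;

static class Preprocess
{
    // Hypothetical helper: names and thresholds are illustrative, not from the question.
    public static Mat Normalize(string path, int minWidth)
    {
        // Grayscale removes hue, so a black-on-white logo and a white-on-black
        // one differ only in polarity.
        Mat gray = CvInvoke.Imread(path, LoadImageType.Grayscale);

        // Upscale small scenes so the detector has enough pixels to work with.
        if (gray.Width < minWidth)
        {
            double scale = (double)minWidth / gray.Width;
            Mat resized = new Mat();
            CvInvoke.Resize(gray, resized, new Size(0, 0), scale, scale, Inter.Cubic);
            gray = resized;
        }

        // Histogram equalization flattens global illumination differences.
        Mat equalized = new Mat();
        CvInvoke.EqualizeHist(gray, equalized);
        return equalized;
    }
}

Note that gradient-based descriptors are generally not invariant to contrast inversion, so a white-on-black logo may still need to be matched against an inverted copy of the black-on-white template.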

Feature-based methods are widely researched:

http://docs.opencv.org/2.4/modules/features2d/doc/feature_detection_and_description.html

http://docs.opencv.org/2.4/modules/features2d/doc/common_interfaces_of_descriptor_extractors.html

So for example, you can try alternative methods such as:

HOG descriptors (Histograms of Oriented Gradients): https://en.wikipedia.org/wiki/Histogram_of_oriented_gradients

Pattern matching or template matching: http://docs.opencv.org/2.4/doc/tutorials/imgproc/histograms/template_matching/template_matching.html

I think the latter (template matching) is the easiest way to sanity-check your algorithm.
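
For instance, a minimal template-matching sketch (again Emgu CV / C#; untested, and the image paths and 0.8 threshold are placeholders of mine):

using System;
using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;

static class TemplateMatchDemo
{
    public static void Run()
    {
        using (Mat scene = CvInvoke.Imread(@"images\scene.jpg", LoadImageType.Grayscale))
        using (Mat logo = CvInvoke.Imread(@"images\logo.jpg", LoadImageType.Grayscale))
        using (Mat result = new Mat())
        {
            // Normalized correlation coefficient: scores near 1.0 mean a strong match.
            CvInvoke.MatchTemplate(scene, logo, result, TemplateMatchingType.CcoeffNormed);

            double minVal = 0, maxVal = 0;
            Point minLoc = Point.Empty, maxLoc = Point.Empty;
            CvInvoke.MinMaxLoc(result, ref minVal, ref maxVal, ref minLoc, ref maxLoc);

            // maxLoc is the top-left corner of the best-scoring region;
            // accept the match only above some threshold, e.g. 0.8.
            Console.WriteLine("Best score {0:0.00} at {1}", maxVal, maxLoc);
        }
    }
}

Keep in mind that plain template matching is neither scale- nor rotation-invariant, so it is mainly useful as a baseline when the logo appears at roughly the template's size and orientation.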

Hope these references help.

Cheers.

Unai.


Tuesday, March 29, 2016

Matching images and determining the best match using SURF


I have been trying to use the Emgu CV SURFFeature example to determine whether an image is in a collection of images, but I am having trouble understanding how to tell when a match has been found.

[Images: original image · Scene_1 (match) · Scene_2 (no match)]

I have read the documentation and spent hours searching for a way to determine whether the images are the same. As you can see in the following pictures, a match is drawn for both.

[Images: match visualizations for Scene_1 and Scene_2]

It's clear that the one I'm trying to find gets more matches (connecting lines), but how do I check this in code?

Question: How do I filter out the good matches?

My goal is to be able to compare an input image (captured from a webcam) with a collection of images in a database. But before I can save all the images to the DB, I need to know which values I can compare the input against (e.g., save the objectKeypoints in the DB).

Here is my sample code (the matching part):

private void match_test()
{
    long matchTime;
    using (Mat modelImage = CvInvoke.Imread(@"images\input.jpg", LoadImageType.Grayscale))
    using (Mat observedImage = CvInvoke.Imread(@"images\2.jpg", LoadImageType.Grayscale))
    {
        Mat result = DrawMatches.Draw(modelImage, observedImage, out matchTime);
        //ImageViewer.Show(result, String.Format("Matched using {0} in {1} milliseconds", CudaInvoke.HasCuda ? "GPU" : "CPU", matchTime));
        ib_output.Image = result;
        label7.Text = String.Format("Matched using {0} in {1} milliseconds", CudaInvoke.HasCuda ? "GPU" : "CPU", matchTime);
    }
}

public static void FindMatch(Mat modelImage, Mat observedImage, out long matchTime, out VectorOfKeyPoint modelKeyPoints, out VectorOfKeyPoint observedKeyPoints, VectorOfVectorOfDMatch matches, out Mat mask, out Mat homography)
{
    int k = 2;
    double uniquenessThreshold = 0.9;
    double hessianThresh = 800;

    Stopwatch watch;
    homography = null;

    modelKeyPoints = new VectorOfKeyPoint();
    observedKeyPoints = new VectorOfKeyPoint();

    using (UMat uModelImage = modelImage.ToUMat(AccessType.Read))
    using (UMat uObservedImage = observedImage.ToUMat(AccessType.Read))
    {
        SURF surfCPU = new SURF(hessianThresh);

        // extract features from the object image
        UMat modelDescriptors = new UMat();
        surfCPU.DetectAndCompute(uModelImage, null, modelKeyPoints, modelDescriptors, false);

        watch = Stopwatch.StartNew();

        // extract features from the observed image
        UMat observedDescriptors = new UMat();
        surfCPU.DetectAndCompute(uObservedImage, null, observedKeyPoints, observedDescriptors, false);

        // match the two sets of SURF descriptors
        BFMatcher matcher = new BFMatcher(DistanceType.L2);
        matcher.Add(modelDescriptors);

        matcher.KnnMatch(observedDescriptors, matches, k, null);

        mask = new Mat(matches.Size, 1, DepthType.Cv8U, 1);
        mask.SetTo(new MCvScalar(255));

        Features2DToolbox.VoteForUniqueness(matches, uniquenessThreshold, mask);
        int nonZeroCount = CvInvoke.CountNonZero(mask);

        if (nonZeroCount >= 4)
        {
            nonZeroCount = Features2DToolbox.VoteForSizeAndOrientation(modelKeyPoints, observedKeyPoints,
               matches, mask, 1.5, 20);

            if (nonZeroCount >= 4)
                homography = Features2DToolbox.GetHomographyMatrixFromMatchedFeatures(modelKeyPoints,
                   observedKeyPoints, matches, mask, 2);
        }

        watch.Stop();
    }

    matchTime = watch.ElapsedMilliseconds;
}

I really have the feeling I'm not far from the solution... I hope someone can help me out.

1 Answer

Answer 1

On exit from Features2DToolbox.GetHomographyMatrixFromMatchedFeatures, the mask matrix is updated to have zeros where matches are outliers (i.e., they don't correspond well under the computed homography). Therefore, calling CountNonZero on mask again should give an indication of match quality.

I see you want to classify matches as "good" or "bad" rather than just compare multiple matches against a single image; from the examples in your question it looks like a reasonable threshold might be 1/4 of the keypoints found in the input image. You might want an absolute minimum as well, on the grounds that you can't really consider something a good match without a certain quantity of evidence. So, e.g., something like

bool FindMatch(...)
{
    bool goodMatch = false;
    // ...
    homography = Features2DToolbox.GetHomographyMatrixFromMatchedFeatures(...);
    int nInliers = CvInvoke.CountNonZero(mask);
    goodMatch = nInliers >= 10 && nInliers >= observedKeyPoints.Size / 4;
    // ...
    return goodMatch;
}

On branches that never get as far as computing the homography, goodMatch simply stays false, as initialized. The numbers 10 and 1/4 are somewhat arbitrary and will depend on your application.

(Warning: the above is entirely derived from reading the docs; I haven't actually tried it.)
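
To connect this to the database goal from your question, here is a purely hypothetical, equally untested sketch of scanning a stored collection and keeping the model image with the most inliers. CountInliers is an assumed helper, not an Emgu API: it would wrap your FindMatch and return CvInvoke.CountNonZero(mask) after homography estimation, or 0 when no homography was found.

using System.IO;
using Emgu.CV;
using Emgu.CV.CvEnum;

static class BestMatchSearch
{
    // Assumed helper: calls FindMatch from the question and returns
    // CvInvoke.CountNonZero(mask) after homography estimation,
    // or 0 when no homography was found.
    static int CountInliers(Mat model, Mat observed)
    {
        // ... wrap FindMatch here ...
        return 0;
    }

    public static string FindBest(Mat observed, string dbFolder)
    {
        string bestPath = null;
        int bestInliers = 0;

        foreach (string path in Directory.GetFiles(dbFolder, "*.jpg"))
        {
            using (Mat model = CvInvoke.Imread(path, LoadImageType.Grayscale))
            {
                int inliers = CountInliers(model, observed);
                if (inliers > bestInliers)
                {
                    bestInliers = inliers;
                    bestPath = path;
                }
            }
        }

        // Apply an absolute floor (as above) before trusting the winner.
        return bestInliers >= 10 ? bestPath : null;
    }
}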
