# Texture matching🔗

## Summary🔗

The texture matching recipe shows how to accurately find objects in an image based on the texture in the neighborhood of keypoints. The keypoints are automatically placed at locations with salient features such as corners and edges.

## Detailed description🔗

Our mission is to inspect the print quality of a logo reminiscent of a fruit produced by Malus domestica and to give a Pass or Fail verdict for the logo in each image.

The top-level processing graph is shown here. We’ll walk through it from top to bottom.

The processing graph of the texture matching demo.🔗

### Image input🔗

• The images originate from a live or virtual camera configured in Image Source.

• Convert Colors converts the input images from RGB to grayscale. This step is not strictly mandatory, because all processing blocks downstream can also accept color images. However, because matching is based on grayscale texture, the image is converted once for all downstream blocks. Furthermore, this block is an excellent place to set a breakpoint by clicking on the red dot. The breakpoint stops the processing after each image so you can examine the outputs of the other blocks.
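To illustrate what a color-to-grayscale conversion does, here is a minimal sketch. The Rec. 601 luma weights used below are an assumption for illustration; the recipe does not state which formula the Convert Colors block actually applies.

```javascript
// Convert an array of RGB pixels to grayscale using the common
// Rec. 601 luma weights (an assumption; the real Convert Colors
// block may use a different formula).
function toGrayscale(rgbPixels) {
  return rgbPixels.map(function (p) {
    // p = [r, g, b], each channel in 0..255
    return Math.round(0.299 * p[0] + 0.587 * p[1] + 0.114 * p[2]);
  });
}

// Pure red, green and blue map to different gray levels.
console.log(toGrayscale([[255, 0, 0], [0, 255, 0], [0, 0, 255]]));
// prints [ 76, 150, 29 ]
```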

Image input from a camera🔗

### Match with a trained model🔗

• Linear filter removes noise and possible artefacts such as those caused by sharpening. The filter size is small, so very few details are lost.

• Match Textures is the crux of this recipe. It identifies keypoints in the input image, calculates LBP descriptors from the texture around the keypoints and compares them with the descriptors of the objects the matcher has been trained with. More on matcher training later in this recipe.

• JavaScript inputs the frame and size (i.e. the bounding box) of the found object (if any) and enlarges the bounding box so that the height and the width are extended by 10% on each side and the original box resides in the middle of the new box. The Boolean output Found is either true or false depending on whether Match Textures found the object or not.

~~~ {.js}
frame = $i.frame;
size = $i.size;
$o.found = true;
var expandFactor = 0.1;
if (frame.rows != 4 || frame.columns != 4) {
  $o.found = false;
  frame = VisionAppster.doubleMatrix(4, 4);
  for (var i = 0; i < 4; ++i)
    frame.setEntry(i, i, 1);
} else {
  frame.setEntry(0, 3, frame.entry(0, 3) - expandFactor *
                 (frame.entry(0, 0) * size.entry(0, 0) +
                  frame.entry(0, 1) * size.entry(0, 1)));
  frame.setEntry(1, 3, frame.entry(1, 3) - expandFactor *
                 (frame.entry(1, 0) * size.entry(0, 0) +
                  frame.entry(1, 1) * size.entry(0, 1)));
}

if (size.rows != 1 || size.columns != 2) {
  $o.found = false;
  size = VisionAppster.doubleMatrix(1, 2);
} else {
  size.setEntry(0, 0, size.entry(0, 0) * (1 + 2 * expandFactor));
  size.setEntry(0, 1, size.entry(0, 1) * (1 + 2 * expandFactor));
}

$o.frame = frame;
$o.size = size;
~~~

Match with the trained model🔗

### Project into a constant sized image🔗

• Project To Virtual View crops and resizes the bounding box into a size proportional to the size of the template model, i.e. the image that was used in model training. The output image now looks the same regardless of the size and the orientation of the input image.

• Replace Coordinate Frame replaces the existing coordinate system with a new coordinate system whose origin is at the center of the cropped image and whose coordinate axes are orthonormal.

Extract the object and set standard coordinate system🔗

### Split the logo into blobs🔗

The image now contains only the logo and some margin. The logo should contain two dark blobs (the stalk and the apple), and the background should be a single connected bright area. The threshold for the blob detectors is calculated with the Histogram and Find Threshold tools. The blobs are detected with Detect Blobs tools, and the shapes and sizes of the blobs are analyzed in Analyze Blob Geometry tools. There are two instances of both tools, one for dark and the other for bright blobs.

Split the logo into blobs🔗

### Analyze the results🔗

The final judgement on whether the quality of the logo is good enough is made in an instance of the JavaScript tool.

Analyze the results in a script🔗

It inputs the areas, the sizes of the bounding boxes and the ratios of the lengths of the orientation axes of the blobs. The bigger the ratio, the more oblong the blob is. For instance, the stalk should be about twice as high as it is wide, whereas the height and width of the apple should be about the same. The script makes several checks on the number of detected blobs as well as their shapes and sizes before passing a verdict on the print quality.
~~~ {.js}
$o.result = false;
$o.problem = "Unspecified defect detected";

// No pattern found
if (!$i.found) {
  $o.problem = "Logo not found";
  return;
}

// There should be two white-on-black blobs and one black-on-white blob.
if ($i.size1.rows != 2 || $i.size2.rows != 1) {
  if ($i.size1.rows < 2)
    $o.problem = "Part of the logo missing";
  else if ($i.size1.rows > 2)
    $o.problem = "Black fragments detected";
  else if ($i.size2.rows > 1)
    $o.problem = "Possible hole(s) detected";
  else
    $o.problem = "Blob count mismatch";
  return;
}

// The ratio of the areas of the big and small blobs should be within limits.
var smallBlobIndex = 0, bigBlobIndex = 1;
if ($i.area1.entry(0, 0) > $i.area1.entry(1, 0)) {
  bigBlobIndex = 0;
  smallBlobIndex = 1;
}
var smallArea = $i.area1.entry(smallBlobIndex, 0);
var bigArea = $i.area1.entry(bigBlobIndex, 0);
var areaRatio = bigArea / smallArea;

if (areaRatio < 16 || areaRatio > 20) {
  $o.problem = "Size ratio of the stalk and the apple is out of limits";
  return;
}

// Axis ratios of the small and big blobs should be within limits.
// The stalk of the apple is oblong.
var axisRatioSmall = $i.ratio1.entry(smallBlobIndex, 0);
if (axisRatioSmall < 2 || axisRatioSmall > 2.5) {
  $o.problem = "Shape of the stalk is out of limits";
  return;
}

// The apple is nearly round.
var axisRatioBig = $i.ratio1.entry(bigBlobIndex, 0);
if (axisRatioBig < 0.95 || axisRatioBig > 1.15) {
  $o.problem = "Shape of the apple is out of limits";
  return;
}

// The sizes of the bounding boxes of both blobs should be within limits.
var sizeRatioSmall = $i.size1.entry(smallBlobIndex, 0) / $i.size1.entry(smallBlobIndex, 1);
var sizeRatioBig = $i.size1.entry(bigBlobIndex, 0) / $i.size1.entry(bigBlobIndex, 1);
if (sizeRatioSmall < 0.80 || sizeRatioSmall > 1.08) {
  $o.problem = "Shape of the stalk's bounding box is out of limits";
  return;
}

if (sizeRatioBig < 1 || sizeRatioBig > 1.12) {
  $o.problem = "Shape of the apple's bounding box is out of limits";
  return;
}

$o.result = true;
$o.problem = "No problems";
~~~

Here are a few examples of passed (green) and failed (red) cases.

Print quality is good🔗

Print quality is good🔗

Print quality fails🔗

Print quality fails🔗

Print quality fails🔗

Print quality fails🔗

## Matcher configuration🔗

The matcher configuration mode is activated by clicking on the cogwheel icon on the matcher tool.

Matcher configuration mode🔗

The first step is to add one or more model images. In general, the matcher can contain several classes of objects, and there can be several model images in each class. In this case we use only one class and need only one image of the logo.

There are several parameters that affect the probability of correct matching. The set of values shown below works reasonably well in this use case.

Configure parameters🔗

Let us walk through each parameter and describe its function.

### Detector parameters🔗

The detector is based on the Local Binary Pattern (LBP) algorithm. First, keypoints are placed at locations which appear to have distinct, salient features. The keypoints are shown as orange dots in the above picture. Then a histogram of LBP values in a neighborhood of each keypoint is calculated and stored in a database. Matching is based on the similarity of the histograms at the keypoints found in the model images and in the image under inspection.
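The LBP computation described above can be sketched as follows. This is an illustrative implementation of the standard 8-neighbor, 256-bin variant only; the actual detector’s sampling pattern and interpolation details are not specified in this recipe.

```javascript
// Sketch: standard 8-neighbor LBP code for one pixel, and a 256-bin
// histogram over a rectangular region (inclusive bounds). Illustrative
// only; the real detector's neighborhood sampling is an assumption.
function lbpCode(img, x, y) {
  // Clockwise 8-neighborhood offsets starting from the top-left.
  var offsets = [[-1,-1],[0,-1],[1,-1],[1,0],[1,1],[0,1],[-1,1],[-1,0]];
  var center = img[y][x], code = 0;
  for (var i = 0; i < 8; ++i) {
    var n = img[y + offsets[i][1]][x + offsets[i][0]];
    if (n >= center)
      code |= 1 << i; // set bit i if the neighbor is at least as bright
  }
  return code; // 0..255
}

function lbpHistogram(img, x0, y0, x1, y1) {
  var hist = new Array(256).fill(0);
  for (var y = y0; y <= y1; ++y)
    for (var x = x0; x <= x1; ++x)
      ++hist[lbpCode(img, x, y)];
  return hist; // 256 bins, as in the Standard mode
}

var img = [
  [10, 10, 10, 10],
  [10, 200, 200, 10],
  [10, 200, 200, 10],
  [10, 10, 10, 10]
];
var h = lbpHistogram(img, 1, 1, 2, 2);
console.log(h.reduce(function (a, b) { return a + b; }, 0)); // 4 codes counted
```

The Uniform, RotationInvariant and Symmetric modes differ only in how the raw 8-bit codes are mapped to a smaller set of histogram bins.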

• noiseThreshold sets the expected noise level in terms of intensity levels. The higher the threshold, the less the locations of keypoints are affected by noise, but also the more potentially interesting keypoints are missed.

• scale affects the maximum number of keypoints the algorithm tries to set. The value is relative to a default setting so you should try scale = 1 first. If it looks like there are clearly too few keypoints, try 0.5 or if there are too many, try 2. Iterate until the number of keypoints looks good.

• roiRadius indicates the radius of the neighborhood in which the LBP histogram is calculated. The optimal value depends on the model images. The neighborhood should be large enough to capture the details that set the keypoint apart from other keypoints, but small enough that it does not overlap too much with the neighborhoods of other keypoints.

• sensitivity determines how heavily keypoint candidates are filtered. The smaller the sensitivity, the sharper an edge or corner must be for a keypoint to stick to it.

• mode selects the variant of the LBP algorithm. The variant affects the number of bins in the LBP histogram. The fewer bins there are, the faster the matching will be. On the other hand, more bins generally mean more reliable object detection. The options are:

• Standard - Default histogram of length 256. Gives the most reliable detection but is not invariant to rotation.

• Uniform - Otherwise similar to the standard mode but uses a reduced number of histogram bins.

• RotationInvariant - Algorithm that is insensitive to rotation of the detectable objects. It is slightly less reliable than the Standard mode.

• Uniform Rotation Invariant - The combination of the two algorithms above.

• Symmetric - An algorithm that compares the intensities of opposing neighbors around each pixel in the region of interest. Produces a short 16-bin histogram.

• keypointAlgorithm - Algorithm for finding interesting keypoints. The options are:

• Corners - Place keypoints at sharp corners. The required sharpness is determined by sensitivity.

• Grid - The keypoints originate from an equally spaced grid. The grid points are filtered so that only the points near sharp edges remain. The density of the initial grid is determined by scale and the required sharpness by sensitivity.
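The Grid algorithm can be sketched like this. The gradient-based sharpness measure and the direct spacing/threshold parameters are assumptions for illustration; the real tool derives them from scale and sensitivity in an unspecified way.

```javascript
// Sketch: Grid keypoint placement. Start from an equally spaced grid
// and keep only points whose local gradient magnitude exceeds a
// threshold. Hypothetical parameters; illustration only.
function gridKeypoints(img, spacing, threshold) {
  var points = [];
  for (var y = spacing; y < img.length - spacing; y += spacing) {
    for (var x = spacing; x < img[0].length - spacing; x += spacing) {
      // Central-difference gradient as a simple sharpness measure.
      var gx = img[y][x + 1] - img[y][x - 1];
      var gy = img[y + 1][x] - img[y - 1][x];
      if (Math.hypot(gx, gy) >= threshold)
        points.push({x: x, y: y});
    }
  }
  return points;
}

// An 8x8 image with a vertical edge between columns 3 and 4:
// only grid points near the edge survive the filtering.
var row = [0, 0, 0, 0, 255, 255, 255, 255];
var img = [];
for (var i = 0; i < 8; ++i)
  img.push(row.slice());
console.log(gridKeypoints(img, 2, 100)); // keypoints on the edge only
```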

### Matcher parameters🔗

Matcher parameters control how the matching keypoints are combined into a detected object.

• matchingMode - Tells the matcher if the same model can occur in the image just once or several times. If it can occur only once, the matching will be faster.

• maxMatches - The maximum number of objects belonging to any class. Zero means an unlimited number of objects.

• closestMatchCount - The number of closest matches against which each keypoint in the input image will be compared. The more match candidates are used, the more accurate but also slower the detection will be.

• maxEvaluations - The maximum number of evaluations when searching the database for matching feature vectors. This is another way for trading speed for accuracy. The default value is Unlimited, meaning maximal accuracy.

• rotationInvariant - If set to true, allows detected objects to be rotated relative to the model.

• maxRotationAngle - The maximum allowed rotation angle in degrees. If set to 180, all rotations will be accepted.

• scaleInvariant - If true, the matcher will allow a relative scale change between minScale and maxScale. Otherwise, scale changes will not be allowed.

• minScale - The minimum accepted scaling factor. If the size of a detected object is less than minScale times that of the matched model, no match will be reported.

• maxScale - The maximum accepted scaling factor. If the size of a detected object is greater than maxScale times that of the matched model, no match will be reported.

• minimizeGeometricError - Enables geometric refinement of the location and rotation of matched objects. Generally this should be true. Setting it to false makes the algorithm a bit faster but also less accurate.

• mergeNearbyMatches - If true, the matcher will merge detections whose scale, angle and location are close enough to each other. This is usually the right thing to do because the same model can often be found several times with slightly different sizes and shapes. The next three parameters control the merging process.

• scaleTolerance - The maximum allowed relative scale change in merging nearby detections.

• angleTolerance - The maximum allowed angle change in merging nearby detections, in degrees.

• distanceTolerance - The maximum allowed distance between origins of models to be merged.

• selectionProbability - The probability of choosing a model that fits the data well enough. The higher the probability, the more effort the algorithm makes to find a match.

• maxIterations - The maximum number of iterations the algorithm will run while trying to find a match regardless of the setting of selectionProbability.

• maxSamplings - The maximum number of keypoint pairs the algorithm will try while finding a model candidate. The more keypoints there are in the input image which don’t belong to any object the matcher is trying to find, the bigger this number should be.

• maxPointMatchDistance - A keypoint is accepted into an existing match candidate if its distance from a keypoint in the match does not exceed this value. Otherwise, the point will be considered an outlier and rejected. If the value is too big, outliers may get accepted into a match and the accuracy of the match deteriorates.

• minMatchedPoints - The minimum number of matched keypoints that are required for an accepted match. Increasing this value makes spurious detections less likely but also increases the probability that the object is not found at all.
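The merging controlled by scaleTolerance, angleTolerance and distanceTolerance can be sketched as follows. The data structures and the greedy merge order are hypothetical; the matcher’s actual merging strategy (e.g. whether merged parameters are averaged) is not documented in this recipe.

```javascript
// Sketch: merge detections whose scale, angle and location are within
// the given tolerances. Hypothetical data structures; the real
// matcher's internals are not exposed here.
function mergeNearbyMatches(matches, scaleTol, angleTol, distTol) {
  var merged = [];
  matches.forEach(function (m) {
    var hit = merged.find(function (g) {
      var ds = Math.abs(m.scale / g.scale - 1);    // relative scale change
      var da = Math.abs(m.angle - g.angle);        // angle change, degrees
      var dd = Math.hypot(m.x - g.x, m.y - g.y);   // distance between origins
      return ds <= scaleTol && da <= angleTol && dd <= distTol;
    });
    if (!hit)
      merged.push({x: m.x, y: m.y, scale: m.scale, angle: m.angle});
  });
  return merged;
}

var detections = [
  {x: 100, y: 100, scale: 1.00, angle: 0},
  {x: 102, y: 101, scale: 1.02, angle: 1},  // near-duplicate of the first
  {x: 300, y: 250, scale: 0.98, angle: -2}  // a separate object
];
console.log(mergeNearbyMatches(detections, 0.05, 5, 10).length); // 2
```

With loose tolerances, the near-duplicate detection is absorbed into the first one; tightening all three tolerances keeps all detections separate.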