Contour Matching

Summary

The contour matching demo shows how to find locations, sizes and orientations of predefined objects in an image. The matcher can be trained to identify and accurately locate objects based on the shape of their boundaries.

The contours of objects are detected in a binary image. Hence, it is essential that the objects of interest can be separated from the background.

Detailed description

The demo consists of three phases: preprocessing, matching and analyzing the results. We'll walk through these from top to bottom.

The processing graph of the contour matching demo.

Preprocessing

  • The images originate from a virtual camera configured in Image Source.

  • Color Separation splits color images into RGB color channels. In this application, we choose the blue channel as it gives the best separation of the objects against the dark background.

  • Grayscale Histogram calculates a histogram of the intensities of the pixels in the 8-bit grayscale input image.

  • Threshold Finding analyzes the histogram and suggests an optimal threshold for binarization.

  • Level Thresholding Tool binarizes the image using the suggested threshold.

  • Binary Morphology Tool fills any holes left over from binarization. Such holes are typically caused by random noise and reflections. This gives us relatively good-quality binary blobs from which contours can be reliably extracted.
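
For experimenting outside the demo environment, a roughly equivalent preprocessing chain can be sketched with OpenCV and NumPy. This is only an illustrative analogue, not the demo's implementation: the file name and kernel size are placeholders, and Otsu's method stands in for Threshold Finding.

    import cv2
    import numpy as np

    # Load a color image and take the blue channel (OpenCV stores pixels as BGR,
    # so the blue channel is index 0).
    image = cv2.imread("parts.png")   # placeholder file name
    blue = image[:, :, 0]

    # Otsu's method derives a threshold from the intensity histogram, playing
    # roughly the role of Threshold Finding + Level Thresholding in the demo.
    _, binary = cv2.threshold(blue, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Morphological closing fills small holes caused by noise and reflections,
    # analogous to the Binary Morphology Tool.
    kernel = np.ones((5, 5), np.uint8)
    filled = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)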

Matching

Contour Matching Tool is the crux of this demo. It takes any image as input, identifies the boundaries of the objects using a static gray level threshold and compares them with the objects the matcher has been trained with. If the input is already binarized, as is the case in this demo, the static threshold can be set to zero.
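
The matcher's internals are not exposed in the demo, but the general idea of extracting object boundaries from a binary image and comparing them against trained shapes can be illustrated with OpenCV. The sketch below uses Hu-moment shape matching as a stand-in; the actual tool uses its own trained, keypoint-based matcher and additionally reports position, size and orientation.

    import cv2

    def classify_contours(binary_image, model_contours):
        """Compare each detected contour against trained model contours.

        model_contours: a dict mapping class name -> model contour, assumed to
        be prepared beforehand from the training images.
        Returns a list of (class_name, score, contour) tuples.
        """
        # OpenCV 4.x: findContours returns (contours, hierarchy).
        contours, _ = cv2.findContours(binary_image, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        results = []
        for contour in contours:
            # A lower matchShapes score means a closer shape match.
            scores = {name: cv2.matchShapes(contour, model,
                                            cv2.CONTOURS_MATCH_I1, 0.0)
                      for name, model in model_contours.items()}
            best = min(scores, key=scores.get)
            results.append((best, scores[best], contour))
        return results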

The matcher is trained using binary images of the objects of interest. You can enter the training mode by clicking on the cogwheel icon of Contour Matching Tool. In the training mode, you can add and remove models and change the parameters of the matcher.

Changes to the model database of the contour matching tool instance are applied by pressing the "Build Database" button. The button appears if there are pending changes. Training mode can be closed by clicking on the "X" icon at the top-right corner of the window.

The matcher makes use of key points on the contour of a model image.

Analyzing results

The sizes, orientations and locations of the found objects are given in the Frame and Size outputs of the contour matching tool. The class names of the recognized objects are given in the Class Name output. Assume that the matcher finds four objects. This means that

  • the frames are presented as a 16-by-4 matrix (i.e. four 4-by-4 frames stacked up),
  • the sizes are presented as a 4-by-2 matrix (four 1-by-2 sizes stacked up) and
  • the class names as a 4-by-1 table of strings.
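
As an illustration of these shapes (outside the demo itself), the stacked outputs for four matches could be mocked up with NumPy as follows; the class names and numeric values are made up.

    import numpy as np

    # Four 4-by-4 frames stacked vertically -> one 16-by-4 matrix.
    frames = np.vstack([np.eye(4)] * 4)                            # shape (16, 4)

    # Four 1-by-2 sizes stacked vertically -> one 4-by-2 matrix.
    sizes = np.array([[120, 80], [95, 95], [140, 60], [80, 80]])   # shape (4, 2)

    # Four class names -> a 4-by-1 table of strings.
    class_names = np.array([["bolt"], ["nut"], ["washer"], ["bolt"]])

    print(frames.shape, sizes.shape, class_names.shape)   # (16, 4) (4, 2) (4, 1)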

An Iterate Tool has been configured to input the three connected outputs from the contour matcher (i.e. Frame, Size and Class Name). It splits the frame, size and class name matrices back into individual entries. So, continuing with the example above:

  • it splits the 16-by-4 frame matrix into four individual 4-by-4 matrices,
  • the 4-by-2 size matrix into four individual 1-by-2 matrices, and
  • the 4-by-1 table of class names into four individual strings.
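
Using the same mocked-up arrays, the splitting performed by the Iterate Tool corresponds to slicing the stacks back into per-match pieces:

    import numpy as np

    # The same mocked-up stacked outputs as in the previous sketch.
    frames = np.vstack([np.eye(4)] * 4)                            # (16, 4)
    sizes = np.array([[120, 80], [95, 95], [140, 60], [80, 80]])   # (4, 2)
    class_names = np.array([["bolt"], ["nut"], ["washer"], ["bolt"]])

    # Splitting the stacks into per-match pieces is what the Iterate Tool does.
    match_count = sizes.shape[0]
    frames_per_match = np.split(frames, match_count)   # four 4-by-4 matrices
    sizes_per_match = np.split(sizes, match_count)     # four 1-by-2 matrices
    names_per_match = [str(name) for name in class_names[:, 0]]

    for frame, size, name in zip(frames_per_match, sizes_per_match, names_per_match):
        # Each iteration sees one match, like the Element 0/1/2 outputs do.
        print(name, frame.shape, size.shape)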

Now all tools connected to the corresponding outputs (named Element 0, 1 and 2) can process the matches one by one instead of all at once. Click on the names of the outputs to see where they are connected.

  • Image Alignment Tool receives the original image and a frame/size pair that segments a found object. The output image contains nothing but the found object.

  • Script Tool (for routing the image) receives the image of the found object as well as the class name of the object. The names of the outputs of the script tool have been configured to be identical to the class names. Hence, the script code that routes the input image to the output corresponding to the class name becomes very simple:

    $o[$i.className.entry(0, 0)] = $i.image;

    The symbol $o stands for the output, $i for the input, and $i.className.entry(i, j) returns the value at index [i, j] in the input table called className.

    During the development of a script, it is often essential to print values of the variables. For instance:

    print("The object is " + $i.className.entry(0, 0));
  • Another instance of Grayscale Histogram calculates the histogram of intensities of segmented objects. The intensity of an RGB pixel is the average of the color channels.

  • Histogram Features Tool calculates statistical features such as the mean, variance, median and many more. In this case, only the mean output is connected.

  • Script Tool (for combining the name and mean of an object) receives the name of a found object and its mean intensity and combines them into a 1-by-2 table, which is sent to the info output. Again, the script code is very simple:

    // reserve space for the output
    $o.info = Kuvio.table(1, 2);
    // Write data to the output table
    $o.info.setEntry(0, 0, $i.className.entry(0, 0));
    $o.info.setEntry(0, 1, Math.round($i.mean));
  • Collect Tool is the counterpart of the Iterate Tool described above. It stacks up the entries received at its inputs into matrices or tables. In this example, the entries received at the Element 0 input are 1-by-2 tables (one row, two columns). If there are, say, four object matches in the original input image, the output at Result 0 will be a 4-by-2 table.
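
Outside the demo, the per-object analysis and the collecting step could be sketched roughly as follows. The object images and class names are placeholders, and in the demo this logic is spread across the histogram, script and collect tools rather than written as code.

    import numpy as np

    def analyze_objects(object_images, class_names):
        """Build a result table of (class name, rounded mean intensity) rows.

        object_images: list of H-by-W-by-3 RGB arrays, one per matched object.
        class_names: list of class name strings, one per matched object.
        """
        rows = []
        for image, name in zip(object_images, class_names):
            # Intensity of an RGB pixel = average of the color channels; the
            # mean over all pixels corresponds to the Histogram Features output.
            mean_intensity = image.mean(axis=2).mean()
            rows.append([name, round(mean_intensity)])
        # Stacking the 1-by-2 rows mirrors what the Collect Tool does:
        # four matches yield a 4-by-2 result table.
        return rows

    # Example with made-up data: four gray patches with placeholder class names.
    images = [np.full((10, 10, 3), v, dtype=np.uint8) for v in (40, 90, 150, 200)]
    print(analyze_objects(images, ["bolt", "nut", "washer", "bolt"]))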