Contour matching🔗
Summary🔗
The contour matching recipe shows how to accurately find the locations, sizes and orientations of objects in an image based on the shape of their contours. The contours are detected in a binary image, so it is essential that the objects of interest can be separated from the background.
Downloads🔗
Detailed description🔗
The recipe consists of three phases: preprocessing, matching and analyzing results. We’ll walk through these from top to bottom.

The processing graph of the contour matching demo.🔗
Preprocessing🔗
The images originate from a virtual camera configured in Image Source.
Separate Color Channels splits color images into RGB color channels. In this application, we choose the blue channel as it gives the best separation of the objects against the dark background.
Histogram calculates a histogram of the intensities of the pixels in the 8-bit grayscale input image.
Find Threshold analyzes the histogram and suggests an optimal threshold for binarization.
Binarize binarizes the image using the suggested threshold.
Binary Morphology fills the holes possibly left over from the binarization process; holes may remain due to, for instance, random noise or reflections. This gives us relatively good-quality binary blobs from which contours can be reliably extracted. The mask size is 5-by-5 pixels.
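The histogram–threshold–binarize chain above can be sketched in plain JavaScript on a tiny grayscale array. The actual algorithm behind Find Threshold is not documented here, so Otsu's method is used as a stand-in automatic threshold; the pixel values are made-up examples.

```javascript
// Histogram: count occurrences of each 8-bit intensity 0..255.
function histogram(pixels) {
  const h = new Array(256).fill(0);
  for (const p of pixels) h[p]++;
  return h;
}

// Otsu's method: pick the threshold that maximizes the
// between-class variance of background and foreground.
function otsuThreshold(hist) {
  const total = hist.reduce((a, b) => a + b, 0);
  let sumAll = 0;
  for (let i = 0; i < 256; i++) sumAll += i * hist[i];
  let sumBg = 0, weightBg = 0, best = 0, bestVar = -1;
  for (let t = 0; t < 256; t++) {
    weightBg += hist[t];
    if (weightBg === 0) continue;
    const weightFg = total - weightBg;
    if (weightFg === 0) break;
    sumBg += t * hist[t];
    const meanBg = sumBg / weightBg;
    const meanFg = (sumAll - sumBg) / weightFg;
    const between = weightBg * weightFg * (meanBg - meanFg) ** 2;
    if (between > bestVar) { bestVar = between; best = t; }
  }
  return best;
}

// Binarize: pixels above the threshold become foreground (1).
function binarize(pixels, t) {
  return pixels.map(p => (p > t ? 1 : 0));
}

// Dark background (~10) with bright objects (~200).
const pixels = [10, 12, 9, 200, 198, 11, 205, 8, 199, 10];
const t = otsuThreshold(histogram(pixels));
const binary = binarize(pixels, t);
```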
Matching🔗
Match Contours is the crux of this recipe. It takes a binarized image as input, identifies the boundaries of the objects and compares them with the objects the matcher has been trained with.
The matcher is trained using binary images of the objects of interest. You can enter the training mode by clicking on the cogwheel icon of the Match Contours tool. In the training mode, you can add and remove models and change the parameters of the matcher.
Changes to the model database of the Match Contours tool instance are applied by pressing the “Build Database” button. The button is active if there are pending changes. Training mode can be closed by clicking on the “X” icon at the top-right corner of the window.

The matcher makes use of key points on the contour of a model image.🔗
Analyzing results🔗
The sizes, orientations and locations of the found objects are given in the Frame and Size outputs of the matcher. The class names of the recognized objects are given in the Class Name output. Assume that the matcher finds four objects. Then the frames are presented as a 16-by-4 matrix (four 4-by-4 frames stacked up), the sizes as a 4-by-2 matrix (four 1-by-2 sizes stacked up) and the class names as a 4-by-1 table of strings.
A Begin Iterate tool has been configured to take as input the three connected outputs from the contour matcher (Frame, Size and Class Name). It splits the frame, size and class name matrices into single entities: the 16-by-4 frame matrix becomes four 4-by-4 matrices, the 4-by-2 size matrix becomes four 1-by-2 matrices, and the 4-by-1 table of class names becomes four strings. Now all tools connected to the corresponding outputs (named Element 0, 1 and 2) can process the matches one by one instead of all at once. Click on the names of the outputs to see where they are connected.
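The splitting step can be illustrated with nested arrays standing in for the matcher's matrices. This is only a sketch of the shape arithmetic, not the tool's implementation; the identity-frame content is a made-up example.

```javascript
// With four matches, the matcher emits a 16-by-4 frame matrix;
// iteration yields four 4-by-4 frames.

// Split a (rows*count)-by-cols matrix into `count` equal row blocks.
function splitRows(matrix, count) {
  const rows = matrix.length / count;
  const parts = [];
  for (let i = 0; i < count; i++) {
    parts.push(matrix.slice(i * rows, (i + 1) * rows));
  }
  return parts;
}

// A 16-by-4 matrix: four stacked 4-by-4 frames (identity frames here).
const frames = [];
for (let m = 0; m < 4; m++) {
  for (let r = 0; r < 4; r++) {
    frames.push([0, 1, 2, 3].map(c => (c === r ? 1 : 0)));
  }
}

const perMatch = splitRows(frames, 4); // four 4-by-4 matrices
```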
Project to Virtual View receives the original image and a frame/size pair that delimits a found object. The output image contains nothing but that object.
JavaScript (for splitting the image) receives the image of the found object as well as the class name of the object. The names of the outputs of the script tool have been configured to be identical to the class names. Hence, the script code that routes the input image to the output corresponding to the class name becomes very simple:
$o[$i.className.entry(0, 0)] = $i.image;
The symbol $o stands for the output, $i is the input, and $i.className.entry(i, j) returns the value at index [i, j] in the input matrix called className. During the development of a script, it is often essential to print the values of variables. For instance:
print("The object is " + $i.className.entry(0, 0));
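The routing pattern can be mimicked outside the tool with plain objects standing in for the script's $i and $o interfaces; the class name "spanner" and the image placeholder are illustrative values only.

```javascript
// Stand-ins for the script tool's input and output objects.
const $i = {
  className: { entry: (r, c) => "spanner" }, // mimics a 1-by-1 table
  image: "<image data>",
};
const $o = {}; // outputs named after the trained classes

// The class name selects the output the image is routed to.
$o[$i.className.entry(0, 0)] = $i.image;
```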
Input and output configuration of the script is shown below.

Inputs and outputs of the “split image” script🔗
Another instance of Histogram calculates the histogram of intensities of segmented objects. The intensity of an RGB pixel is the average of the color channels.
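The averaging rule stated above is trivial to write down; the sample pixel values below are made up for illustration.

```javascript
// Intensity of an RGB pixel as the plain average of its channels.
function intensity(r, g, b) {
  return (r + g + b) / 3;
}

const rgbPixels = [[30, 60, 90], [120, 150, 180]]; // two sample pixels
const meanIntensity =
  rgbPixels.map(([r, g, b]) => intensity(r, g, b))
           .reduce((a, b) => a + b, 0) / rgbPixels.length;
```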
Analyze Histogram calculates statistical features such as mean, variance, median and many more. In this case, only the mean output is connected.
JavaScript (for combining the name and mean of an object) receives the name of a found object and its mean intensity and combines them into a 1-by-2 table which is sent to the info output. Again, the script code is very simple:
// Reserve space for the output
$o.info = Kuvio.table(1, 2);
// Write data to the output table
$o.info.setEntry(0, 0, $i.className.entry(0, 0));
$o.info.setEntry(0, 1, Math.round($i.mean));
Input and output configuration of the script as well as its connection to the End Iterate tool are shown below.

Inputs and outputs of the “name + mean” script🔗
End Iterate is the counterpart of the Begin Iterate tool described above. It stacks up the entries received at its inputs into matrices or tables. In this example, the entries received at the Element 0 input are 1-by-2 tables (one row, two columns). If there are, say, four object matches in the original input image, the output at Result 0 will be a 4-by-2 table. Notice that the Sync input of End Iterate is connected to the corresponding output of Begin Iterate.
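The stacking step can be sketched in the same array notation as before. The class names and mean values are made-up examples, and this only illustrates the shape arithmetic, not the tool's implementation.

```javascript
// Four 1-by-2 tables, one per iteration, become a 4-by-2 result table.
function stack(tables) {
  const out = [];
  for (const t of tables) for (const row of t) out.push(row);
  return out;
}

const perIteration = [
  [["pliers", 142]],
  [["screwdriver", 98]],
  [["spanner", 120]],
  [["pliers", 139]],
];
const result = stack(perIteration); // a 4-by-2 table
```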
Finally, an example showing how the app identifies and separates four kinds of tools (two kinds of pliers, a screwdriver and a spanner).

Four different kinds of tools are correctly identified.🔗