Motion detector
Summary
This recipe shows how to make an app that detects motion by monitoring changes in successive input images and raises an alarm if an object of the specified type (e.g. a person, a dog or a car) is seen in the image. Furthermore, the image that triggered the alarm can be saved to a file.
Downloads
Detailed description
The big picture of the motion detector app is given below. The processing graph is too large to fit on one page, which is why we'll walk through it in smaller pieces.

Big picture of the motion detector
The images originate from the default webcam as configured in Image Source. Notice that the default camera is selected automatically by setting the Camera Id parameter to /^webcam/, a regular expression that finds the first webcam connected to the computer.

Convert Colors converts possibly encoded or compressed images to RGB format. This block is actually not needed as long as the images come from Image Source, which decodes the images automatically. However, Convert Colors serves as a convenient entry point in case you later want to use the app as an API to which a client can send compressed images for analysis.

Detect Motion compares the input image to the background image, which is a weighted average of all previous images. If more than 1% of the pixels (i.e. Motion Threshold = 0.01) differ from the background by more than 10 intensity levels (i.e. Threshold = 10), it declares that motion has been detected by setting the Motion Detected output to true. The Alpha1 and Alpha2 parameters control how fast the background image adapts to changes in input images.

Gate passes through the input image and the Motion Detected flag only if motion has been detected.
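
To make the idea concrete, here is a small standalone sketch in plain JavaScript. It only illustrates the principle described above and is not the implementation of the Detect Motion tool: images are simplified to flat arrays of gray levels and a single alpha value stands in for the Alpha1/Alpha2 parameters.

// Illustrative sketch only: approximates the idea of Detect Motion.
// `image` and `background` are plain arrays of gray-level values.
function detectMotion(image, background, threshold, motionThreshold, alpha)
{
  var changed = 0;
  for (var i = 0; i < image.length; ++i)
  {
    // Count pixels that differ from the background by more than `threshold`.
    if (Math.abs(image[i] - background[i]) > threshold)
      ++changed;
    // Let the background slowly adapt towards the current image.
    background[i] = alpha * image[i] + (1 - alpha) * background[i];
  }
  // Declare motion if the changed fraction exceeds `motionThreshold`.
  return changed / image.length > motionThreshold;
}

// With Threshold = 10 and Motion Threshold = 0.01, motion is detected when
// more than 1% of the pixels change by more than 10 intensity levels.
var background = [0, 0, 0, 0];
console.log(detectMotion([0, 0, 0, 200], background, 10, 0.01, 0.05)); // true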

Gate passes through only images in which motion has been detected.
The YOLO ONNX model tries to recognize objects in the image. The processing path from Scale Image to Process YOLO Result is identical to the YOLO Classifier recipe. The output is a list of bounding boxes of found objects as pairs of Frame and Size matrices, and a corresponding list of object classes as integers between 0 and 19.

YOLO recognizes an object in the image.
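
The Frame output stacks one 4×4 coordinate frame per detected object, and the Size output one 1×2 size row per object; this is why the decision script below extracts object i with frame.sub(4 * i, 4) and size.sub(i, 1). As a sketch that reuses only the accessors appearing in that script (so treat the exact row-based sub() semantics as an assumption), all detections could be listed like this:

// Sketch only: iterate over all objects found by YOLO, reusing the same
// $i accessors as the decision script below.
var yoloClasses = ["aeroplane", "bicycle", "bird", "boat", "bottle",
                   "bus", "car", "cat", "chair", "cow",
                   "table", "dog", "horse", "motorbike", "person",
                   "pottedplant", "sheep", "sofa", "train", "tv"];
for (var i = 0; i < $i.classIndex.rows; ++i)
{
  var className = yoloClasses[$i.classIndex.entry(i, 0)];
  var objFrame = $i.frame.sub(4 * i, 4); // 4x4 frame of the i'th object
  var objSize = $i.size.sub(i, 1);       // 1x2 size of the i'th object
  // ... use className, objFrame and objSize here
}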
Information about detected motion, the classes of objects that possibly caused the motion, as well as the locations of the objects are passed to JavaScript, which makes the final decision on whether to raise an alarm. The inputs, outputs and code of the script are given below.
YOLO recognizes 20 object classes. The name of the class can be configured using the Object Type parameter. The class names are listed in the script below. If the Object Type parameter is left empty, the alarm is raised whenever any motion is detected.

A script decides if an alarm should be raised.
// Default values for frame & size: a 4x4 identity frame and a zero size
var outFrame = VisionAppster.identity(4);
var outSize = VisionAppster.doubleMatrix(1, 2);
outSize.setEntry(0, 0, 0);
outSize.setEntry(0, 1, 0);

// The 20 object classes recognized by the YOLO model, in index order
var yoloClasses = ["aeroplane", "bicycle", "bird", "boat", "bottle",
                   "bus", "car", "cat", "chair", "cow",
                   "table", "dog", "horse", "motorbike", "person",
                   "pottedplant", "sheep", "sofa", "train", "tv"];

// Class id of the configured Object Type (-1 if empty or unknown)
var classId = yoloClasses.indexOf($i.objectType);

// Find the first detected object whose class matches the Object Type
var classIndex = -1;
for (var i = 0; i < $i.classIndex.rows; ++i)
  if ($i.classIndex.entry(i, 0) == classId)
  {
    classIndex = i;
    break;
  }

if (classIndex >= 0 && $i.motionDetected)
{
  // Motion and an object of interest: pass out its bounding box
  outFrame = $i.frame.sub(4 * classIndex, 4);
  outSize = $i.size.sub(classIndex, 1);
  $o.objectDetected = true;
}
else if (classId < 0 && $i.motionDetected)
  // No Object Type configured: any motion raises the alarm
  $o.objectDetected = true;
else
  $o.objectDetected = false;

$o.frame = outFrame;
$o.size = outSize;
If motion has been detected and the image contains an object of interest, the Object Detected flag is set to true. Remember, however, that the alarm may also have been triggered by the motion of something other than the recognized object.
In this example the frame and size outputs (i.e. the bounding box information) are not used. You can use them for example by dragging and dropping the Crop Image tool on the canvas and connecting the output of the Gate: Motion tool to the Image input of Crop Image. Then connect the Frame and Size outputs of the script to the corresponding inputs of the Crop Image tool.
The Object Detected flag is connected to yet another script. The script controls where the interesting images will be saved. The file path where the images will be saved is given in the File Path parameter. The image format will be jpg and the files are named image1.jpg, image2.jpg, image3.jpg and so on. The numbering will wrap around after Max File Count images have been saved, and old images will then be overwritten. No images will be saved if Max File Count is set to zero.

A script controls if and where the image should be saved.
// Initialize the persistent file counter on the first round
if ($s.fileCount === undefined)
{
  $s.fileCount = 0;
}

if ($i.maxFileCount > 0 && $i.objectDetected)
{
  // Advance the counter and wrap around after Max File Count files
  $s.fileCount++;
  if ($s.fileCount > $i.maxFileCount)
    $s.fileCount = 1;
  $o.filePath = $i.filePath + "/image" + $s.fileCount + ".jpg";
  $o.writeEnabled = true;
}
else
{
  // Either saving is disabled or nothing interesting was detected
  $o.filePath = "";
  $o.writeEnabled = false;
}
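
If you want to check the numbering logic in isolation, the script body can be exercised outside VisionAppster with mock $i, $s and $o objects. The sketch below is only a simulation; the file path and the Max File Count value are arbitrary examples.

// Standalone simulation of the file-naming logic with mock objects.
var $s = {};                                       // persistent state
var $i = { maxFileCount: 3, objectDetected: true,  // mock inputs
           filePath: "/tmp/alarms" };
for (var round = 0; round < 5; ++round)
{
  var $o = {};                                     // outputs of one round
  if ($s.fileCount === undefined)
    $s.fileCount = 0;
  if ($i.maxFileCount > 0 && $i.objectDetected)
  {
    $s.fileCount++;
    if ($s.fileCount > $i.maxFileCount)
      $s.fileCount = 1;
    $o.filePath = $i.filePath + "/image" + $s.fileCount + ".jpg";
    $o.writeEnabled = true;
  }
  else
  {
    $o.filePath = "";
    $o.writeEnabled = false;
  }
  // Prints image1.jpg, image2.jpg, image3.jpg, image1.jpg, image2.jpg
  console.log($o.filePath);
}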
A gate gets the Write Enabled flag from the script and uses it to decide whether to pass the image and file path to Save Image.

Images of interest are saved to jpg files.
Finally, an example of the app in action. The object of interest has been configured as a dog. The app has detected both motion and a dog, so an alarm has been raised (i.e. the Object Detected output is True) and the image has been saved to a file.

An example of a dog detection cam.