Detect motion

Accumulates a model of a static background of a scene with moving objects. The background model is based on the mean and covariance values of image pixels. The background model is updated according to the following formula:

$$ B_{t+1} = B_t + (\alpha_1 (1 - I_t) + \alpha_2 I_t)(I_t - B_t), $$

where $B_t$ is the background model at time $t$ and $I_t$ is the current intensity of a pixel. $\alpha_1$ and $\alpha_2$ are learning weights that control the speed at which foreground pixels are merged into the background. Note that the input image is normalized so that the maximum pixel intensity is always one. Color images are automatically converted to gray levels by averaging the color channels.
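The update rule above can be sketched per pixel with NumPy. This is an illustrative implementation, not the component's actual code; the function name and defaults are assumptions.

```python
import numpy as np

def update_background(background, frame, alpha1=0.01, alpha2=0.01):
    """One step of the background update rule described above.

    Both arrays hold intensities normalized to [0, 1]; color frames
    should be averaged to gray levels beforehand. The names alpha1 and
    alpha2 mirror the learning weights in the formula.
    """
    # Per-pixel learning rate: alpha1 dominates for dark pixels,
    # alpha2 for bright ones.
    alpha = alpha1 * (1.0 - frame) + alpha2 * frame
    # Move the background model towards the current frame.
    return background + alpha * (frame - background)
```

With `alpha1 = alpha2`, the rule reduces to a plain exponential moving average; unequal weights let dark and bright changes blend in at different speeds.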

Inputs

  • image: Input image.

  • threshold: The minimum intensity difference between the background model and the current frame that will be considered a change.

  • alpha1: The first learning weight. It determines the rate of adaptation towards dark intensity.

  • alpha2: The second learning weight. It determines the rate of adaptation towards bright intensity.

  • maxStillTime: The maximum number of successive frames a pixel can belong to the foreground. This makes it possible to remove burnt-in objects before the adaptation absorbs them into the background.

  • motionThreshold: The fraction of pixels that must be classified as foreground before the motionDetected output is triggered.

Outputs

  • background: The current background model.

  • mask: A movement mask image in which background pixels are zero. The value of a pixel indicates the number of successive frames the pixel has been classified as foreground.

  • motionDetected: This output emits a boolean value indicating whether there is significant movement in the current frame. The emitted value is true if the relative number of detected foreground pixels is above motionThreshold, and false otherwise.
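A single processing step tying the inputs and outputs together could look like the sketch below. Function and parameter names mirror the documented inputs but are assumptions about the component's behavior, not its real API; the background update itself is the formula from above folded in.

```python
import numpy as np

def detect_motion(background, mask, frame, threshold=0.1,
                  alpha1=0.01, alpha2=0.01,
                  max_still_time=100, motion_threshold=0.05):
    """One frame of motion detection (illustrative sketch).

    background: current background model, floats in [0, 1].
    mask: per-pixel count of successive foreground frames (integers).
    Returns the updated (background, mask, motion_detected).
    """
    # Pixels differing from the model by more than `threshold` are foreground.
    foreground = np.abs(frame - background) > threshold
    # Foreground pixels increment their still-counter; background pixels reset.
    mask = np.where(foreground, mask + 1, 0)
    # Kill burnt-in objects: pixels stuck in the foreground longer than
    # max_still_time are forced straight into the background model.
    stuck = mask > max_still_time
    background = np.where(stuck, frame, background)
    mask[stuck] = 0
    # Regular background adaptation, per the update formula.
    alpha = alpha1 * (1.0 - frame) + alpha2 * frame
    background = background + alpha * (frame - background)
    # Motion is flagged when the foreground fraction exceeds motion_threshold.
    motion_detected = bool(foreground.mean() > motion_threshold)
    return background, mask, motion_detected
```

In a loop over video frames, `background` and `mask` would be carried from one call to the next, initialized to the first frame and to zeros respectively.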