Image classifier

Summary

The image classifier demo shows how to identify objects that belong to one of the 1000 ImageNet categories. The demo uses a deep convolutional neural network called ResNet-50, stored in ONNX format. Many more classifier models are available on GitHub.

Detailed description

Let us walk through the processing graph below from top to bottom.

Processing graph

  • The images originate from the virtual camera configured in Image Source.

  • Convert Colors takes an image in any supported format and converts it to uncompressed RGB. This tool is necessary because our intent is to analyze images sent by arbitrary users.

Converting any image type to RGB

  • Scale Image Tool scales the image to the size required by the image classifier model. In this case, the size is 224 × 224 pixels.

    The ResNet model requires a fixed image size of 224×224 pixels.

  • Image to Tensor converts a color image into a tensor of shape 1 × 3 × height × width: 1 is the number of images per tensor (N, the batch size), 3 is the number of color channels (C) in an RGB image, and height (H) and width (W) are both 224 pixels. Intensity normalization is also applied to each pixel, as required by the classifier model.

    The ResNet model requires an NCHW input tensor with normalized intensity.

  • Run Onnx Model runs the ResNet-50 model. The model file is selected in a file dialog that opens when you click the Model File input parameter. Input and output sockets appear after the model file has been loaded. The output of this model is a tensor of shape 1 × 1000 that contains confidence levels for each of the 1000 ImageNet object classes. Note that in this case the model has been saved as a resource file and will thus be bundled with the app. Alternatively, you can load the model from a local disk.

    Parameter displays show the shapes of input and output tensors.

  • Tensor to Classification takes the tensor of confidences as input, selects those that exceed the given Confidence Threshold and sorts them into descending order. At most Max Result Count elements are taken. The ONNX model used in this example expects softmax probability scores to be calculated as a post-processing step, so the Scaling parameter is set to Softmax. The Index and Confidence outputs contain the indices of the selected tensor elements and the corresponding confidence values.

    Tensor to Classification picks the most likely classifications.

  • Finally, JavaScript takes the selected indices as input and converts up to Max Result Count of the most likely results into readable strings.

    A JavaScript tool is configured to take three inputs and produce a table as an output.
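Conceptually, the selection performed by Tensor to Classification with Softmax scaling can be sketched in a few lines of JavaScript. This is only an illustration of the math, not the tool's actual implementation; the function and parameter names are made up:

```javascript
// Sketch of the Tensor to Classification post-processing step.
// "logits" stands for the model's 1 x 1000 output tensor, flattened to an array.
function softmax(logits) {
  const max = Math.max(...logits); // subtract the maximum for numeric stability
  const exps = logits.map(v => Math.exp(v - max));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map(v => v / sum);
}

function classify(logits, confidenceThreshold, maxResultCount) {
  const probs = softmax(logits);
  return probs
    .map((confidence, index) => ({ index, confidence })) // keep class indices
    .filter(r => r.confidence >= confidenceThreshold)    // Confidence Threshold
    .sort((a, b) => b.confidence - a.confidence)         // descending order
    .slice(0, maxResultCount);                           // Max Result Count
}
```

The softmax step turns the raw model outputs into probabilities that sum to one, which is what the Confidence Threshold is compared against.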

The script code is given below. Not all 1000 classes in the classNames array are shown.

The script code (in two pieces).
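Since the full script is only shown as a screenshot, here is a minimal sketch of what the final mapping step might look like. The input and output names are assumptions, and the classNames array is truncated to the first three ImageNet classes:

```javascript
// Illustrative sketch of the JavaScript tool's script (not the original code).
// The real classNames array has 1000 entries; only three are shown here.
const classNames = ['tench', 'goldfish', 'great white shark' /* ... 997 more */];

function toReadableResults(indices, confidences, maxResultCount) {
  const rows = [];
  for (let i = 0; i < Math.min(indices.length, maxResultCount); ++i) {
    rows.push({
      label: classNames[indices[i]] ?? `class ${indices[i]}`, // fall back to index
      confidence: (confidences[i] * 100).toFixed(1) + ' %',   // human-readable
    });
  }
  return rows; // a table of label/confidence pairs
}
```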

Here is an example of an input image and a list of the five most likely results.

The result.

Deploying the application

The project has an Image Source connected to the image classifier compound tool, so it can be easily tested in the Builder. Our goal, however, is to build an API to which clients send images. For this, the app doesn’t need an internal image source.

  • Remove the image source tool by right-clicking it and selecting “Remove tool” from the menu.

  • Close the classifier compound tool by right-clicking the outer light gray box containing the other tools below the image source and selecting “Close compound” from the menu.

Closing a compound.

  • Right-click the closed “Classify Image” compound tool and select “Properties…” from the menu.

  • Check “Publish tool function API” and mark both input and output as published. Accept the dialog. The tool’s illustration in the processing graph now shows an API symbol.

Publishing a tool as an API function.

  • Go to the application API editor by clicking the API button in the top right corner of the Builder. The tool function API should now be visible in the “Functions” section. The default name of the API is “Compound”; change it to “classifyImage” by clicking the “Compound” text in the rightmost column. This will be the name of the function in the published API. The name is case-sensitive; since the bundled web app already uses this name, it must be typed exactly right.

Editing an app’s API.

  • Save the modified project and select “Package this app” from the file menu. In the package dialog, go to the “Remote” field and upload the package to “localhost”.

Exporting a package to a remote.

  • Browse to the Components tab on the Engine Front Page and the uploaded package should be visible there. Mark the package and install it.

To install an uploaded package, mark it and click “Install”.

  • On the Apps tab, click “Start” to start the app and publish the API. A ready-made package with these modifications is also available here.

To start an installed app, click “Start”.

  • Now, let us test the server application API with a web UI application. Download the Web UI app and install it on the engine via the Components tab. Then click it on the Apps tab.

Image classifier web application
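As a rough idea of what such a web client does under the hood, it could call the published classifyImage function over HTTP along these lines. The base URL, app path and form field name below are assumptions for illustration only; check the engine’s API browser for the real endpoint of your installed app:

```javascript
// Hypothetical client-side call to the published classifyImage API function.
// The base URL and the 'image' field name are assumptions, not documented values.
function buildClassifyRequest(baseUrl, imageBlob) {
  const form = new FormData();
  form.append('image', imageBlob, 'input.jpg'); // the image to classify
  return {
    url: `${baseUrl}/classifyImage`, // function name as published in the API
    options: { method: 'POST', body: form },
  };
}

// Usage in a browser or Node 18+ (URL is a placeholder):
// const { url, options } = buildClassifyRequest('http://localhost:2015/apis/imageclassifier', blob);
// const results = await fetch(url, options).then(r => r.json());
```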