Builder

The VisionAppster Builder is an IDE for building vision apps that can be run on the VisionAppster Engine and sold in the VisionAppster Store.

First steps

Start the VisionAppster Builder from the start menu of your desktop environment. The Builder will open with a new project that contains an empty processing graph.

The Builder opens with an empty workspace.

To create a functional processing graph, you need to add some tools. The Tools menu at the top lists the installed tools. Type a search term to find tools by name. To add a tool, drag and drop it on the workspace.

Drag and drop tools from the tool menu to the processing graph.

Now, find a tool called "Image Source" and drop it on the workspace. Click the workspace to hide the tool box. Then click the tool to reveal its parameters. The parameters listed above the tool are inputs and those below it are outputs. Parameters with colored dots are connectable. The color codes are as follows:

Gray
Optionally connectable, currently not connected.
Green
Connected.
Red
Required, but not connected.

Click the dots to add and delete connections.

The workspace after dropping and clicking on an image source

The image source receives images from a camera and sends them to the processing pipeline. In this case, we are going to set up a virtual camera that reads images from files. Open the Virtual Camera menu and click the "+" button to add source images. Select "Add images" and use the file browser to select images from your hard drive.

Virtual cameras can be used to build test image sets.

Now that the virtual camera has been configured, we can use it as a source of images: click the Camera Id input parameter of the Image Source and select "Virtual Camera".

Auto-detected cameras are shown in a drop-down list.

The image source will automatically take a picture, which you can see by dragging the Image output parameter to the workspace.

Image parameters are displayed using a powerful image display.

Input and output parameters can be dragged to the workspace. This publishes the parameter in the app's API. The Builder creates a suitable user interface component that displays the parameter's value and connects it to the API entry. The type of the UI component can be changed in the window menu.
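To sketch what publishing a parameter makes possible, the snippet below reads a published value over HTTP from a running Engine. This is an illustration only: the base URL, port, the "/apis/<app>/<parameter>" path, and the app and parameter names are all assumptions, not the Engine's documented address scheme; check the API browser of your own installation for the real URLs.

```python
# Sketch only: reading a published parameter over HTTP.
# ASSUMPTIONS: the URL scheme, port, app name and parameter name below
# are hypothetical -- look up the real addresses in your Engine's
# API browser.
import json
import urllib.request


def parameter_url(base_url: str, app: str, parameter: str) -> str:
    """Build the (hypothetical) URL of a published parameter."""
    return f"{base_url}/apis/{app}/{parameter}"


def read_parameter(base_url: str, app: str, parameter: str):
    """Fetch a published parameter's value, assuming a JSON response."""
    url = parameter_url(base_url, app, parameter)
    with urllib.request.urlopen(url) as response:
        return json.loads(response.read().decode("utf-8"))


if __name__ == "__main__":
    # Calling read_parameter() would contact a locally running Engine;
    # here we only show the URL that would be requested.
    print(parameter_url("http://localhost:2015", "my-first-app", "image"))
```

The point is simply that a dragged-out parameter is no longer an internal detail of the graph: any HTTP client can read it once the app runs on the Engine.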

Now open the Tools menu, drag "Detect Edges" and drop it on top of the Image Source on the workspace.

Tools dropped on top of each other are automatically connected.

To feed a new image to the analysis pipeline, click the "next" button at the bottom.

The buttons on the status bar are used to step execution.

Click the Detect Edges tool and drag its Image output on top of the image display. This will add another layer on the display. Use the layers tab on the image display to adjust the properties of the topmost layer as shown in the picture below.

Source image and analysis result shown on top of each other

Your first app is ready. Click File, Save As... and give your project a name.

Saving a project.

Saving a project creates a directory that contains the project's assets. When you open a saved project, you need to select that directory, not the project file (vaproject.json) in the directory.
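For illustration, a saved project directory could look like the sketch below. Only vaproject.json is named by the Builder; the directory name and the presence of other assets are assumptions.

```
my-first-app/           <- select this directory when opening the project
    vaproject.json      <- the project file; do not select this directly
    ...                 <- other assets saved by the Builder
```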

Now, you are ready to create your first component.