Linux installation
The Linux installer comes in two flavors: a Flatpak-based desktop installer that contains the Builder, the SDK and the Engine, and a tarball that contains just the Engine. If you are installing VisionAppster on a desktop PC for development purposes, you’ll need the desktop installer (Flatpak). If you only need the Engine (and do not want Docker) or are installing it on an embedded device, pick the tarball.
Desktop (Flatpak)
Open a terminal and run the following commands:
wget https://download.visionappster.com/linux/va-install
chmod +x va-install
./va-install
The VisionAppster platform will be installed to your home directory (user scope) by default.
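If you prefer a system-wide installation instead, the same script supports a system scope as well; assuming the --system flag described below also applies at installation time, the invocation would be:
# System-scope installation (assumption: --system also works at install time; may require root privileges)
./va-install --system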
The desktop installer uses Flatpak to cover as many Linux distributions as possible. The va-install script will inspect your system and try to install a recent enough Flatpak if needed. We haven’t tested all Linux distributions, though. If the installer fails, please check the official Flatpak site for distro-specific installation instructions.
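For example, you can check beforehand whether a Flatpak runtime is already available on your system:
# Print the installed Flatpak version, if any
flatpak --version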
The va-install script can later be used as a maintenance tool. You may want to copy it to your PATH to ease future maintenance tasks.
In the terminal, run the following command:
sudo cp va-install /usr/bin
Once this is done, you can update your local or system-level installation by just entering va-install or va-install --system on the command line. Type va-install --help for usage instructions.
The installer will add desktop entries that will appear somewhere in your start menu or launcher, depending on your desktop environment. There is a known caching issue in some KDE versions that prevents you from seeing the new entries until you log out and in again. You can, however, always start the Builder from the command line:
# User-scope installation, if ~/bin or ~/.local/bin is in PATH
va-builder
# Otherwise
flatpak run com.visionappster.Builder
The installer lets you optionally install the VisionAppster Engine as a systemd service. To start and stop the service:
# User scope installation
systemctl start --user va-engine
systemctl stop --user va-engine
# System scope installation
sudo systemctl start va-engine
sudo systemctl stop va-engine
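If you also want the service to start automatically at boot, the standard systemctl enable command should work, assuming the installed unit file supports enabling (a sketch, not part of the installer's own instructions):
# User scope installation
systemctl enable --user va-engine
# System scope installation
sudo systemctl enable va-engine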
Embedded (tarball)
The tarball installer contains the VisionAppster Engine and all of its dependencies. This makes it possible to run the Engine on practically any Linux distribution provided that the underlying hardware meets the requirements.
To install the tarball, type the following commands in a terminal. You may need to make some adaptations; for example, the sudo command may not be available. In such a case, run the commands as root.
# Change as needed. Supported architectures are x86_64, arm_64 and arm_32
ARCH=x86_64
wget https://download.visionappster.com/linux/$ARCH/va-engine-linux-$ARCH-latest.tgz
sudo tar zxfC va-engine-linux-$ARCH-latest.tgz /
cd /va-engine/overlay/opt/visionappster/bin
sudo chown root:root va-chroot
sudo chmod +s va-chroot
export PATH="$PATH:/va-engine/overlay/bin"
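To make the PATH addition survive new shell sessions, you can add the same export to your shell profile; the exact file depends on your shell, so treat this as a sketch:
# Persist the PATH change for Bourne-compatible login shells
echo 'export PATH="$PATH:/va-engine/overlay/bin"' >> ~/.profile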
Test the installation:
va-pkg --help
To install the VisionAppster Engine as a systemd service:
sudo ln -s /va-engine/overlay/usr/lib/systemd/system/va-engine.service /usr/lib/systemd/system
sudo systemctl enable va-engine
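After enabling the unit, you can start it right away and check that it is running with the usual systemd commands:
sudo systemctl start va-engine
systemctl status va-engine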
Inspect the va-engine.service file to see how to start the service manually.
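For example, the unit file can be viewed directly from the host system:
cat /va-engine/overlay/usr/lib/systemd/system/va-engine.service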
How does it work?
The tarball contains everything the VisionAppster Engine requires, including the standard C library (glibc) and the dynamic loader (ld-linux.so). It depends on nothing but the Linux kernel, which has a very stable interface. If you compile the kernel yourself, make sure to enable overlayfs support.
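If you are unsure whether your kernel has overlayfs enabled, you can check the kernel configuration or the list of registered file systems; the configuration file location varies between distributions, so this is only a sketch:
# Check the kernel build configuration (location varies by distribution)
grep CONFIG_OVERLAY_FS /boot/config-$(uname -r)
# overlay appears here if the file system is built in or the module is loaded
grep overlay /proc/filesystems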
Binaries in the tarball are started through va-chroot, a statically linked executable that sets up a chrooted sandbox for a command to run. This separates the command from the surrounding system so that no conflicting shared libraries will be loaded.
va-chroot works broadly the same way as the standard chroot command, but instead of just using an existing directory as the file system root, it creates a merged file system (overlayfs) by mounting the tarball’s contents on top of the system’s root directory (in a private namespace) and chroots the process there. This requires root privileges, which is why we set the suid bit (chmod +s) in the instructions above. va-chroot will drop root privileges once it has done the mounts.
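You can verify that the suid bit is in place by inspecting the file's permissions; an 's' in the owner's execute position indicates the setuid bit:
# Expect an 's' in the permission string, e.g. -rwsr-sr-x
ls -l /va-engine/overlay/opt/visionappster/bin/va-chroot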
The system’s root directory will be mounted in read-only mode, which means the VisionAppster Engine will not be able to make changes to it. Instead, the changes will appear under /va-engine/overlay/ when viewed from the host system. (This directory is not accessible in the sandbox.)
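For example, you can list that directory on the host to see the tarball contents together with any files the Engine has written:
# Viewed from the host: tarball contents plus changes made inside the sandbox
ls -la /va-engine/overlay/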
You can use va-chroot either through symbolic links or by invoking the command directly. By default, the va-pkg and va-run commands are symlinked to /va-engine/overlay/bin, which is why we added that directory to PATH above. If you want to use other commands through va-chroot, its help provides further details:
/va-engine/overlay/opt/visionappster/bin/va-chroot --help
Note that if you run dynamically linked binaries in the sandbox, they must be compatible with the shared libraries and the dynamic loader that are shipped in the tarball. Existing binaries in your system may or may not work. For example, the following either lists the contents of the merged file system’s root or fails with a dynamic linker error message:
/va-engine/overlay/opt/visionappster/bin/va-chroot /bin/ls /
Differences to Docker
The VisionAppster Engine Docker image contains the same binary files as the tarball. The file system in the Docker image is, however, strictly separated from the host system, which means the image must be more self-contained. It essentially comes with a small Linux distribution wrapped around the tarball, which makes it considerably bigger. To run the image you also need to have the Docker runtime installed. These factors, together with the more restricted access to the underlying system, make the Docker image less suitable for embedded devices. Docker’s management interface, however, makes it a good choice in cloud deployments, for example.
On the other hand, the tarball is not fully self-contained in the sense that it directly uses the host system’s devices, network configuration and so on. Apart from the Linux kernel it requires no external binaries to function, which makes it ideal for custom-built embedded systems. It is also a good choice for bare-metal server deployments (e.g. Linux PCs) that are managed via systemd.