LXC on Arch Linux and elsewhere

Date: 2025-01-05
Tags: Alpine, Arch Linux, LXC, Linux, administration, container

Linux containers (LXC) are a nice and efficient means to run containers. Far lighter and more efficient than Docker ones, they are well integrated into the GNU/Linux system environment.

The LinuxContainers developer community, following Aleksa Sarai, now maintains the Incus interface, a fork of Canonical's LXD. It has both a CLI and a web interface, like the very powerful Proxmox; it had maybe not reached the same level as of January 2025, but it can be installed on your own Linux system instead of requiring a dedicated Proxmox server. Both have their pros and cons.

I will present here the whole installation process, step by step, on Arch Linux, to get LXC containers working.

Disclaimer: you first need to set some system-specific parameters as root to get it working.

Main LXC CLI commands

Given first for reference; it will only be useful later.

For each command, --name is not mandatory (the example below uses lxc-start). You can use the executable's built-in --help for the essential options with a short description, or the man page for more complete documentation. Example with lxc-start:

lxc-start --name searxng    # start the searxng container
lxc-start searxng           # same thing
lxc-start --help            # Short essential help
man lxc-start               # Complete man page

List of the first essential commands

lxc-start container_name             # Start the container "container_name"
lxc-info container_name              # Information about the container
lxc-ls --fancy                       # List containers, their IP addresses and their state
lxc-attach container_name [command]  # Run a command in the container; with no command, an already-logged-in shell starts (type exit to detach)
lxc-console container_name           # Join the container system console (/dev/tty1; generally a system user login/password is asked)
lxc-checkconfig                      # Check the general LXC configuration
lxc-snapshot container_name          # Manage snapshots (save/restore/view whole container states at a given time), see --help
lxc-destroy container_name           # *Definitively* destroy a container

Base installation

Install packages

pacman -S --needed lxc dnsmasq arch-install-scripts qemu-user-static distrobuilder

Set system-wide mappings for unprivileged mode

echo "lxc.idmap = u 0 100000 65536" >>/etc/lxc/default.conf
echo "lc.idmap = g 0 100000 65536" >>/etc/lxc/default.conf
echo "root:100000:65536" >>/etc/subuid
echo "root:100000:65536" >>/etc/subgid

Network and lxc-net service

Warning: if you disabled IPv6 at the kernel level, you need to comment out some hard-coded lines in /usr/lib/lxc/lxc-net to be able to start it:

sudo sed -i 's/^LXC_IPV6_/#LXC_IPV6_/' /usr/lib/lxc/lxc-net

But this will be overwritten by the next package update. A not very clean but working solution, to avoid ending up without a firewall, is to block updates of the package (pacman will ask if you still want to overwrite it; you can then evaluate the state of the file again, but at least your firewall will not be silently disabled without you noticing):

sudo sed -i 's/^#IgnorePkg   =/IgnorePkg    = lxc/' /etc/pacman.conf

Set up lxc-net

By default, /etc/default/lxc-net does not exist on Arch Linux; you must create it the following way to get the bridge working:

cat >/etc/default/lxc-net <<EOF
USE_LXC_BRIDGE="true"
LXC_DHCP_CONFILE=/etc/lxc/dnsmasq.conf
EOF

It is now optionally possible to assign static IPs to some containers via the dnsmasq DHCP server, by declaring them in /etc/lxc/dnsmasq.conf:

dhcp-host=searxng,10.0.3.100
dhcp-host=archlinux,10.0.3.50

Then start and enable lxc-net:

systemctl start lxc-net
systemctl enable lxc-net

Verify the status:

systemctl status lxc-net

You can verify the presence of the bridge by typing ip address (short form: ip a) to show all interfaces, or:

ip address show dev lxcbr0     # show only the specified lxcbr0 bridge
ip address show type bridge    # show all devices of type bridge

$ ip address show type bridge
10: lxcbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:00:00:00 brd ff:ff:ff:ff:ff:ff
    inet 10.0.3.1/24 brd 10.0.3.255 scope global lxcbr0
       valid_lft forever preferred_lft forever
    inet6 fc42:5009:ba4b:5ab0::1/64 scope global 
       valid_lft forever preferred_lft forever
$ ip -o address show type bridge
4: lxcbr0    inet 10.0.3.1/24 brd 10.0.3.255 scope global lxcbr0\       valid_lft forever preferred_lft forever
4: lxcbr0    inet6 fc42:5009:ba4b:5ab0::1/64 scope global \       valid_lft forever preferred_lft forever

Firewall settings

WARNING: THIS IS ESSENTIAL IF A FIREWALL IS USED. You must allow communication between the lxc-net service and the bridge so containers can use IP addresses. If not set correctly, the firewall will block DHCP and containers will not obtain IP addresses.

If you use nftables, the ruleset needs to accept traffic from the LXC bridge (this applies to both IPv4 and IPv6), otherwise containers will not receive their lxc-net DHCP assignment, fixed addresses included. So, in /etc/nftables.conf:

Add the lxc_bridge definition at the beginning of the file:

define lxc_bridge = lxcbr0

Inside the chain input section of the table inet filter block, after the policy drop line:

    iif $lxc_bridge accept comment "in LXC bridge"
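
Putting the two pieces together, a minimal sketch of the relevant parts of /etc/nftables.conf (only the define and the bridge rule come from this article; the rest is a generic skeleton):

define lxc_bridge = lxcbr0

table inet filter {
    chain input {
        type filter hook input priority filter; policy drop;
        ct state established,related accept
        iif lo accept
        iif $lxc_bridge accept comment "in LXC bridge"
    }
}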

Choose and install the distro flavour

To obtain the list of available system and flavour templates:

/usr/share/lxc/templates/lxc-download -l

Pipe the output through grep to select specific cases.

For example, all available riscv templates:

$ /usr/share/lxc/templates/lxc-download -l | grep riscv
alpine           3.20        riscv64  default  20250105_13:03
alpine           3.21        riscv64  default  20250105_13:02
alpine           edge        riscv64  default  20250105_13:04
debian           trixie      riscv64  default  20250105_05:24
ubuntu           focal       riscv64  default  20250105_09:36
ubuntu           jammy       riscv64  default  20250105_10:17
ubuntu           noble       riscv64  default  20250105_09:13
ubuntu           oracular    riscv64  default  20250105_08:19

The most interesting distributions included by default, from my point of view, appear in the lxc-create examples below.

All arguments are mandatory here. I use riscv64 (RISC-V 64-bit, RV64GC) and arm64 (ARM 64-bit, aarch64) as examples.

You need the appropriate riscv64/arm64/amd64 qemu static binary to run containers of a foreign architecture. For example on Arch Linux, if you want minimal dependencies:
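
A minimal sketch, assuming the official Arch packages: qemu-user-static provides the static emulators, and qemu-user-static-binfmt registers the binfmt_misc handlers so foreign-architecture binaries run transparently.

pacman -S --needed qemu-user-static qemu-user-static-binfmt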

Warning: the Arch Linux package qemu-full installs dependencies to run all system architectures, but not qemu-user-static nor qemu-user-static-binfmt, so it is useless in our case.

lxc-create --name barebonearm --template download -- --dist busybox --release 1.36.1 --arch arm64
lxc-create --name alpinerv --template download -- --dist alpine --release 3.21 --arch riscv64 
lxc-create --name searxng --template download -- --dist archlinux --release current --arch amd64
lxc-create --name debrv --template download -- --dist debian --release trixie --arch riscv64
lxc-create --name ubuntuarm --template download -- --dist ubuntu --release noble --arch arm64

In the case of Debian, a message is printed at the end of the creation; we will see how to handle it in the launch section. It concerns all distributions if you want to connect via SSH, which is not mandatory since lxc-attach allows managing containers easily, at least from the root account of the server.

To enable SSH, run: apt install openssh-server
No default root or user password are set by LXC.

You can now jump to the next section, Launch and use containers, if you don't want to know more about templates.

The --template argument has several options, depending on the files present in /usr/share/lxc/templates/:

[root@archriscv ~]# ls /usr/share/lxc/templates/
lxc-busybox  lxc-download  lxc-local  lxc-oci

Here the options are --template (busybox|download|local|oci)

You can get the specific options of these templates by adding -h after the template; -t is the short version of --template:

[root@archriscv ~]# lxc-create -t local -h
Usage: lxc-create --name=NAME --template=TEMPLATE [OPTION...] [-- template-options]

lxc-create creates a container

Options :
  -n, --name=NAME               NAME of the container
  -f, --config=CONFIG           Initial configuration file
  -t, --template=TEMPLATE       Template to use to setup container
  -B, --bdev=BDEV               Backing store type to use
      --dir=DIR                 Place rootfs directory under DIR

  BDEV options for LVM (with -B/--bdev lvm):
  […]

TODO: LXC allows working inside the host filesystem or creating the container inside a raw disk image; document this.

Launch and use containers

lxc-start --name searxng

Warning: not all OSes request their IP address via DHCP, and some only ask for IPv6; this could make it look like there is some trouble in the configuration.

As an example, the case of alpine, archlinux and busybox:

$ lxc-ls --fancy
NAME       STATE   AUTOSTART GROUPS IPV4       IPV6                                   UNPRIVILEGED 
alpinerv   RUNNING 0         -      -          fc42:5009:ba4b:5ab0:216:3eff:fe57:c8ca true         
archlinux  RUNNING 1         -      10.0.3.50  fc42:5009:ba4b:5ab0:216:3eff:fe20:3d26 true         
busyboxarm RUNNING 0         -      -          fc42:5009:ba4b:5ab0:216:3eff:fea6:c02b true         
debrv      RUNNING 0         -      -          fc42:5009:ba4b:5ab0:216:3eff:feac:c39  true         
searxng    RUNNING 1         -      10.0.3.100 fc42:5009:ba4b:5ab0:216:3eff:febd:1dbd true         

Note: it looks like containers of emulated architectures don't get their IPv4 address; I need to understand why and address this problem.

Information: by default, containers are created in /var/lib/lxc/container_name. Their configuration file is /var/lib/lxc/container_name/config, and their rootfs is /var/lib/lxc/container_name/rootfs/.

Autostart of the container:

echo "lxc.start.auto = 1" >>/var/lib/lxc/searxng/config

Password and sshd

You can set the root password in two ways here:

Directly from lxc-attach command

$ lxc-attach searxng passwd
New password: 

Or indirectly, by connecting to a shell via lxc-attach too:

$ lxc-attach searxng
[root@searxng /]# passwd    
New password: 

On Arch Linux, for SSH, the full path of the command must be passed if arguments are given; a -- means the following arguments are the command and its own arguments:

lxc-attach searxng -- /usr/bin/pacman -Sy openssh
lxc-attach searxng -- /usr/bin/systemctl start sshd
lxc-attach searxng -- /usr/bin/systemctl enable sshd
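
Once sshd runs, you can connect from the host using the static address assigned earlier via dnsmasq. Note that Arch's default sshd_config only permits root login with keys (PermitRootLogin prohibit-password), so you may need to install a key or create a regular user first:

ssh root@10.0.3.100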

In the case of the Debian container, there are problems with DNS; the resolver should be 10.0.3.1 by default. I need to find a good way to fix it:

lxc-attach debian -- /usr/bin/apt update
lxc-attach debian -- /usr/bin/apt install openssh-server
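
Until a proper fix is found, a possible workaround (a sketch, assuming the dnsmasq resolver on the bridge address 10.0.3.1) is to write the resolver into the container by hand:

lxc-attach debian -- /bin/sh -c 'echo nameserver 10.0.3.1 > /etc/resolv.conf'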

Containers as non-root user

You just need to create a default config file in your home directory. You can simply copy the one from /etc/lxc/default.conf or create a new one; it's better practice to have separate ones.

It's also better practice to change the UID/GID ranges (see the mapping snippet below):

mkdir -p ~/.config/lxc
cat >~/.config/lxc/default.conf <<EOF
lxc.net.0.type = veth
lxc.net.0.link = lxcbr0
lxc.net.0.flags = up
lxc.net.0.hwaddr = 00:16:3e:xx:xx:xx
EOF

About UIDs/GIDs and their mappings, the linuxcontainers.org documentation shows a bash example to obtain them:

MS_UID="$(grep "$(id -un)" /etc/subuid | cut -d : -f 2)" ME_UID="$(grep "$(id -un)" /etc/subuid | cut -d : -f 3)" MS_GID="$(grep "$(id -un)" /etc/subgid | cut -d : -f 2)" ME_GID="$(grep "$(id -un)" /etc/subgid | cut -d : -f 3)" echo "lxc.idmap = u 0 $MS_UID $ME_UID" >> ~/.config/lxc/default.conf echo "lxc.idmap = g 0 $MS_GID $ME_GID" >> ~/.config/lxc/default.conf

This simply reads your user and group mappings from /etc/subuid and /etc/subgid and appends the corresponding idmap lines to your default.conf.

TODO: it could maybe be interesting to use a different mapping, such as 20000, than the one used for root-started containers.

You can now create a container as a regular user, but that's a bit more complex:

systemd-run --unit=my-unit --user --scope -p "Delegate=yes" -- lxc-create \
  --name containername --template download   # this part is the same as for root

And for starting it, the method is the same:

systemd-run --unit=my-unit --user --scope -p "Delegate=yes" -- lxc-start --name containername

So all commands must be prefixed with systemd-run --unit=my-unit --user --scope -p "Delegate=yes" --.
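
To avoid typing this long prefix each time, a small shell wrapper can help. A minimal sketch (the lxcrun name is mine; omitting --unit lets systemd-run generate a unique unit name):

# Run any LXC command inside a delegated user scope
lxcrun() {
    systemd-run --user --scope -p "Delegate=yes" -- "$@"
}

lxcrun lxc-create --name containername --template download
lxcrun lxc-start --name containername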

Distrobuilder to make templates

Distrobuilder (Documentation) is used to make templates, with its build-lxc option. It uses a configuration file in YAML format. There are some examples in the examples directory of Distrobuilder, and all the lxc-ci YAML files used to build the official templates are available online. The official LXC builds are made with the help of Jenkins (doc), an automation server in Java, but any runner can do the same automation job. For reference, several self-hostable runners can be used for CI.

On Arch Linux you can simply install the binary with the distrobuilder package. Forgejo is very well packaged on Arch Linux too, but we will first only look at the distrobuilder process.

pacman -S --needed distrobuilder

Distrobuilder YAML file

This process is documented in the Use distrobuilder to create images tutorial of Distrobuilder.
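
As a rough orientation, a definition file is organised in a handful of top-level sections; this skeleton only names them (see the tutorial and the lxc-ci files for complete, working examples):

image:     # distribution, release, architecture, description
source:    # downloader and URL of the upstream rootfs or installer
targets:   # output-specific configuration (lxc, incus)
packages:  # package manager, packages to install or remove
files:     # file templates generated inside the image
actions:   # shell hooks (post-unpack, post-packages, post-files)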

Usage of Distrobuilder itself

The whole help is available with:

distrobuilder build-lxc --help

To build an image, you need root privileges to create device nodes:

sudo distrobuilder build-lxc nuttx.yaml

Forgejo installation

On Arch Linux, it's straightforward for the base install:

pacman -S --needed forgejo sqlite
systemctl start forgejo

If you want it to restart automatically after a reboot (not useful on a workstation if you only use it from time to time):

systemctl enable forgejo

SQLite database

At the first connection to the interface, an ID and password will be asked. I suggest using SQLite as the DB, so there is no need to manage a heavy SQL server, complex dumps, etc. The SQLite database is one single file, changed atomically. You can just copy it at any time to have a dump of the database, or, if you really prefer a text SQL dump file, you can use sqlite3's .dump command. For example, you can add a crontab entry to dump it every day at 02:00; it will create the dump of the day as /path/to/my/dump/forgejo-20250109.dump on 9 January 2025. The /path/to/my/dump directory should exist and be writable by the user owning the crontab:

00 02 * * * /usr/bin/sqlite3 /var/lib/forgejo/data/forgejo.db .dump > /path/to/my/dump/forgejo-`/usr/bin/date +%Y%m%d`.dump

To restore the database from 3 December:

WARNING: this will totally overwrite your current database.

sqlite3 /var/lib/forgejo/data/forgejo.db < /path/to/my/dump/forgejo-20241203.dump
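
Since the dump replays CREATE TABLE statements and Forgejo keeps the database open, it is safer to stop the service and move the old file aside first; a sketch:

systemctl stop forgejo
mv /var/lib/forgejo/data/forgejo.db /var/lib/forgejo/data/forgejo.db.bak   # keep the old database around
sqlite3 /var/lib/forgejo/data/forgejo.db < /path/to/my/dump/forgejo-20241203.dump
systemctl start forgejo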

To install the runner, you need to compile it yourself (the packaged version is totally outdated). This requires git and Go for the compilation, which can take some time and resources:

pacman -S --needed base-devel git go
git clone https://code.forgejo.org/forgejo/runner
cd runner
make
sudo cp -a forgejo-runner /usr/bin

You can remove go if you do not need it anymore.

Add the forgejo-runner systemd service

cat >/etc/systemd/user/forgejo-runner.service <<EOF
[Unit]
Description=Forgejo Runner

[Service]
Type=simple
ExecStart=/usr/bin/forgejo-runner daemon
Restart=on-failure

[Install]
WantedBy=default.target
EOF
systemctl --user daemon-reload   # run as an already-logged-in user; new sessions pick the unit up automatically

Creating the runner user and launch the service

WARNING: this part is a WIP and does not work for now. Don't use it.

First, you can get a runner token in the Forgejo interface, under /admin/actions/runners.

Then create the user and register the runner:

useradd -m runner
loginctl enable-linger runner   # allows the user to run user systemd services without being logged in
su - runner
forgejo-runner generate-config > config.yml
forgejo-runner register         # answer the questions

Now we want to enable the systemd service for the runner:

ssh-keygen -t ed25519
cp -a .ssh/id_ed25519.pub .ssh/authorized_keys   # used for ssh over the loopback, to get a session with D-Bus
ssh 127.0.0.1    # accept the key here
systemctl --user start forgejo-runner

Verify the status:

systemctl --user status forgejo-runner

If not OK, find out why with:

journalctl --user -xeu forgejo-runner

Otherwise, you can enable the service:

systemctl --user enable forgejo-runner

You need to register the runner by adding it in the Forgejo interface: if the page reports "No runner available" ("Aucun exécuteur disponible" in French), go to /admin/actions/runners. The systemd service setup above comes from the Forgejo documentation page.

Then start and enable the service as the runner user:

sudo -u runner systemctl --user start forgejo-runner
sudo -u runner systemctl --user enable forgejo-runner