Archive of January 2021

5 minute podman walkthrough

Machine - Gentoo amd64 + OpenRC.

Installing podman

We want to use app-emulation/crun instead of app-emulation/runc - Bug 723080.

$ emerge -av1 app-emulation/crun
$ emerge -av podman
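After installation, it is worth checking that podman actually picked up crun as its OCI runtime; a quick sketch (the exact output layout varies between podman versions):

```shell
# The "ociRuntime" section of podman info should name crun, not runc
podman info | grep -i -A 2 ociruntime
```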

Setup podman

The example configuration file shipped with the package (/etc/containers/registries.conf.example) does some odd registry redirections that we don't want.
Create the config file (/etc/containers/registries.conf) from scratch:

[registries.search]
registries = ['docker.io', 'quay.io', 'registry.fedoraproject.org']

[registries.insecure]
registries = []

#blocked (docker only)
[registries.block]
registries = []

Create the /etc/containers/policy.json file:

$ cp /etc/containers/policy.json{.example,}
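For reference, the example policy shipped upstream is (as far as I can tell) the permissive default, which accepts images from anywhere without signature checks - good enough for this walkthrough:

{
  "default": [
    {
      "type": "insecureAcceptAnything"
    }
  ]
}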

Optional: login to docker hub and quay.io:

$ podman login docker.io
Username:
Password:
$ podman login quay.io
Username:
Password:

Extra - rootless mode

To enable rootless mode, the tun kernel module must be loaded.
If the tun module is built into the kernel, no further steps are necessary.

Look at the Gentoo wiki article on kernel modules for an in-depth explanation.

To load the tun module at boot, create the file /etc/modules-load.d/networking.conf with the single line:

tun
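The modules-load.d entry only takes effect on the next boot; to load the module immediately and confirm it is present:

```shell
# Load tun right away; the modules-load.d file covers future boots
modprobe tun
# Confirm it is loaded; no output here usually means it is built into the kernel
grep -w '^tun' /proc/modules
```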

Basic terminology

Pods

For practical purposes, a pod is a collection of containers.
All containers inside a pod share a network namespace.

If running a pod in macvlan mode, this means all containers in the pod get the same IP address.

Containers

Containers are lightweight, isolated environments, designed for running programs in isolation from each other.
For our purposes, we will be running small services for our homeserver, with Jackett (a torrent indexer proxy) as the specific example in this document.

Extra - creating a podman network - macvlan

The default config only allows publishing specific ports for containers/pods, via the default network named podman. We will create a new network which allows pods/containers to get real IPs on the LAN.

Create the /etc/cni/net.d/88-macvlan.conflist config file:

{
  "cniVersion": "0.4.0",
  "name": "macvlan",
  "plugins": [
    {
      "type": "macvlan",
      "master": "br0",
      "isGateway": true,
      "ipam": {
        "type": "dhcp"
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    },
    {
      "type": "firewall"
    },
    {
      "type": "tuning"
    }
  ]
}

Note: br0 can be replaced with any preconfigured bridge on the host, or even a plain ethernet interface such as enp5s0f0, without the host machine losing connectivity.

Test the network config

To check that the configuration is sane:

$ podman network ls
NAME           VERSION  PLUGINS
podman         0.4.0    bridge,portmap,firewall,tuning
macvlan        0.4.0    macvlan,portmap,firewall,tuning

Virtual DHCP server for containers

Most container images do not ship a DHCP client and thus will not request IP addresses from the network's DHCP server.

Luckily, net-misc/cni-plugins provides a dhcp daemon which acts as a DHCP client on behalf of the containers and assigns the leased IP addresses to them. It is pulled in automatically as a dependency.

Enable and start the dhcp daemon:

$ rc-update add cni-dhcp default
$ rc-service cni-dhcp start
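To verify the daemon came up (the socket path below is the upstream default and may differ on some setups):

```shell
# Check the service status and the unix socket the ipam plugin talks to
rc-service cni-dhcp status
test -S /run/cni/dhcp.sock && echo "dhcp socket present"
```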

Dangers of macvlan

Exposing a pod this way can also expose unwanted and unsafe ports to the network, such as mistakenly exposing a Redis port if Redis is used in one of the containers.
Be careful when choosing the macvlan networking option for a pod/container.

Running a container

There are two ways of running a podman container:

  • free floating containers - not inside a pod
  • containers inside a pod

Here, we will only do containers inside a pod.

Create a pod

Create a very simple, empty pod, with the macvlan network.

$ podman pod create --name homeserver --network macvlan
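Assuming the command succeeded, the pod should now show up (typically with a single infra container):

```shell
# List pods, then show details such as the attached network
podman pod ps
podman pod inspect homeserver
```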

Testing the pod

Add an alpine container to the homeserver pod:

$ podman run --detach --tty --pod homeserver --name homeserver_alpine docker.io/library/alpine:latest top

man podman-run gives a more thorough review of all the command options:

  • --detach: Detached mode: run the container in the background and print the new container ID. The default is false.
  • --tty: Allocate a pseudo-TTY. The default is false.

This creates an alpine linux container inside the homeserver pod and runs the command top in it.

To view the IP address of this pod:

$ podman exec homeserver_alpine ifconfig
eth0      Link encap:Ethernet  HWaddr 82:82:CF:4F:6F:DD  
          inet addr:192.168.2.122  Bcast:192.168.2.255  Mask:255.255.255.0
          inet6 addr: fe80::8082:cfff:fe4f:6fdd/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:14 errors:0 dropped:0 overruns:0 frame:0
          TX packets:13 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:1976 (1.9 KiB)  TX bytes:1552 (1.5 KiB)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
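If only the address itself is needed, for example in a script, it can be pulled out of that output; a small sketch assuming the BusyBox ifconfig format shown above:

```shell
# Print just the pod's eth0 IPv4 address; $4 because the separator regex
# also consumes the leading whitespace and the "inet addr:" prefix
podman exec homeserver_alpine ifconfig eth0 \
  | awk -F'[: ]+' '/inet addr:/ {print $4}'
```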

Creating a container inside a pod

We can now run the jackett container in this pod with:

$ podman run --detach --tty --pod homeserver --name=homeserver_jackett ghcr.io/linuxserver/jackett

Test the container

From the above configuration, try to access the Jackett webui - http://192.168.2.122:9117
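A quick reachability check from another machine on the LAN (substitute whatever address the pod was actually leased):

```shell
# Expect an HTTP status line once jackett has finished starting
curl -sI http://192.168.2.122:9117 | head -n 1
```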

Voila

We are done!
This setup gets rid of pesky port forwarding options and firewall configurations, though at the cost of exposing the pod's ports directly to the LAN.
Choose this configuration wisely.

References

Official documentation - https://docs.podman.io/en/latest/index.html