NixOS + Docker Swarm

I can’t figure out how to deploy a Docker stack to NixOS on the VPS. The service fails to start with this error:

network sandbox join failed: subnet sandbox join failed for "10.0.0.0/24": error creating vxlan interface: operation not supported

Docker-related nix config:

virtualisation.docker.enable = true; # run the Docker daemon as a system service
virtualisation.docker.rootless.enable = false; # swarm mode needs the rootful daemon

The container is a simple web app with a port binding.
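For what it’s worth, “operation not supported” on interface creation usually means either a missing kernel module or the operation not being available inside a container. On a regular host I’d first rule out the module side with something like:

lsmod | grep vxlan  # is the vxlan module loaded?
sudo modprobe vxlan  # try to load it; this fails if the kernel doesn't ship it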

Any debug output? Why is it using vxlan, and how exactly is it trying to use it?

I’m trying to run a container in a swarm, so I’m running:

docker swarm init
docker stack deploy -c <docker-compose.yml> <stack_name>

Not sure why it is using vxlan. Where can I find debug logs in this case?

> Not sure why it is using vxlan. Where can I find debug logs in this case?

Not a good answer if you want me to help you with this. :) I’m not using Docker, and nobody on our core team is; that technology doesn’t make sense to us. I personally hate debugging the Golang spaghetti. If you want to get this running, you’re going to need to invest more time in it. I’ll fix whatever I can, but I need to know what to fix, and I’m not going to spend days debugging this when everyone else’s setup works, sorry.
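That said, if you can grab the daemon’s own log output and post it here, that would give me something concrete to look at. On NixOS the Docker daemon runs as a systemd unit, so the journal should have it:

journalctl -u docker.service -b  # daemon log for the current boot
journalctl -u docker.service -f  # follow it live while you redeploy the stack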

[CT 26788] root@llamatest:~# ip link add vxlan0 type vxlan id 100 group 239.1.1.1 dev venet0 dstport 4789

[CT 26788] root@llamatest:~# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ip6tnl0@NONE: <NOARP> mtu 1452 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/tunnel6 :: brd :: permaddr 7e8c:933f:f1ad::
3: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/gre 0.0.0.0 brd 0.0.0.0
4: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
5: erspan0@NONE: <BROADCAST,MULTICAST> mtu 1450 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
6: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
7: venet0@if136: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether de:9e:82:6f:2f:d0 brd ff:ff:ff:ff:ff:ff link-netnsid 0
8: cni-podman0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default qlen 1000
    link/ether 02:e5:5e:d7:96:47 brd ff:ff:ff:ff:ff:ff
10: vxlan0: <BROADCAST,MULTICAST> mtu 1450 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 2a:ae:09:fa:0a:2e brd ff:ff:ff:ff:ff:ff

Just verified that vxlan interface creation works as expected.
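One caveat: if I read the overlay driver right, Docker creates the vxlan device inside a separate sandbox network namespace rather than the container’s main one, so that path would need testing separately. A quick check (the namespace name is arbitrary):

ip netns add vxlantest  # fresh namespace, similar to Docker's sandbox
ip netns exec vxlantest ip link add vxlan0 type vxlan id 100 dstport 4789
ip netns del vxlantest  # clean up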


Totally understand, no worries! I’ll keep digging.

What are you using to run apps on NixOS instead of Docker? I’m not particularly attached to it, just looking for something better than a pile of bash scripts I used to use :)

I’m basically looking for a way to have a declarative and ideally isolated stack for running web apps. Each app is the usual combo of a web server, database, cache, etc.

Can you share your configs with everything sensitive replaced with something non-sensitive? I’ll try to reproduce that vxlan error.

We run everything in NixOS containers. You can create more containers in vpsAdminOS, or you can use LXC/LXD inside; that works too.
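If you want to try the built-in NixOS containers quickly, there’s an imperative CLI as well. A minimal sketch; the name “webtest” and the nginx config are just examples:

sudo nixos-container create webtest --config 'services.nginx.enable = true; networking.firewall.allowedTCPPorts = [ 80 ];'
sudo nixos-container start webtest
sudo nixos-container root-login webtest  # root shell inside the container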


And the setup from this page, NixOS Containers - NixOS Wiki, uses systemd-nspawn and works well too :)
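The declarative ones are backed by plain systemd units, so the usual tooling should work on them too (again assuming a container named “webtest”):

systemctl status container@webtest  # the unit behind a declarative container
machinectl list  # containers registered with systemd-machined
journalctl -M webtest  # journal from inside the container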


> Can you share your configs with everything sensitive replaced with something non-sensitive? I’ll try to reproduce that vxlan error.

Sure, thanks for looking into it!

Docker Compose file & configuration.nix changes: NixOS + Docker issue · GitHub

The minimal set of commands I run:

nixos-rebuild switch
docker swarm init
docker stack deploy -c ./docker-compose.prod.yml gnd_buzz_prod
docker service ps gnd_buzz_prod_web --no-trunc # to see the full task error

I’ll check the alternative too, thank you for the pointers!
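For completeness, I think I understand the vxlan part now: docker stack deploy attaches services to a swarm overlay network (the default ingress one, unless the compose file defines its own), and the overlay driver is built on top of vxlan, which is where the failing interface comes from. The network is visible with:

docker network ls  # look for the entries with driver "overlay"
docker network inspect ingress  # the default ingress subnet here is the 10.0.0.0/24 from the error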