<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Qlem&#39;s thoughts</title>
    <link>https://blog.qlem.net/</link>
    <description></description>
    <pubDate>Sun, 03 May 2026 02:32:38 +0200</pubDate>
    <item>
      <title>Kubernetes: restart a deployment</title>
      <link>https://blog.qlem.net/kubernetes-restart-a-deployment</link>
      <description>&lt;![CDATA[The following command will restart all pods associated with a given deployment.&#xA;&#xA;kubectl --namespace [namespace-name] rollout restart deployment [deployment-name]&#xA;Pods are recreated through a rolling update, not restarted in place, so the deployment stays available throughout.]]&gt;</description>
      <content:encoded><![CDATA[<p>The following command will restart all pods associated with a given deployment.</p>

<pre><code class="language-sh">kubectl --namespace [namespace-name] rollout restart deployment [deployment-name]
</code></pre>

<p>Pods are <strong>recreated</strong> through a rolling update, not restarted in place, so the deployment stays available throughout.</p>
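
<p>To follow the restart until every pod has been replaced, the standard <code>rollout status</code> subcommand can be used (same placeholders as above):</p>

<pre><code class="language-sh"># block until the rolling restart has completed
kubectl --namespace [namespace-name] rollout status deployment [deployment-name]
</code></pre>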
]]></content:encoded>
      <guid>https://blog.qlem.net/kubernetes-restart-a-deployment</guid>
      <pubDate>Sat, 03 Jan 2026 12:51:59 +0000</pubDate>
    </item>
    <item>
      <title>Set up miniBOSS DAC on RPi Zero</title>
      <link>https://blog.qlem.net/set-up-miniboss-dac-on-rpi-zero</link>
      <description>&lt;![CDATA[Append the following line at the end of /boot/firmware/config.txt on Raspberry Pi OS&#xA;dtoverlay=allo-boss-dac-pcm512x-audio]]&gt;</description>
      <content:encoded><![CDATA[<p>Append the following line at the end of <code>/boot/firmware/config.txt</code> on Raspberry Pi OS</p>

<pre><code>dtoverlay=allo-boss-dac-pcm512x-audio
</code></pre>
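
<p>After a reboot, the DAC should show up as an ALSA playback card; a quick check (assuming <code>alsa-utils</code> is installed):</p>

<pre><code># the allo-boss card should appear in the playback device list
aplay -l
</code></pre>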
]]></content:encoded>
      <guid>https://blog.qlem.net/set-up-miniboss-dac-on-rpi-zero</guid>
      <pubDate>Thu, 24 Jul 2025 20:09:36 +0000</pubDate>
    </item>
    <item>
      <title>Global and unique local IPv6 addresses on Podman containers</title>
      <link>https://blog.qlem.net/global-and-local-link-ipv6-addresses-on-podman-containers</link>
      <description>&lt;![CDATA[I want to migrate some services from virtual machines to containers, and I want to use Podman to benefit from pods.&#xA;&#xA;From an IPv6 networking standpoint, I have two requirements:&#xA;&#xA;They need to access the internet&#xA;They need to be reachable from the reverse proxy&#xA;&#xA;IPv6 Global Unicast&#xA;&#xA;My ISP provides me with a /48 prefix; however, it changes every few weeks, so I can&#39;t assign the containers a static address from it.&#xA;I rely on SLAAC (with a workaround to make it work) to address them. This solves internet connectivity, but not communication with the reverse proxy.&#xA;&#xA;Communication with the reverse proxy&#xA;The reverse proxy runs on a virtual machine with a static, handwritten configuration.&#xA;Either I find a way to have DNS point to the container via some static FQDN, or I assign a static unique local IPv6 address to the container.&#xA;&#xA;Podman has some sort of integrated DNS for this purpose, but I consider static addresses more reliable.&#xA;&#xA;Configuration&#xA;&#xA;The containers are hosted on a virtual machine with a dedicated network interface, and the gateway is an OPNsense firewall. I use a macvlan network to allow layer 2 communication with the firewall while isolating the containers from the host.&#xA;&#xA;Drawing of what the end result should look like&#xA;&#xA;Create the network&#xA;podman network create \&#xA;    --driver macvlan \&#xA;    --opt parent=eth1 \&#xA;    --ipv6 \&#xA;    --ipam-driver host-local \&#xA;    --subnet 192.0.2.0/24 \&#xA;    --gateway 192.0.2.1 \&#xA;    --subnet fd00:db8::/64 \&#xA;    public-services&#xA;--ipv6 enables IPv6 on the network&#xA;--ipam-driver host-local selects the address attribution mechanism (see the doc)&#xA;The host-local driver enables static address assignment for unique local IPv6 and IPv4 addresses.&#xA;&#xA;I do not declare a gateway for the IPv6 subnet because there is none; Podman will, however, configure one by default.&#xA;podman network inspect public-services&#xA;[&#xA;     {&#xA;          &#34;name&#34;: &#34;public-services&#34;,&#xA;          &#34;id&#34;: &#34;6ce...58f&#34;,&#xA;          &#34;driver&#34;: &#34;macvlan&#34;,&#xA;          &#34;network_interface&#34;: &#34;eth1&#34;,&#xA;          &#34;subnets&#34;: [&#xA;               {&#xA;                    &#34;subnet&#34;: &#34;192.0.2.0/24&#34;,&#xA;                    &#34;gateway&#34;: &#34;192.0.2.1&#34;&#xA;               },&#xA;               {&#xA;                    &#34;subnet&#34;: &#34;fd00:db8::/64&#34;,&#xA;                    &#34;gateway&#34;: &#34;fd00:db8::1&#34; &lt;- generated by podman&#xA;               }&#xA;          ],&#xA;          &#34;ipv6_enabled&#34;: true,&#xA;          &#34;internal&#34;: false,&#xA;          &#34;dns_enabled&#34;: true,&#xA;          &#34;ipam_options&#34;: {&#xA;               &#34;driver&#34;: &#34;host-local&#34;&#xA;          }&#xA;     }&#xA;]&#xA;&#xA;Then create a pod&#xA;podman pod create \&#xA;    --name service-pod \&#xA;    --network public-services:ip6=fd00:db8::100,ip=192.0.2.100 \&#xA;    --sysctl net.ipv6.conf.eth0.autoconf=1&#xA;--sysctl net.ipv6.conf.eth0.autoconf=1 enables SLAAC addressing on eth0 for this pod&#xA;&#xA;And create a container&#xA;podman create --pod service-pod docker.io/traefik/whoami&#xA;&#xA;Start the pod, and you should be able to reach the container both via the unique local IPv6 address and via a SLAAC-generated IPv6 address.&#xA;&#xA;For debugging purposes, you can use the nicolaka/netshoot image&#xA;podman create \&#xA;    --pod service-pod \&#xA;    --cap-add NET_RAW,NET_ADMIN \&#xA;    --interactive \&#xA;    nicolaka/netshoot&#xA;Start the pod, then open a shell in the container: podman exec -it [container name] zsh&#xA;&#xA;Other considerations&#xA;While this setup works, two IPv6 default routes are installed in the pod: one generated from the NDP Router Advertisement, and one statically installed by Podman pointing to fd00:db8::1. The latter does not exist, but it is installed with a lower metric and therefore has priority. In most cases, if the MAC address of the gateway cannot be resolved with NDP, the container will fall back to the next default route (the SLAAC/RA one, which works). But it is not ideal.&#xA;&#xA;Also, during my experimentation, I learned that it is not possible to define a custom network in Kubernetes-like YAML files for use with podman kube play. Nor is it possible to reattach pods to an existing network.&#xA;&#xA;This makes Podman with kube play a no-go for my use case.&#xA;From this discussion on GitHub, I think Podman is more oriented toward development than any kind of &#34;production&#34;. Maybe it&#39;s time to go the Kubernetes way...]]&gt;</description>
      <content:encoded><![CDATA[<p>I want to migrate some services from virtual machines to containers, and I want to use Podman to benefit from pods.</p>

<p>From an IPv6 networking standpoint, I have two requirements:</p>
<ol><li>They need to access the internet</li>
<li>They need to be reachable from the reverse proxy</li></ol>

<h2 id="ipv6-global-unicast">IPv6 Global Unicast</h2>

<p>My ISP provides me with a /48 prefix; however, it changes every few weeks, so I can&#39;t assign the containers a static address from it.
I rely on SLAAC (with a workaround to make it work) to address them. This solves internet connectivity, but not communication with the reverse proxy.</p>

<h2 id="communication-with-the-reverse-proxy">Communication with the reverse proxy</h2>

<p>The reverse proxy runs on a virtual machine with a static, handwritten configuration.
Either I find a way to have DNS point to the container via some static FQDN, or I assign a static unique local IPv6 address to the container.</p>

<p>Podman has some sort of <a href="https://github.com/containers/aardvark-dns">integrated DNS</a> for this purpose, but I consider static addresses more reliable.</p>

<h2 id="configuration">Configuration</h2>

<p>The containers are hosted on a virtual machine with a dedicated network interface, and the gateway is an OPNsense firewall. I use a macvlan network to allow layer 2 communication with the firewall while isolating the containers from the host.</p>

<p><img src="https://raw.githubusercontent.com/QlemWasTaken/blog-static/refs/heads/main/blog-excalidraw-podman-ipv6.png" alt="Drawing of what the end result should look like">
<em>Drawing of what the end result should look like</em></p>

<p>Create the network</p>

<pre><code>podman network create \
    --driver macvlan \
    --opt parent=eth1 \
    --ipv6 \
    --ipam-driver host-local \
    --subnet 192.0.2.0/24 \
    --gateway 192.0.2.1 \
    --subnet fd00:db8::/64 \
    public-services
</code></pre>

<p><code>--ipv6</code> enables IPv6 on the network.
<code>--ipam-driver host-local</code> selects the address attribution mechanism (<a href="https://docs.podman.io/en/latest/markdown/podman-network-create.1.html#ipam-driver-driver">see the doc</a>).
The <code>host-local</code> driver enables static address assignment for unique local IPv6 and IPv4 addresses.</p>

<p>I do not declare a gateway for the IPv6 subnet because there is none; Podman will, however, configure one by default.</p>

<pre><code>podman network inspect public-services
[
     {
          &#34;name&#34;: &#34;public-services&#34;,
          &#34;id&#34;: &#34;6ce...58f&#34;,
          &#34;driver&#34;: &#34;macvlan&#34;,
          &#34;network_interface&#34;: &#34;eth1&#34;,
          &#34;subnets&#34;: [
               {
                    &#34;subnet&#34;: &#34;192.0.2.0/24&#34;,
                    &#34;gateway&#34;: &#34;192.0.2.1&#34;
               },
               {
                    &#34;subnet&#34;: &#34;fd00:db8::/64&#34;,
                    &#34;gateway&#34;: &#34;fd00:db8::1&#34; &lt;- generated by podman
               }
          ],
          &#34;ipv6_enabled&#34;: true,
          &#34;internal&#34;: false,
          &#34;dns_enabled&#34;: true,
          &#34;ipam_options&#34;: {
               &#34;driver&#34;: &#34;host-local&#34;
          }
     }
]
</code></pre>

<p>Then create a pod</p>

<pre><code>podman pod create \
    --name service-pod \
    --network public-services:ip6=fd00:db8::100,ip=192.0.2.100 \
    --sysctl net.ipv6.conf.eth0.autoconf=1
</code></pre>

<p><code>--sysctl net.ipv6.conf.eth0.autoconf=1</code> enables SLAAC addressing on eth0 for this pod.</p>

<p>And create a container</p>

<pre><code>podman create --pod service-pod docker.io/traefik/whoami
</code></pre>

<p>Start the pod, and you should be able to reach the container both via the unique local IPv6 address and via a SLAAC-generated IPv6 address.</p>
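
<p>To double-check the addressing, the interface can be inspected from inside the pod with iproute2 (interface name <code>eth0</code> assumed from the sysctl above; the image must ship the <code>ip</code> tool):</p>

<pre><code># both the static fd00:db8::100 and a SLAAC-generated address should show up
podman exec [container name] ip -6 addr show dev eth0
</code></pre>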

<p>For debugging purposes, you can use the <code>nicolaka/netshoot</code> image.</p>

<pre><code>podman create \
    --pod service-pod \
    --cap-add NET_RAW,NET_ADMIN \
    --interactive \
    nicolaka/netshoot
</code></pre>

<p>Start the pod, then open a shell in the container: <code>podman exec -it [container name] zsh</code></p>

<h2 id="other-considerations">Other considerations</h2>

<p>While this setup works, two IPv6 default routes are installed in the pod: one generated from the NDP Router Advertisement, and one statically installed by Podman pointing to <code>fd00:db8::1</code>. The latter does not exist, but it is installed with a lower metric and therefore has priority. In most cases, if the MAC address of the gateway cannot be resolved with NDP, the container will fall back to the next default route (the SLAAC/RA one, which works). But it is not ideal.</p>
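
<p>As a manual, non-persistent workaround, the spurious route can be inspected and dropped from inside the pod with iproute2 (requires <code>NET_ADMIN</code>):</p>

<pre><code># list both IPv6 default routes with their metrics
ip -6 route show default
# remove the podman-installed route towards the non-existent gateway
ip -6 route del default via fd00:db8::1
</code></pre>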

<p>Also, during my experimentation, I learned that it is not possible to define a custom network in Kubernetes-like YAML files for use with <code>podman kube play</code>. Nor is it possible to reattach pods to an existing network.</p>

<p>This makes Podman with <code>kube play</code> a no-go for my use case.
From this <a href="https://github.com/containers/podman/issues/12965">discussion on GitHub</a>, I think Podman is more oriented toward development than any kind of “production”. Maybe it&#39;s time to go the Kubernetes way...</p>
]]></content:encoded>
      <guid>https://blog.qlem.net/global-and-local-link-ipv6-addresses-on-podman-containers</guid>
      <pubDate>Sat, 01 Mar 2025 19:04:42 +0000</pubDate>
    </item>
    <item>
      <title>Attach to Proxmox VM serial from host</title>
      <link>https://blog.qlem.net/attach-to-proxmox-vm-serial-from-host</link>
      <description>&lt;![CDATA[As easy as the following command:&#xA;&#xA;qm terminal [vmid]]]&gt;</description>
      <content:encoded><![CDATA[<p>As easy as the following command:</p>

<pre><code>qm terminal [vmid]
</code></pre>
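
<p>If <code>qm terminal</code> complains that no serial port is configured, the VM first needs one; adding a socket-backed <code>serial0</code> is the usual prerequisite (the guest must also run a console on its serial port):</p>

<pre><code>qm set [vmid] -serial0 socket
</code></pre>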
]]></content:encoded>
      <guid>https://blog.qlem.net/attach-to-proxmox-vm-serial-from-host</guid>
      <pubDate>Wed, 29 Jan 2025 16:22:47 +0000</pubDate>
    </item>
    <item>
      <title>Import disk to VM in Proxmox</title>
      <link>https://blog.qlem.net/import-disk-to-vm-in-proxmox</link>
      <description>&lt;![CDATA[Tested on Proxmox 8.2.4&#xA;qm disk import [vmid] [/path/to/.qcow2] [storage]&#xA;E.g.: qm disk import 900 /tmp/disk.qcow2 localvm]]&gt;</description>
      <content:encoded><![CDATA[<p>Tested on Proxmox 8.2.4</p>

<pre><code class="language-bash"># qm disk import [vmid] [/path/to/.qcow2] [storage]
qm disk import 900 /tmp/disk.qcow2 localvm
</code></pre>
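
<p>The imported disk ends up attached as an unused volume; it still has to be bound to a bus before the VM can use it. A sketch, assuming the import above produced <code>vm-900-disk-0</code>:</p>

<pre><code class="language-bash"># attach the freshly imported volume as scsi0 (volume name is illustrative)
qm set 900 --scsi0 localvm:vm-900-disk-0
</code></pre>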
]]></content:encoded>
      <guid>https://blog.qlem.net/import-disk-to-vm-in-proxmox</guid>
      <pubDate>Sat, 19 Oct 2024 10:00:41 +0000</pubDate>
    </item>
    <item>
      <title>How to extend a Debian cloud partition</title>
      <link>https://blog.qlem.net/how-to-extend-debian-cloud-partition</link>
      <description>&lt;![CDATA[I often use Debian cloud images for both lab and test virtual machines. By default, those images are only 2 GB, and sometimes I need more space.&#xA;&#xA;Step 1: Extend the &#34;physical disk&#34;&#xA;On Proxmox:&#xA;[VM] &gt; Hardware &gt; [disk] &gt; Disk Action &gt; Resize&#xA;On UTM:&#xA;[VM] &gt; Settings &gt; [disk] &gt; Resize&#xA;&#xA;Step 2: Resize the partition&#xA;Start the VM and install cloud-guest-utils&#xA;apt install cloud-guest-utils&#xA;Resize the root partition with growpart&#xA;growpart [disk] [partition number]&#xA;growpart /dev/vda 1&#xA;&#xA;Restart, and voilà!]]&gt;</description>
      <content:encoded><![CDATA[<p>I often use <a href="https://cloud.debian.org/images/cloud/">Debian cloud images</a> for both lab and test virtual machines. By default, those images are only 2 GB, and sometimes I need more space.</p>

<h2 id="step-1-extend-physical-disk">Step 1: Extend the “physical disk”</h2>

<p>On Proxmox:</p>

<pre><code>[VM] &gt; Hardware &gt; [disk] &gt; Disk Action &gt; Resize
</code></pre>

<p>On UTM:</p>

<pre><code>[VM] &gt; Settings &gt; [disk] &gt; Resize
</code></pre>

<h2 id="step-2-resize-the-partition">Step 2: Resize the partition</h2>

<p>Start the VM and install <code>cloud-guest-utils</code></p>

<pre><code>apt install cloud-guest-utils
</code></pre>

<p>Resize the root partition with <code>growpart</code></p>

<pre><code class="language-bash"># growpart [disk] [partition number]
growpart /dev/vda 1
</code></pre>
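
<p>If a reboot is not desirable, the filesystem can usually also be grown online right away (assuming an ext4 root on <code>/dev/vda1</code>):</p>

<pre><code class="language-bash"># grow the ext4 filesystem to fill the enlarged partition
resize2fs /dev/vda1
# confirm the new size
df -h /
</code></pre>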

<p>Restart, and voilà!</p>
]]></content:encoded>
      <guid>https://blog.qlem.net/how-to-extend-debian-cloud-partition</guid>
      <pubDate>Sun, 13 Oct 2024 10:22:01 +0000</pubDate>
    </item>
  </channel>
</rss>